| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
11,131,769 | https://en.wikipedia.org/wiki/Long%20Tom%20%28rocket%29 | Long Tom was the first Australian sounding rocket. It was first launched from the Woomera Test Range in October 1957. It was a two-stage rocket developed to test the range's instrumentation for later projects. In the early 1960s it was superseded by the HAD and HAT sounding rockets.
See also
Australian Space Research Institute
References
Sounding rockets of Australia | Long Tom (rocket) | [
"Astronomy"
] | 71 | [
"Rocketry stubs",
"Astronomy stubs"
] |
11,133,153 | https://en.wikipedia.org/wiki/Polarization%20spectroscopy | Polarization spectroscopy comprises a set of spectroscopic techniques based on the polarization properties of light (not necessarily visible light; it may be UV, X-ray, infrared, or any other frequency range of the electromagnetic spectrum). By analyzing the polarization properties of light, inferences can be drawn about the medium that emitted the light (or the medium the light passes through or scatters from). Alternatively, a source of polarized light may be used to probe a medium; in this case, the changes in the light's polarization (compared to the incident light) allow inferences about the medium's properties.
In general, any kind of anisotropy in the medium results in some change in polarization. Such an anisotropy can be either inherent to the medium (e.g., in a crystalline substance) or imposed externally (e.g., by a magnetic field in a plasma or by another laser beam).
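To make the idea concrete, the following short Python sketch (an illustration added here, not part of the original article) uses Jones calculus to model a medium that rotates the plane of polarization, as the Faraday effect does in a magnetized plasma; the 5-degree rotation angle and the crossed-analyzer geometry are illustrative assumptions. Measuring the small intensity leaking through the analyzer lets one infer the rotation angle, and hence a property of the medium.

```python
import numpy as np

def rotator(theta):
    """Jones matrix of an element rotating the polarization plane by theta (radians)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Linear polarizer (analyzer) transmitting only the x component.
analyzer_x = np.array([[1.0, 0.0],
                       [0.0, 0.0]])

# Incident light linearly polarized along y, so a perfect crossed analyzer passes nothing.
E_in = np.array([0.0, 1.0])

theta = np.deg2rad(5.0)                      # hypothetical rotation imposed by the medium
E_out = analyzer_x @ rotator(theta) @ E_in

transmitted = np.abs(E_out[0]) ** 2          # fraction of the incident intensity
inferred_theta = np.arcsin(np.sqrt(transmitted))
print(f"transmitted fraction: {transmitted:.4e}")
print(f"inferred rotation: {np.degrees(inferred_theta):.2f} degrees")
```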
See also
Faraday effect
Plasma diagnostics
Stark effect
Zeeman effect
References
Spectroscopy | Polarization spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 215 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
2,280,725 | https://en.wikipedia.org/wiki/Nucleotide%20exchange%20factor | Nucleotide exchange factors (NEFs) are proteins that stimulate the exchange (replacement) of nucleoside diphosphates for nucleoside triphosphates bound to other proteins.
Function
Many cellular proteins cleave (hydrolyze) nucleoside triphosphates – adenosine triphosphate (ATP) or guanosine triphosphate (GTP) – to their diphosphate forms (ADP and GDP) as a source of energy and to drive conformational changes. These changes in turn affect the structural, enzymatic, or signalling properties of the protein.
Nucleotide exchange factors actively assist in the exchange of depleted nucleoside diphosphates for fresh nucleoside triphosphates. NEFs are specific for the nucleotides they exchange (ADP or GDP, but not both) and are often specific to a single protein or class of proteins with which they interact.
See also
Nucleoside-diphosphate kinase
Guanine nucleotide exchange factor
References
External links
Alfred Wittinghofer's Seminar: GTP-Binding Proteins as Molecular Switches
Proteins
Articles containing video clips | Nucleotide exchange factor | [
"Chemistry"
] | 244 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
2,282,614 | https://en.wikipedia.org/wiki/Antonius%20van%20den%20Broek | Antonius Johannes van den Broek (4 May 1870 – 25 October 1926) was a Dutch mathematical economist and amateur physicist, notable for being the first who realized that the position of an element in the periodic table (now called atomic number) corresponds to the charge of its atomic nucleus. This hypothesis was published in 1911 and inspired the experimental work of Henry Moseley, who found good experimental evidence for it by 1913.
Life
Van den Broek was the son of a civil law notary and trained to be a lawyer himself. He studied at Leiden University and at the Sorbonne in Paris, obtaining a degree in Leiden in 1895. From 1895 to 1900 he ran a law office in The Hague, after which he studied mathematical economics in Vienna and Berlin. From 1903 onward, however, his main interest was physics, and he lived much of the time between 1903 and 1911 in France and Germany. He wrote most of his papers between 1913 and 1916 while living in Gorssel. He married Elisabeth Margaretha Mauve in 1906, with whom he had five children.
Major contribution to science
The idea of a direct correlation between the charge of the atomic nucleus and an element's position in the periodic table appeared in his paper published in Nature on 20 July 1911, just one month after Ernest Rutherford published the results of the experiments that showed the existence of a small charged nucleus in the atom (see Rutherford model). However, Rutherford's original paper noted only that the charge on the nucleus was large, on the order of about half of the atomic weight of the atom, in whole-number units of the hydrogen mass. Rutherford on this basis made the tentative suggestion that atomic nuclei are composed of a number of helium nuclei, each with a charge equal to half its atomic weight. This consideration would make the nuclear charge nearly equal to the atomic number for smaller atoms, with some deviation from this rule for the largest atoms, such as gold. For example, Rutherford found the charge on gold to be about 100 units and thought it might be exactly 98 (which would be close to half its atomic weight). But gold's place in the periodic table (and thus its atomic number) was known to be 79.
Thus Rutherford did not propose that the number of charges in the nucleus of an atom might be exactly equal to its place in the periodic table (its atomic number). That hypothesis was put forward by Van den Broek. At the time, the ordinal place of an element in the periodic table (the atomic number) was not thought by most physicists to be a physical property. It was not until Henry Moseley, working with the Bohr model of the atom and with the explicit aim of testing Van den Broek's hypothesis, showed that the atomic number is indeed a measurable physical property (the charge of the nucleus) that Van den Broek's original guess was recognized as correct, or very close to correct. Moseley's work in fact found (see Moseley's law) that the X-ray frequencies were best described by the Bohr equation with an effective charge of Z − 1, where Z is the atomic number.
Henry Moseley, in his paper on atomic number and X-ray emission, mentions only the models of Rutherford and Van den Broek.
References
H. A. M. Snelders (1979) BROEK, Antonius Johannes van den (1870-1926), Biografisch Woordenboek van Nederland 1, The Hague. (in Dutch)
E. R. Scerri (2007) The Periodic Table, Its Story and Its Significance, Oxford University Press
E.R. Scerri (2016) A Tale of Seven Scientists and A New Philosophy of Science, chapter 3, Oxford University Press
External links
1870 births
1926 deaths
20th-century Dutch lawyers
20th-century Dutch physicists
People involved with the periodic table
Leiden University alumni
People from Zoetermeer
University of Paris alumni
Dutch expatriates in France | Antonius van den Broek | [
"Chemistry"
] | 808 | [
"Periodic table",
"People involved with the periodic table"
] |
2,283,222 | https://en.wikipedia.org/wiki/Endohedral%20fullerene | Endohedral fullerenes, also called endofullerenes, are fullerenes that have additional atoms, ions, or clusters enclosed within their inner spheres. The first lanthanum C60 complex called La@C60 was synthesized in 1985. The @ (at sign) in the name reflects the notion of a small molecule trapped inside a shell. Two types of endohedral complexes exist: endohedral metallofullerenes and non-metal doped fullerenes.
Notation
In a traditional chemical formula notation, a buckminsterfullerene (C60) with an atom (M) was simply represented as MC60 regardless of whether M was inside or outside the fullerene. In order to allow for more detailed discussions with minimal loss of information, a more explicit notation was proposed in 1991,
where the atoms listed to the left of the @ sign are situated inside the network composed of the atoms listed to the right. The example above would then be denoted M@C60 if M were inside the carbon network. A more complex example is K2(K@C59B), which denotes "a 60-atom fullerene cage with one boron atom substituted for a carbon in the geodesic network, a single potassium trapped inside, and two potassium atoms adhering to the outside."
The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.
Endohedral metallofullerenes
Doping fullerenes with electropositive metals takes place in an arc reactor or via laser evaporation. The metals can be transition metals such as scandium and yttrium, lanthanides such as lanthanum and cerium, alkaline earth metals such as barium and strontium, alkali metals such as potassium, and tetravalent metals such as uranium, zirconium and hafnium. Synthesis in the arc reactor is, however, unspecific: besides unfilled fullerenes, endohedral metallofullerenes form with different cage sizes, such as La@C60 or La@C82, and as different cage isomers. Aside from the dominant mono-metal cages, numerous di-metal endohedral complexes and tri-metal carbide fullerenes such as Sc3C2@C80 have also been isolated.
In 1999 a discovery drew wide attention. With the synthesis of Sc3N@C80, Harry Dorn and coworkers succeeded for the first time in enclosing a molecular fragment in a fullerene cage. This compound can be prepared by arc vaporization, at temperatures up to 1100 °C, of graphite rods packed with scandium(III) oxide, iron nitride, and graphite powder in a K–H generator under a nitrogen atmosphere at 300 Torr.
Endohedral metallofullerenes are characterised by the fact that electrons transfer from the metal atom to the fullerene cage and that the metal atom sits off-centre in the cage. The size of the charge transfer is not always simple to determine. In most cases it is between 2 and 3 charge units; in the case of La2@C80, however, it can be about 6 electrons, as in Sc3N@C80, which is better described as [Sc3N]+6@[C80]−6. These anionic fullerene cages are very stable molecules and do not have the reactivity associated with ordinary empty fullerenes. They are stable in air up to very high temperatures (600 to 850 °C).
The lack of reactivity in Diels-Alder reactions is utilised in a method to purify [C80]−6 compounds from a complex mixture of empty and partly filled fullerenes of different cage size. In this method Merrifield resin is modified as a cyclopentadienyl resin and used as a solid phase against a mobile phase containing the complex mixture in a column chromatography operation. Only very stable fullerenes such as [Sc3N]+6@[C80]−6 pass through the column unreacted.
In Ce2@C80 the two metal atoms exhibit a non-bonded interaction. Since all the six-membered rings in C80-Ih are equivalent, the two encapsulated Ce atoms undergo three-dimensional random motion. This is evidenced by the presence of only two signals in the 13C-NMR spectrum. The metal atoms can be forced to a standstill at the equator, as shown by X-ray crystallography, when the fullerene is exohedrally functionalized with an electron-donating silyl group in a reaction of Ce2@C80 with 1,1,2,2-tetrakis(2,4,6-trimethylphenyl)-1,2-disilirane.
Gd@C82(OH)22, an endohedral metallofullerenol, can competitively inhibit the WW domain of the oncogene YAP1, preventing its activation. It was originally developed as an MRI contrast agent.
Non-metal doped fullerenes
Endohedral complexes He@C60 and Ne@C60 are prepared by pressurizing C60 to ca. 3 bar in a noble-gas atmosphere. Under these conditions about one out of every 650,000 C60 cages was doped with a helium atom.
The formation of endohedral complexes with helium, neon, argon, krypton and xenon as well as numerous adducts of the He@C60 compound was also demonstrated with pressures of 3 kbars and incorporation of up to 0.1% of the noble gases.
While noble gases are chemically very inert and commonly exist as individual atoms, this is not the case for nitrogen and phosphorus and so the formation of the endohedral complexes N@C60, N@C70 and P@C60 is more surprising.
The nitrogen atom is in its electronic ground state (4S3/2) and is highly reactive. Nevertheless, N@C60 is sufficiently stable that exohedral derivatization, from the mono- up to the hexa-adduct with malonic acid ethyl ester, is possible.
In these compounds no charge transfer of the nitrogen atom in the center to the carbon atoms of the cage takes place. Therefore, 13C-couplings, which are observed very easily with the endohedral metallofullerenes, could only be observed in the case of the N@C60 in a high resolution spectrum as shoulders of the central line.
The central atom in these endohedral complexes is located in the center of the cage. While other atomic traps require complex equipment, e.g. laser cooling or magnetic traps, endohedral fullerenes represent an atomic trap that is stable at room temperature and for an arbitrarily long time. Atomic or ion traps are of great interest since particles are present free from (significant) interaction with their environment, allowing unique quantum mechanical phenomena to be explored. For example, the compression of the atomic wave function as a consequence of the packing in the cage could be observed with ENDOR spectroscopy. The nitrogen atom can be used as a probe, in order to detect the smallest changes of the electronic structure of its environment.
In contrast to the endohedral metallofullerenes, these complexes cannot be produced in an arc reactor. Atoms are implanted into the fullerene starting material using a gas discharge (for the nitrogen and phosphorus complexes) or by direct ion implantation. Alternatively, endohedral hydrogen fullerenes can be produced by opening and closing a fullerene cage by organic-chemistry methods.
A recent example of endohedral fullerenes includes single molecules of water encapsulated in C60.
Noble gas endofullerenes are predicted to exhibit unusual polarizability. The calculated mean polarizability of Ng@C60 is not equal to the sum of the polarizabilities of the fullerene cage and the trapped atom; that is, an exaltation of polarizability occurs. The sign of the polarizability exaltation Δα depends on the number of atoms n in the fullerene molecule: for small fullerenes it is positive, while for larger ones it is negative (depression of polarizability). The following formula describing the dependence of Δα on n has been proposed: Δα = α_Ng(2e^(−0.06(n − 20)) − 1). It describes the DFT-calculated mean polarizabilities of Ng@Cn endofullerenes with sufficient accuracy. The calculated data suggest that the C60 fullerene acts as a Faraday cage, isolating the encapsulated atom from an external electric field. Similar relations should hold for more complicated endohedral structures (e.g., C60@C240 and giant fullerene-containing "onions").
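A small numerical sketch (added here for illustration) of the fitted relation quoted above, as reconstructed in plain notation; the helium polarizability value used below is an assumed literature-style number, not taken from the article:

```python
import math

def delta_alpha(alpha_ng, n):
    """Polarizability exaltation for Ng@Cn from the quoted fit:
    delta_alpha = alpha_Ng * (2*exp(-0.06*(n - 20)) - 1)."""
    return alpha_ng * (2.0 * math.exp(-0.06 * (n - 20)) - 1.0)

alpha_he = 1.38   # assumed static polarizability of He in atomic units (illustrative)
for n in (20, 30, 60, 240):
    print(f"C{n}: delta_alpha = {delta_alpha(alpha_he, n):+.3f} a.u.")
# Small cages give a positive exaltation; larger cages such as C60 give a negative one
# (depression of polarizability), consistent with the sign change described above.
```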
Molecular endofullerenes
Closed fullerenes encapsulating small molecules have been synthesized. Representative are the synthesis of the dihydrogen endofullerene H2@C60, the water endofullerene H2O@C60, the hydrogen fluoride endofullerene HF@C60, and the methane endofullerene CH4@C60. The encapsulated molecules display unusual physical properties which have been studied by a variety of physical methods. As shown theoretically, compression of molecular endofullerenes (e.g., H2@C60) may lead to dissociation of the encapsulated molecules and reaction of their fragments with interiors of the fullerene cage. Such reactions should result in endohedral fullerene adducts, which are currently unknown.
See also
Fullerene ligands
Inclusion compounds
References
External links
Movie "Helium atom trapped in fullerene (C60) and dodecahedrane (C20H20)" (Youtube)
Fullerenes
Supramolecular chemistry | Endohedral fullerene | [
"Chemistry",
"Materials_science"
] | 2,124 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
2,285,007 | https://en.wikipedia.org/wiki/Regenerative%20medicine | Regenerative medicine deals with the "process of replacing, engineering or regenerating human or animal cells, tissues or organs to restore or establish normal function". This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs.
Regenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. When the cell source for a regenerated organ is derived from the patient's own tissue or cells, the challenge of organ transplant rejection via immunological mismatch is circumvented. This approach could alleviate the problem of the shortage of organs available for donation.
Some of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells. Examples include the injection of stem cells or progenitor cells obtained through directed differentiation (cell therapies); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues (tissue engineering).
History
As early as the 700s BC, the ancient Greeks speculated that parts of the body could be regenerated. Skin grafting, developed in the late 19th century, can be thought of as the earliest major attempt to recreate bodily tissue in order to restore structure and function. Advances in transplanting body parts in the 20th century further supported the idea that body parts could regenerate and grow new cells. These advances led to tissue engineering, and from this field the study of regenerative medicine expanded and began to take hold. It began with cellular therapy, which led to the stem cell research widely conducted today.
The first cell therapies were intended to slow the aging process. This began in the 1930s with Paul Niehans, a Swiss doctor known to have treated famous historical figures such as Pope Pius XII, Charlie Chaplin, and King Ibn Saud of Saudi Arabia. Niehans would inject cells of young animals (usually lambs or calves) into his patients in an attempt to rejuvenate them. In 1956, a more sophisticated procedure was introduced to treat leukemia by transplanting bone marrow from a healthy person into a patient with leukemia. This worked largely because the donor and recipient were identical twins. Nowadays, bone marrow can be taken from donors who are a close enough immunological match to the patient to prevent rejection.
The term "regenerative medicine" was first used in a 1992 article on hospital administration by Leland Kaiser. Kaiser's paper closes with a series of short paragraphs on future technologies that will impact hospitals. One paragraph had "Regenerative Medicine" as a bold print title and stated, "A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems."
The term was brought into the popular culture in 1999 by William A. Haseltine when he coined the term during a conference on Lake Como, to describe interventions that restore to normal function that which is damaged by disease, injured by trauma, or worn by time. Haseltine was briefed on the project to isolate human embryonic stem cells and embryonic germ cells at Geron Corporation in collaboration with researchers at the University of Wisconsin–Madison and Johns Hopkins School of Medicine. He recognized that these cells' unique ability to differentiate into all the cell types of the human body (pluripotency) had the potential to develop into a new kind of regenerative therapy. Explaining the new class of therapies that such cells could enable, he used the term "regenerative medicine" in the way that it is used today: "an approach to therapy that ... employs human genes, proteins and cells to re-grow, restore or provide mechanical replacements for tissues that have been injured by trauma, damaged by disease or worn by time" and "offers the prospect of curing diseases that cannot be treated effectively today, including those related to aging".
Later, Haseltine would go on to explain that regenerative medicine acknowledges the reality that most people, regardless of which illness they have or which treatment they require, simply want to be restored to normal health. Designed to be applied broadly, the original definition includes cell and stem cell therapies, gene therapy, tissue engineering, genomic medicine, personalized medicine, biomechanical prosthetics, recombinant proteins, and antibody treatments. It also includes more familiar chemical pharmacopeia—in short, any intervention that restores a person to normal health. In addition to functioning as shorthand for a wide range of technologies and treatments, the term "regenerative medicine" is also patient friendly. It solves the problem that confusing or intimidating language discourages patients.
The term regenerative medicine is increasingly conflated with research on stem cell therapies. Some academic programs and departments retain the original broader definition while others use it to describe work on stem cell research.
From 1995 to 1998 Michael D. West, PhD, organized and managed the research between Geron Corporation and its academic collaborators James Thomson at the University of Wisconsin–Madison and John Gearhart of Johns Hopkins University that led to the first isolation of human embryonic stem and human embryonic germ cells, respectively.
In March 2000, Haseltine, Antony Atala, M.D., Michael D. West, Ph.D., and other leading researchers founded E-Biomed: The Journal of Regenerative Medicine. The peer-reviewed journal facilitated discourse around regenerative medicine by publishing innovative research on stem cell therapies, gene therapies, tissue engineering, and biomechanical prosthetics. The Society for Regenerative Medicine, later renamed the Regenerative Medicine and Stem Cell Biology Society, served a similar purpose, creating a community of like-minded experts from around the world.
In June 2008, at the Hospital Clínic de Barcelona, Professor Paolo Macchiarini and his team, of the University of Barcelona, performed the first tissue engineered trachea (wind pipe) transplantation. Adult stem cells were extracted from the patient's bone marrow, grown into a large population, and matured into cartilage cells, or chondrocytes, using an adaptive method originally devised for treating osteoarthritis. The team then seeded the newly grown chondrocytes, as well as epithelial cells, into a decellularised (free of donor cells) tracheal segment that was donated from a 51-year-old transplant donor who had died of cerebral hemorrhage. After four days of seeding, the graft was used to replace the patient's left main bronchus. After one month, a biopsy elicited local bleeding, indicating that the blood vessels had already grown back successfully.
In 2009, the SENS Foundation was launched, with its stated aim as "the application of regenerative medicine – defined to include the repair of living cells and extracellular material in situ – to the diseases and disabilities of ageing". In 2012, Professor Paolo Macchiarini and his team improved upon the 2008 implant by transplanting a laboratory-made trachea seeded with the patient's own cells.
On September 12, 2014, surgeons at the Institute of Biomedical Research and Innovation Hospital in Kobe, Japan, transplanted a 1.3 by 3.0 millimetre sheet of retinal pigment epithelium cells, differentiated from iPS cells through directed differentiation, into the eye of an elderly woman suffering from age-related macular degeneration.
In 2016, Paolo Macchiarini was dismissed from the Karolinska Institute in Sweden over falsified test results and misconduct. The documentary series Experimenten, aired on Swedish Television, detailed the falsified results.
Research
Widespread interest and funding for research on regenerative medicine has prompted institutions in the United States and around the world to establish departments and research institutes that specialize in regenerative medicine including: The Department of Rehabilitation and Regenerative Medicine at Columbia University, the Institute for Stem Cell Biology and Regenerative Medicine at Stanford University, the Center for Regenerative and Nanomedicine at Northwestern University, the Wake Forest Institute for Regenerative Medicine, and the British Heart Foundation Centers of Regenerative Medicine at the University of Oxford. In China, institutes dedicated to regenerative medicine are run by the Chinese Academy of Sciences, Tsinghua University, and the Chinese University of Hong Kong, among others.
In dentistry
Regenerative medicine has been studied by dentists to find ways in which damaged teeth can be repaired and restored to natural structure and function. Dental tissues are often damaged by tooth decay and are usually deemed irreplaceable except by synthetic or metal dental fillings or crowns, which require further damage to the tooth, since it must be drilled into to prevent the loss of the entire tooth.
Researchers from King's College London have reported that a drug called Tideglusib can regrow dentin, the second layer of the tooth beneath the enamel, which encases and protects the pulp (often referred to as the nerve).
Animal studies conducted on mice in Japan in 2007 showed great promise for regenerating an entire tooth. Some mice had a tooth extracted, and cells from bioengineered tooth germs were implanted and allowed to grow. The result was perfectly functioning and healthy teeth, complete with all three layers as well as roots. These teeth also had the ligaments needed to stay rooted in their sockets and to allow natural shifting, in contrast with traditional dental implants, which are fixed in one spot because they are drilled into the jawbone.
A person's baby teeth are known to contain stem cells that can be used for regeneration of the dental pulp after a root canal treatment or injury. These cells can also be used to repair damage from periodontitis, an advanced form of gum disease that causes bone loss and severe gum recession. Research is still being done to see if these stem cells are viable enough to grow into completely new teeth. Some parents even opt to keep their children's baby teeth in special storage with the thought that, when older, the children could use the stem cells within them to treat a condition.
Extracellular matrix
Extracellular matrix materials are commercially available and are used in reconstructive surgery, treatment of chronic wounds, and some orthopedic surgeries; as of January 2017 clinical studies were under way to use them in heart surgery to try to repair damaged heart tissue.
The use of fish skin, with its natural content of omega-3 fatty acids, has been developed by the Icelandic company Kerecis. Omega-3 is a natural anti-inflammatory, and the fish-skin material acts as a scaffold for cell regeneration. In 2016 the company's product Omega3 Wound was approved by the FDA for the treatment of chronic wounds and burns. In 2021 the FDA approved Omega3 Surgibind for use in surgical applications, including plastic surgery.
Cord blood
Though uses of cord blood beyond blood and immunological disorders are speculative, some research has been done in other areas. Any such potential is limited by the fact that cord cells are hematopoietic stem cells (which can differentiate only into blood cells), not pluripotent stem cells (such as embryonic stem cells, which can differentiate into any type of tissue). Cord blood has been studied as a treatment for diabetes. However, apart from blood disorders, the use of cord blood for other diseases is not a routine clinical modality and remains a major challenge for the stem cell community.
Along with cord blood, Wharton's jelly and the cord lining have been explored as sources for mesenchymal stem cells (MSC), and as of 2015 had been studied in vitro, in animal models, and in early stage clinical trials for cardiovascular diseases, as well as neurological deficits, liver diseases, immune system diseases, diabetes, lung injury, kidney injury, and leukemia.
See also
References
Further reading
Non-technical further reading
Regenerative Medicine, gives more details about Regenerative Stem Cells.
Kevin Strange and Viravuth Yin, "A Shot at Regeneration: A once abandoned drug compound shows an ability to rebuild organs damaged by illness and injury", Scientific American, vol. 320, no. 4 (April 2019), pp. 56–61.
Technical further reading
Vertebrate developmental biology
Regenerative biomedicine
Tissue engineering | Regenerative medicine | [
"Chemistry",
"Engineering",
"Biology"
] | 2,608 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
2,285,143 | https://en.wikipedia.org/wiki/Encryption%20software | Encryption software is software that uses cryptography to prevent unauthorized access to digital information. Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet.
Classification
There are many software products which provide encryption. Software encryption uses a cipher to transform readable information (plaintext) into ciphertext. One way to classify this type of software is by the type of cipher used. Ciphers can be divided into two categories: public key ciphers (also known as asymmetric ciphers) and symmetric key ciphers. Encryption software can be based on either public key or symmetric key encryption.
Another way to classify software encryption is to categorize its purpose. Using this approach, software encryption may be classified into software which encrypts "data in transit" and software which encrypts "data at rest". Data in transit generally uses public key ciphers, and data at rest generally uses symmetric key ciphers.
Symmetric key ciphers can be further divided into stream ciphers and block ciphers. Stream ciphers typically encrypt plaintext a bit or byte at a time, and are most commonly used to encrypt real-time communications, such as audio and video information. The key is used to establish the initial state of a keystream generator, and the output of that generator is used to encrypt the plaintext. Block cipher algorithms split the plaintext into fixed-size blocks and encrypt one block at a time. For example, AES processes 16-byte blocks, while its predecessor DES encrypted blocks of eight bytes.
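As a concrete illustration of symmetric block encryption (a minimal sketch, not taken from the article, using the third-party Python `cryptography` package in a reasonably recent version), AES in CBC mode processes the plaintext one 16-byte block at a time under a shared key:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)              # 256-bit symmetric key shared by both parties
iv = os.urandom(16)               # initialization vector for CBC mode
plaintext = b"sixteen byte msg"   # exactly one 16-byte AES block, so no padding is needed

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()
assert recovered == plaintext
```

For messages that are not a multiple of the block size, a padding scheme such as PKCS#7 (or an authenticated mode such as GCM) would be used.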
There is also a well-known case where PKI is used for the transit of data at rest (see below).
Data in transit
Data in transit is data that is being sent over a computer network. When the data is between two endpoints, any confidential information may be vulnerable. The payload (confidential information) can be encrypted to secure its confidentiality, as well as its integrity and validity.
Often, the data in transit is between two entities that do not know each other, such as when visiting a website. To establish a relationship and securely share an encryption key for protecting the information that will be exchanged, a set of roles, policies, and procedures has been developed; it is known as the public key infrastructure, or PKI. Once PKI has established a secure connection, a symmetric key can be shared between the endpoints. A symmetric key is preferred over the private and public keys because a symmetric cipher is much more efficient (uses fewer CPU cycles) than an asymmetric cipher. There are several methods for encrypting data in transit, such as IPsec, SCP, SFTP, SSH, OpenPGP and HTTPS.
Data at rest
Data at rest refers to data that has been saved to persistent storage. Data at rest is generally encrypted by a symmetric key.
Encryption may be applied at different layers in the storage stack. For example, encryption can be configured at the disk layer, on a subset of a disk called a partition, on a volume, which is a combination of disks or partitions, at the layer of a file system, or within user space applications such as database or other applications that run on the host operating system.
With full disk encryption, the entire disk is encrypted (except for the bits necessary to boot or access the disk when not using an unencrypted boot/preboot partition). As disks can be partitioned into multiple partitions, partition encryption can be used to encrypt individual disk partitions. Volumes, created by combining two or more partitions, can be encrypted using volume encryption. File systems, also composed of one or more partitions, can be encrypted using filesystem-level encryption. Directories are referred to as encrypted when the files within the directory are encrypted. File encryption encrypts a single file. Database encryption acts on the data to be stored, accepting unencrypted information and writing that information to persistent storage only after it has encrypted the data. Device-level encryption, a somewhat vague term that includes encryption-capable tape drives, can be used to offload the encryption tasks from the CPU.
Transit of data at rest
When there is a need to securely transmit data at rest, without the ability to create a secure connection, user space tools have been developed that support this need. These tools rely upon the receiver publishing their public key, and the sender being able to obtain that public key. The sender is then able to create a symmetric key to encrypt the information, and then use the receiver's public key to securely protect the transmission of the information and the symmetric key. This allows secure transmission of information from one party to another.
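A hedged sketch of this hybrid pattern using the Python `cryptography` package (key sizes and the message below are illustrative choices, not a prescribed implementation): the sender wraps a freshly generated symmetric key with the receiver's public key and encrypts the payload with the symmetric key.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The receiver publishes a public key (generated once, ahead of time).
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: create a symmetric key, encrypt the data with it, wrap the key for the receiver.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"file contents at rest")
wrapped_key = receiver_public.encrypt(sym_key, oaep)

# Receiver: unwrap the symmetric key with the private key, then decrypt the data.
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"file contents at rest"
```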
Performance
The performance of encryption software is measured relative to the speed of the CPU. Thus, cycles per byte (sometimes abbreviated cpb), a unit indicating the number of clock cycles a microprocessor needs per byte of data processed, is the usual unit of measurement. Cycles per byte serve as a partial indicator of real-world performance in cryptographic functions. Applications may offer their own encryption, called native encryption; this includes database applications such as Microsoft SQL Server, Oracle, and MongoDB, and commonly relies on direct usage of CPU cycles for performance. This often affects the desirability of encryption in businesses seeking greater security and easier compliance, by impacting the speed and scale at which data flows within organizations and on to their partners.
See also
Cryptographic Protocol
Public Key (Asymmetric) Algorithms
Symmetric Algorithms
Transport Layer Security
Comparison of disk encryption software
Defense strategy (computing)
Ransomware: Malicious software using encryption
References
External links | Encryption software | [
"Mathematics"
] | 1,187 | [
"Cryptographic software",
"Mathematical software"
] |
12,104,271 | https://en.wikipedia.org/wiki/Pochhammer%20k-symbol | In the mathematical theory of special functions, the Pochhammer k-symbol and the k-gamma function, introduced by Rafael Díaz and Eddy Pariguan, are generalizations of the Pochhammer symbol and gamma function. They differ from the Pochhammer symbol and gamma function in that they can be related to a general arithmetic progression in the same manner as those are related to the sequence of consecutive integers.
Definition
The Pochhammer k-symbol (x)n,k is defined as
(x)n,k = x(x + k)(x + 2k) ··· (x + (n − 1)k),
and the k-gamma function Γk, with k > 0, is defined as
Γk(x) = ∫_0^∞ t^(x − 1) e^(−t^k / k) dt, for x > 0.
When k = 1 the standard Pochhammer symbol and gamma function are obtained.
Díaz and Pariguan use these definitions to demonstrate a number of properties of the hypergeometric function. Although Díaz and Pariguan restrict these symbols to k > 0, the Pochhammer k-symbol as they define it is well-defined for all real k, and for negative k gives the falling factorial, while for k = 0 it reduces to the power xn.
The Díaz and Pariguan paper does not address the many analogies between the Pochhammer k-symbol and the power function, such as the fact that the binomial theorem can be extended to Pochhammer k-symbols. It is true, however, that many equations involving the power function xn continue to hold when xn is replaced by (x)n,k.
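A short Python sketch of the defining product (the function name is an arbitrary choice made here), which also checks the special cases mentioned above: k = 1 gives the rising factorial, k = −1 the falling factorial, and k = 0 the power x^n.

```python
def pochhammer_k(x, n, k):
    """Pochhammer k-symbol (x)_{n,k} = x * (x + k) * (x + 2k) * ... * (x + (n - 1)*k)."""
    result = 1
    for i in range(n):
        result *= x + i * k
    return result

assert pochhammer_k(3, 4, 1) == 3 * 4 * 5 * 6    # k = 1: rising factorial (Pochhammer symbol)
assert pochhammer_k(6, 3, -1) == 6 * 5 * 4       # k = -1: falling factorial
assert pochhammer_k(2, 5, 0) == 2 ** 5           # k = 0: reduces to the power x**n
assert pochhammer_k(1, 4, 2) == 1 * 3 * 5 * 7    # k = 2: odd double factorial 7!!
```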
Continued Fractions, Congruences, and Finite Difference Equations
Jacobi-type J-fractions for the ordinary generating function of the Pochhammer k-symbol, denoted in slightly different notation by for fixed and some indeterminate parameter , are considered in
in the form of the next infinite continued fraction expansion given by
The rational convergent function, , to the full generating function for these products expanded by the last equation is given by
where the component convergent function sequences, and , are given as closed-form sums in terms of the ordinary Pochhammer symbol and the Laguerre polynomials by
The rationality of the convergent functions for all , combined with known enumerative properties of the J-fraction expansions, imply the following finite difference equations both exactly generating for all , and generating the symbol modulo for some fixed integer :
The rationality of also implies the next exact expansions of these products given by
where the formula is expanded in terms of the special zeros of the Laguerre polynomials, or equivalently, of the confluent hypergeometric function, defined as the finite (ordered) set
and where denotes the partial fraction decomposition of the rational convergent function.
Additionally, since the denominator convergent functions, , are expanded exactly through the Laguerre polynomials as above, we can exactly generate the Pochhammer k-symbol as the series coefficients
for any prescribed integer .
Special Cases
Special cases of the Pochhammer k-symbol, , correspond to the following special cases of the falling and rising factorials, including the Pochhammer symbol, and the generalized cases of the multiple factorial functions (multifactorial functions), or the -factorial functions studied in the last two references by Schmidt:
The Pochhammer symbol, or rising factorial function: (x)n,1 = x(x + 1) ··· (x + n − 1)
The falling factorial function: (x)n,−1 = x(x − 1) ··· (x − n + 1)
The single factorial function: n! = (1)n,1 = (n)n,−1
The double factorial function: (2n − 1)!! = (1)n,2
The multifactorial functions defined recursively by for and some offset : and
The expansions of these k-symbol-related products considered termwise with respect to the coefficients of the powers of () for each finite are defined in the article on generalized Stirling numbers of the first kind and generalized Stirling (convolution) polynomials in.
References
Gamma and related functions
Factorial and binomial topics | Pochhammer k-symbol | [
"Mathematics"
] | 756 | [
"Factorial and binomial topics",
"Combinatorics"
] |
12,106,314 | https://en.wikipedia.org/wiki/Jenkins%E2%80%93Traub%20algorithm | The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders".
This article describes the complex variant. Given a polynomial P,
with complex coefficients it computes approximations to the n zeros of P(z), one at a time in roughly increasing order of magnitude. After each root is computed, its linear factor is removed from the polynomial. Using this deflation guarantees that each root is computed only once and that all roots are found.
The real variant follows the same pattern, but computes two roots at a time, either two real roots or a pair of conjugate complex roots. By avoiding complex arithmetic, the real variant can be faster (by a factor of 4) than the complex variant. The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type.
Overview
The Jenkins–Traub algorithm calculates all of the roots of a polynomial with complex coefficients. The algorithm starts by checking the polynomial for the occurrence of very large or very small roots. If necessary, the coefficients are rescaled by a rescaling of the variable. In the algorithm, proper roots are found one by one and generally in increasing size. After each root is found, the polynomial is deflated by dividing off the corresponding linear factor. Indeed, the factorization of the polynomial into the linear factor and the remaining deflated polynomial is already a result of the root-finding procedure. The root-finding procedure has three stages that correspond to different variants of the inverse power iteration. See Jenkins and Traub.
A description can also be found in Ralston and Rabinowitz p. 383.
The algorithm is similar in spirit to the two-stage algorithm studied by Traub.
Root-finding procedure
Starting with the current polynomial P(X) of degree n, the aim is to compute its smallest root. The polynomial can then be split into a linear factor and the remaining polynomial factor. Other root-finding methods strive primarily to improve the root, and thus the first factor; the main idea of the Jenkins–Traub method is to incrementally improve the second factor.
To that end, a sequence of so-called H polynomials is constructed. These polynomials are all of degree n − 1 and are supposed to converge to the factor of P(X) containing (the linear factors of) all the remaining roots. The sequence of H polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant of polynomials that keeps the coefficients in a numerically sensible range.
The construction of the H polynomials is guided by a sequence of complex numbers called shifts. These shifts themselves depend, at least in the third stage, on the previous H polynomials. The H polynomials are defined as the solution to the implicit recursion
and
A direct solution to this implicit equation is
where the polynomial division is exact.
Algorithmically, one would use long division by the linear factor as in the Horner scheme or Ruffini rule to evaluate the polynomials at and obtain the quotients at the same time. With the resulting quotients p(X) and h(X) as intermediate results the next H polynomial is obtained as
Since the highest degree coefficient is obtained from P(X), the leading coefficient of is . If this is divided out the normalized H polynomial is
Stage one: no-shift process
For λ = 0, 1, ..., M − 1 the shift is set to zero, sλ = 0. Usually M = 5 is chosen for polynomials of moderate degrees up to n = 50. This stage is not necessary from theoretical considerations alone, but is useful in practice. It emphasizes in the H polynomials the cofactor(s) (of the linear factor) of the smallest root(s).
Stage two: fixed-shift process
The shift for this stage is determined as some point close to the smallest root of the polynomial. It is quasi-randomly located on the circle with the inner root radius, which in turn is estimated as the positive solution of the equation
Since the left side is a convex function and increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method.
Now choose on the circle of this radius. The sequence of polynomials , , is generated with the fixed shift value . This creates an asymmetry relative to the previous stage which increases the chance that the H polynomial moves towards the cofactor of a single root.
During this iteration, the current approximation for the root
is traced. The second stage is terminated as successful if the conditions
and
are simultaneously met. This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots. If there was no success after some number of iterations, a different random point on the circle is tried. Typically one uses a number of 9 iterations for polynomials of moderate degree, with a doubling strategy for the case of multiple failures.
Stage three: variable-shift process
The polynomials are now generated using the variable shifts which are generated by
being the last root estimate of the second stage and
where is the normalized H polynomial, that is divided by its leading coefficient.
If the step size in stage three does not fall fast enough to zero, then stage two is restarted using a different random point. If this does not succeed after a small number of restarts, the number of steps in stage two is doubled.
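The following Python sketch (an illustrative simplification added here, not the published CPOLY/RPOLY code) implements the H-polynomial update and a stage-3-style variable-shift iteration for a monic polynomial, assuming the standard textbook formulation H(0) = P′, H(λ+1)(z) = [H(λ)(z) − (H(λ)(s)/P(s))·P(z)]/(z − s), and s(λ+1) = s(λ) − P(s(λ))/H̄(λ+1)(s(λ)), where H̄ is the H polynomial divided by its leading coefficient; stages one and two and all safeguards are omitted.

```python
import numpy as np
from numpy.polynomial import Polynomial

def h_step(P, H, s):
    """One H-polynomial update with shift s; the division by (z - s) is exact
    (up to rounding) because the numerator vanishes at z = s by construction."""
    numer = H - (H(s) / P(s)) * P
    quotient, _ = divmod(numer, Polynomial([-s, 1.0]))
    return quotient

def variable_shift(P, H, s, tol=1e-13, max_iter=100):
    """Stage-3-style iteration: refine the shift s towards a root of the monic P."""
    for _ in range(max_iter):
        H = h_step(P, H, s)
        Hbar = Polynomial(H.coef / H.coef[-1])   # normalize the leading coefficient
        step = P(s) / Hbar(s)
        s = s - step
        if abs(step) <= tol * max(1.0, abs(s)):
            break
    return s, H

# Example: P(z) = (z - 1)(z - 2)(z - 3), monic with well-separated roots.
P = Polynomial([-6.0, 11.0, -6.0, 1.0])
root, _ = variable_shift(P, P.deriv(), s=0.4 + 0.3j)
print(root)   # expected to land close to the smallest root, 1.0
```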
Convergence
It can be shown that, provided L is chosen sufficiently large, sλ always converges to a root of P.
The algorithm converges for any distribution of roots, but may fail to find all roots of the polynomial. Furthermore, the convergence is slightly faster than the quadratic convergence of the Newton–Raphson method; however, it uses one and a half times as many operations per step: three polynomial evaluations in the third stage versus two polynomial evaluations for Newton.
What gives the algorithm its power?
Compare with the Newton–Raphson iteration
The iteration uses the given P and its derivative P′. In contrast, the third stage of Jenkins–Traub
is precisely a Newton–Raphson iteration performed on certain rational functions. More precisely, Newton–Raphson is being performed on a sequence of rational functions
For sufficiently large,
is as close as desired to a first degree polynomial
where is one of the zeros of . Even though Stage 3 is precisely a Newton–Raphson iteration, differentiation is not performed.
Analysis of the H polynomials
Let be the roots of P(X). The so-called Lagrange factors of P(X) are the cofactors of these roots,
If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most n − 1. By analysis of the recursion procedure one finds that the H polynomials have the coordinate representation
Each Lagrange factor has leading coefficient 1, so that the leading coefficient of the H polynomials is the sum of the coefficients. The normalized H polynomials are thus
Convergence orders
If the condition holds for almost all iterates, the normalized H polynomials will converge at least geometrically towards .
Under the condition that
one gets the asymptotic estimates for
stage 1:
for stage 2, if s is close enough to : and
and for stage 3: and giving rise to a higher than quadratic convergence order of 1 + φ ≈ 2.618, where φ denotes the golden ratio.
Interpretation as inverse power iteration
All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. This matrix is the coordinate representation of a linear map in the n-dimensional space of polynomials of degree n − 1 or less. The principal idea of this map is to interpret the factorization
with a root and the remaining factor of degree n − 1 as the eigenvector equation for the multiplication with the variable X, followed by remainder computation with divisor P(X),
This maps polynomials of degree at most n − 1 to polynomials of degree at most n − 1. The eigenvalues of this map are the roots of P(X), since the eigenvector equation reads
which implies that , that is, is a linear factor of P(X). In the monomial basis the linear map is represented by a companion matrix of the polynomial P, as
the resulting transformation matrix is
To this matrix the inverse power iteration is applied in the three variants of no shift, constant shift and generalized Rayleigh shift in the three stages of the algorithm. It is more efficient to perform the linear algebra operations in polynomial arithmetic rather than by matrix operations; however, the properties of the inverse power iteration remain the same.
Real coefficients
The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration. The algorithm finds either a linear or quadratic factor working completely in real arithmetic. If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges and the rate of convergence is greater than second order.
A connection with the shifted QR algorithm
There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues. See Dekker and Traub The shifted QR algorithm for Hermitian matrices. Again the shifts may be viewed as Newton-Raphson iteration on a sequence of rational functions converging to a first degree polynomial.
Software and testing
The software for the Jenkins–Traub algorithm was published as Jenkins and Traub Algorithm 419: Zeros of a Complex Polynomial. The software for the real algorithm was published as Jenkins Algorithm 493: Zeros of a Real Polynomial.
The methods have been extensively tested by many people. As predicted they enjoy faster than quadratic convergence for all distributions of zeros.
However, there are polynomials which can cause loss of precision as illustrated by the following example. The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson recommends that it is desirable for stable deflation that smaller zeros be computed first. The second-stage shifts are chosen so that the zeros on the smaller half circle are found first. After deflation the polynomial with the zeros on the half circle is known to be ill-conditioned if the degree is large; see Wilkinson, p. 64. The original polynomial was of degree 60 and suffered severe deflation instability.
References
External links
A free downloadable Windows application using the Jenkins–Traub Method for polynomials with real and complex coefficients
RPoly++ An SSE-Optimized C++ implementation of the RPOLY algorithm.
Numerical analysis
Polynomial factorization algorithms | Jenkins–Traub algorithm | [
"Mathematics"
] | 2,286 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
12,106,733 | https://en.wikipedia.org/wiki/2-Chloro-9%2C10-bis%28phenylethynyl%29anthracene | 2-Chloro-9,10-bis(phenylethynyl)anthracene is a fluorescent dye used in lightsticks. It emits green light, used in 12-hour low-intensity Cyalume sticks.
See also
9,10-Bis(phenylethynyl)anthracene
1-Chloro-9,10-bis(phenylethynyl)anthracene
Fluorescent dyes
Organic semiconductors
Anthracenes
Alkyne derivatives
Chloroarenes | 2-Chloro-9,10-bis(phenylethynyl)anthracene | [
"Chemistry"
] | 112 | [
"Semiconductor materials",
"Molecular electronics",
"Organic semiconductors"
] |
12,106,984 | https://en.wikipedia.org/wiki/Schaefer%E2%80%93Bergmann%20diffraction | Schaefer–Bergmann diffraction is the resulting diffraction pattern of light interacting with sound waves in transparent crystals or glasses.
References
Diffraction | Schaefer–Bergmann diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 67 | [
"Materials science stubs",
"Spectrum (physical sciences)",
"Crystallography stubs",
"Crystallography",
"Diffraction",
"Spectroscopy"
] |
12,110,212 | https://en.wikipedia.org/wiki/DNA%20database | A DNA database or DNA databank is a database of DNA profiles which can be used in the analysis of genetic diseases, genetic fingerprinting for criminology, or genetic genealogy. DNA databases may be public or private, the largest ones being national DNA databases.
DNA databases are often employed in forensic investigations. When a match is made from a national DNA database to link a crime scene to a person whose DNA profile is stored on a database, that link is often referred to as a cold hit. A cold hit is of particular value in linking a specific person to a crime scene, but is of less evidential value than a DNA match made without the use of a DNA database. Research shows that DNA databases of criminal offenders reduce crime rates.
Types
Forensic
A forensic database is a centralized DNA database for storing DNA profiles of individuals, which enables searching and comparing of DNA samples collected from a crime scene against stored profiles. Its most important function is to produce matches between a suspected individual and crime-scene biomarkers, providing evidence to support criminal investigations and helping to identify potential suspects. The majority of national DNA databases are used for forensic purposes.
The Interpol DNA database is used in criminal investigations. Interpol maintains an automated DNA database called DNA Gateway that contains DNA profiles submitted by member countries, collected from crime scenes, missing persons, and unidentified bodies. DNA Gateway was established in 2002, and at the end of 2013 it held more than 140,000 DNA profiles from 69 member countries. Unlike other DNA databases, DNA Gateway is used only for information sharing and comparison; it does not link a DNA profile to any individual, and the physical or psychological conditions of an individual are not included in the database.
Genealogical
A national or forensic DNA database is not available for non-police purposes. DNA profiles can, however, also be used for genealogical purposes, which requires separate genetic genealogy databases that store the DNA profiles from genealogical DNA test results. GenBank is a public sequence database that stores genome sequences submitted by many researchers, including genetic genealogists. GenBank contains a large number of DNA sequences obtained from more than 140,000 registered organisms and is updated every day to ensure a uniform and comprehensive collection of sequence information. These sequences are mainly obtained from individual laboratories or large-scale sequencing projects. The files stored in GenBank are divided into different groups, such as BCT (bacterial), VRL (viral), PRI (primate), etc. People can access GenBank through NCBI's retrieval system and then use the BLAST function to identify a certain sequence within GenBank or to find the similarities between two sequences.
Medical
A medical DNA database is a DNA database of medically relevant genetic variations. It collects individuals' DNA, which can be linked to their medical records and lifestyle details. By recording DNA profiles, scientists may work out the interactions between genetic factors, the environment, and the occurrence of certain diseases (such as cardiovascular disease or cancer), and thus find new drugs or effective treatments for controlling these diseases. Such databases are often run in collaboration with national health services such as the National Health Service.
National
A national DNA database is a DNA database maintained by a government for storing the DNA profiles of its population. Each profile is based on PCR and uses STR (short tandem repeat) analysis. National databases are generally used for forensic purposes, including searching for and matching the DNA profiles of potential criminal suspects.
In 2009 Interpol reported 54 police national DNA databases in the world and 26 more countries planned to start one. In Europe Interpol reported there were 31 national DNA databases and six more planned. The European Network of Forensic Science Institutes (ENFSI) DNA working group made 33 recommendations in 2014 for DNA database management and guidelines for auditing DNA databases. Other countries have adopted privately developed DNA databases, such as Qatar, which has adopted Bode dbSEARCH.
Typically, a tiny subset of the individual's genome is sampled from 13 or 16 regions that have high individuation.
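As a simplified illustration of how profiles built from such loci can be compared (a hedged sketch only: the locus names below follow common STR conventions but the allele values are invented, and real systems such as CODIS apply far stricter matching, quality and minimum-locus rules):

```python
# Each profile maps an STR locus name to the pair of allele repeat counts observed
# at that locus; the pair is unordered, since either copy may be reported first.
def agreeing_loci(profile_a, profile_b):
    """Return the loci typed in both profiles whose allele pairs agree."""
    shared = set(profile_a) & set(profile_b)
    return {locus for locus in shared
            if sorted(profile_a[locus]) == sorted(profile_b[locus])}

def is_candidate_match(scene, offender, min_loci=10):
    """Report a candidate ('cold hit') only if enough shared loci agree and none disagree."""
    shared = set(scene) & set(offender)
    agree = agreeing_loci(scene, offender)
    return len(agree) >= min_loci and agree == shared

scene_profile = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
offender_profile = {"D3S1358": (17, 15), "vWA": (14, 16), "FGA": (21, 24), "TH01": (6, 9.3)}
print(agreeing_loci(scene_profile, offender_profile))            # all three shared loci agree
print(is_candidate_match(scene_profile, offender_profile, 3))    # True with a lowered threshold
```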
United Kingdom
The first national DNA database in the United Kingdom, the National DNA Database (NDNAD), was established in April 1995. By 2006 it contained 2.7 million DNA profiles (about 5.2% of the UK population), as well as other information from individuals and crime scenes; in 2020 it held 6.6 million profiles (5.6 million individuals excluding duplicates). The information is stored in the form of a digital code based on the nomenclature of each STR. In 1995 the database originally recorded 6 STR markers for each profile, from 1999 it used 10 markers, and from 2014 it has used 16 core markers and a gender identifier. Scotland has used 21 STR loci, two Y-DNA markers and a gender identifier since 2014. In the UK, police have wide-ranging powers to take DNA samples and retain them if the subject is convicted of a recordable offence. Because of the large number of DNA profiles stored in NDNAD, "cold hits" may occur during DNA matching, i.e. unexpected matches between an individual's DNA profile and an unsolved crime-scene DNA profile. This can introduce a new suspect into an investigation, helping to solve old cases.
In England and Wales, anyone arrested on suspicion of a recordable offence must submit a DNA sample, the profile of which is then stored on the DNA database. Those not charged or not found guilty have their DNA data deleted within a specified period of time. In Scotland, the law similarly requires the DNA profiles of most people who are acquitted be removed from the database.
New Zealand
New Zealand was the second country to set up a DNA database. In 2019 the New Zealand DNA Profile Databank held 40,000 DNA profiles and 200,000 samples.
United States
The United States national DNA database is called the Combined DNA Index System (CODIS). It is maintained at three levels: national, state and local, each of which implements its own DNA index system. The National DNA Index System (NDIS) allows DNA profiles to be exchanged and compared between participating laboratories nationally. Each State DNA Index System (SDIS) allows DNA profiles to be exchanged and compared between laboratories within a state, and the Local DNA Index System (LDIS) allows DNA profiles collected at local sites to be uploaded to SDIS and NDIS.
CODIS software integrates and connects all the DNA index systems at the three levels. CODIS is installed on each participating laboratory site and uses a standalone network known as Criminal Justice Information Systems Wide Area Network (CJIS WAN) to connect to other laboratories. In order to decrease the number of irrelevant matches at NDIS, the Convicted Offender Index requires all 13 CODIS STRs to be present for a profile upload. Forensic profiles only require 10 of the STRs to be present for an upload.
As of 2011, over 9 million records were held within CODIS. As of March 2011, 361,176 forensic profiles and 9,404,747 offender profiles have been accumulated, making it the largest DNA database in the world. As of the same date, CODIS has produced over 138,700 matches to requests, assisting in more than 133,400 investigations.
The growing public approval of DNA databases has seen the creation and expansion of many states' own DNA databases. Political measures such as California Proposition 69 (2004), which increased the scope of the DNA database, have already met with a significant increase in numbers of investigations aided. Forty-nine states in the USA, all apart from Idaho, store DNA profiles of violent offenders, and many also store profiles of suspects. A 2017 study showed that DNA databases in U.S. states "deter crime by profiled offenders, reduce crime rates, and are more cost-effective than traditional law enforcement tools".
CODIS is also used to help find missing persons and identify human remains. It is connected to the National Missing Persons DNA Database; samples provided by family members are sequenced by the University of North Texas Center for Human Identification, which also runs the National Missing and Unidentified Persons System. UNTCHI can sequence both nuclear and mitochondrial DNA.
The Department of Defense maintains a DNA database to identify the remains of service members. The Department of Defense Serum Repository maintains more than 50,000,000 records, primarily to assist in the identification of human remains. Submission of DNA samples is mandatory for US servicemen, but the database also includes information on military dependents. The National Defense Authorization Act of 2003 provided a means for federal courts or military judges to order the use of the DNA information collected to be made available for the purpose of investigation or prosecution of a felony, or any sexual offense, for which no other source of DNA information is reasonably available.
Australia
The Australian national DNA database is called the National Criminal Investigation DNA Database (NCIDD). By July 2018, it contained more than 837,000 DNA profiles. The database originally used nine STR loci and a sex gene for analysis, and this was increased to 18 core markers in 2013. NCIDD combines all forensic data, including DNA profiles, advanced biometrics and cold-case data.
Canada
The Canadian national DNA database is called the National DNA Data Bank (NDDB) which was established in 1998 but first used in 2000. The legislation that Parliament enacted to govern the use of this technology within the criminal justice system has been found by Canadian courts to be respectful of the constitutional and privacy rights of suspects, and of persons found guilty of designated offences.
On December 11, 1999, the Canadian government passed the DNA Identification Act, which allowed a Canadian DNA data bank to be created and amended the Criminal Code. It provides a mechanism for judges to order offenders to provide blood, buccal swab, or hair samples for DNA profiling. The legislation came into force on June 29, 2000. Canadian police have been using forensic DNA evidence for over a decade; it has become one of the most powerful tools available to law enforcement agencies for the administration of justice.
NDDB consists of two indexes: the Convicted Offender Index (COI) and the National Crime Scene Index (CSI-nat). There is also a Local Crime Scene Index (CSI-loc), which is maintained by local laboratories rather than the NDDB because local DNA profiles do not meet NDDB collection criteria. The National Crime Scene Index (CSI-nat) is populated by three laboratories, operated by the Royal Canadian Mounted Police (RCMP), the Laboratoire de sciences judiciaires et de médecine légale (LSJML) and the Centre of Forensic Sciences (CFS).
Dubai
In 2017 Dubai announced an initiative called Dubai 10X, planned to bring 'disruptive innovation' to the emirate. One of the projects in this initiative was a DNA database that would collect the genomes of all 3 million of its citizens over a 10-year period. The database was intended to be used for finding genetic causes of diseases and creating personalised medical treatments.
Germany
Germany set up its DNA database for the German Federal Police (BKA) in 1998.
In late 2010, the database contained DNA profiles of over 700,000 individuals and in September 2016 it contained 1,162,304 entries. On 23 May 2011 in the "Stop the DNA Collection Frenzy!" campaign various civil rights and data protection organizations handed an open letter to the German minister of justice Sabine Leutheusser-Schnarrenberger asking her to take action in order to stop the "preventive expansion of DNA data-collection" and the "preemptive use of mere suspicions and of the state apparatus against individuals" and to cancel projects of international exchange of DNA data at the European and transatlantic level.
Israel
The Israeli national DNA database is called the Israel Police DNA Index System (IPDIS), which was established in 2007 and has a collection of more than 135,000 DNA profiles. The collection includes DNA profiles from suspected and accused persons and convicted offenders. The Israeli database also includes an "elimination bank" of profiles from laboratory staff and other police personnel who may come into contact with forensic evidence in the course of their work.
To handle the high-throughput processing and analysis of DNA samples from FTA cards, the Israeli police DNA database has established a semi-automated laboratory information management system (LIMS), which enables a small number of police staff to process a large number of samples in a relatively short period of time and is also responsible for the subsequent tracking of samples.
Kuwait
The Kuwaiti government passed a law in July 2015 requiring all citizens and permanent residents (4.2 million people) to have their DNA taken for a national database. The reason for this law was security concerns after the ISIS suicide bombing of the Imam Sadiq mosque. They planned to finish collecting the DNA by September 2016 which outside observers thought was optimistic. In October 2017 the Kuwait constitutional court struck down the law saying it was an invasion of personal privacy and the project was cancelled.
Brazil
In 1998, the Forensic DNA Research Institute of Federal District Civil Police created DNA databases of sexual assault evidence. In 2012, Brazil approved a national law establishing DNA databases at state and national levels regarding DNA typing of individuals convicted of violent crimes. Following the decree of the Presidency of the Republic of Brazil in 2013, which regulates the 2012 law, Brazil began using CODIS in addition to the DNA databases of sexual assault evidence to solve sexual assault crimes in Brazil.
France
France set up the DNA database called FNAEG in 1998. By December 2009, there were 1.27 million profiles on FNAEG.
Russia
In Russia, DNA testing is being actively carried out to study the genetic diversity of the country's peoples within the framework of a state programme whose aim is to determine, from DNA, a person's probable territory of origin based on data covering the majority of the country's peoples. On June 16, 2017, the Council of Ministers of the Union State of Belarus and Russia adopted Resolution No. 26, which approved the scientific and technical programme of the Union State "Development of innovative genogeographic and genomic technologies for identification of personality and individual characteristics of a person based on the study of gene pools of the regions of the Union State" (DNA identification).
Within the framework of this program, it is also planned to include the peoples of neighboring countries, which are the main source of migration, into the genogeographic study on the basis of existing collections.
In accordance with Federal Law No. 242-FZ of December 3, 2008, "On state genomic registration in the Russian Federation", voluntary state genomic registration of citizens of the Russian Federation, as well as of foreign citizens and stateless persons living or temporarily staying in the territory of the Russian Federation, is carried out on the basis of a written application and on a paid basis. Genomic information obtained as a result of state genomic registration is used, among other things, for the purpose of establishing family relationships of wanted (or identified) persons. Records of the genomic registration of citizens are kept in the Federal Genomic Information Database (FBDGI).
Articles 10 and 11 of the Federal Law of July 27, 2006 No. 152-FZ "On Personal Data" provide that the processing of special categories of personal data relating to race, nationality, political views, religious or philosophical beliefs, health status, intimate life is allowed if it is necessary in connection with the implementation of international agreements of the Russian Federation on readmission and is carried out in accordance with the legislation of the Russian Federation on citizenship of the Russian Federation. Information characterizing the physiological and biological characteristics of a person, on the basis of which it is possible to establish his identity (biometric personal data), can be processed without the consent of the subject of personal data in connection with the implementation of international agreements of the Russian Federation on readmission, administration of justice and execution of judicial acts, compulsory state fingerprinting registration, as well as in cases stipulated by the legislation of the Russian Federation on defense, security, anti-terrorism, transport security, anti-corruption, operational investigative activities, public service, as well as in cases stipulated by the criminal-executive legislation of Russia, the legislation of Russia on the procedure for leaving the Russian Federation and entering the Russian Federation, citizenship of the Russian Federation and notaries.
Other European countries
In comparison with other European countries, the Netherlands is the largest collector of DNA profiles of its citizens. The DNA databank at the Netherlands Forensic Institute currently contains the DNA profiles of over 316,000 Dutch citizens.
Contrary to the situation in most other European countries, the Dutch police have wide-ranging powers to take and retain DNA samples if a subject is convicted of a recordable offence, except when the conviction only involves paying a fine. If a subject refuses, for example because of privacy concerns, the Dutch police will use force.
In Sweden, only the DNA profiles of criminals who have spent more than two years in prison are stored. In Norway and Germany, court orders are required, and are only available, respectively, for serious offenders and for those convicted of certain offences who are likely to reoffend. Austria started a criminal DNA database in 1997 and Italy set one up in 2016. Switzerland started a temporary criminal DNA database in 2000 and confirmed it in law in 2005.
In 2005 the incoming Portuguese government proposed to introduce a DNA database of the entire population of Portugal. However, after informed debate including opinion from the Portuguese Ethics Council the database introduced was of just the criminal population.
Genuity Science (formerly Genomics Medicine Ireland) is an Irish life sciences company that was founded in 2015 to create a scientific platform to perform genomic studies and generate new disease prevention strategies and treatments. The company was founded by a group of life science entrepreneurs, investors and researchers and its scientific platform is based on work by Amgen’s Icelandic subsidiary, deCODE genetics, which has pioneered genomic population health studies. The company is building a genomic database which will include data from about 10 per cent of the Irish population, including patients with various diseases and healthy people. The idea of a private company owning public DNA data has raised concerns, with an Irish Times editorial stating: "To date, Ireland seems to have adopted an entirely commercial approach to genomic medicine. This approach places at risk the free availability of genomic data for scientific research that could benefit patients." The paper's editorial pointed out that this is in stark contrast to the approach the U.K. has taken, which is the publicly and charitably funded 100,000 Genomes Project being carried out by Genomics England.
China
By 2020, Chinese police had collected 80 million DNA profiles. There have been concerns that China may be using DNA data not just for crime solving, but for tracking activists, including Uyghurs.
China has begun a $9 billion program of genetic research; the Fire-Eye network has DNA labs in over 20 countries.
India
India announced it would launch its genomic database by fall 2019. In the first phase of "Genome India", the genomic data of 10,000 Indians will be catalogued. The Department of Biotechnology (DBT) has initiated the project. The first private DNA bank in India is in Lucknow, the capital of the Indian state of Uttar Pradesh. Unlike a research centre, it is open to the public, who can store their DNA by paying a minimal fee and providing four drops of blood.
Corporate
Ancestry was reported to have collected 14 million DNA samples as of November 2018.
23andMe's DNA database contained genetic information of over nine million people worldwide by 2019. The company explores selling the "anonymous aggregated genetic data" to other researchers and pharmaceutical companies for research purposes if patients give their consent. Ahmad Hariri, professor of psychology and neuroscience at Duke University who has been using 23andMe in his research since 2009, states that the most important aspect of the company's new service is that it makes genetic research accessible and relatively cheap for scientists. A study that identified 15 genome sites linked to depression in 23andMe's database led to a surge in demands to access the repository, with 23andMe fielding nearly 20 requests to access the depression data in the two weeks after publication of the paper.
My Heritage said their database had 2.5 million profiles in 2019.
Family Tree DNA was reported to have about two million people in its database in 2019.
Compression
DNA databases occupy more storage than other databases because of the enormous size of each DNA sequence, and they grow exponentially every year. This poses a major challenge to the storage, transfer, retrieval and search of these databases. To address these challenges, DNA databases are compressed to save storage space and bandwidth during data transfers, and decompressed during search and retrieval. Various compression algorithms are used. The efficiency of a compression algorithm depends on how well and how fast it compresses and decompresses, and is generally measured by the compression ratio: the greater the compression ratio, the better the efficiency. The speed of compression and decompression is also considered in evaluation.
DNA sequences contain palindromic repetitions of A, C, T, G. Compression of these sequences involve locating and encoding these repetitions and decoding them during decompression.
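As a minimal illustration of why DNA text compresses well (independent of the specific tools listed below), the following Python sketch packs each base into 2 bits, giving a 4:1 reduction over 8-bit ASCII before any repeat modelling is applied. It is not an implementation of any of the named algorithms.

```python
# Naive 2-bit packing of a DNA string: with only four symbols (A, C, G, T),
# each base needs 2 bits instead of 8 bits per ASCII character. Real DNA
# compressors go further by also encoding repeats.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    nbytes = (2 * len(seq) + 7) // 8   # round up to whole bytes
    return bits.to_bytes(nbytes, "big")

seq = "ACGTACGTGGTTAACC"
packed = pack(seq)
print(len(seq), "bytes as text ->", len(packed), "bytes packed")
```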
Some approaches used to encode and decode are:
Huffman Encoding
Adaptive Huffman Encoding
Arithmetic coding
Context tree weighting (CTW) method
The compression algorithms listed below may use one of the above encoding approaches to compress and decompress DNA databases:
Compression using Redundancy of DNA sets (COMRAD)
Relative Lempel-Ziv (RLZ)
GenCompress
BioCompress
DNACompress
CTW+LZ
In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities.
Genomic sequence compression algorithms, also known as DNA sequence compressors, explore the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. For eukaryotes XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Medicine
Many countries collect newborn blood samples to screen for diseases, mainly those with a genetic basis. Usually the samples are destroyed soon after testing, but in some countries the dried blood (and the DNA it contains) is retained for later testing.
In Denmark the Danish Newborn Screening Biobank at Statens Serum Institut keeps a blood sample from people born after 1981. The purpose is to test for phenylketonuria and other diseases. However, it has also been used for DNA profiling to identify deceased persons and criminal suspects. Parents can request that the blood sample of their newborn be destroyed after the result of the test is known.
Privacy issues
Critics of DNA databases warn that the various uses of the technology can pose a threat to individual civil liberties. Personal information included in genetic material, such as markers that identify various genetic diseases or physical and behavioural traits, could be used for discriminatory profiling, and its collection may constitute an invasion of privacy. DNA can also be used to establish paternity and whether or not a child is adopted. The privacy and security issues of DNA databases have attracted considerable attention. Some people fear that their personal DNA information could be leaked; others feel that having their DNA profile recorded in a database marks them as a "criminal", and being falsely accused of a crime can lead to having a "criminal" record for the rest of their lives.
UK laws in 2001 and 2003 allowed DNA profiles to be taken immediately after a person was arrested and kept in a Database even if the suspect was later acquitted. In response to public unease at these provisions, the UK later changed this by passing the Protection of Freedoms Act 2012 which required that those suspects not charged or found not guilty would have their DNA data deleted from the Database.
In European countries which have established a DNA database, various measures are used to protect the privacy of individuals, in particular criteria for removing DNA profiles from the databases. Among the 22 European countries analysed, most record the DNA profiles of suspects or those who have committed serious crimes. Some countries (such as Belgium and France) remove a criminal's profile after 30–40 years, once it is no longer needed for criminal investigation. Most countries delete a suspect's profile after they are acquitted. All of the countries have legislation in place that largely avoids the privacy issues which may occur during the use of a DNA database. Public discussion around the introduction of advanced forensic techniques (such as genetic genealogy using public genealogy databases and DNA phenotyping approaches) has been limited, disjointed, and unfocused, and raises issues of privacy and consent that may warrant additional legal protections.
Privacy issues surrounding DNA databases not only means privacy is threatened in collecting and analyzing DNA samples, it also exists in protecting and storing this important personal information. As the DNA profiles can be stored indefinitely in DNA database, it has raised concerns that these DNA samples can be used for new and unidentified purposes. With the increase of the users who access the DNA database, people are worried about their information being let out or shared inappropriately, for example, their DNA profile may be shared with others such as law enforcement agencies or countries without individual consent.
The application of DNA databases have been expanded into two controversial areas: arrestees and familial searching. An arrestee is a person arrested for a crime and who has not yet been convicted for that offense. Currently, 21 states in the United States have passed legislation that allows law enforcement to take DNA from an arrestee and enter it into the state's CODIS DNA database to see if that person has a criminal record or can be linked to any unsolved crimes. In familial searching, the DNA database is used to look for partial matches that would be expected between close family members. This technology can be used to link crimes to the family members of suspects and thereby help identify a suspect when the perpetrator has no DNA sample in the database.
Furthermore, DNA databases could fall into the wrong hands due to data breaches or data sharing.
DNA collection and human rights
In a judgement in December 2008, the European Court of Human Rights ruled that two British men should not have had their DNA and fingerprints retained by police saying that retention "could not be regarded as necessary in a democratic society".
The DNA fingerprinting pioneer Professor Sir Alec Jeffreys condemned UK government plans to keep the genetic details of hundreds of thousands of innocent people in England and Wales for up to 12 years. Jeffreys said he was "disappointed" with the proposals, which came after a European court ruled that the current policy breaches people's right to privacy. Jeffreys said "It seems to be about as minimal a response to the European court of human rights judgment as one could conceive. There is a presumption not of innocence but of future guilt here … which I find very disturbing indeed".
Effects on crime
A 2021 study found that registration of Danish criminal offenders in a DNA database substantially reduced the probability of re-offending, as well as increased the likelihood that re-offenders were identified if they committed future crimes.
A 2017 study in the American Economic Journal: Applied Economics showed that databases of criminal offenders' DNA profiles in US states "deter crime by profiled offenders, reduce crime rates, and are more cost-effective than traditional law enforcement tools."
Monozygotic twins
Monozygotic twins share around 99.99% of their DNA, while other siblings share around 50%. Some next generation sequencing tools are capable of detecting rare de novo mutations in only one of the twins (detectable in rare single nucleotide polymorphisms). Most DNA testing tools would not detect these rare SNPs in most twins.
Each person's DNA is unique to them, with the slight exception of identical (monozygotic) twins, who start out from an identical line of DNA but acquire extremely small mutations during the twinning event that can now be detected. For all practical purposes, compared with all other humans (and even with theoretical clones, who would share neither the same uterus nor the same pre-twinning mutations), identical twins have more identical DNA than is probably possible between any other two people. These tiny differences between identical twins can now (2014) be detected by next-generation sequencing. With currently affordable testing, identical twins cannot be easily differentiated by the most common DNA tests, although it has been shown to be possible. While other siblings (including fraternal twins) share about 50% of their DNA, monozygotic twins share virtually 99.99%. Beyond these recently discovered twinning-event mutations, it has been known since 2008 that identical twins also each have their own set of copy number variants, which can be thought of as the number of copies each person carries of certain sections of DNA.
See also
Combined DNA Index System (CODIS)
DNA profiling
Forensic Science Service
Government databases
LGC Forensics
UK National DNA Database
References
Biological databases
Government databases
Privacy
Forensic databases
Forensic genetics | DNA database | [
"Biology"
] | 6,003 | [
"Bioinformatics",
"Biological databases"
] |
12,110,323 | https://en.wikipedia.org/wiki/Flood%20control%20channel | Flood control channels are large and empty basins where surface water can flow through but is not retained (except during flooding), or dry channels that run below the street levels of some larger cities, so that if a flash flood occurs the excess water can drain out along these channels into a river or other bodies of water. Flood channels are sometimes built on the former courses of natural waterways as a way to reduce flooding.
Channelization of this sort was commonly done in the 1960s, but is now often being undone, with "rechannelization" through meandering, vegetated, porous paths. This is because channelizing the flow in a concrete chute often made flooding worse.
Water levels during a flood tend to rise, then fall, exponentially. The peak flood level occurs as a very steep, short spike; a quick spurt of water. Anything that slows the surface runoff (marshes, meanders, vegetation, porous materials, turbulent flow, the river spreading over a floodplain) will slow some of the flow more than other parts, spreading the flow over time and blunting the spike. Even slightly blunting the spike significantly decreases the peak flood level. Generally, the higher the peak flood level, the more flood damage is done. Straight, clear, smooth concrete-walled channels speed up flow, and are therefore likely to make flooding downstream worse. Modern flood control seeks to "slow the flow", and deliberately flood some low-lying areas, ideally vegetated, to act as sponges, letting them drain again as the floodwaters go down.
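The effect described above can be illustrated with a simple linear-reservoir routing sketch. The Python example below uses invented numbers and a deliberately crude model; it only demonstrates that adding storage (a larger routing constant) spreads the same inflow over time and lowers the peak outflow.

```python
# Illustrative sketch (made-up numbers): routing the same inflow spike through
# a "fast" and a "slow" linear reservoir. Slowing the flow spreads it over
# time and blunts the peak.

def route(inflow, k, dt=1.0):
    """Simple linear-reservoir routing: storage S, outflow Q = S / k."""
    storage, outflow = 0.0, []
    for q_in in inflow:
        storage += (q_in - storage / k) * dt
        outflow.append(storage / k)
    return outflow

inflow = [0, 5, 40, 100, 60, 20, 5, 0, 0, 0, 0, 0]   # a short, sharp spike

fast = route(inflow, k=1.0)   # smooth concrete channel: little storage
slow = route(inflow, k=5.0)   # vegetated, meandering path: more storage

print("peak inflow              :", max(inflow))
print("peak outflow, fast channel:", round(max(fast), 1))
print("peak outflow, slow channel:", round(max(slow), 1))
```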
Levees
Flood control channels are not to be confused with watercourses which are simply confined between levees. These structures may be made entirely of concrete, with concrete sides and an exposed bottom, with riprap sides and an exposed bottom, or completely unlined. They often contain grade control sills or weirs to prevent erosion and maintain a level streambed.
Distribution
By definition, flood control channels range from the size of a street gutter to a few hundred or even a few thousand feet wide in some rare cases. Flood control channels are found in most heavily developed areas in the world. One city with many of these channels is Los Angeles, as they became mandatory with the passage of the Flood Control Act of 1941 passed in the wake of the Los Angeles Flood of 1938.
See also
Nullah
Drop structure
Urban runoff
Weir
Levee
References
External links
LA River Flood Control Channel
Guadalupe River Flood Control Channel
Flood control
Rivers | Flood control channel | [
"Chemistry",
"Engineering"
] | 509 | [
"Flood control",
"Environmental engineering"
] |
12,111,508 | https://en.wikipedia.org/wiki/Type%20V%20collagen | Type V collagen is a form of fibrillar collagen associated with classical Ehlers-Danlos syndrome. It is found within the dermal/epidermal junction, placental tissues, as well as in association with tissues containing type I collagen.
Type V collagen is part of the family of collagen proteins, which consists of collagen types I through XXVIII. Collagen proteins are associated with the strengthening and support of many tissues, including skin, bones, muscles, and ligaments. Some studies suggest that type V collagen guides the formation of other collagen fibrils in different tissues within the body; according to these studies, collagen V regulates the heterotypic fiber diameter. Type V collagen is considered a regulatory fibril-forming collagen. Collagen V is associated with the COL5A1 gene, which provides the instructions to produce collagen V. Like other collagens, type V collagen is made up of procollagen molecules.
Collagen V molecular isoforms are α1(V)α2(V)α3(V), α1(V)3, and α1(V)2α2(V). These procollagen molecules are made up of three α-polypeptide chains: α1(V), α2(V), and α3(V). Different combinations of these chains form the type V collagen isoforms. Procollagen molecules then form mature collagen with the help of enzymes. After the chains are formed, they arrange into thin fibrils, which then assemble with type I collagen fibrils.
Type V collagen is a part of the Extracellular Matrix (ECM). Collagen V is gene expression modulated by TGF-β. Type V collagen has shown that it is resistant to digestion by interstitial collagenases. Denatured collagen V on the other hand, can be degraded by gelatinases as well as metalloproteinases.
Alternative names
Type V collagen has a few alternative names. These include: type V collagen preproprotein, CO5A1_HUMAN, and collagen type V alpha 1. Type V collagen can also be abbreviated to COLV or collagen V.
Diseases associated with type V collagen
Some studies show that a mutation in the gene that codes for type V collagen is the cause of a form of Ehlers-Danlos syndrome. Ehlers-Danlos syndrome, classical type, is the result of mutations of the COL5A1 or COL5A2 gene, both of which code for type V collagen. This form of the syndrome is associated with hypermobility and with scarring and elasticity of the skin and other tissues. Researchers discovered that it is caused by mutations that produce fewer of the three chains that make up type V collagen. Over 100 mutations of the COL5A1 gene have been identified. These mutations result in underproduction of pro-α1(V) chains, so that type V collagen fibrils are not fully developed and are disorganized, producing the various symptoms of Ehlers-Danlos syndrome.
Health
Studies show that collagen V plays several other roles in different parts of the body; these roles can be both beneficial and harmful.
Beneficial roles that type V collagen plays in the body are:
Neoepitopes of type V collagen have been shown to be a useful noninvasive serum biomarker for assessing fibrotic progression and resolution in experimental hepatic fibrosis.
The type V collagen isoform containing the α3(V) chain is involved in mediating pancreatic islet cell functions.
Type V collagen arranges with type I collagen to form heterotypic fibrils in the skin dermis and cornea. Together, collagen V and collagen I act as a dominant regulator of collagen fibrillogenesis.
Type V collagen interacts with matrix collagens and structural proteins, an interaction that improves the structural integrity of tissue scaffolds.
Harmful roles that type V collagen can play in the body include:
A type V collagen deficiency has been associated with loss of corneal transparency and classic Ehlers-Danlos syndrome.
Studies have shown that overexpression of type V collagen can lead to harmful responses in the body. Collagen V overexpression has been found in cancer, granulation tissue, inflammation and atherosclerosis. It is also linked to fibrosis of the lungs, skin, kidneys, adipose tissue, and liver.
Increases in type V collagen are associated with both early and advanced hepatic fibrosis.
Studies show that increased synthesis of abnormal type V collagen is linked to the pathogenesis of systemic sclerosis.
Autoimmunity against type V collagen is associated with lung transplant failure.
Genes
COL5A1, COL5A2, COL5A3
References
External links
Collagens | Type V collagen | [
"Chemistry"
] | 1,131 | [
"Biochemistry stubs",
"Protein stubs"
] |
12,114,623 | https://en.wikipedia.org/wiki/Petrol%20interceptor | A petrol interceptor is a trap used to filter out hydrocarbon pollutants from rainwater runoff. It is typically used in road construction and on Petrol Station forecourts to prevent fuel contamination of streams carrying away the runoff.
Petrol interceptors work on the premise that some hydrocarbons such as petroleum and diesel float on the top of water. The contaminated water enters the interceptor typically after flowing off roads or forecourts and entering a channel drain before being deposited into the first tank inside the interceptor. The first tank builds up a layer of the hydrocarbon as well as other scum. Typically petrol interceptors have 3 separate tanks each connected with a dip pipe. As more liquid enters the interceptor the water enters into the second tank leaving the majority of the hydrocarbon behind as it cannot enter the dip pipe, whose opening into the second tank is below the surface of the water. However some of the contaminants may by chance enter the second tank. This second tank will not build up as much of the hydrocarbon on its surface. As before, the water is pushed into the third tank, by fluid dynamics, as more water enters the second. The third tank should be practically clear of any hydrocarbon floating on its surface. As a precaution, the outlet pipe is also a dip pipe. When the water leaves the third tank via the outlet pipe it should be contaminant free.
References
Water filters | Petrol interceptor | [
"Chemistry",
"Engineering"
] | 288 | [
"Water filters",
"Water treatment",
"Filters",
"Civil engineering",
"Civil engineering stubs"
] |
12,115,708 | https://en.wikipedia.org/wiki/Quality%20%28physics%29 | In response theory, the quality of an excited system is related to the number of excitation frequencies to which it can respond. In the case of a homogeneous, isotropic system, the quality is proportional to the FWHM.
This sense of the phrase is the precursor of the usage of the word in music theory. In music theory, quality is the number of harmonics of a fundamental frequency of an instrument (the higher the quality, the richer the sound).
See also
Q factor
Physical quantities | Quality (physics) | [
"Physics",
"Mathematics"
] | 104 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
12,117,291 | https://en.wikipedia.org/wiki/Maintenance%20engineering | Maintenance Engineering is the discipline and profession of applying engineering concepts for the optimization of equipment, procedures, and departmental budgets to achieve better maintainability, reliability, and availability of equipment.
Maintenance, and hence maintenance engineering, is increasing in importance due to rising amounts of equipment, systems, machinery and infrastructure. Since the Industrial Revolution, devices, equipment, machinery and structures have grown increasingly complex, requiring a host of personnel, vocations and related systems to maintain them. Prior to 2006, the United States spent approximately US$300 billion annually on plant maintenance and operations alone. The purpose of maintenance is to ensure that a unit is fit for purpose, with maximum availability at minimum cost. A person practicing maintenance engineering is known as a maintenance engineer.
Maintenance engineer's description
A maintenance engineer should possess significant knowledge of statistics, probability, and logistics, and in the fundamentals of the operation of the equipment and machinery he or she is responsible for. A maintenance engineer should also possess high interpersonal, communication, and management skills, as well as the ability to make decisions quickly.
Typical responsibilities include:
Assure optimization of the maintenance organization structure
Analysis of repetitive equipment failures
Estimation of maintenance costs and evaluation of alternatives
Forecasting of spare parts
Assessing the needs for equipment replacements and establish replacement programs when due
Application of scheduling and project management principles to replacement programs
Assessing required maintenance tools and skills required for efficient maintenance of equipment
Assessing required skills for maintenance personnel
Reviewing personnel transfers to and from maintenance organizations
Assessing and reporting safety hazards associated with maintenance of equipment
Maintenance engineering education
Institutions across the world have recognised the need for maintenance engineering. Maintenance engineers usually hold a degree in mechanical engineering, industrial engineering, or another engineering discipline. In recent years specialised bachelor's and master's courses have developed. The bachelor degree program in maintenance engineering at the German-Jordanian University in Amman addresses this need, as does the master's program in maintenance engineering at Luleå University of Technology. With an increased demand for Chartered Engineers, the University of Central Lancashire in the United Kingdom has developed an MSc in maintenance engineering, currently under accreditation with the Institution of Engineering and Technology, and a top-up Bachelor of Engineering with honours degree for technicians holding a Higher National Diploma and seeking progression in their professional career.
See also
Aircraft maintenance engineering
Asset management
Auto mechanic
Civil engineer
Computerized maintenance management system
Computer repair technician
Electrician
Electrical Technologist
Industrial Engineering
Marine fuel management
Mechanic
Millwright (machinery maintenance)
Maintenance, repair and operations (MRO)
Reliability centered maintenance (RCM)
Reliability engineering
Preventive maintenance
Product lifecycle management
Stationary engineer
Total productive maintenance (TPM)
Six Sigma for maintenance
Associations
INFORMS
Institute of Industrial Engineers
References
School of Applied Technical Sciences - Maintenance Engineering
Industrial engineering
Industrial occupations
Maintenance
Engineering disciplines
Engineering occupations
Reliability engineering
Product lifecycle management
Mechanical engineering | Maintenance engineering | [
"Physics",
"Engineering"
] | 548 | [
"Systems engineering",
"Applied and interdisciplinary physics",
"Reliability engineering",
"Industrial engineering",
"Maintenance",
"Mechanical engineering",
"nan"
] |
12,118,054 | https://en.wikipedia.org/wiki/Laplace%20expansion%20%28potential%29 | In physics, the Laplace expansion of potentials that are directly proportional to the inverse of the distance (), such as Newton's gravitational potential or Coulomb's electrostatic potential, expresses them in terms of the spherical Legendre polynomials. In quantum mechanical calculations on atoms the expansion is used in the evaluation of integrals of the inter-electronic repulsion.
Formulation
The Laplace expansion is in fact the expansion of the inverse distance between two points. Let the points have position vectors $\mathbf{r}$ and $\mathbf{r}'$; then the Laplace expansion is

$$\frac{1}{|\mathbf{r}-\mathbf{r}'|} = \sum_{\ell=0}^{\infty} \frac{4\pi}{2\ell+1} \frac{r_<^{\,\ell}}{r_>^{\,\ell+1}} \sum_{m=-\ell}^{\ell} (-1)^m\, Y_\ell^{-m}(\theta,\varphi)\, Y_\ell^{m}(\theta',\varphi').$$

Here $\mathbf{r}$ has the spherical polar coordinates $(r, \theta, \varphi)$ and $\mathbf{r}'$ has $(r', \theta', \varphi')$. Further, $r_<$ is min(r, r′) and $r_>$ is max(r, r′). The function $Y_\ell^m$ is a normalized spherical harmonic function. The expansion takes a simpler form when written in terms of solid harmonics (the regular solid harmonics being homogeneous polynomials of degree $\ell$ in the Cartesian coordinates):

$$\frac{1}{|\mathbf{r}-\mathbf{r}'|} = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} (-1)^m\, I_\ell^{-m}(\mathbf{r})\, R_\ell^{m}(\mathbf{r}'), \qquad r' < r,$$

where $R_\ell^{m}$ and $I_\ell^{m}$ are the regular and irregular solid harmonics, respectively.
Derivation
The derivation of this expansion is simple. By the law of cosines,

$$|\mathbf{r}-\mathbf{r}'| = \sqrt{r^2 + r'^2 - 2 r r' \cos\gamma} = r_> \sqrt{1 + h^2 - 2h\cos\gamma}, \qquad h \equiv \frac{r_<}{r_>},$$

where $\gamma$ is the angle between the two position vectors. We find here the generating function of the Legendre polynomials $P_\ell(\cos\gamma)$:

$$\frac{1}{\sqrt{1 + h^2 - 2h\cos\gamma}} = \sum_{\ell=0}^{\infty} h^{\ell}\, P_{\ell}(\cos\gamma).$$

Use of the spherical harmonic addition theorem

$$P_{\ell}(\cos\gamma) = \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} (-1)^m\, Y_\ell^{-m}(\theta,\varphi)\, Y_\ell^{m}(\theta',\varphi')$$
gives the desired result.
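The intermediate form of the expansion, $1/|\mathbf{r}-\mathbf{r}'| = \sum_\ell (r_<^{\,\ell}/r_>^{\,\ell+1})\, P_\ell(\cos\gamma)$, can be checked numerically. The following Python sketch, using SciPy's Legendre polynomials and two arbitrary example points, is purely illustrative:

```python
# Numerical sanity check of 1/|r - r'| = sum_l (r_<^l / r_>^{l+1}) P_l(cos gamma),
# the form obtained just before applying the addition theorem.

import numpy as np
from scipy.special import eval_legendre

r, r_prime = np.array([1.0, 0.2, 0.5]), np.array([0.1, 0.4, -0.3])
norm_r, norm_rp = np.linalg.norm(r), np.linalg.norm(r_prime)
cos_gamma = np.dot(r, r_prime) / (norm_r * norm_rp)
r_less, r_greater = min(norm_r, norm_rp), max(norm_r, norm_rp)

series = sum((r_less**l / r_greater**(l + 1)) * eval_legendre(l, cos_gamma)
             for l in range(60))
exact = 1.0 / np.linalg.norm(r - r_prime)
print(series, exact)   # the two values agree to many decimal places
```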
Neumann expansion
A similar equation has been derived by Carl Gottfried Neumann that allows expression of the inverse distance $1/|\mathbf{r}-\mathbf{r}'|$ in prolate spheroidal coordinates as a series:
where $P_\ell^m$ and $Q_\ell^m$ are associated Legendre functions of the first and second kind, respectively, defined such that they are real for arguments greater than one. In analogy to the spherical coordinate case above, the relative sizes of the radial coordinates are important, with $\xi_< = \min(\xi_1, \xi_2)$ and $\xi_> = \max(\xi_1, \xi_2)$.
References
Potential theory
Atomic physics
Rotational symmetry | Laplace expansion (potential) | [
"Physics",
"Chemistry",
"Mathematics"
] | 293 | [
"Functions and mappings",
"Mathematical objects",
"Quantum mechanics",
"Potential theory",
"Rotational symmetry",
"Mathematical relations",
"Atomic physics",
" molecular",
"Atomic",
"Symmetry",
" and optical physics"
] |
13,680,698 | https://en.wikipedia.org/wiki/Mass%20attenuation%20coefficient | The mass attenuation coefficient, or mass narrow beam attenuation coefficient of a material is the attenuation coefficient normalized by the density of the material; that is, the attenuation per unit mass (rather than per unit of distance). Thus, it characterizes how easily a mass of material can be penetrated by a beam of light, sound, particles, or other energy or matter. In addition to visible light, mass attenuation coefficients can be defined for other electromagnetic radiation (such as X-rays), sound, or any other beam that can be attenuated. The SI unit of mass attenuation coefficient is the square metre per kilogram (). Other common units include cm2/g (the most common unit for X-ray mass attenuation coefficients) and L⋅g−1⋅cm−1 (sometimes used in solution chemistry). Mass extinction coefficient is an old term for this quantity.
The mass attenuation coefficient can be thought of as a variant of absorption cross section where the effective area is defined per unit mass instead of per particle.
Mathematical definitions
The mass attenuation coefficient is defined as

$$\frac{\mu}{\rho_m},$$

where
μ is the attenuation coefficient (linear attenuation coefficient);
ρm is the mass density.
When using the mass attenuation coefficient, the Beer–Lambert law is written in the alternative form

$$I = I_0 \exp\!\left(-\frac{\mu}{\rho_m}\,\lambda\right),$$

where
$\lambda = \rho_m \ell$ is the area density, known also as mass thickness, and $\ell$ is the length over which the attenuation takes place.
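As a worked example of this mass-thickness form, the short Python calculation below uses illustrative values only (they are not tabulated data for any particular material or photon energy):

```python
# Transmission through a slab using I = I0 * exp(-(mu/rho_m) * lambda),
# where lambda = rho_m * l is the mass thickness. All numbers are illustrative.

import math

mass_atten = 0.2     # mass attenuation coefficient, cm^2/g (illustrative value)
density = 2.7        # g/cm^3 (roughly the density of aluminium)
thickness = 1.0      # cm

area_density = density * thickness               # mass thickness, g/cm^2
transmission = math.exp(-mass_atten * area_density)
print(f"transmitted fraction I/I0 = {transmission:.3f}")
```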
Mass absorption and scattering coefficients
When a narrow (collimated) beam passes through a volume, the beam will lose intensity to two processes: absorption and scattering.
The mass absorption coefficient and mass scattering coefficient are defined as

$$\frac{\mu_a}{\rho_m} \quad\text{and}\quad \frac{\mu_s}{\rho_m},$$

where
μa is the absorption coefficient;
μs is the scattering coefficient.
In solutions
In chemistry, mass attenuation coefficients are often used for a chemical species dissolved in a solution. In that case, the mass attenuation coefficient is defined by the same equation, except that the "density" is the density of only that one chemical species, and the "attenuation" is the attenuation due to only that one chemical species. The actual attenuation coefficient is computed by

$$\mu = \sum_i \left(\frac{\mu}{\rho_m}\right)_{\!i} \rho_i ,$$

where each term in the sum is the mass attenuation coefficient and density of a different component of the solution (the solvent must also be included). This is a convenient concept because the mass attenuation coefficient of a species is approximately independent of its concentration (as long as certain assumptions are fulfilled).
A closely related concept is molar absorptivity. They are quantitatively related by
(mass attenuation coefficient) × (molar mass) = (molar absorptivity).
X-rays
Tables of photon mass attenuation coefficients are essential in radiological physics, radiography (for medical and security purposes), dosimetry, diffraction, interferometry, crystallography, and other branches of physics. The photons can be in form of X-rays, gamma rays, and bremsstrahlung.
The values of mass attenuation coefficients, based on proper values of photon cross section, are dependent upon the absorption and scattering of the incident radiation caused by several different mechanisms such as
Rayleigh scattering (coherent scattering);
Compton scattering (incoherent scattering);
photoelectric absorption;
pair production, electron-positron production in the fields of the nucleus and atomic electrons.
The actual values have been thoroughly examined and are available to the general public through three databases run by National Institute of Standards and Technology (NIST):
XAAMDI database;
XCOM database;
FFAST database.
Calculating the composition of a solution
If several known chemicals are dissolved in a single solution, the concentrations of each can be calculated using a light-absorption analysis. First, the mass attenuation coefficients of each individual solute or solvent, ideally across a broad spectrum of wavelengths, must be measured or looked up. Second, the attenuation coefficient of the actual solution must be measured. Finally, using the formula

$$\mu = \left(\frac{\mu}{\rho_m}\right)_{\!1}\rho_1 + \left(\frac{\mu}{\rho_m}\right)_{\!2}\rho_2 + \cdots,$$

the spectrum can be fitted using ρ1, ρ2, … as adjustable parameters, since μ and each $(\mu/\rho_m)_i$ are functions of wavelength. If there are N solutes or solvents, this procedure requires at least N measured wavelengths to create a solvable system of simultaneous equations, although using more wavelengths gives more reliable data.
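A minimal sketch of this fitting procedure is shown below in Python; the mass attenuation coefficients and measured values are invented for illustration, and a real analysis would use tabulated coefficients and more wavelengths.

```python
# Solve for component densities by least squares from attenuation measured at
# several wavelengths. All numerical values are made up for illustration.

import numpy as np

# Rows: wavelengths; columns: components (solvent + two solutes), in cm^2/g.
mass_atten = np.array([
    [0.05, 1.20, 0.30],
    [0.06, 0.80, 0.90],
    [0.04, 0.40, 1.50],
    [0.05, 0.20, 2.00],
])

measured_mu = np.array([0.116, 0.118, 0.090, 0.100])   # measured attenuation, 1/cm

densities, *_ = np.linalg.lstsq(mass_atten, measured_mu, rcond=None)
print("estimated component densities (g/cm^3):", densities)
# with these invented inputs the fit recovers roughly [1.0, 0.05, 0.02]
```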
See also
Absorption coefficient
Absorption cross section
Attenuation length
Attenuation
Beer–Lambert law
Cargo scanning
Compton edge
Compton scattering
Cross section
High-energy X-rays
Mean free path
Molar attenuation coefficient
Propagation constant
Radiation length
Scattering theory
Transmittance
References
Physical quantities
Radiometry
Mass-specific quantities | Mass attenuation coefficient | [
"Physics",
"Mathematics",
"Engineering"
] | 945 | [
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Mass",
"Intensive quantities",
"Mass-specific quantities",
"Physical properties",
"Matter",
"Radiometry"
] |
13,683,785 | https://en.wikipedia.org/wiki/Top-down%20proteomics | Top-down proteomics is a method of protein identification that either uses an ion trapping mass spectrometer to store an isolated protein ion for mass measurement and tandem mass spectrometry (MS/MS) analysis or other protein purification methods such as two-dimensional gel electrophoresis in conjunction with MS/MS. Top-down proteomics is capable of identifying and quantitating unique proteoforms through the analysis of intact proteins. The name is derived from the similar approach to DNA sequencing. During mass spectrometry intact proteins are typically ionized by electrospray ionization and trapped in a Fourier transform ion cyclotron resonance (Penning trap), quadrupole ion trap (Paul trap) or Orbitrap mass spectrometer. Fragmentation for tandem mass spectrometry is accomplished by electron-capture dissociation or electron-transfer dissociation. Effective fractionation is critical for sample handling before mass-spectrometry-based proteomics. Proteome analysis routinely involves digesting intact proteins followed by inferred protein identification using mass spectrometry (MS). Top-down MS (non-gel) proteomics interrogates protein structure through measurement of an intact mass followed by direct ion dissociation in the gas phase.
Advantages
The main advantages of the top-down approach include the ability to detect degradation products, protein isoforms, sequence variants, combinations of post-translational modifications as well as simplified processes for data normalization and quantitation.
Top-down proteomics, when accompanied by polyacrylamide gel electrophoresis, can help to complement the bottom-up proteomic approach. Top-down proteomic methods can assist in exposing large deviations from predictions and have been very successfully pursued by combining Gel Elution Liquid-based Fractionation Entrapment Electrophoresis fractionation, protein precipitation, and reverse-phase HPLC with electrospray ionization and MS/MS.
Characterization of small proteins represents a significant challenge for bottom up proteomics due to the inability to generate sufficient tryptic peptides for analysis. Top-down proteomics allows for low mass protein detection, thus increasing the repertoire of proteins known. While Bottom-up proteomics integrates cleaved products from all proteoforms produced by a gene into a single peptide map of the full-length gene product to tabulate and quantify expressed proteins, a major strength of Top-down proteomics is that it enables researchers to quantitatively track one or more proteoforms from multiple samples and to excise these proteoforms for chemical analysis.
Disadvantages
In the recent past, the top down approach was relegated to analysis of individual proteins or simple mixtures, while complex mixtures and proteins were analyzed by more established methods such as Bottom-up proteomics. Additionally protein identification and proteoform characterization in the TDP (Top-down proteomics) approach can suffer from a dynamic range challenge where the same highly abundant species are repeatedly fragmented.
Although top-down proteomics can be operated at relatively high throughput in order to map proteome coverage on a large scale, the rate at which new proteins are identified drops quite sharply after the initial rounds.
Top-down proteomics interrogation can overcome problems for identifying individual proteins, but has not been achieved on a large scale due to a lack of intact protein fractionation methods that are integrated with tandem mass spectrometry.
Research and uses
Study One: Quantitation and Identification of Thousands of Human Proteoforms below 30 kDa
Researchers performed a study of human proteoforms below 30 kDa, using primary IMR90 human fibroblasts containing a Ras function construct that were grown in culture medium.
They chose top-down proteomics to characterize these proteoforms because it is currently the best method for intact proteins; as discussed above, the bottom-up approach digests the protein and does not provide a clear picture of distinct intact proteoforms.
Top-down proteomics is capable of identifying and quantitating unique proteoforms through the analysis of intact proteins. The top-down quantitation yielded changes in abundance of 1,038 cytoplasmic proteoforms.
Study Two: Combining high-throughput MALDI-TOF mass spectrometry and isoelectric focusing gel electrophoresis for virtual 2D gel-based proteomics
Researchers used top-down proteomics because it could identify the exact proteoforms of intact proteins, whereas the bottom-up approach gives fragment ions of peptides.
This study used virtual 2D gel analysis along with mass spectrometry to separate protein mixtures. MALDI-TOF mass spectrometry was used to generate the intact masses of the proteins at each isoelectric point, starting from an image of an IPG-IEF (isoelectric focusing) gel selection that was then analyzed by MALDI.
Top-down proteomics MALDI-TOF/TOF-MS is more tolerant to impurities; does not require biomarker extraction, purification, and separation; and can be directly applied to intact microorganisms.
See also
Protein mass spectrometry
Bottom-up proteomics
Shotgun proteomics
Tandem mass spectrometry (MS/MS)
References
Bibliography
Mass spectrometry
Proteomics | Top-down proteomics | [
"Physics",
"Chemistry"
] | 1,121 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
13,685,099 | https://en.wikipedia.org/wiki/Similitude%20of%20ship%20models |
Manned models
Many research workers, hydraulics specialists and engineers have used scale models for over a century, in particular in towing tanks. Manned models are small scale models that can carry and be handled by at least one person on an open expanse of water. They must behave just like real ships, giving the shiphandler the same sensations. Physical conditions such as wind, currents, waves, water depths, channels and berths must be reproduced realistically.
Manned models are used for research (e.g. ship behaviour), engineering (e.g. port layout) and for training in shiphandling (e.g. maritime pilots, masters and officers). They are usually at 1:25 scale.
Similitude of manned models
Worldwide, manned model schools have chosen to apply the similitude law of William Froude (1810-1879) to their manned models. This means that gravity is considered to be preponderant over the other forces acting on the hull (viscosity, capillarity, cavitation, compressibility, etc.).
The different aspects of similitude may thus be defined as follows:
Physical similitude
Similitude of shape: The model has exactly the same geometric shape as the real ship. This means that all the length (L) dimensions of the real ship are divided by the same factor, the scale factor. The designers of Port Revel chose a scale (S) of 1:25, so:
S(L) = 25 (smaller, hence distance is 25 times less)
In this similitude, the proportions are kept (the ratios between the various dimensions of the ship are identical). This is also the case with the block coefficient. Furthermore, the angles are a length ratio, so they are also identical to the original ones. The scale factors of the areas and volumes are deduced from this, i.e.:
S^2(L) = 25^2 = 625
S^3(L) = 25^3 = 15 625
Similitude of mass (M): The model used for shiphandling training must not only resemble the original but also move in the same way as the original when subjected to similar forces. Consequently, the scale factor for the mass (M) and displacement is the same as that for the volumes, i.e.:
S(M) = S^3(L) = 25^3 = 15 625
Similitude of forces (F): If the external forces on the model are in similitude, like the shapes, masses and inertia, the model's movement will be in similitude. It can thus be shown that the forces (F) must be at the same scale as the masses and weights, so:
S(F) = S(M) = 25^3 = 15 625
Similitude of speed (V): In agreement with Froude's law, the velocity scale is the square root of the length scale, so:
S(V) = S^(1/2)(L) = √25 = 5 (times slower than in real life)
Similitude of time (T): Time is a distance (L) over speed (V), so:
S(T) = S(L) / S(V) = S^(1/2)(L) = √25 = 5 (times faster than in real life)
Similitude of power (P): As the power P = F x V, hence S(P) = S(F) x S(V), so:
S(P) = S^3(L) × S^(1/2)(L) = S^(7/2)(L) = 25^(7/2) = 78 125
In conclusion, by choosing a scale of 1:25 for the lengths and by complying with Froude's law, the engineers at Sogreah – Port Revel built models 25 times smaller, operating 5 times more slowly, but as the distances are 25 times less, things occur 5 times faster.
The ships are 78 125 times less powerful.
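The scale factors above follow directly from the length scale under Froude similitude. The following Python helper simply reproduces the 1:25 figures quoted in this section:

```python
# Scale factors implied by Froude similitude for a given length scale factor.

def froude_scale_factors(s_length: float) -> dict:
    return {
        "length": s_length,          # distances are s_length times smaller
        "area":   s_length ** 2,
        "volume": s_length ** 3,
        "mass":   s_length ** 3,     # same scale as volume/displacement
        "force":  s_length ** 3,
        "speed":  s_length ** 0.5,   # Froude's law: V scales with sqrt(L)
        "time":   s_length ** 0.5,   # T = L / V
        "power":  s_length ** 3.5,   # P = F x V
    }

for name, value in froude_scale_factors(25).items():
    print(f"{name:>6}: {value:g}")
# length 25, area 625, volume/mass/force 15625, speed/time 5, power 78125
```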
Similitude of manoeuvres
While the models must be in correct similitude, this is not enough. Other factors can affect the correct reproduction of the manoeuvres, such as the field of vision, on-board equipment and wind.
First, manoeuvres on a model require the same pilot's orders as those on a real ship. The only difference is that they are executed five times faster on the model, so there is no time to discuss them (in fact, the rate of operation is such that the captain and helmsman swap roles every hour to avoid fatigue). This encourages responses to become intuitive but based upon a pre-assessed but flexible plan. What is a crisis on Day 1 of a manned model course becomes routine by Days 3+ which has to be a good definition of training.
The captain's position gives him a true field of vision from the bridge. He gives his orders to the helmsman, who is seated in front of him and operates the wheel and engine.
Control panels show the usual information (engine speed, rudder angle, heading, log, wind speed and direction, shackles of chain lowered). This information is shown in real-life values to help the trainee forget as far as possible that he is on a scale model.
The ships are fitted with bow and stern thrusters and perfectly operational anchors. They behave like real ships from this point of view as well.
Tugs are under the captain's orders via remote control, and are handled by a real tug captain.
As far as the wind is concerned, it should be recalled that as the speed scale factor is 1 in 5, a wind of 10 knots on the lake is equivalent to a 50 knot squall in reality. Ripples on the surface of the water and the movement of leaves on the trees are therefore unreliable indicators. The wind and ship speeds displayed on the control panel are therefore very important for trainees. However, the lake is situated in a forest in a region with little wind, so that uncontrollable wind effects are minimised.
Forty years' experience has shown that students quickly learn how to control the models just as they do the real ships that they are used to manoeuvring.
Manned model exercises promote good situational and spatial awareness, a lack of which contributes to most accidents and incidents.
Those who have trained on both claim that scale models are complementary to electronic simulators. While manoeuvres with currents, waves, tugs, anchors, bank effects, etc. are reproduced more accurately on scale models, numerical simulators are more realistic when it comes to the bridge environment.
References
External links
Port Revel website
AFCAN website
Marine-Marchande.net website
Model boats
Hydraulic engineering | Similitude of ship models | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,395 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
5,707,971 | https://en.wikipedia.org/wiki/Weierstrass%20product%20inequality | In mathematics, the Weierstrass product inequality states that for any real numbers 0 ≤ x1, ..., xn ≤ 1 we have
(1 − x1)(1 − x2) ··· (1 − xn) ≥ 1 − Sn,
and similarly, for 0 ≤ x1, ..., xn,
(1 + x1)(1 + x2) ··· (1 + xn) ≥ 1 + Sn,
where
Sn = x1 + x2 + ··· + xn.
The inequality is named after the German mathematician Karl Weierstrass.
Proof
The inequality with the subtractions can be proven easily via mathematical induction. The one with the additions is proven identically. We can choose n = 1 as the base case and see that for this value of n we get
1 − x1 ≥ 1 − x1,
which is indeed true. Assuming now that the inequality holds for all natural numbers up to n, for n + 1 we have:
(1 − x1) ··· (1 − xn)(1 − xn+1) ≥ (1 − Sn)(1 − xn+1) = 1 − Sn − xn+1 + Sn xn+1 ≥ 1 − Sn − xn+1 = 1 − Sn+1,
which concludes the proof.
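As a quick illustration (not part of the original article), the following Python snippet checks both inequalities numerically on random samples; the small tolerance constant is only there to absorb floating-point rounding.

```python
# Numerical sanity check of both Weierstrass product inequalities.
import random
from math import prod

random.seed(0)
for _ in range(10_000):
    n = random.randint(1, 10)
    x = [random.random() for _ in range(n)]          # 0 <= x_i <= 1
    s = sum(x)
    assert prod(1 - xi for xi in x) >= 1 - s - 1e-12
    assert prod(1 + xi for xi in x) >= 1 + s - 1e-12
print("both inequalities held on all random samples")
```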
References
Inequalities | Weierstrass product inequality | [
"Mathematics"
] | 138 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
5,710,861 | https://en.wikipedia.org/wiki/Cartan%E2%80%93Kuranishi%20prolongation%20theorem | Given an exterior differential system defined on a manifold M, the Cartan–Kuranishi prolongation theorem says that after a finite number of prolongations the system is either in involution (admits at least one 'large' integral manifold), or is impossible.
History
The theorem is named after Élie Cartan and Masatake Kuranishi. Cartan made several attempts in 1946 to prove the result, but it was in 1957 that Kuranishi provided a proof of Cartan's conjecture.
Applications
This theorem is used in infinite-dimensional Lie theory.
See also
Cartan-Kähler theorem
References
M. Kuranishi, On É. Cartan's prolongation theorem of exterior differential systems, Amer. J. Math., vol. 79, 1957, p. 1–47
Partial differential equations
Theorems in analysis | Cartan–Kuranishi prolongation theorem | [
"Mathematics"
] | 177 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
4,282,921 | https://en.wikipedia.org/wiki/Monochloramine | Monochloramine, often called chloramine, is the chemical compound with the formula NH2Cl. Together with dichloramine (NHCl2) and nitrogen trichloride (NCl3), it is one of the three chloramines of ammonia. It is a colorless liquid at its melting point, but it is usually handled as a dilute aqueous solution, in which form it is sometimes used as a disinfectant. Chloramine is too unstable to have its boiling point measured.
Water treatment
Chloramine is used as a disinfectant for water. It is less aggressive than chlorine and more stable against light than hypochlorites.
Drinking water disinfection
Chloramine is commonly used in low concentrations as a secondary disinfectant in municipal water distribution systems as an alternative to chlorination. This application is increasing. Chlorine (referred to in water treatment as free chlorine) is being displaced by chloramine—to be specific, monochloramine—which is much less reactive and does not dissipate as rapidly as free chlorine. Chloramine also has a much lower, but still active, tendency than free chlorine to convert organic materials into chlorocarbons such as chloroform and carbon tetrachloride. Such compounds have been identified as carcinogens and in 1979 the United States Environmental Protection Agency (EPA) began regulating their levels in US drinking water.
Some of the unregulated byproducts may possibly pose greater health risks than the regulated chemicals.
Due to its acidic nature, adding chloramine to the water supply may increase exposure to lead in drinking water, especially in areas with older housing; this exposure can result in increased lead levels in the bloodstream, which may pose a significant health risk. Fortunately, water treatment plants can add caustic chemicals at the plant which have the dual purpose of reducing the corrosivity of the water, and stabilizing the disinfectant.
Swimming pool disinfection
In swimming pools, chloramines are formed by the reaction of free chlorine with amine groups present in organic substances, mainly those biological in origin (e.g., urea in sweat and urine). Chloramines, compared to free chlorine, are both less effective as a sanitizer and, if not managed correctly, more irritating to the eyes of swimmers. Chloramines are responsible for the distinctive "chlorine" smell of swimming pools, which is often misattributed to elemental chlorine by the public. Some pool test kits designed for use by homeowners do not distinguish free chlorine and chloramines, which can be misleading and lead to non-optimal levels of chloramines in the pool water.
There is also evidence that exposure to chloramine can contribute to respiratory problems, including asthma, among swimmers. Respiratory problems related to chloramine exposure are common and prevalent among competitive swimmers.
Though chloramine's distinctive smell has been described by some as pleasant and even nostalgic, its formation in pool water as a result of bodily fluids being exposed to chlorine can be minimised by encouraging showering and other hygiene methods prior to entering the pool, as well as refraining from swimming while suffering from digestive illnesses and taking breaks to use the bathroom, instead of simply urinating in the pool.
Safety
US EPA drinking water quality standards limit chloramine concentration for public water systems to 4 parts per million (ppm) based on a running annual average of all samples in the distribution system. In order to meet EPA-regulated limits on halogenated disinfection by-products, many utilities are switching from chlorination to chloramination. While chloramination produces fewer regulated total halogenated disinfection by-products, it can produce greater concentrations of unregulated iodinated disinfection byproducts and N-nitrosodimethylamine. Both iodinated disinfection by-products and N-nitrosodimethylamine have been shown to be genotoxic, causing damage to the genetic information within a cell resulting in mutations which may lead to cancer.
Another newly-identified byproduct of chloramine is chloronitramide anions, whose toxicity has not yet been determined.
Lead poisoning incidents
In the year 2000, Washington, DC, switched from chlorine to monochloramine, causing lead to leach from unreplaced pipes. The number of babies with elevated blood lead levels rose about tenfold, and by one estimate fetal deaths rose between 32% and 63%.
Trenton, Missouri made the same switch, causing about one quarter of tested households to exceed EPA drinking water lead limits in the period from 2017 to 2019. 20 children tested positive for lead poisoning in 2016 alone. In 2023, Virginia Tech Professor Marc Edwards said lead spikes occur in several water utility system switchovers per year, due to lack of sufficient training and lack of removal of lead pipes. Lack of utility awareness that lead pipes are still in use is also part of the problem; the EPA has required all water utilities in the United States to prepare a complete lead pipe inventory by October 16, 2024.
Synthesis and chemical reactions
Chloramine is a highly unstable compound in concentrated form. Pure chloramine decomposes violently above . Gaseous chloramine at low pressures and low concentrations of chloramine in aqueous solution are thermally slightly more stable. Chloramine is readily soluble in water and ether, but less soluble in chloroform and carbon tetrachloride.
Production
In dilute aqueous solution, chloramine is prepared by the reaction of ammonia with sodium hypochlorite:
NH3 + NaOCl → NH2Cl + NaOH
This reaction is also the first step of the Olin Raschig process for hydrazine synthesis. The reaction has to be carried out in a slightly alkaline medium (pH 8.5–11). The acting chlorinating agent in this reaction is hypochlorous acid (HOCl), which has to be generated by protonation of hypochlorite, and then reacts in a nucleophilic substitution of the hydroxyl against the amino group. The reaction occurs quickest at around pH 8. At higher pH values the concentration of hypochlorous acid is lower; at lower pH values ammonia is protonated to form ammonium ions (NH4+), which do not react further.
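To illustrate the pH dependence described above, the Python sketch below estimates how the fractions of un-ionized HOCl and NH3 vary with pH. It is an added illustration, not part of the article: the pKa values are assumed textbook figures, and the product of the two fractions is only a rough proxy for the reaction rate.

```python
# Why chloramine formation is fastest near pH 8: the reaction needs both
# HOCl (not OCl-) and NH3 (not NH4+).
import numpy as np

PKA_HOCL = 7.5    # HOCl <-> OCl- + H+   (assumed literature value)
PKA_NH4 = 9.25    # NH4+ <-> NH3 + H+    (assumed literature value)

pH = np.linspace(5, 12, 200)
frac_hocl = 1.0 / (1.0 + 10.0 ** (pH - PKA_HOCL))   # fraction of chlorine present as HOCl
frac_nh3 = 1.0 / (1.0 + 10.0 ** (PKA_NH4 - pH))     # fraction of ammonia present as NH3
rate_proxy = frac_hocl * frac_nh3                    # both reactants are needed

print(f"proxy maximised near pH {pH[rate_proxy.argmax()]:.1f}")
# The maximum falls between the two pKa values, broadly consistent with the
# article's statement that the reaction is quickest at around pH 8.
```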
The chloramine solution can be concentrated by vacuum distillation and by passing the vapor through potassium carbonate which absorbs the water. Chloramine can be extracted with ether.
Gaseous chloramine can be obtained from the reaction of gaseous ammonia with chlorine gas (diluted with nitrogen gas):
2 NH3 + Cl2 → NH2Cl + NH4Cl
Pure chloramine can be prepared by passing fluoroamine through calcium chloride:
2 NH2F + CaCl2 → 2 NH2Cl + CaF2
Decomposition
The covalent N−Cl bonds of chloramines are readily hydrolyzed with release of hypochlorous acid:
RR′NCl + H2O ⇌ RR′NH + HOCl
The quantitative hydrolysis constant (K value) is used to express the bactericidal power of chloramines, which depends on their generating hypochlorous acid in water. It is generally in the range 10⁻⁴ to 10⁻¹⁰ and is given by:
K = [HOCl][RR′NH] / [RR′NCl]
In aqueous solution, chloramine slowly decomposes to dinitrogen and ammonium chloride in a neutral or mildly alkaline (pH ≤ 11) medium:
3 NH2Cl → N2 + NH4Cl + 2 HCl
However, only a few percent of a 0.1 M chloramine solution in water decomposes according to the formula in several weeks. At pH values above 11, the following reaction with hydroxide ions slowly occurs:
3 NH2Cl + 3 OH− → NH3 + N2 + 3 Cl− + 3 H2O
In an acidic medium at pH values of around 4, chloramine disproportionates to form dichloramine, which in turn disproportionates again at pH values below 3 to form nitrogen trichloride:
2 NH2Cl + H+ ⇌ NHCl2 + NH4+
3 NHCl2 + H+ ⇌ 2 NCl3 + NH4+
At low pH values, nitrogen trichloride dominates and at pH 3–5 dichloramine dominates. These equilibria are disturbed by the irreversible decomposition of both compounds:
NHCl2 + NCl3 + 2 H2O → N2 + 3 HCl + 2 HOCl
Reactions
In water, chloramine is pH-neutral. It is an oxidizing agent in both acidic and basic solution:
NH2Cl + 2 H+ + 2 e− → NH4+ + Cl−
Reactions of chloramine include radical, nucleophilic, and electrophilic substitution of chlorine, electrophilic substitution of hydrogen, and oxidative additions.
Chloramine can, like hypochlorous acid, donate positively charged chlorine in reactions with nucleophiles (Nu−):
Nu− + NH3Cl+ → NuCl + NH3
Examples of chlorination reactions include transformations to dichloramine and nitrogen trichloride in acidic medium, as described in the decomposition section.
Chloramine may also aminate nucleophiles (electrophilic amination):
Nu− + NH2Cl → NuNH2 + Cl−
The amination of ammonia with chloramine to form hydrazine is an example of this mechanism seen in the Olin Raschig process:
NH2Cl + NH3 + NaOH → N2H4 + NaCl + H2O
Chloramine electrophilically aminates itself in neutral and alkaline media to start its decomposition:
2 NH2Cl → N2H3Cl + HCl
The chlorohydrazine (N2H3Cl) formed during self-decomposition is unstable and decomposes itself, which leads to the net decomposition reaction:
3 NH2Cl → N2 + NH4Cl + 2 HCl
Monochloramine oxidizes sulfhydryls and disulfides in the same manner as hypochlorous acid, but only possesses 0.4% of the biocidal effect of HClO.
See also
Disinfection
Disinfection by-products
Water treatment
Pathogen
Chloramines
References
External links
"Chlorinated drinking water", IARC Monograph (1991)
EPA Maximum Contaminant Levels
WebBook page for NH2Cl
Chlorine and chloramines in the freshwater aquarium
Drinking water
Inorganic amines
Inorganic chlorine compounds
Nitrogen halides
Water treatment | Monochloramine | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,298 | [
"Inorganic compounds",
"Water treatment",
"Water pollution",
"Inorganic chlorine compounds",
"Environmental engineering",
"Water technology"
] |
4,283,705 | https://en.wikipedia.org/wiki/W3C%20Software%20Notice%20and%20License | The W3C Software Notice and License is a permissive free software license used by software released by the World Wide Web Consortium, like Amaya. The license is a permissive license, compatible with the GNU General Public License.
Software using the License
Arena
Amaya
Libwww
Line Mode Browser
See also
Free software portal
Software using the W3C Software Notice and License (category)
World Wide Web Consortium
References
External links
Text of the license
Free and open-source software licenses | W3C Software Notice and License | [
"Engineering"
] | 99 | [
"Software engineering",
"Software engineering stubs"
] |
4,287,778 | https://en.wikipedia.org/wiki/Hydrometeorology | Hydrometeorology is a branch of meteorology and hydrology that studies the transfer of water and energy between the land surface and the lower atmosphere for academic research, commercial gain or operational forecasting purposes.
Whilst traditionally meteorologists and hydrologists sit within separate organisations, hydrometeorologists may work in joint project teams, virtual teams, deal with specific incidents or be permanently co-located to deliver specific objectives. Hydrometeorologists typically have a foundation in one or other discipline before undertaking additional training and specialist forecaster training depending on requirements. The crossover of skills and knowledge between the two disciplines can bring organisational benefits, such as more efficient use of the tools and data available, and can provide enhanced lead times ahead of hydrometeorological hazards occurring.
UNESCO has several programs and activities in place that deal with the study of natural hazards of hydrometeorological origin and the mitigation of their effects. Among these hazards are the results of natural processes and atmospheric, hydrological, or oceanographic phenomena such as floods, tropical cyclones, drought, and desertification. Many countries have established an operational hydrometeorological capability to assist with forecasting, warning, and informing the public of these developing hazards.
Hydrometeorological forecasting
One of the more significant aspects of hydrometeorology involves predictions about and attempts to mitigate the effects of high precipitation events. There are three primary ways to model meteorological phenomena in weather forecasting, including nowcasting, numerical weather prediction, and statistical techniques. Nowcasting is good for predicting events a few hours out, utilizing observations and live radar data to combine them with numerical weather prediction models. The primary technique used to forecast weather, numerical weather prediction uses mathematical models to account for the atmosphere, ocean, and many other variables when producing forecasts. These forecasts are generally used to predict events days or weeks out. Finally, statistical techniques use regressions and other statistical methods to create long-term projections that go out weeks and months at a time. These models allow scientists to visualize how a multitude of different variables interact with one another, and they illustrate one grand picture of how the Earth's climate interacts with itself.
Risk assessment
A major component of hydrometeorology is mitigating the risk associated with flooding and other hydrological threats. First, there has to be knowledge of the possible hydrological threats that are expected within a specific region. After analyzing the possible threats, warning systems are put in place to quickly alert people and communicate to them the identity and magnitude of the threat. Many nations have their own specific regional hydrometeorological centers that communicate threats to the public. Finally, there must be proper response protocols in place to protect the public during a dangerous event.
Operational hydrometeorology in practice
Countries with a current operational hydrometeorological service include, among others:
Australia (Bureau of Meteorology)
Brazil (National Center for Natural Disaster Monitoring and Alerts)
Canada (Environment Canada)
England and Wales (Flood Forecasting Centre)
France
Germany
India
Scotland (Flood Forecasting Service)
Serbia (Republic Hydrometeorological Service of Serbia)
Russia (Hydrometeorological Centre of Russia)
United States (Hydrometeorological Prediction Center, known as the Weather Prediction Center since 2013)
References
External links
World Meteorological Organization – List of national hydrological and hydrometeorological services
Hydrology
Branches of meteorology
Oceanography | Hydrometeorology | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 705 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Environmental engineering"
] |
3,146,707 | https://en.wikipedia.org/wiki/Local-density%20approximation | Local-density approximations (LDA) are a class of approximations to the exchange–correlation (XC) energy functional in density functional theory (DFT) that depend solely upon the value of the electronic density at each point in space (and not, for example, derivatives of the density or the Kohn–Sham orbitals). Many approaches can yield local approximations to the XC energy. However, overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation, which are then applied to realistic systems (molecules and solids).
In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as
Exc[ρ] = ∫ ρ(r) єxc(ρ(r)) dr,
where ρ is the electronic density and єxc is the exchange-correlation energy per particle of a homogeneous electron gas of charge density ρ. The exchange-correlation energy is decomposed into exchange and correlation terms linearly,
Exc = Ex + Ec,
so that separate expressions for Ex and Ec are sought. The exchange term takes on a simple analytic form for the HEG. Only limiting expressions for the correlation density are known exactly, leading to numerous different approximations for єc.
Local-density approximations are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradient approximations (GGA) or hybrid functionals, as a desirable property of any approximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDA's are often an explicit component of such functionals.
The local-density approximation was first introduced by Walter Kohn and Lu Jeu Sham in 1965.
Applications
Local density approximations, as with GGAs are employed extensively by solid state physicists in ab-initio DFT studies to interpret electronic and magnetic interactions in semiconductor materials including semiconducting oxides and spintronics. The importance of these computational studies stems from the system complexities which bring about high sensitivity to synthesis parameters necessitating first-principles based analysis. The prediction of Fermi level and band structure in doped semiconducting oxides is often carried out using LDA incorporated into simulation packages such as CASTEP and DMol3. However an underestimation in Band gap values often associated with LDA and GGA approximations may lead to false predictions of impurity mediated conductivity and/or carrier mediated magnetism in such systems. Starting in 1998, the application of the Rayleigh theorem for eigenvalues has led to mostly accurate, calculated band gaps of materials, using LDA potentials. A misunderstanding of the second theorem of DFT appears to explain most of the underestimation of band gap by LDA and GGA calculations, as explained in the description of density functional theory, in connection with the statements of the two theorems of DFT.
Homogeneous electron gas
Approximations for єxc depending only upon the density can be developed in numerous ways. The most successful approach is based on the homogeneous electron gas. This is constructed by placing N interacting electrons into a volume, V, with a positive background charge keeping the system neutral. N and V are then taken to infinity in the manner that keeps the density (ρ = N / V) finite. This is a useful approximation as the total energy consists of contributions only from the kinetic energy, electrostatic interaction energy and exchange-correlation energy, and the wavefunction is expressible in terms of planewaves. In particular, for a constant density ρ, the exchange energy density is proportional to ρ⅓.
Exchange functional
The exchange-energy density of a HEG is known analytically. The LDA for exchange employs this expression under the approximation that the exchange energy in a system where the density is not homogeneous is obtained by applying the HEG results pointwise, yielding the expression
Ex[ρ] = −(3/4) (3/π)^(1/3) ∫ ρ(r)^(4/3) d³r
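As an illustration (not part of the original article), the following Python sketch evaluates this pointwise LDA exchange expression on a radial grid for a toy exponential density; the hydrogen-like trial density is chosen purely for convenience and the grid parameters are arbitrary.

```python
# Numerical evaluation of E_x = -(3/4)(3/pi)^(1/3) * integral of rho^(4/3)
# for a spherically symmetric toy density (atomic units throughout).
import numpy as np

C_X = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)   # LDA exchange constant

def lda_exchange_energy(rho, r):
    """Radially integrate the exchange energy density 4*pi*r^2 * C_X * rho^(4/3)."""
    integrand = 4.0 * np.pi * r ** 2 * C_X * rho ** (4.0 / 3.0)
    return np.sum(integrand) * (r[1] - r[0])          # simple rectangle rule

r = np.linspace(1e-6, 20.0, 20_000)
rho = np.exp(-2.0 * r) / np.pi        # hydrogen 1s density as a toy example
print(f"E_x(LDA) ~ {lda_exchange_energy(rho, r):.4f} Ha")   # about -0.21 Ha
```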
Correlation functional
Analytic expressions for the correlation energy of the HEG are available in the high- and low-density limits corresponding to infinitely-weak and infinitely-strong correlation. For a HEG with density ρ, the high-density limit of the correlation energy density is
and the low limit
where the Wigner-Seitz parameter rs is dimensionless. It is defined as the radius of a sphere which encompasses exactly one electron, divided by the Bohr radius a0. The Wigner-Seitz parameter is related to the density as
(4/3) π (rs a0)³ = 1 / ρ
An analytical expression for the full range of densities has been proposed based on the many-body perturbation theory. The calculated correlation energies are in agreement with the results from quantum Monte Carlo simulation to within 2 milli-Hartree.
Accurate quantum Monte Carlo simulations for the energy of the HEG have been performed for several intermediate values of the density, in turn providing accurate values of the correlation energy density.
Spin polarization
The extension of density functionals to spin-polarized systems is straightforward for exchange, where the exact spin-scaling is known, but for correlation further approximations must be employed. A spin polarized system in DFT employs two spin-densities, ρα and ρβ with ρ = ρα + ρβ, and the form of the local-spin-density approximation (LSDA) is
Exc[ρα, ρβ] = ∫ ρ(r) єxc(ρα(r), ρβ(r)) dr
For the exchange energy, the exact result (not just for local density approximations) is known in terms of the spin-unpolarized functional:
Ex[ρα, ρβ] = ( Ex[2ρα] + Ex[2ρβ] ) / 2
The spin-dependence of the correlation energy density is approached by introducing the relative spin-polarization:
ς = (ρα − ρβ) / (ρα + ρβ)
ς = 0 corresponds to the diamagnetic spin-unpolarized situation with equal α
and β spin densities, whereas ς = ±1 corresponds to the ferromagnetic situation where one spin density vanishes. The spin correlation energy density for given values of the total density and relative polarization, єc(ρ,ς), is constructed so as to interpolate between the extreme values. Several forms have been developed in conjunction with LDA correlation functionals.
Exchange-correlation potential
The exchange-correlation potential corresponding to the exchange-correlation energy for a local density approximation is given by
vxc(r) = δExc/δρ(r) = єxc(ρ(r)) + ρ(r) ∂єxc(ρ(r))/∂ρ
In finite systems, the LDA potential decays asymptotically with an exponential form. This result is in error; the true exchange-correlation potential decays much more slowly, in a Coulombic manner. The artificially rapid decay manifests itself in the number of Kohn–Sham orbitals the potential can bind (that is, how many orbitals have energy less than zero). The LDA potential can not support a Rydberg series and those states it does bind are too high in energy. This results in the highest occupied molecular orbital (HOMO) energy being too high, so that any predictions for the ionization potential based on Koopmans' theorem are poor. Further, the LDA provides a poor description of electron-rich species such as anions where it is often unable to bind an additional electron, erroneously predicting such species to be unstable.
References
Density functional theory | Local-density approximation | [
"Physics",
"Chemistry"
] | 1,470 | [
"Density functional theory",
"Quantum chemistry",
"Quantum mechanics"
] |
3,147,062 | https://en.wikipedia.org/wiki/Schur%20polynomial | In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials.
Definition (Jacobi's bialternant formula)
Schur polynomials are indexed by integer partitions. Given a partition λ = (λ1, λ2, ..., λn),
where λ1 ≥ λ2 ≥ ... ≥ λn, and each λj is a non-negative integer, the functions
a(λ1+n−1, λ2+n−2, ..., λn+0)(x1, x2, ..., xn) = det[ xi^(λj+n−j) ]_(i,j=1..n)
are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables.
Since they are alternating, they are all divisible by the Vandermonde determinant
a(n−1, n−2, ..., 0)(x1, x2, ..., xn) = det[ xi^(n−j) ]_(i,j=1..n) = ∏_(1 ≤ j < k ≤ n) (xj − xk).
The Schur polynomials are defined as the ratio
sλ(x1, x2, ..., xn) = a(λ1+n−1, λ2+n−2, ..., λn+0)(x1, x2, ..., xn) / a(n−1, n−2, ..., 0)(x1, x2, ..., xn).
This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula.
This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant.
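As an illustration of the bialternant definition (this sketch is an addition, not part of the original article), the following Python code uses SymPy to form the two determinants and take their ratio for small partitions in three variables; the helper names are our own.

```python
# Schur polynomials in three variables via Jacobi's bialternant formula.
from sympy import symbols, Matrix, factor, cancel

x = symbols('x1 x2 x3')
n = len(x)

def bialternant(mu):
    """det[ x_i^(mu_j) ] for an exponent vector mu of length n."""
    return Matrix(n, n, lambda i, j: x[i] ** mu[j]).det()

def schur(lam):
    """s_lambda(x1, x2, x3) = a_(lambda + delta) / a_delta."""
    lam = list(lam) + [0] * (n - len(lam))          # pad the partition to length n
    delta = list(range(n - 1, -1, -1))              # (n-1, n-2, ..., 0)
    num = bialternant([l + d for l, d in zip(lam, delta)])
    den = bialternant(delta)                        # Vandermonde determinant
    return factor(cancel(num / den))

# The four partitions of 4 into at most three parts:
for lam in [(4,), (3, 1), (2, 2), (2, 1, 1)]:
    print(lam, schur(lam))
```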
Properties
The degree Schur polynomials in variables are a linear basis for the space of homogeneous degree symmetric polynomials in variables.
For a partition λ, the Schur polynomial is a sum of monomials,
sλ(x1, x2, ..., xn) = ΣT x^T = ΣT x1^t1 x2^t2 ··· xn^tn,
where the summation is over all semistandard Young tableaux T of shape λ. The exponents t1, ..., tn give the weight of T; in other words each ti counts the occurrences of the number i in T. This can be shown to be equivalent to the definition from the first Giambelli formula using the Lindström–Gessel–Viennot lemma (as outlined on that page).
Schur polynomials can be expressed as linear combinations of monomial symmetric functions mμ with non-negative integer coefficients Kλμ called Kostka numbers,
sλ = Σμ Kλμ mμ.
The Kostka numbers are given by the number of semi-standard Young tableaux of shape λ and weight μ.
Jacobi−Trudi identities
The first Jacobi−Trudi formula expresses the Schur polynomial as a determinant
sλ = det[ hλi−i+j ]_(i,j=1..ℓ(λ))
in terms of the complete homogeneous symmetric polynomials,
where ℓ(λ) denotes the number of (nonzero) parts of λ and hk is the k-th complete homogeneous symmetric polynomial.
The second Jacobi-Trudi formula expresses the Schur polynomial as
a determinant in terms of the elementary symmetric polynomials,
sλ = det[ eλ′i−i+j ]_(i,j=1..ℓ(λ′)),
where ek is the k-th elementary symmetric polynomial
and λ′ is the conjugate partition to λ.
In both identities, functions with negative subscripts are defined to be zero.
The Giambelli identity
Another determinantal identity is Giambelli's formula, which expresses the Schur function for an arbitrary partition in terms of those for the hook partitions contained within the Young diagram. In Frobenius' notation, the partition is denoted
λ = (a1, ..., ar | b1, ..., br),
where, for each diagonal element in position (i,i), ai denotes the number of boxes to the right in the same row and bi denotes the number of boxes beneath it in the same column (the arm and leg lengths, respectively).
The Giambelli identity expresses the Schur function corresponding to this partition as the determinant
s(a1, ..., ar | b1, ..., br) = det[ s(ai | bj) ]_(i,j=1..r)
of those for hook partitions.
The Cauchy identity
The Cauchy identity for Schur functions (now in infinitely many variables), and its dual, state that
Σλ sλ(x) sλ(y) = ∏i,j 1/(1 − xi yj) = Σλ hλ(x) mλ(y)
and
Σλ sλ(x) sλ′(y) = ∏i,j (1 + xi yj) = Σλ eλ(x) mλ(y),
where the sum is taken over all partitions λ, and hλ, eλ denote the complete homogeneous symmetric functions and elementary symmetric functions, respectively. If the sum is taken over products of Schur polynomials in n variables (x1, ..., xn), the sum includes only partitions of length ℓ(λ) ≤ n since otherwise the Schur polynomials vanish.
There are many generalizations of these identities to other families of symmetric functions.
For example, Macdonald polynomials, Schubert polynomials and Grothendieck polynomials admit Cauchy-like identities.
Further identities
The Schur polynomial can also be computed via a specialization of a formula for Hall–Littlewood polynomials,
where is the subgroup of permutations such that
for all i, and w acts on variables by permuting indices.
The Murnaghan−Nakayama rule
The Murnaghan–Nakayama rule expresses a product of a power-sum symmetric function with a Schur polynomial, in terms of Schur polynomials:
pr · sλ = Σμ (−1)^(ht(μ/λ)+1) sμ,
where the sum is over all partitions μ such that μ/λ is a rim-hook of size r and ht(μ/λ) is the number of rows in the diagram μ/λ.
The Littlewood–Richardson rule and Pieri's formula
The Littlewood–Richardson coefficients depend on three partitions, say λ, μ, ν, of which μ and ν describe the Schur functions being multiplied, and λ gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients cλμν such that
sμ sν = Σλ cλμν sλ.
The Littlewood–Richardson rule states that cλμν is equal to the number of Littlewood–Richardson tableaux of skew shape λ/μ and of weight ν.
Pieri's formula is a special case of the Littlewood-Richardson rule, which expresses the product hr sλ in terms of Schur polynomials. The dual version expresses er sλ in terms of Schur polynomials.
Specializations
Evaluating the Schur polynomial sλ at (1, 1, ..., 1) gives the number of semi-standard Young tableaux of shape λ with entries in 1, 2, ..., n.
One can show, by using the Weyl character formula for example, that
sλ(1, 1, ..., 1) = ∏_(1 ≤ i < j ≤ n) (λi − λj + j − i) / (j − i).
In this formula, λ, the tuple indicating the width of each row of the Young diagram, is implicitly extended with zeros until it has length n. The sum of the elements λi is d.
See also the Hook length formula which computes the same quantity for fixed λ.
Example
The following extended example should help clarify these ideas. Consider the case n = 3, d = 4. Using Ferrers diagrams or some other method, we find that there are just four partitions of 4 into at most three parts. We have
and so on, where is the Vandermonde determinant . Summarizing:
Every homogeneous degree-four symmetric polynomial in three variables can be expressed as a unique linear combination of these four Schur polynomials, and this combination can again be found using a Gröbner basis for an appropriate elimination order. For example,
is obviously a symmetric polynomial which is homogeneous of degree four, and we have
Relation to representation theory
The Schur polynomials occur in the representation theory of the symmetric groups, general linear groups, and unitary groups. The Weyl character formula implies that the Schur polynomials are the characters of finite-dimensional irreducible representations of the general linear groups, and helps to generalize Schur's work to other compact and semisimple Lie groups.
Several expressions arise for this relation, one of the most important being the expansion of the Schur functions sλ in terms of the symmetric power functions pk. If we write χλρ for the character of the representation of the symmetric group indexed by the partition λ evaluated at elements of cycle type indexed by the partition ρ, then
sλ = Σρ χλρ ∏k pk^rk / (k^rk · rk!),
where ρ = (1^r1, 2^r2, 3^r3, ...) means that the partition ρ has rk parts of length k.
A proof of this can be found in R. Stanley's Enumerative Combinatorics Volume 2, Corollary 7.17.5.
The integers χ can be computed using the Murnaghan–Nakayama rule.
Schur positivity
Due to the connection with representation theory, symmetric functions which expand positively in Schur functions are of
particular interest. For example, the skew Schur functions expand positively in the ordinary Schur functions,
and the coefficients are Littlewood–Richardson coefficients.
A special case of this is the expansion of the complete homogeneous symmetric functions hλ in Schur functions.
This decomposition reflects how a permutation module is decomposed into irreducible representations.
Methods for proving Schur positivity
There are several approaches to prove Schur positivity of a given symmetric function F.
If F is described in a combinatorial manner, a direct approach is to produce a bijection with semi-standard Young tableaux.
The Edelman–Greene correspondence and the Robinson–Schensted–Knuth correspondence are examples of such bijections.
A bijection with more structure is a proof using so-called crystals. This method can be described as defining a certain graph structure, given by local rules, on the underlying combinatorial objects.
A similar idea is the notion of dual equivalence. This approach also uses a graph structure, but on the objects representing the expansion in the fundamental quasisymmetric basis. It is closely related to the RSK-correspondence.
Generalizations
Skew Schur functions
Skew Schur functions sλ/μ depend on two partitions λ and μ, and can be defined by the property
⟨ sλ/μ , sν ⟩ = ⟨ sλ , sμ sν ⟩.
Here, the inner product is the Hall inner product, for which the Schur polynomials form an orthonormal basis.
Similar to the ordinary Schur polynomials, there are numerous ways to compute these. The corresponding Jacobi-Trudi identities are
sλ/μ = det[ hλi−μj−i+j ]   and   sλ/μ = det[ eλ′i−μ′j−i+j ].
There is also a combinatorial interpretation of the skew Schur polynomials,
namely as a sum over all semi-standard Young tableaux (or column-strict tableaux) of the skew shape λ/μ.
The skew Schur polynomials expand positively in Schur polynomials. A rule for the coefficients is
given by the Littlewood-Richardson rule.
Double Schur polynomials
The double Schur polynomials can be seen as a generalization of the shifted Schur polynomials.
These polynomials are also closely related to the factorial Schur polynomials.
Given a partition , and a sequence
one can define the double Schur polynomial as
where the sum is taken over all reverse semi-standard Young tableaux of shape , and integer entries
in . Here denotes the value in the box in and is the content of the box.
A combinatorial rule for the Littlewood-Richardson coefficients (depending on the sequence a) was given by A.I Molev. In particular, this implies that the shifted Schur polynomials have non-negative Littlewood-Richardson coefficients.
The shifted Schur polynomials can be obtained from the double Schur polynomials by specializing and .
The double Schur polynomials are special cases of the double Schubert polynomials.
Factorial Schur polynomials
The factorial Schur polynomials may be defined as follows.
Given a partition λ, and a doubly infinite sequence ...,a−1, a0, a1, ...
one can define the factorial Schur polynomial sλ(x|a) as
sλ(x|a) = ΣT ∏(α ∈ λ) ( xT(α) − aT(α)+c(α) ),
where the sum is taken over all semi-standard Young tableaux T of shape λ, and integer entries
in 1, ..., n. Here T(α) denotes the value in the box α in T and c(α) is the content
of the box.
There is also a determinant formula,
where (y|a)k = (y − a1) ... (y − ak). It is clear that if we let ai = 0 for all i,
we recover the usual Schur polynomial sλ.
The double Schur polynomials and the factorial Schur polynomials in n variables are related via the identity
sλ(x||a) = sλ(x|u) where an−i+1 = ui.
Other generalizations
There are numerous generalizations of Schur polynomials:
Hall–Littlewood polynomials
Shifted Schur polynomials
Flagged Schur polynomials
Schubert polynomials
Stanley symmetric functions (also known as stable Schubert polynomials)
Key polynomials (also known as Demazure characters)
Quasi-symmetric Schur polynomials
Row-strict Schur polynomials
Jack polynomials
Modular Schur polynomials
Loop Schur functions
Macdonald polynomials
Schur polynomials for the symplectic and orthogonal group.
k-Schur functions
Grothendieck polynomials (K-theoretical analogue of Schur polynomials)
LLT polynomials
See also
Schur functor
Littlewood–Richardson rule, where one finds some identities involving Schur polynomials.
References
Homogeneous polynomials
Invariant theory
Representation theory of finite groups
Symmetric functions
Orthogonal polynomials
Issai Schur | Schur polynomial | [
"Physics",
"Mathematics"
] | 2,454 | [
"Symmetry",
"Group actions",
"Symmetric functions",
"Invariant theory",
"Algebra"
] |
3,147,292 | https://en.wikipedia.org/wiki/Alacrite | Alacrite (also known as Alloy L-605, Cobalt L-605, Haynes 25, and occasionally F90) is a family of cobalt-based alloys. The alloy exhibits useful mechanical properties and is oxidation- and sulfidation-resistant.
One member of the family, XSH Alacrite, is described as "a non-magnetic, stainless super-alloy whose high surface hardness enables one to achieve a mirror quality polish." The Institut National de Métrologie in France has also used the material as a kilogram mass standard.
Composition and standardization
L-605 is composed primarily of cobalt (Co), with a specified mixture of chromium (Cr), tungsten (W), nickel (Ni), iron (Fe) and carbon (C), as well as small amounts of manganese (Mn), silicon (Si), and phosphorus (P). The tungsten and nickel improve the alloy's machinability, while chromium contributes to its solid-solution strengthening. The following tolerances must be met to be considered an L-605 alloy:
Properties and applications
The alloy was originally developed for application in aircraft, including combustion chambers, liners, afterburners and the hot section of gas turbines. It has also been used in aerospace components and turbine engines as well as drug-eluting and other kinds of stents due to its biocompatibility. When used for implantable medical devices, the ASTM F90-09 and ISO 5832-5:2005 specifications dictate how L-605 is manufactured and tested.
References
Biomaterials
Cobalt alloys | Alacrite | [
"Physics",
"Chemistry",
"Biology"
] | 336 | [
"Biomaterials",
"Alloy stubs",
"Materials",
"Alloys",
"Medical technology",
"Matter",
"Cobalt alloys"
] |
3,147,900 | https://en.wikipedia.org/wiki/Code-mixing | Code-mixing is the mixing of two or more languages or language varieties in speech.
Some scholars use the terms "code-mixing" and "code-switching" interchangeably, especially in studies of syntax, morphology, and other formal aspects of language. Others assume more specific definitions of code-mixing, but these specific definitions may be different in different subfields of linguistics, education theory, communications etc.
Code-mixing is similar to the use or creation of pidgins, but while a pidgin is created across groups that do not share a common language, code-mixing may occur within a multilingual setting where speakers share more than one language.
As code-switching
Some linguists use the terms code-mixing and code-switching more or less interchangeably. Especially in formal studies of syntax, morphology, etc., both terms are used to refer to utterances that draw from elements of two or more grammatical systems. These studies are often interested in the alignment of elements from distinct systems, or on constraints that limit switching.
Some work defines code-mixing as the placing or mixing of various linguistic units (affixes, words, phrases, clauses) from two different grammatical systems within the same sentence and speech context, while code-switching is the placing or mixing of units (words, phrases, sentences) from two codes within the same speech context. The structural difference between code-switching and code-mixing is the position of the altered elements—for code-switching, the modification of the codes occurs intersententially, while for code-mixing, it occurs intrasententially.
In other work the term code-switching emphasizes a multilingual speaker's movement from one grammatical system to another, while the term code-mixing suggests a hybrid form, drawing from distinct grammars. In other words, code-mixing emphasizes the formal aspects of language structures or linguistic competence, while code-switching emphasizes linguistic performance.
While many linguists have worked to describe the difference between code-switching and borrowing of words or phrases, the term code-mixing may be used to encompass both types of language behavior.
In sociolinguistics
While linguists who are primarily interested in the structure or form of code-mixing may have relatively little interest to separate code-mixing from code-switching, some sociolinguists have gone to great lengths to differentiate the two phenomena. For these scholars, code-switching is associated with particular pragmatic effects, discourse functions, or associations with group identity. In this tradition, the terms code-mixing or language alternation are used to describe more stable situations in which multiple languages are used without such pragmatic effects. See also Code-mixing as fused lect, below.
In language acquisition
In studies of bilingual language acquisition, code-mixing refers to a developmental stage during which children mix elements of more than one language. Nearly all bilingual children go through a period in which they move from one language to another without apparent discrimination. This differs from code-switching, which is understood as the socially and grammatically appropriate use of multiple varieties.
Beginning at the babbling stage, young children in bilingual or multilingual environments produce utterances that combine elements of both (or all) of their developing languages. Some linguists suggest that this code-mixing reflects a lack of control or ability to differentiate the languages. Others argue that it is a product of limited vocabulary; very young children may know a word in one language but not in another. More recent studies argue that this early code-mixing is a demonstration of a developing ability to code-switch in socially appropriate ways.
For young bilingual children, code-mixing may be dependent on the linguistic context, cognitive task demands, and interlocutor. Code-mixing may also function to fill gaps in their lexical knowledge. Some forms of code-mixing by young children may indicate risk for language impairment.
In psychology and psycholinguistics
In psychology and in psycholinguistics the label code-mixing is used in theories that draw on studies of language alternation or code-switching to describe the cognitive structures underlying bilingualism. During the 1950s and 1960s, psychologists and linguists treated bilingual speakers as, in Grosjean's terms, "two monolinguals in one person". This "fractional view" supposed that a bilingual speaker carried two separate mental grammars that were more or less identical to the mental grammars of monolinguals and that were ideally kept separate and used separately. Studies since the 1970s, however, have shown that bilinguals regularly combine elements from "separate" languages. These findings have led to studies of code-mixing in psychology and psycholinguistics.
Sridhar and Sridhar define code-mixing as "the transition from using linguistic units (words, phrases, clauses, etc.) of one language to using those of another within a single sentence". They note that this is distinct from code-switching in that it occurs in a single sentence (sometimes known as intrasentential switching) and in that it does not fulfill the pragmatic or discourse-oriented functions described by sociolinguists. (See Code-mixing in sociolinguistics above.) The practice of code-mixing, which draws from competence in two languages at the same time suggests that these competencies are not stored or processed separately. Code-mixing among bilinguals is therefore studied in order to explore the mental structures underlying language abilities.
As fused lect
A mixed language or a fused lect is a relatively stable mixture of two or more languages. What some linguists have described as "codeswitching as unmarked choice" or "frequent codeswitching" has more recently been described as "language mixing", or in the case of the most strictly grammaticalized forms as "fused lects".
In areas where code-switching among two or more languages is very common, it may become normal for words from both languages to be used together in everyday speech. Unlike code-switching, where a switch tends to occur at semantically or sociolinguistically meaningful junctures, this code-mixing has no specific meaning in the local context. A fused lect is identical to a mixed language in terms of semantics and pragmatics, but fused lects allow less variation since they are fully grammaticalized. In other words, there are grammatical structures of the fused lect that determine which source-language elements may occur.
A mixed language is different from a creole language. Creoles are thought to develop from pidgins as they become nativized. Mixed languages develop from situations of code-switching. (See the distinction between code-mixing and pidgin above.)
Local names
There are many names for specific mixed languages or fused lects. These names are often used facetiously or carry a pejorative sense. Named varieties include the following, among others.
Benglish
Bisalog
Bislish
Chinglish
Denglisch
Dunglish
Franglais
Franponais
Greeklish
Hinglish
Hokaglish
Konglish
Manglish
Maltenglish
Poglish
Porglish
Portuñol
Singlish
Spanglish
Svorsk
Tanglish
Taglish
Tenglish
Turklish
Notes
References
Syntax
Linguistic morphology
Education theory
Human communication
Sociolinguistics
Psycholinguistics
Language acquisition | Code-mixing | [
"Biology"
] | 1,482 | [
"Human communication",
"Behavior",
"Human behavior"
] |
3,147,940 | https://en.wikipedia.org/wiki/Quantum%20heterostructure | A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material), where size restricts the movement of the charge carriers, forcing them into quantum confinement. This leads to the formation of a set of discrete energy levels at which the carriers can exist. Quantum heterostructures have a sharper density of states than structures of more conventional sizes.
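As a rough illustration of such discrete levels (not part of the original article), the Python sketch below evaluates the textbook infinite-square-well formula E_n = n²π²ħ²/(2 m* L²) for a thin well; the well width and the GaAs-like effective mass are assumed example values, and a real heterostructure well of finite depth would give somewhat lower energies.

```python
# Discrete confinement energies of an idealised quantum well,
# modelled as an infinite square well of width L.
import numpy as np

HBAR = 1.054571817e-34        # J s
M_E = 9.1093837015e-31        # kg
EV = 1.602176634e-19          # J per eV

def well_levels_eV(width_nm: float, m_eff: float, n_max: int = 3):
    """Return the lowest n_max levels (in eV) for effective mass m_eff * m_e."""
    L = width_nm * 1e-9
    n = np.arange(1, n_max + 1)
    return (n ** 2 * np.pi ** 2 * HBAR ** 2) / (2 * m_eff * M_E * L ** 2) / EV

# 10 nm well with a GaAs-like effective mass m* = 0.067 m_e (assumed values):
for n, e in enumerate(well_levels_eV(10.0, 0.067), start=1):
    print(f"E_{n} = {e * 1000:.1f} meV")
```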
Quantum heterostructures are important for fabrication of short-wavelength light-emitting diodes and diode lasers, and for other optoelectronic applications, e.g. high-efficiency photovoltaic cells.
Examples of quantum heterostructures confining the carriers in quasi-two, -one and -zero dimensions are:
Quantum wells
Quantum wires
Quantum dots
References
See also
http://www.ecse.rpi.edu/~schubert/Light-Emitting-Diodes-dot-org/chap04/chap04.htm
Kitaev's periodic table
Quantum electronics
Nanomaterials
Semiconductor structures | Quantum heterostructure | [
"Physics",
"Materials_science"
] | 217 | [
"Quantum electronics",
"Quantum mechanics",
"Condensed matter physics",
"Nanotechnology",
"Nanomaterials",
"Quantum physics stubs"
] |
8,964,423 | https://en.wikipedia.org/wiki/Transfer-matrix%20method%20%28statistical%20mechanics%29 | In statistical mechanics, the transfer-matrix method is a mathematical technique which is used to write the partition function into a simpler form. It was introduced in 1941 by Hans Kramers and Gregory Wannier. In many one dimensional lattice models, the partition function is first written as an n-fold summation over each possible microstate, and also contains an additional summation of each component's contribution to the energy of the system within each microstate.
Overview
Higher-dimensional models contain even more summations. For systems with more than a few particles, such expressions can quickly become too complex to work out directly, even by computer.
Instead, the partition function can be rewritten in an equivalent way. The basic idea is to write the partition function in the form
Z = v0ᵀ · W1 W2 ··· WN · vN+1,
where v0 and vN+1 are vectors of dimension p and the p × p matrices Wk are the so-called transfer matrices. In some cases, particularly for systems with periodic boundary conditions, the partition function may be written more simply as
Z = tr( W1 W2 ··· WN ),
where "tr" denotes the matrix trace. In either case, the partition function may be solved exactly using eigenanalysis. If the matrices are all the same matrix W, the partition function may be approximated as the Nth power of the largest eigenvalue of W, since the trace is the sum of the eigenvalues and the eigenvalues of W^N are the Nth powers of the eigenvalues of W.
The transfer-matrix method is used when the total system can be broken into a sequence of subsystems that interact only with adjacent subsystems. For example, a three-dimensional cubical lattice of spins in an Ising model can be decomposed into a sequence of two-dimensional planar lattices of spins that interact only adjacently. The dimension p of the p × p transfer matrix equals the number of states the subsystem may have; the transfer matrix itself Wk encodes the statistical weight associated with a particular state of subsystem k − 1 being next to another state of subsystem k.
Importantly, transfer matrix methods make it possible to tackle probabilistic lattice models from an algebraic perspective, allowing for instance the use of results from representation theory.
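As a concrete illustration (an addition, not part of the original article), the Python sketch below builds the 2 × 2 transfer matrix of the one-dimensional Ising model with periodic boundary conditions and compares Z = tr(W^N) with the approximation by the largest eigenvalue; the coupling, field and temperature values are arbitrary.

```python
# Partition function of the 1-D Ising chain via the transfer matrix.
import numpy as np

def ising_transfer_matrix(beta: float, J: float = 1.0, h: float = 0.0) -> np.ndarray:
    """W[s, s'] = exp(beta*(J*s*s' + h*(s + s')/2)) with spins s, s' = +/-1."""
    spins = np.array([1.0, -1.0])
    return np.exp(beta * (J * np.outer(spins, spins)
                          + h * (spins[:, None] + spins[None, :]) / 2.0))

beta, N = 0.5, 20
W = ising_transfer_matrix(beta)
Z_exact = np.trace(np.linalg.matrix_power(W, N))     # Z = tr(W^N)
lam_max = np.linalg.eigvalsh(W).max()                # W is symmetric here
print(Z_exact, lam_max ** N)   # for large N the two values nearly coincide
```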
As an example of observables that can be calculated from this method, the probability of a particular state occurring at position x is given by:
Where is the projection matrix for state , having elements
Transfer-matrix methods have been critical for many exact solutions of problems in statistical mechanics, including the Zimm–Bragg and Lifson–Roig models of the helix-coil transition, transfer matrix models for protein-DNA binding, as well as the famous exact solution of the two-dimensional Ising model by Lars Onsager.
See also
Transfer operator
References
Notes
Statistical mechanics
Mathematical physics
Lattice models | Transfer-matrix method (statistical mechanics) | [
"Physics",
"Materials_science",
"Mathematics"
] | 579 | [
"Applied mathematics",
"Theoretical physics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Mathematical physics"
] |
8,964,614 | https://en.wikipedia.org/wiki/Evaporation%20%28deposition%29 | Evaporation is a common method of thin-film deposition. The source material is evaporated in a vacuum. The vacuum allows vapor particles to travel directly to the target object (substrate), where they condense back to a solid state. Evaporation is used in microfabrication, and to make macro-scale products such as metallized plastic film.
History
Evaporation deposition was first observed in incandescent light bulbs during the late nineteenth century. The problem of bulb blackening was one of the main obstacles to making bulbs with long life, and received a great amount of study by Thomas Edison and his General Electric company, as well as many others working on their own lightbulbs. The phenomenon was first adapted to a process of vacuum deposition by Pohl and Pringsheim in 1912. However, it found little use until the 1930s, when people began experimenting with ways to make aluminum-coated mirrors for use in telescopes. Aluminum was far too reactive to be used in chemical wet deposition or electroplating methods. John D. Strong was successful in making the first aluminum telescope-mirrors in the 1930s using evaporation deposition. Because it produces an amorphous (glassy) coating rather than a crystalline one, with high uniformity and precise control of thickness, thereafter it became a common process for producing thin-film optical coatings from a variety of materials, both metal and non-metal (dielectric), and has been adopted for many other uses, such as coating plastic toys and automobile parts, the production of semiconductors and microchips, and Mylar films with uses ranging from capacitors to spacecraft thermal control.
Physical principle
Evaporation involves two basic processes: a hot source evaporates a material and it condenses on a colder substrate that is below its melting point. It resembles the familiar process by which liquid water appears on the lid of a boiling pot. However, the gaseous environment and heat source (see "Equipment" below) are different. Liquids such as water cannot exist in a vacuum, because they require some level of external pressure to hold the atoms and molecules together. In a vacuum, materials sublimate (vaporize), expand outward, and upon contact with a surface condense back into a solid (deposit) without ever passing through a liquid state. Thus, in comparison to water, the process is more like frost forming on a window.
Evaporation takes place in a vacuum, i.e. vapors other than the source material are almost entirely removed before the process begins. In high vacuum (with a long mean free path), evaporated particles can travel directly to the deposition target without colliding with the background gas. (By contrast, in the boiling pot example, the water vapor pushes the air out of the pot before it can reach the lid.) At a typical pressure of 10⁻⁴ Pa, a 0.4-nm particle has a mean free path of 60 m. Hot objects in the evaporation chamber, such as heating filaments, produce unwanted vapors that limit the quality of the vacuum.
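For illustration (not part of the original article), the Python sketch below evaluates the standard kinetic-theory mean free path for the figures quoted above; the temperature of 300 K is an assumption, since the article does not state one.

```python
# Kinetic-theory mean free path: lambda = k_B * T / (sqrt(2) * pi * d^2 * p).
import math

K_B = 1.380649e-23      # J/K

def mean_free_path(pressure_pa: float, diameter_m: float, temperature_k: float = 300.0) -> float:
    return K_B * temperature_k / (math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa)

# 0.4 nm particle at 10^-4 Pa, assumed room temperature:
print(f"{mean_free_path(1e-4, 0.4e-9):.0f} m")   # on the order of tens of metres
```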
Evaporated atoms that collide with foreign particles may react with them; for instance, if aluminium is deposited in the presence of oxygen, it will form aluminium oxide. They also reduce the amount of vapor that reaches the substrate, which makes the thickness difficult to control.
Evaporated materials deposit nonuniformly if the substrate has a rough surface (as integrated circuits often do). Because the evaporated material attacks the substrate mostly from a single direction, protruding features block the evaporated material from some areas. This phenomenon is called "shadowing" or "step coverage."
When evaporation is performed in poor vacuum or close to atmospheric pressure, the resulting deposition is generally non-uniform and tends not to be a continuous or smooth film. Rather, the deposition will appear fuzzy.
Equipment
Any evaporation system includes a vacuum pump. It also includes an energy source that evaporates the material to be deposited. Many different energy sources exist:
In the thermal method, metal material (in the form of wire, pellets, shot) is fed onto heated semimetal (ceramic) evaporators known as "boats" due to their shape. A pool of melted metal forms in the boat cavity and evaporates into a cloud above the source. Alternatively the source material is placed in a crucible, which is radiatively heated by an electric filament, or the source material may be hung from the filament itself (filament evaporation).
Molecular beam epitaxy is an advanced form of thermal evaporation.
In the electron-beam method, the source is heated by an electron beam with an energy up to 15 keV.
In flash evaporation, a fine wire or powder of source material is fed continuously onto a hot ceramic or metallic bar, and evaporates on contact.
Resistive evaporation is accomplished by passing a large current through a resistive wire or foil containing the material to be deposited. The heating element is often referred to as an "evaporation source". Wire type evaporation sources are made from tungsten wire and can be formed into filaments, baskets, heaters or looped shaped point sources. Boat type evaporation sources are made from tungsten, tantalum, molybdenum or ceramic type materials capable of withstanding high temperatures.
Induction heating evaporation involves the heating of a source material using an induction heater.
Some systems mount the substrate on an out-of-plane planetary mechanism. The mechanism rotates the substrate simultaneously around two axes, to reduce shadowing.
Optimization
Purity of the deposited film depends on the quality of the vacuum, and on the purity of the source material.
At a given vacuum pressure the film purity will be higher at higher deposition rates as this minimises the relative rate of gaseous impurity inclusion.
The thickness of the film will vary due to the geometry of the evaporation chamber. Collisions with residual gases aggravate nonuniformity of thickness.
Wire filaments for evaporation cannot deposit thick films, because the size of the filament limits the amount of material that can be deposited. Evaporation boats and crucibles offer higher volumes for thicker coatings. Thermal evaporation offers faster evaporation rates than sputtering. Flash evaporation and other methods that use crucibles can deposit thick films.
In order to deposit a material, the evaporation system must be able to vaporize it. This makes refractory materials such as tungsten hard to deposit by methods that do not use electron-beam heating.
Electron-beam evaporation allows tight control of the evaporation rate. Thus, an electron-beam system with multiple beams and multiple sources can deposit a chemical compound or composite material of known composition.
Step coverage
Applications
An important example of an evaporative process is the production of aluminized PET packaging film in a roll-to-roll web system. Often, the aluminum layer in this material is not thick enough to be entirely opaque since a thinner layer can be deposited more cheaply than a thick one. The main purpose of the aluminum is to isolate the product from the external environment by creating a barrier to the passage of light, oxygen, or water vapor.
Evaporation is commonly used in microfabrication to deposit metal films.
Comparison to other deposition methods
Alternatives to evaporation, such as sputtering and chemical vapor deposition, have better step coverage. This may be an advantage or disadvantage, depending on the desired result.
Sputtering tends to deposit material more slowly than evaporation.
Sputtering uses a plasma, which produces many high-speed atoms that bombard the substrate and may damage it. Evaporated atoms have a Maxwellian energy distribution, determined by the temperature of the source, which reduces the number of high-speed atoms. However, electron beams tend to produce X-rays (Bremsstrahlung) and stray electrons, each of which can also damage the substrate.
References
Semiconductor Devices: Physics and Technology, by S.M. Sze, , has an especially detailed discussion of film deposition by evaporation.
R. D. Mathis Company Evaporation Sources Catalog, by R. D. Mathis Company, pages 1 through 7 and page 12, 1992.
External links
Thin film evaporation reference - properties of common materials
Web-page of Society of Vacuum Coaters (Society of Vacuum Coaters)
Examples of evaporation sources
Physical vapor deposition techniques
Thin film deposition
Semiconductor device fabrication | Evaporation (deposition) | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,773 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Planes (geometry)",
"Solid state engineering"
] |
8,966,784 | https://en.wikipedia.org/wiki/Relative%20permeability | In multiphase flow in porous media, the relative permeability of a phase is a dimensionless measure of the effective permeability of that phase. It is the ratio of the effective permeability of that phase to the absolute permeability. It can be viewed as an adaptation of Darcy's law to multiphase flow.
For two-phase flow in porous media given steady-state conditions, we can write
where is the flux, is the pressure drop, is the viscosity. The subscript indicates that the parameters are for phase .
is here the phase permeability (i.e., the effective permeability of phase ), as observed through the equation above.
Relative permeability, , for phase is then defined from , as
where is the permeability of the porous medium in single-phase flow, i.e., the absolute permeability. Relative permeability must be between zero and one.
In applications, relative permeability is often represented as a function of water saturation; however, owing to capillary hysteresis one often resorts to a function or curve measured under drainage and another measured under imbibition.
Under this approach, the flow of each phase is inhibited by the presence of the other phases. Thus the sum of relative permeabilities over all phases is less than 1. However, apparent relative permeabilities larger than 1 have been obtained since the Darcean approach disregards the viscous coupling effects derived from momentum transfer between the phases (see assumptions below). This coupling could enhance the flow instead of inhibit it. This has been observed in heavy oil petroleum reservoirs when the gas phase flows as bubbles or patches (disconnected).
Modelling assumptions
The above form for Darcy's law is sometimes also called Darcy's extended law, formulated for horizontal, one-dimensional, immiscible multiphase flow in homogeneous and isotropic porous media. The interactions between the fluids are neglected, so this model assumes that the solid porous medium and the other fluids form a new porous matrix through which a phase can flow. This implies that the fluid-fluid interfaces remain static in steady-state flow, which is not true, but the approximation has proven useful anyway.
Each of the phase saturations must be larger than the irreducible saturation, and each phase is assumed continuous within the porous medium.
Based on data from special core analysis laboratory (SCAL) experiments, simplified models of relative permeability as a function of saturation (e.g. water saturation) can be constructed. This article will focus on an oil-water system.
Saturation scaling
The water saturation is the fraction of the pore volume that is filled with water, and similarly for the oil saturation . Thus, saturations are themselves scaled properties or variables. This gives the constraint
The model functions or correlations for relative permeabilities in an oil-water system are therefore usually written as functions of only water saturation, and this makes it natural to select water saturation as the horizontal axis in graphical presentations. Let (also denoted and sometimes ) be the irreducible (or minimal or connate) water saturation, and let be the residual (minimal) oil saturation after water flooding (imbibition). The flowing water saturation window in a water invasion / injection / imbibition process is bounded by a minimum value and a maximum value . In mathematical terms the flowing saturation window is written as
By scaling the water saturation to the flowing saturation window, we get a (new or another) normalized water saturation value
and a normalized oil saturation value
Endpoints
Let be oil relative permeability, and let be water relative permeability. There are two ways of scaling phase permeability (i.e. effective permeability of the phase). If we scale phase permeability w.r.t. absolute water permeability (i.e. ), we get an endpoint parameter for both oil and water relative permeability. If we scale phase permeability w.r.t. oil permeability with irreducible water saturation present,
endpoint is one, and we are left with only the endpoint parameter. In order to satisfy both options in the mathematical model, it is common to use two endpoint symbols in the model for two-phase relative permeability.
The endpoints / endpoint parameters of oil and water relative permeabilities are
These symbols have their merits and limits. The symbol emphasizes that it represents the top point of . It occurs at irreducible water saturation, and it is the largest value of that can occur for initial water saturation. The competing endpoint symbol occurs in imbibition flow in oil-gas systems. If the permeability basis is oil with irreducible water present, then . The symbol emphasizes that it is occurring at the residual oil saturation. An alternative symbol to is which emphasizes that the reference permeability is oil permeability with irreducible water present.
The oil and water relative permeability models are then written as
The functions and are called normalised relative permeabilities or shape functions for oil and water, respectively. The endpoint parameters and (which is a simplification of ) are physical properties that are obtained either before or together with the optimization of shape parameters present in the shape functions.
There are often many symbols in articles that discuss relative permeability models and modelling. A number of busy core analysts, reservoir engineers and scientists often skip using tedious and time-consuming subscripts, and write e.g. Krow instead of or or krow or oil relative permeability. A variety of symbols are therefore to be expected, and accepted as long as they are explained or defined.
The effects that slip or no-slip boundary conditions in pore flow have on endpoint parameters, are discussed by Berg et alios.
Corey-model
An often used approximation of relative permeability is the Corey correlation
which is a power law in saturation. The Corey correlations of the relative permeability for oil and water are then
If the permeability basis is normal oil with irreducible water present, then .
The empirical parameters and are called curve shape parameters or simply shape parameters, and they can be obtained from measured data either by analytical interpretation of measured data, or by optimization using a core flow numerical simulator to match the experiment (often called history matching). is sometimes appropriate. The physical properties and are obtained either before or together with the optimizing of and .
In the case of gas-water or gas-oil systems, there are Corey correlations similar to the oil-water relative permeability correlations shown above.
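A minimal sketch of the Corey correlations above, including the saturation normalization from the previous section (function names and parameter values are illustrative, not taken from any reference):

def normalized_sw(sw, swir, sorw):
    """Normalized water saturation Swn on the flowing window [Swir, 1 - Sorw]."""
    return (sw - swir) / (1.0 - swir - sorw)

def corey_krw(sw, swir, sorw, krw_end, nw):
    """Corey water relative permeability: krw = krw_end * Swn**Nw."""
    return krw_end * normalized_sw(sw, swir, sorw) ** nw

def corey_kro(sw, swir, sorw, kro_end, no):
    """Corey oil relative permeability: kro = kro_end * (1 - Swn)**No."""
    return kro_end * (1.0 - normalized_sw(sw, swir, sorw)) ** no

# Hypothetical example parameters: Swir = 0.2, Sorw = 0.3, endpoints 0.3 and 1.0, Nw = No = 2
for sw in (0.2, 0.35, 0.5, 0.7):
    print(sw, corey_krw(sw, 0.2, 0.3, 0.3, 2.0), corey_kro(sw, 0.2, 0.3, 1.0, 2.0))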
LET-model
The Corey-correlation or Corey model has only one degree of freedom for the shape of each relative permeability curve, the shape parameter N.
The LET-correlation
adds more degrees of freedom in order to accommodate the shape of relative permeability curves in SCAL experiments and in 3D reservoir models that are adjusted to match historic production. These adjustments frequently include relative permeability curves and endpoints.
The LET-type approximation is described by 3 parameters L, E, T. The correlation for water and oil relative permeability with water injection is thus
and
written using the same normalization as for Corey.
Only , , , and have direct physical meaning, while the parameters L, E and T are empirical. The parameter L describes the lower part of the curve, and by similarity and experience the L-values are comparable to the appropriate Corey parameter. The parameter T describes the upper part (or the top part) of the curve in a similar way that the L-parameter describes the lower part of the curve. The parameter E describes the position of the slope (or the elevation) of the curve. A value of one is a neutral value, and the position of the slope is governed by the L- and T-parameters. Increasing the value of the E-parameter pushes the slope towards the high end of the curve. Decreasing the value of the E-parameter pushes the slope towards the lower end of the curve. Experience using the LET correlation indicates the following reasonable ranges for the parameters L, E, and T: L ≥ 0.1, E > 0 and T ≥ 0.1.
In the case of gas-water or gas-oil systems, there are LET correlations similar to the oil-water relative permeability correlations shown above.
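A corresponding sketch of the LET correlation, written here in the published LET form krw = krw_end·Swn^L / (Swn^L + E·(1−Swn)^T), and analogously for oil; parameter values are hypothetical and Swn is the normalized water saturation defined earlier:

def let_krw(swn, krw_end, l, e, t):
    """LET water relative permeability: krw_end * Swn**L / (Swn**L + E*(1-Swn)**T)."""
    return krw_end * swn**l / (swn**l + e * (1.0 - swn)**t)

def let_kro(swn, kro_end, l, e, t):
    """LET oil relative permeability: kro_end * (1-Swn)**L / ((1-Swn)**L + E*Swn**T)."""
    return kro_end * (1.0 - swn)**l / ((1.0 - swn)**l + e * swn**t)

# Hypothetical parameters within the ranges quoted in the text (L >= 0.1, E > 0, T >= 0.1)
for swn in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(swn, let_krw(swn, 0.3, 2.0, 1.5, 1.2), let_kro(swn, 1.0, 2.5, 1.0, 1.5))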
Evaluations
After Morris Muskat et alios established the concept of relative permeability in the late 1930s, the number of correlations, i.e. models, for relative permeability has steadily increased. This creates a need for evaluation of the most common correlations at the current time. Two of the latest (per 2019) and most thorough evaluations were done by Moghadasi et alios and by Sakhaei et alios. Moghadasi et alios
evaluated Corey, Chierici and LET correlations for oil/water relative permeability using a sophisticated method that takes into account the number of uncertain model parameters. They found that LET, with the largest number (three) of uncertain parameters, was clearly the best one for both oil and water relative permeability. Sakhaei et alios
evaluated 10 common and widely used relative permeability correlations for gas/oil and gas/condensate systems, and found that LET showed best agreement with experimental values for both gas and oil/condensate relative permeability.
See also
TEM-function
Permeability (earth sciences)
Capillary pressure
Imbibition
Drainage
Buckley–Leverett equation
References
External links
Relative Permeability Curves
Fluid dynamics
Porous media | Relative permeability | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,021 | [
"Physical phenomena",
"Physical quantities",
"Porous media",
"Quantity",
"Chemical engineering",
"Materials science",
"Piping",
"Physical properties",
"Fluid dynamics"
] |
8,967,165 | https://en.wikipedia.org/wiki/Multibody%20system | Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements.
Introduction
The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived, notably Lagrange's formalisms based on minimal coordinates and a second formulation that introduces constraints.
Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum.
Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies.
Applications
While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas:
Aerospace engineering (helicopter, landing gears, behavior of machines under different gravity conditions)
Biomechanics
Combustion engine, gears and transmissions, chain drive, belt drive
Dynamic simulation
Hoist, conveyor, paper mill
Military applications
Particle simulation (granular media, sand, molecules)
Physics engine
Robotics
Vehicle simulation (vehicle dynamics, rapid prototyping of vehicles, improvement of stability, comfort optimization, improvement of efficiency, ...)
Example
The following example shows a typical multibody system. It is usually denoted as slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system.
The motion of the mechanism can be viewed in the following gif animation:
Concept
A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are:
cardan joint or Universal Joint; 4 kinematical constraints
prismatic joint; relative displacement along one axis is allowed, constrains relative rotation; implies 5 kinematical constraints
revolute joint; only one relative rotation is allowed; implies 5 kinematical constraints; see the example above
spherical joint; constrains relative displacements in one point, relative rotation is allowed; implies 3 kinematical constraints
There are two important terms in multibody systems: degree of freedom and
constraint condition.
Degree of freedom
The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space.
A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. In the case of planar motion, a body has only three degrees of freedom with only one rotational and two translational degrees of freedom.
The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis.
Constraint condition
A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. There are furthermore possibilities to constrain the relative velocity between two bodies or a body and the ground. This is for example the case of a rolling disc, where the point of the disc that contacts the ground has always zero relative velocity with respect to the ground. In the case that the velocity constraint condition cannot be integrated in time in order to form a position constraint, it is called non-holonomic. This is the case for the general rolling constraint.
In addition to that there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies.
Equations of motion
The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion while the physics behind is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton’s second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. Usually the equations of motions are derived from the Newton-Euler equations or Lagrange’s equations.
The motion of rigid bodies is described by means of
(1)
(2)
These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by , the mass matrix is represented by which may depend on the generalized coordinates.
represents the constraint conditions and the matrix (sometimes termed the Jacobian) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces to the according equations of the bodies. The components of the vector are also denoted as Lagrange multipliers. In a rigid body, possible coordinates could be split into two parts,
where represents translations and describes the rotations.
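As an illustrative sketch of this formulation (not the article's own equations), the following sets up a planar point-mass pendulum with redundant coordinates q = (x, y), one constraint C(q) = x² + y² − L² = 0, and a Lagrange multiplier, then solves the resulting linear system for the accelerations and the multiplier at one instant; all names and values are hypothetical:

import numpy as np

# Point mass on a rigid massless rod (planar pendulum) with redundant coordinates.
m, g, L = 1.0, 9.81, 1.0
q = np.array([L * np.sin(0.3), -L * np.cos(0.3)])   # position, 0.3 rad from vertical
dq = np.array([0.0, 0.0])                            # velocity (released from rest)

M = m * np.eye(2)                                    # mass matrix
f = np.array([0.0, -m * g])                          # applied (gravity) force
Cq = np.array([[2.0 * q[0], 2.0 * q[1]]])            # constraint Jacobian of C(q)
gamma = np.array([-2.0 * (dq[0]**2 + dq[1]**2)])     # right-hand side of Cq*ddq = gamma

# Index-1 DAE at one instant: [[M, Cq^T], [Cq, 0]] [ddq; lam] = [f; gamma]
A = np.block([[M, Cq.T], [Cq, np.zeros((1, 1))]])
b = np.concatenate([f, gamma])
sol = np.linalg.solve(A, b)
ddq, lam = sol[:2], sol[2]
print(ddq, lam)   # accelerations and the Lagrange multiplier (rod constraint force)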
Quadratic velocity vector
In the case of rigid bodies, the so-called quadratic velocity vector is used to describe Coriolis and centrifugal terms in the equations of motion. The name arises because the vector includes quadratic terms of the velocities; it results from partial derivatives of the kinetic energy of the body.
Lagrange multipliers
The Lagrange multiplier is related to a constraint condition and usually represents a force or a moment, which acts in “direction” of the constraint degree of freedom. The Lagrange multipliers do no "work" as compared to external forces that change the potential energy of a body.
Minimal coordinates
The equations of motion (1,2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. However, as there is only one degree of freedom, the equation of motion could be also represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as degree of freedom. The latter formulation has then the minimum number of coordinates in order to describe the motion of the system and can be thus called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome and only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, to mention only the so-called recursive formulation. The resulting equations are easier to be solved because in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible.
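The bookkeeping in the slider-crank example can be illustrated with a trivial count of redundant coordinates minus constraint equations (the figures 18 and 17 are those quoted above):

def degrees_of_freedom(n_bodies, n_constraints, coords_per_body=6):
    """DOF = redundant coordinates minus independent constraint equations."""
    return n_bodies * coords_per_body - n_constraints

# Slider-crank: 3 moving bodies (crank, connecting rod, slider), 18 coordinates, 17 constraints
print(degrees_of_freedom(3, 17))  # -> 1, e.g. the angle of the driving link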
Flexible multibody
There are several cases in which it is necessary to consider the flexibility of the bodies. For example in cases where flexibility plays a fundamental role in kinematics as well as in compliant mechanisms.
Flexibility can be taken into account in different ways. There are three main approaches:
Discrete flexible multibody, in which the flexible body is divided into a set of rigid bodies connected by elastic stiffnesses representative of the body's elasticity
Modal condensation, in which elasticity is described through a finite number of modes of vibration of the body by exploiting the degrees of freedom linked to the amplitudes of the modes
Full flex, in which all of the body's flexibility is taken into account by discretizing the body into sub-elements whose individual displacements are governed by the elastic material properties
See also
Dynamic simulation
Multibody simulation (solution techniques)
Physics engine
References
J. Wittenburg, Dynamics of Systems of Rigid Bodies, Teubner, Stuttgart (1977).
J. Wittenburg, Dynamics of Multibody Systems, Berlin, Springer (2008).
K. Magnus, Dynamics of multibody systems, Springer Verlag, Berlin (1978).
P.E. Nikravesh, Computer-Aided Analysis of Mechanical Systems, Prentice-Hall (1988).
E.J. Haug, Computer-Aided Kinematics and Dynamics of Mechanical Systems, Allyn and Bacon, Boston (1989).
H. Bremer and F. Pfeiffer, Elastische Mehrkörpersysteme, B. G. Teubner, Stuttgart, Germany (1992).
J. García de Jalón, E. Bayo, Kinematic and Dynamic Simulation of Multibody Systems - The Real-Time Challenge, Springer-Verlag, New York (1994).
A.A. Shabana, Dynamics of multibody systems, Second Edition, John Wiley & Sons (1998).
M. Géradin, A. Cardona, Flexible multibody dynamics – A finite element approach, Wiley, New York (2001).
E. Eich-Soellner, C. Führer, Numerical Methods in Multibody Dynamics, Teubner, Stuttgart, 1998 (reprint Lund, 2008).
T. Wasfy and A. Noor, "Computational strategies for flexible multibody systems," ASME. Appl. Mech. Rev. 2003;56(6):553-613. .
External links
http://real.uwaterloo.ca/~mbody/ Collected links of John McPhee
Mechanics
Dynamical systems | Multibody system | [
"Physics",
"Mathematics",
"Engineering"
] | 2,238 | [
"Mechanics",
"Mechanical engineering",
"Dynamical systems"
] |
8,967,705 | https://en.wikipedia.org/wiki/Sympathetic%20resonance | Sympathetic resonance or sympathetic vibration is a harmonic phenomenon wherein a passive string or vibratory body responds to external vibrations to which it has a harmonic likeness. The classic example is demonstrated with two similarly-tuned tuning forks. When one fork is struck and held near the other, vibrations are induced in the unstruck fork, even though there is no physical contact between them. In similar fashion, strings will respond to the vibrations of a tuning fork when sufficient harmonic relations exist between them. The effect is most noticeable when the two bodies are tuned in unison or an octave apart (corresponding to the first and second harmonics, integer multiples of the inducing frequency), as there is the greatest similarity in vibrational frequency. Sympathetic resonance is an example of injection locking occurring between coupled oscillators, in this case coupled through vibrating air. In musical instruments, sympathetic resonance can produce both desirable and undesirable effects.
According to The New Grove Dictionary of Music and Musicians:
Sympathetic resonance in music instruments
Sympathetic resonance has been applied to musical instruments from many cultures and time periods, and to string instruments in particular. In instruments with undamped strings (e.g. harps, guitars and kotos), strings will resonate at their fundamental or overtone frequencies when other nearby strings are sounded. For example, an A string at 440 Hz will cause an E string at 330 Hz to resonate, because they share an overtone of 1320 Hz (the third harmonic of A and fourth harmonic of E). Sympathetic resonance is a factor in the timbre of a string instrument. Tailed bridge guitars like the Fender Jaguar differ in timbre from guitars with short bridges, due to the resonance that occurs in their extended floating bridge.
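The shared-overtone arithmetic in the example above can be checked with a few lines of code (an illustrative sketch):

def common_harmonics(f1, f2, n_max=10, tol=1e-6):
    """Return (k1, k2, frequency) for harmonics k1*f1 and k2*f2 that coincide."""
    matches = []
    for k1 in range(1, n_max + 1):
        for k2 in range(1, n_max + 1):
            if abs(k1 * f1 - k2 * f2) < tol:
                matches.append((k1, k2, k1 * f1))
    return matches

# A string at 440 Hz and E string at 330 Hz share 1320 Hz (3rd and 4th harmonics)
print(common_harmonics(440.0, 330.0, n_max=5))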
Certain instruments are built with sympathetic strings, auxiliary strings which are not directly played but sympathetically produce sound in response to tones played on the main strings. Sympathetic strings can be found on Indian musical instruments such as the sitar, Western Baroque instruments such as the viola d'amore and folk instruments such as the hurdy-gurdy and Hardanger fiddle. Some pianos are built with sympathetic strings, a practice known as aliquot stringing. Sympathetic resonance is sometimes an unwanted effect that must be mitigated when designing an instrument. For example, to dampen resonance in the headstock, some electric guitars use string trees near their tuning pegs. Similarly, the string length behind the bridge must be made as short as possible to dampen resonance.
Historical mentions
The phenomenon is described by the Jewish scholar R. Isaac Arama (died 1494) in his book "Akeydat Yitzchak" as a metaphor for the bilateral influence between the human being and the world. Everything a person does resonates with the entire world and thus causes similar acts everywhere. The human is the active string, the one that is being struck, and the world is the passive instrument that resonates at the same frequencies that the human activates in himself.
References
Acoustics
Resonance | Sympathetic resonance | [
"Physics",
"Chemistry"
] | 609 | [
"Resonance",
"Physical phenomena",
"Classical mechanics",
"Acoustics",
"Waves",
"Scattering"
] |
8,970,644 | https://en.wikipedia.org/wiki/Dynamic%20load%20testing | Dynamic load testing (or dynamic loading) is a method to assess a pile's bearing capacity by applying a dynamic load to the pile head (a falling mass) while recording acceleration and strain on the pile head. Dynamic load testing is a high strain dynamic test which can be applied after pile installation for concrete piles. For steel or timber piles, dynamic load testing can be done during installation or after installation.
The procedure is standardized by ASTM D4945-00 Standard Test Method for High Strain Dynamic Testing of Piles. It may be performed on all piles, regardless of their installation method. In addition to bearing capacity, Dynamic Load Testing gives information on resistance distribution (shaft resistance and end bearing) and evaluates the shape and integrity of the foundation element.
The foundation bearing capacity results obtained with dynamic load tests correlate well with the results of static load tests performed on the same foundation element.
See also
Pile integrity test
References
Rausche, F., Moses, F., Goble, G. G., September, 1972. Soil Resistance Predictions From Pile Dynamics. Journal of the Soil Mechanics and Foundations Division, American Society of Civil Engineers. Reprinted in Current Practices and Future Trends in Deep Foundations, Geotechnical Special Publication No. 125, DiMaggio, J. A., and Hussein, M. H., Eds, August, 2004. American Society of Civil Engineers: Reston, VA; 418–440.
Rausche, F., Goble, G.G. and Likins, G.E., Jr. (1985). Dynamic Determination of Pile Capacity. Journal of the Geotechnical Engineering Division, 111(3), 367–383.
Salgado, R. (2008). The Engineering of Foundations. New York:McGraw-Hill, Chapter 14 (pp. 669-713).
Scanlan, R.H., and Tomko, J.J., 1960, "Dynamic Prediction of Pile Static Bearing Capacity", Journal of the Soil Mechanics and Foundations Division, American Society of Civil Engineers, Vol. 86, No. SM4; 35-61
External links
Instrumentation and Pictures of Dynamic Load Test of Piles
In situ foundation tests
Deep foundations | Dynamic load testing | [
"Engineering"
] | 454 | [
"Civil engineering",
"Civil engineering stubs"
] |
17,637,008 | https://en.wikipedia.org/wiki/Monohydrogen%20phosphate | Hydrogen phosphate or monohydrogen phosphate (systematic name) is the inorganic ion with the formula [HPO4]2-. Its formula can also be written as [PO3(OH)]2-. Together with dihydrogen phosphate, hydrogenphosphate occurs widely in natural systems. Their salts are used in fertilizers and in cooking. Most hydrogenphosphate salts are colorless, water soluble, and nontoxic.
It is a conjugate acid of phosphate [PO4]3- and a conjugate base of dihydrogen phosphate [H2PO4]−.
It is formed when a pyrophosphate anion reacts with water by hydrolysis, which can give hydrogenphosphate:
[P2O7]4− + H2O ⇌ 2 [HPO4]2−
Acid-base equilibria
Hydrogenphosphate is an intermediate in the multistep conversion of phosphoric acid to phosphate:
H3PO4 ⇌ [H2PO4]− ⇌ [HPO4]2− ⇌ [PO4]3−
Examples
Diammonium phosphate, (NH4)2HPO4
Disodium phosphate, Na2HPO4, with varying amounts of water of hydration
References
Anions
Phosphates | Monohydrogen phosphate | [
"Physics",
"Chemistry"
] | 230 | [
"Matter",
"Anions",
"Salts",
"Phosphates",
"Ions"
] |
17,638,001 | https://en.wikipedia.org/wiki/Stokes%20stream%20function | In fluid dynamics, the Stokes stream function is used to describe the streamlines and flow velocity in a three-dimensional incompressible flow with axisymmetry. A surface with a constant value of the Stokes stream function encloses a streamtube, everywhere tangential to the flow velocity vectors. Further, the volume flux within this streamtube is constant, and all the streamlines of the flow are located on this surface. The velocity field associated with the Stokes stream function is solenoidal—it has zero divergence. This stream function is named in honor of George Gabriel Stokes.
Cylindrical coordinates
Consider a cylindrical coordinate system ( ρ , φ , z ), with the z–axis the line around which the incompressible flow is axisymmetrical, φ the azimuthal angle and ρ the distance to the z–axis. Then the flow velocity components uρ and uz can be expressed in terms of the Stokes stream function by:
The azimuthal velocity component uφ does not depend on the stream function. Due to the axisymmetry, all three velocity components ( uρ , uφ , uz ) only depend on ρ and z and not on the azimuth φ.
The volume flux, through the surface bounded by a constant value ψ of the Stokes stream function, is equal to 2π ψ.
Spherical coordinates
In spherical coordinates ( r , θ , φ ), r is the radial distance from the origin, θ is the zenith angle and φ is the azimuthal angle. In axisymmetric flow, with θ = 0 the rotational symmetry axis, the quantities describing the flow are again independent of the azimuth φ. The flow velocity components ur and uθ are related to the Stokes stream function through:
Again, the azimuthal velocity component uφ is not a function of the Stokes stream function ψ. The volume flux through a stream tube, bounded by a surface of constant ψ, equals 2π ψ, as before.
Vorticity
The vorticity is defined as:
, where
with the unit vector in the –direction.
Derivation of vorticity using a Stokes stream function: consider the vorticity as defined above. From the definition of the curl in spherical coordinates, the radial and zenith components of the vorticity are equal to zero. Substituting the velocity components expressed through the stream function into the remaining (azimuthal) component and simplifying the resulting algebra yields the result below.
As a result, from the calculation the vorticity vector is found to be equal to:
Comparison with cylindrical
The cylindrical and spherical coordinate systems are related through
and
Alternative definition with opposite sign
As explained in the general stream function article, definitions using an opposite sign convention – for the relationship between the Stokes stream function and flow velocity – are also in use.
Zero divergence
In cylindrical coordinates, the divergence of the velocity field u becomes:
as expected for an incompressible flow.
And in spherical coordinates:
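The vanishing divergence in cylindrical coordinates can also be checked symbolically. The sketch below assumes the common convention u_ρ = −(1/ρ) ∂ψ/∂z, u_z = (1/ρ) ∂ψ/∂ρ; as noted in the section above, an opposite sign convention is also in use, and the divergence vanishes for either choice:

import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
psi = sp.Function('psi')(rho, z)   # arbitrary Stokes stream function

# Assumed sign convention (the opposite convention is also found in the literature)
u_rho = -sp.diff(psi, z) / rho
u_z = sp.diff(psi, rho) / rho

# Divergence of an axisymmetric velocity field in cylindrical coordinates
divergence = sp.diff(rho * u_rho, rho) / rho + sp.diff(u_z, z)
print(sp.simplify(divergence))  # -> 0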
Streamlines as curves of constant stream function
From calculus it is known that the gradient vector is normal to the curve (see e.g. Level set#Level sets versus the gradient). If it is shown that everywhere using the formula for in terms of then this proves that level curves of are streamlines.
Cylindrical coordinates
In cylindrical coordinates,
.
and
So that
Spherical coordinates
And in spherical coordinates
and
So that
Notes
References
Originally published in 1879, the 6th extended edition appeared first in 1932.
Reprinted in:
Fluid dynamics | Stokes stream function | [
"Chemistry",
"Engineering"
] | 722 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
17,642,425 | https://en.wikipedia.org/wiki/Scherrer%20equation | The Scherrer equation, in X-ray diffraction and crystallography, is a formula that relates the size of sub-micrometre crystallites in a solid to the broadening of a peak in a diffraction pattern. It is often referred to, incorrectly, as a formula for particle size measurement or analysis. It is named after Paul Scherrer. It is used in the determination of size of crystals in the form of powder.
The Scherrer equation can be written as:
τ = Kλ / (β cos θ)
where:
τ is the mean size of the ordered (crystalline) domains, which may be smaller or equal to the grain size, which may be smaller or equal to the particle size;
K is a dimensionless shape factor, with a value close to unity. The shape factor has a typical value of about 0.9, but varies with the actual shape of the crystallite;
λ is the X-ray wavelength;
β is the line broadening at half the maximum intensity (FWHM), after subtracting the instrumental line broadening, in radians. This quantity is also sometimes denoted as ;
θ is the Bragg angle.
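A minimal numerical sketch of applying the equation; the wavelength is that of Cu Kα radiation, while the FWHM and peak position are hypothetical example values (the FWHM must already be corrected for instrumental broadening and converted to radians):

import math

def scherrer_size(k, wavelength, fwhm_rad, two_theta_deg):
    """Crystallite (coherent domain) size tau = K * lambda / (beta * cos(theta))."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle is half the 2-theta position
    return k * wavelength / (fwhm_rad * math.cos(theta))

# Cu K-alpha, 0.15406 nm; hypothetical corrected FWHM of 0.5 degrees at 2-theta = 38.2 degrees
beta = math.radians(0.5)
print(scherrer_size(0.9, 0.15406, beta, 38.2))  # size in nm, roughly 17 nm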
Applicability
The Scherrer equation is limited to nano-scale crystallites, or more-strictly, the coherently scattering domain size, which can be smaller than the crystallite size (due to factors mentioned below). It is not applicable to grains larger than about 0.1 to 0.2 μm, which precludes those observed in most metallographic and ceramographic microstructures.
The Scherrer equation provides a lower bound on the coherently scattering domain size, referred to here as the crystallite size for readability. The reason for this is that a variety of factors can contribute to the width of a diffraction peak besides instrumental effects and crystallite size; the most important of these are usually inhomogeneous strain and crystal lattice imperfections. Sources of peak broadening include dislocations, stacking faults, twinning, microstresses, grain boundaries, sub-boundaries, coherency strain, chemical heterogeneities, and crystallite smallness. These and other imperfections may also result in peak shift, peak asymmetry, anisotropic peak broadening, or other peak shape effects.
If all of these other contributions to the peak width, including instrumental broadening, were zero, then the peak width would be determined solely by the crystallite size and the Scherrer equation would apply. If the other contributions to the width are non-zero, then the crystallite size can be larger than that predicted by the Scherrer equation, with the "extra" peak width coming from the other factors. The concept of crystallinity can be used to collectively describe the effect of crystal size and imperfections on peak broadening.
Although "particle size" is often used in reference to crystallite size, this term should not be used in association with the Scherrer method because particles are often agglomerations of many crystallites, and XRD gives no information on the particle size. Other techniques, such as sieving, image analysis, or visible light scattering do directly measure particle size. The crystallite size can be thought of as a lower limit of particle size.
Derivation for a simple stack of planes
To see where the Scherrer equation comes from, it is useful to consider the simplest possible example: a set of N planes separated by the distance, a. The derivation for this simple, effectively one-dimensional case, is straightforward. First, the structure factor for this case is derived, and then an expression for the peak widths is determined.
Structure factor for a set of N equally spaced planes
This system, effectively a one dimensional perfect crystal, has a structure factor or scattering function S(q):
where for N planes, :
each sum is a simple geometric series, defining , , and the other series analogously gives:
which is further simplified by converting to trigonometric functions:
and finally:
which gives a set of peaks at , all with heights .
Determination of the profile near the peak, and hence the peak width
From the definition of FWHM, for a peak at and with a FWHM of , , as the peak height is N. If we take the plus sign (peak is symmetric so either sign will do)
and
if N is not too small. If is small , then , and we can write the equation as a single non-linear equation , for . The solution to this equation is . Therefore, the size of the set of planes is related to the FWHM in q by
To convert to an expression for crystal size in terms of the peak width in the scattering angle used in X-ray powder diffraction, we note that the scattering vector , where the here is the angle between the incident wavevector and the scattered wavevector, which is different from the in the scan. Then the peak width in the variable is approximately , and so
which is the Scherrer equation with K = 0.88.
This only applies to a perfect 1D set of planes. In the experimentally relevant 3D case, the form of and hence the peaks, depends on the crystal lattice type, and the size and shape of the nanocrystallite. The underlying mathematics becomes more involved than in this simple illustrative example. However, for simple lattices and shapes, expressions have been obtained for the FWHM, for example by Patterson. Just as in 1D, the FWHM varies as the inverse of the characteristic size. For example, for a spherical crystallite with a cubic lattice, the factor of 5.56 simply becomes 6.96, when the size is the diameter D, i.e., the diameter of a spherical nanocrystal is related to the peak FWHM by
or in :
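As a numerical check on the derivation above, the sketch below assumes the normalized interference function S(q) = sin²(Nqa/2) / (N sin²(qa/2)) for N planes of spacing a (peak height N, consistent with the text) and locates the full width at half maximum of the q = 0 peak by bisection:

import math

def interference(q, n_planes, a=1.0):
    """Normalized N-plane interference function; peak height N at q = 2*pi*m/a."""
    x = q * a / 2.0
    if abs(math.sin(x)) < 1e-12:          # at the peak itself
        return float(n_planes)
    return math.sin(n_planes * x) ** 2 / (n_planes * math.sin(x) ** 2)

def fwhm_q(n_planes, a=1.0):
    """Locate the half-maximum point of the q = 0 peak by bisection and double it."""
    target = n_planes / 2.0
    lo, hi = 0.0, math.pi / (n_planes * a)   # half-maximum lies below this bracket
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if interference(mid, n_planes, a) > target:
            lo = mid
        else:
            hi = mid
    return 2.0 * hi

n = 50
print(fwhm_q(n) * n)   # ~5.57, close to the factor of 5.56 quoted above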
Peak broadening due to disorder of the second kind
The finite size of a crystal is not the only possible reason for broadened peaks in X-ray diffraction. Fluctuations of atoms about the ideal lattice positions that preserve the long-range order of the lattice only give rise to the Debye-Waller factor, which reduces peak heights but does not broaden them. However, fluctuations that cause the correlations between nearby atoms to decrease as their separation increases, does broaden peaks. This can be studied and quantified using the same simple one-dimensional stack of planes as above. The derivation follows that in chapter 9 of Guinier's textbook. This model was pioneered by and applied to a number of materials by Hosemann and collaborators over a number of years. They termed this disorder of the second kind, and referred to this imperfect crystalline ordering as paracrystalline ordering. Disorder of the first kind is the source of the Debye-Waller factor.
To derive the model we start with the definition of the structure factor
but now we want to consider, for simplicity an infinite crystal, i.e., , and we want to consider pairs of lattice sites. For large , for each of these planes, there are two neighbours planes away, so the above double sum becomes a single sum over pairs of neighbours either side of an atom, at positions and lattice spacings away, times . So, then
where is the probability density function for the separation of a pair of planes, lattice spacings apart. For the separation of neighbouring planes we assume for simplicity that the fluctuations around the mean neighbour spacing of a are Gaussian, i.e., that
and we also assume that the fluctuations between a plane and its neighbour, and between this neighbour and the next plane, are independent. Then is just the convolution of two s, etc. As the convolution of two Gaussians is just another Gaussian, we have that
The sum in is then just a sum of Fourier Transforms of Gaussians, and so
for . The sum is just the real part of the sum and so the structure factor of the infinite but disordered crystal is
This has peaks at maxima , where. These peaks have heights
i.e., the heights of successive peaks drop off as the order of the peak (and so ) squared. Unlike finite-size effects that broaden peaks but do not decrease their height, disorder lowers peak heights. Note that here we are assuming that the disorder is relatively weak, so that we still have relatively well-defined peaks. This is the limit , where . In this limit, near a peak we can approximate , with and obtain
which is a Lorentzian or Cauchy function, of FWHM , i.e., the FWHM increases as the square of the order of peak, and so as the square of the wavevector at the peak. Finally, the product of the peak height and the FWHM is constant and equals , in the limit. For the first few peaks where is not large, this is just the limit.
Thus finite-size effects and this type of disorder both cause peak broadening, but there are qualitative differences. Finite-size effects broaden all peaks equally and do not affect peak heights, while this type of disorder both reduces peak heights and broadens peaks by an amount that increases as . This, in principle, allows the two effects to be distinguished. Also, it means that the Scherrer equation is best applied to the first peak, as disorder of this type affects the first peak the least.
Coherence length
Within this model the degree of correlation between a pair of planes decreases as the distance between these planes increases, i.e., a pair of planes 10 planes apart have positions that are more weakly correlated than a pair of planes that are nearest neighbours. The correlation is given by , for a pair of planes m planes apart. For sufficiently large m the pair of planes are essentially uncorrelated, in the sense that the uncertainty in their relative positions is so large that it is comparable to the lattice spacing, a. This defines a correlation length, , defined as the separation at which the width of equals a. This gives
which is in effect an order-of-magnitude estimate for the size of domains of coherent crystalline lattices. Note that the FWHM of the first peak scales as , so the coherence length is approximately 1/FWHM for the first peak.
Further reading
B.D. Cullity & S.R. Stock, Elements of X-Ray Diffraction, 3rd Ed., Prentice-Hall Inc., 2001, p 96-102, .
R. Jenkins & R.L. Snyder, Introduction to X-ray Powder Diffractometry, John Wiley & Sons Inc., 1996, p 89-91, .
H.P. Klug & L.E. Alexander, X-Ray Diffraction Procedures, 2nd Ed., John Wiley & Sons Inc., 1974, p 687-703, .
B.E. Warren, X-Ray Diffraction, Addison-Wesley Publishing Co., 1969, p 251-254, .
References
Diffraction | Scherrer equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,281 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
17,644,838 | https://en.wikipedia.org/wiki/Fourier%E2%80%93Bros%E2%80%93Iagolnitzer%20transform | In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions (or distributions) on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1872–1943).
Definitions
The Fourier transform of a Schwartz function f in S(Rn) is defined by
The FBI transform of f is defined for a ≥ 0 by
Thus, when a = 0, it essentially coincides with the Fourier transform.
The same formulas can be used to define the Fourier and FBI transforms of tempered distributions in
S(Rn).
Inversion formula
The Fourier inversion formula
allows a function f to be recovered from its Fourier transform.
In particular
Similarly, at a positive value of a, f(0) can be recovered from the FBI transform of f(x) by the inversion formula
Criterion for local analyticity
Bros and Iagolnitzer showed that a distribution f is locally equal to a real analytic function at y, in the direction ξ
if and only if its FBI transform satisfies an inequality of the form
for |ξ| sufficiently large.
Holmgren's uniqueness theorem
A simple consequence of the Bros and Iagolnitzer characterisation of local analyticity is the following regularity result of Lars Hörmander and Mikio Sato.
Theorem. Let P be an elliptic partial differential operator with analytic coefficients defined on an open subset X of Rn. If Pf is analytic in X, then so too is f.
When "analytic" is replaced by "smooth" in this theorem, the result is just Hermann Weyl's classical lemma on elliptic regularity, usually proved using Sobolev spaces (Warner 1983). It is a special case of more general results involving the analytic wave front set (see below), which imply Holmgren's classical strengthening of the Cauchy–Kowalevski theorem on linear partial differential equations with real analytic coefficients. In modern language, Holmgren's uniquess theorem states that any distributional solution of such a system of equations must be analytic and therefore unique, by the Cauchy–Kowalevski theorem.
The analytic wave front set
The analytic wave front set or singular spectrum WFA(f) of a distribution f (or more generally of a hyperfunction) can be defined in terms of the FBI transform as the complement of the conical set of points (x, λ ξ) (λ > 0) such that the FBI transform satisfies the Bros–Iagolnitzer inequality
for y the point at which one would like to test for analyticity, and |ξ| sufficiently large and pointing in the direction one would like to look for the wave front, that is, the direction at which the singularity at y, if it exists, propagates. J.M. Bony proved that this definition coincided with other definitions introduced independently by Sato, Kashiwara and Kawai and by Hörmander. If P is an mth order linear differential operator having analytic coefficients
with principal symbol and characteristic variety, then
In particular, when P is elliptic, char P = ø, so that
WFA(Pf) = WFA(f).
This is a strengthening of the analytic version of elliptic regularity mentioned
above.
References
(Chapter 9.6, The Analytic Wavefront Set.)
. 2nd ed., Birkhäuser (2002), .
(Chapter 9, FBI Transform in a Hypo-Analytic Manifold.)
Fourier analysis
Transforms
Generalized functions
Mathematical physics | Fourier–Bros–Iagolnitzer transform | [
"Physics",
"Mathematics"
] | 831 | [
"Functions and mappings",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Mathematical relations",
"Transforms",
"Mathematical physics"
] |
14,793,522 | https://en.wikipedia.org/wiki/Superstatistics | Superstatistics is a branch of statistical mechanics or statistical physics devoted to the study of non-linear and non-equilibrium systems. It is characterized by using the superposition of multiple differing statistical models to achieve the desired non-linearity. In terms of ordinary statistical ideas, this is equivalent to compounding the distributions of random variables and it may be considered a simple case of a doubly stochastic model.
Consider an extended thermodynamical system which is locally in equilibrium and has a Boltzmann distribution, that is the probability of finding the system in a state with energy is proportional to . Here is the local inverse temperature. A non-equilibrium thermodynamical system is modeled by considering macroscopic fluctuations of the local inverse temperature. These fluctuations happen on time scales which are much larger than the microscopic relaxation times to the Boltzmann distribution. If the fluctuations of are characterized by a distribution , the superstatistical Boltzmann factor of the system is given by
This defines the superstatistical partition function
for system that can assume discrete energy states . The probability of finding the system in state is then given by
Modeling the fluctuations of leads to a description in terms of a statistics of Boltzmann statistics, or "superstatistics". For example, if follows a Gamma distribution, the resulting superstatistics corresponds to Tsallis statistics. Superstatistics can also lead to other statistics such as power-law distributions or stretched exponentials. Note that the word "super" here is short for the superposition of statistics.
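A small numerical sketch of this construction (variable names and parameter values are illustrative): averaging the Boltzmann factor over a Gamma-distributed inverse temperature approximately reproduces the known closed form (1 + θE)^(−k), a Tsallis-type power law rather than a pure exponential:

import numpy as np

def superstatistical_factor(energy, shape, scale, n_beta=20000):
    """B(E) = integral over beta of f(beta) * exp(-beta * E), with f a Gamma density."""
    beta = np.linspace(1e-9, shape * scale * 30.0, n_beta)   # integration grid for beta
    dbeta = beta[1] - beta[0]
    gamma_pdf = beta ** (shape - 1.0) * np.exp(-beta / scale)
    gamma_pdf /= gamma_pdf.sum() * dbeta                     # normalize numerically
    boltzmann = np.exp(-np.outer(np.atleast_1d(energy), beta))
    return (boltzmann * gamma_pdf).sum(axis=1) * dbeta

energies = np.array([0.0, 1.0, 10.0, 100.0])
numeric = superstatistical_factor(energies, shape=2.0, scale=0.5)
analytic = (1.0 + 0.5 * energies) ** -2.0     # closed form for a Gamma mixture
print(numeric)
print(analytic)   # power-law tail, much slower decay than a single exponential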
This branch is closely related to the exponential family and to mixing. These concepts are used in many approximation approaches, such as particle filtering (where the distribution is approximated by delta functions).
See also
Maxwell–Boltzmann statistics
E.G.D. Cohen
References
Statistical mechanics
Nonlinear systems | Superstatistics | [
"Physics",
"Mathematics"
] | 381 | [
"Statistical mechanics stubs",
"Nonlinear systems",
"Statistical mechanics",
"Dynamical systems"
] |
14,794,097 | https://en.wikipedia.org/wiki/CLNS1A | Methylosome subunit pICln is a protein that in humans is encoded by the CLNS1A gene.
Interactions
CLNS1A has been shown to interact with:
ITGA2B,
PRMT5,
SNRPD1, and
SNRPD3.
See also
Chloride channel
References
Further reading
External links
Ion channels | CLNS1A | [
"Chemistry"
] | 67 | [
"Neurochemistry",
"Ion channels"
] |
14,794,105 | https://en.wikipedia.org/wiki/Cyclic%20nucleotide-gated%20channel%20alpha%203 | Cyclic nucleotide-gated cation channel alpha-3 is a protein that in humans is encoded by the CNGA3 gene.
Function
This gene encodes a member of the cyclic nucleotide-gated cation channel protein family, which is required for normal vision and olfactory signal transduction. CNGA3 is expressed in cone photoreceptors and is necessary for color vision. Missense mutations in this gene are associated with rod monochromacy and segregate in an autosomal recessive pattern. Two alternatively-spliced transcripts encoding different isoforms have been described.
Clinical relevance
Variants in this gene have been shown to cause achromatopsia and colour blindness.
See also
Cyclic nucleotide-gated ion channel
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Achromatopsia
OMIM entries on Achromatopsia
Ion channels | Cyclic nucleotide-gated channel alpha 3 | [
"Chemistry"
] | 194 | [
"Neurochemistry",
"Ion channels"
] |
14,794,194 | https://en.wikipedia.org/wiki/DLX3 | Homeobox protein DLX-3 is a protein that in humans is encoded by the DLX3 gene.
Function
Dlx3 is a crucial regulator of hair follicle differentiation and cycling. Dlx3 transcription is mediated through Wnt, and colocalization of Dlx3 with phospho-SMAD1/5/8 is involved in the regulation of transcription by BMP signaling. Dlx3 transcription is also induced by BMP-2 through transactivation with SMAD1 and SMAD4.
Many vertebrate homeo box-containing genes have been identified on the basis of their sequence similarity with Drosophila developmental genes. Members of the Dlx gene family contain a homeobox that is related to that of Distal-less (Dll), a gene expressed in the head and limbs of the developing fruit fly. The Distal-less (Dlx) family of genes comprises at least 6 different members, DLX1-DLX6. This gene is located in a tail-to-tail configuration with another member of the gene family on the long arm of chromosome 17.
Clinical significance
Mutations in this gene have been associated with the autosomal dominant conditions trichodentoosseous syndrome (TDO) and amelogenesis imperfecta with taurodontism.
References
Further reading
External links
Transcription factors | DLX3 | [
"Chemistry",
"Biology"
] | 276 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,794,239 | https://en.wikipedia.org/wiki/E2F5 | Transcription factor E2F5 is a protein that in humans is encoded by the E2F5 gene.
Function
The protein encoded by this gene is a member of the E2F family of transcription factors. The E2F family plays a crucial role in the control of cell cycle and action of tumor suppressor proteins and is also a target of the transforming proteins of small DNA tumor viruses. The E2F proteins contain several evolutionarily conserved domains that are present in most members of the family. These domains include a DNA binding domain, a dimerization domain which determines interaction with the differentiation regulated transcription factor proteins (DP), a transactivation domain enriched in acidic amino acids, and a tumor suppressor protein association domain which is embedded within the transactivation domain. This protein is differentially phosphorylated and is expressed in a wide variety of human tissues. It has higher identity to E2F4 than to other family members. Both this protein and E2F4 interact with tumor suppressor proteins p130 and p107, but not with pRB. Alternative splicing results in multiple variants encoding different isoforms.
Interactions
E2F5 has been shown to interact with TFDP1.
See also
E2F
References
Further reading
External links
Transcription factors | E2F5 | [
"Chemistry",
"Biology"
] | 264 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,794,869 | https://en.wikipedia.org/wiki/HEY2 | Hairy/enhancer-of-split related with YRPW motif protein 2 (HEY2) also known as cardiovascular helix-loop-helix factor 1 (CHF1) is a protein that in humans is encoded by the HEY2 gene.
This protein is a type of transcription factor that belongs to the hairy and enhancer of split-related (HESR) family of basic helix-loop-helix (bHLH)-type transcription factors. It forms homo- or hetero-dimers that localize to the nucleus and interact with a histone deacetylase complex to repress transcription. During embryonic development, this mechanism is used to control the number of cells that develop into cardiac progenitor cells and myocardial cells. The relationship is inversely related, so as the number of cells that express the Hey2 gene increases, the more CHF1 is present to repress transcription and the number of cells that take on a myocardial fate decreases.
Expression
The expression of the Hey2 gene is induced by the Notch signaling pathway. In this mechanism, adjacent cells bind via transmembrane notch receptors. Two similar and redundant genes in mouse are required for embryonic cardiovascular development, and are also implicated in neurogenesis and somitogenesis. Alternatively spliced transcript variants have been found, but their biological validity has not been determined.
Knockout studies
The Hey2 gene is involved with the formation of the cardiovascular system and especially the heart itself. Although studies have not been conducted about the effects of a malfunction in Hey2 expression in humans, experiments done with mice suggest this gene could be responsible for a number of heart defects. Using a gene knockout technique, scientists inactivated both the Hey1 and Hey2 genes of mice. The loss of these two genes resulted in death of the embryo 9.5 days after conception. It was found that the developing hearts of these embryos lacked most structural formations which resulted in massive hemorrhage. When only the Hey1 gene was knocked out, no apparent phenotypic changes occurred, suggesting that these two genes carry similar and redundant information for the development of the heart.
Clinical significance
Common variants of SCN5A, SCN10A, and HEY2 (this gene) are associated with Brugada syndrome.
Interactions
HEY2 has been shown to interact with Sirtuin 1 and Nuclear receptor co-repressor 1.
References
Further reading
External links
Transcription factors | HEY2 | [
"Chemistry",
"Biology"
] | 497 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,795,013 | https://en.wikipedia.org/wiki/PHOX2A | Paired mesoderm homeobox protein 2A is a protein that in humans is encoded by the PHOX2A gene.
Function
The protein encoded by this gene contains a paired-like homeodomain most similar to that of the Drosophila aristaless gene product. This protein is expressed specifically in noradrenergic cell types. It regulates the expression of tyrosine hydroxylase and dopamine beta-hydroxylase, two catecholaminergic biosynthetic enzymes essential for the differentiation and maintenance of noradrenergic phenotype. Mutations in this gene have been associated with autosomal recessive congenital fibrosis of the extraocular muscles (CFEOM2).
Interactions
PHOX2A has been shown to interact with HAND2.
References
Further reading
External links
Engle Laboratory CFEOM page
GeneReviews/NCBI/NIH/UW entry on Congenital Fibrosis of the Extraocular Muscles
OMIM entries on Congenital Fibrosis of the Extraocular Muscles
Transcription factors | PHOX2A | [
"Chemistry",
"Biology"
] | 217 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,795,559 | https://en.wikipedia.org/wiki/Condominial%20sewerage | Condominial sewerage is the application of simplified sewerage coupled with consultations and ongoing interactions between users and agencies during planning and implementation. The term is used primarily in Latin America, particularly in Brazil, and is derived from the term condominio, which means housing block.
From a pure engineering perspective there is no difference between designing a regular sewage system and a condominial one. However, bureaucratically a condominial system includes the participation of the individuals and owners who will be served and can often result in lower costs due to shorter runs of piping. This is achieved by local concentration of sewage from a single "housing block". Thus a number of dwellings are grouped into a "block" known as a condominium. The condominium may share no other aspects of ownership or relation except geographic proximity. In addition, individuals and owners may share a role in the maintenance of the sewers at the block level.
References
Sewerage
Environmental engineering | Condominial sewerage | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 194 | [
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
14,796,489 | https://en.wikipedia.org/wiki/DLX4 | Homeobox protein DLX-4 is a protein that in humans is encoded by the DLX4 gene.
Function
Many vertebrate homeobox-containing genes have been identified on the basis of their sequence similarity with Drosophila developmental genes. Members of the Dlx gene family contain a homeobox that is related to that of Distal-less (Dll), a gene expressed in the head and limbs of the developing fruit fly. The Distal-less (Dlx) family of genes comprises at least 6 different members, DLX1-DLX6. The DLX proteins are postulated to play a role in forebrain and craniofacial development. Three transcript variants have been described for this gene, however, the full length nature of one variant has not been described. Studies of the two splice variants revealed that one encoded isoform (BP1) functions as a repressor of the beta-globin gene while the other isoform lacks that function.
References
Further reading
External links
Transcription factors | DLX4 | [
"Chemistry",
"Biology"
] | 213 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,796,829 | https://en.wikipedia.org/wiki/PRPF6 | Pre-mRNA-processing factor 6 is a protein that in humans is encoded by the PRPF6 gene.
The protein encoded by this gene appears to be involved in pre-mRNA splicing, possibly acting as a bridging factor between U5 and U4/U6 snRNPs in formation of the spliceosome. The encoded protein also can bind androgen receptor, providing a link between transcriptional activation and splicing.
Interactions
PRPF6 has been shown to interact with TXNL4B, ARAF and Androgen receptor.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Retinitis Pigmentosa Overview
Spliceosome | PRPF6 | [
"Chemistry"
] | 146 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,009 | https://en.wikipedia.org/wiki/HOXD9 | Homeobox protein Hox-D9 is a protein that in humans is encoded by the HOXD9 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, located on different chromosomes, consisting of 9 to 11 genes arranged in tandem. This gene is one of several homeobox HOXD genes located at 2q31-2q37 chromosome regions. Deletions that removed the entire HOXD gene cluster or 5' end of this cluster have been associated with severe limb and genital abnormalities. The exact role of this gene has not been determined.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXD9 | [
"Chemistry",
"Biology"
] | 188 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,589 | https://en.wikipedia.org/wiki/IL36G | Interleukin-36 gamma previously known as interleukin-1 family member 9 (IL1F9) is a protein that in humans is encoded by the IL36G gene.
Expression
IL36G is well-expressed in the epithelium of the skin, gut, and lung. In the skin IL36G is predominantly expressed in epidermal granular layer keratinocytes with little to no expression in basal layer keratinocytes.
Function
The protein encoded by this gene is a member of the interleukin-1 cytokine family. This gene and eight other interleukin-1 family genes form a cytokine gene cluster on chromosome 2. The activity of this cytokine is mediated via the interleukin-1 receptor-like 2 (IL1RL2/IL1R-rp2/IL-36 receptor), and is specifically inhibited by interleukin-36 receptor antagonist, (IL-36RA/IL1F5/IL-1 delta). Interferon-gamma, tumor necrosis factor-alpha and interleukin-1 β (IL-1β) are reported to stimulate the expression of this cytokine in keratinocytes. The expression of this cytokine in keratinocytes can also be induced by multiple pathogen-associated molecular patterns (PAMPs). Both IL-36γ mRNA and protein have been linked to psoriasis lesions and have been used as biomarkers for differentiating between eczema and psoriasis. As with many other interleukin-1 family cytokines IL-36γ requires proteolytic cleavage of its N-terminus for full biological activity. However, unlike IL-1β the activation of IL-36γ is inflammasome-independent. IL-36γ is specifically cleaved by the endogenous protease cathepsin S as well as exogenous proteases derived from fungal and bacterial pathogens.
References
Biomarkers
Further reading | IL36G | [
"Biology"
] | 428 | [
"Biomarkers"
] |
14,798,403 | https://en.wikipedia.org/wiki/HOXB13 | Homeobox protein Hox-B13 is a protein that in humans is encoded by the HOXB13 gene.
Function
This gene encodes a transcription factor that belongs to the homeobox gene family. Genes of this family are highly conserved among vertebrates and essential for vertebrate embryonic development. This gene has been implicated in fetal skin development and cutaneous regeneration. In mice, a similar gene was shown to exhibit temporal and spatial colinearity in the main body axis of the embryo, but was not expressed in the secondary axes, which suggests functions in body patterning along the axis. This gene and other HOXB genes form a gene cluster on chromosome 17 in the 17q21.32 region.
Men who inherit a rare (<0.1% in a selected group of patients without clinical signs of prostate cancer) genetic variant in HOXB13 (G84E or rs138213197) have a 10-20-fold increased risk of prostate cancer.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXB13 | [
"Chemistry",
"Biology"
] | 219 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
181,503 | https://en.wikipedia.org/wiki/Culmination | In observational astronomy, culmination is the passage of a celestial object (such as the Sun, the Moon, a planet, a star, constellation or a deep-sky object) across the observer's local meridian. These events are also known as meridian transits, used in timekeeping and navigation, and measured precisely using a transit telescope.
During each day, every celestial object appears to move along a circular path on the celestial sphere due to the Earth's rotation creating two moments when it crosses the meridian. Except at the geographic poles, any celestial object passing through the meridian has an upper culmination, when it reaches its highest point (the moment when it is nearest to the zenith), and nearly twelve hours later, is followed by a lower culmination, when it reaches its lowest point (nearest to the nadir). The time of culmination (when the object culminates) is often used to mean upper culmination.
An object's altitude (A) in degrees at its upper culmination is equal to 90 minus the observer's latitude (L) plus the object's declination (δ):
A = 90° − L + δ
Cases
Three cases are dependent on the observer's latitude (L) and the declination (δ) of the celestial object:
The object is above the horizon even at its lower culmination; i.e. if |δ| > 90° − |L| with δ and L of the same sign (in absolute value the declination is more than the colatitude, in the corresponding hemisphere)
The object is below the horizon even at its upper culmination; i.e. if |δ| > 90° − |L| with δ and L of opposite sign (in absolute value the declination is more than the colatitude, in the opposite hemisphere)
The upper culmination is above and the lower below the horizon, so the body is observed to rise and set daily; in the other cases, i.e. if |δ| < 90° − |L| (in absolute value the declination is less than the colatitude)
The third case applies for objects in a part of the full sky equal to the cosine of the latitude (at the equator it applies for all objects, because the sky turns around the horizontal north–south line; at the poles it applies for none, because the sky turns around the vertical line). The first and second case each apply for half of the remaining sky.
Period of time
The period between one culmination and the next is a sidereal day, which is exactly 24 sidereal hours and about 4 minutes shorter than 24 common solar hours, while the period between an upper culmination and a lower one is 12 sidereal hours. The period between successive day-to-day (rotational) culminations is affected mainly by Earth's orbital motion, which produces the difference in length between the solar day (the interval between culminations of the Sun) and the sidereal day (the interval between culminations of any reference star), or the slightly more precise stellar day, which is unaffected by precession. As a result, culminations occur at a slightly different time each solar day, and it takes a sidereal year (about 366.3 sidereal days — one more than the number of solar days in the same period) for a culmination to recur at the same time of the solar day, even though it recurs every sidereal day. The remaining small changes in the culmination period from sidereal year to sidereal year are, on the other hand, mainly caused by nutation (with an 18.6-year cycle) and, on a longer time scale, by the axial precession of Earth (with a 26,000-year cycle), while apsidal precession and other mechanisms have a much smaller impact on sidereal observation, though they affect Earth's climate significantly more through the Milankovitch cycles. At such timescales the stars themselves also change position, particularly those stars which have, as viewed from the Solar System, a high proper motion.
Stellar parallax appears similar to these apparent movements, but from one sidereal day to the next it has only a slight effect; a star returns to its original apparent position, completing a cycle every orbit, with a slight additional lasting change in position due to the precessions. This phenomenon results from Earth changing position along its orbital path.
The Sun
From the tropics and middle latitudes, the Sun is visible in the sky at its upper culmination (at solar noon) and invisible (below the horizon) at its lower culmination (at solar midnight). When viewed from the region within either polar circle around the winter solstice of that hemisphere (the December solstice in the Arctic and the June solstice in the Antarctic), the Sun is below the horizon at both of its culminations.
Earth's subsolar point occurs at the point where the upper culmination of the Sun reaches the point's zenith. At this point, which moves around the tropics throughout the year, the Sun is perceived to be directly overhead.
We apply the previous equation, A = 90° − L + δ, in the following examples.
Supposing that the declination of the Sun is +20° when it crosses the local meridian, then the complementary angle of 70° (from the Sun to the pole) is added to and subtracted from the observer's latitude to find the solar altitudes at upper and lower culminations, respectively.
From 52° north, the upper culmination is at 58° above the horizon due south, while the lower is at 18° below the horizon due north. This is calculated as 52° + 70° = 122° (the supplementary angle being 58°) for the upper, and 52° − 70° = −18° for the lower.
From 80° north, the upper culmination is at 30° above the horizon due south, while the lower is at 10° above the horizon (midnight sun) due north.
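The altitude relation is simple enough to check numerically. The short Python sketch below reproduces the two worked examples above; the function name and the fold-back handling of objects culminating poleward of the zenith are illustrative choices following the article's sign convention (latitude and declination signed, north positive), not part of any standard library.
```python
def culmination_altitudes(latitude_deg, declination_deg):
    """Return (upper, lower) culmination altitudes in degrees.

    Upper culmination: A = 90 - L + d, folded back below 90 degrees when the
    object crosses the meridian on the poleward side of the zenith.
    Lower culmination: A = L + d - 90.
    """
    upper = 90.0 - latitude_deg + declination_deg
    if upper > 90.0:              # object culminates north of the zenith
        upper = 180.0 - upper
    lower = latitude_deg + declination_deg - 90.0
    return upper, lower

# Sun at declination +20 deg, observer at 52 N: expect 58 and -18 degrees.
print(culmination_altitudes(52.0, 20.0))   # (58.0, -18.0)
# Observer at 80 N: expect 30 and +10 degrees (midnight sun).
print(culmination_altitudes(80.0, 20.0))   # (30.0, 10.0)
```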
Circumpolar stars
From most of the Northern Hemisphere, Polaris (the North Star) and the other stars of the constellation Ursa Minor circle counterclockwise around the north celestial pole and remain visible at both culminations (as long as the sky is clear and dark enough). In the Southern Hemisphere there is no bright pole star, but the constellation Octans circles clockwise around the south celestial pole and remains visible at both culminations.
Any astronomical objects that always remain above the local horizon, as viewed from the observer's latitude, are described as circumpolar.
See also
Celestial sphere
Meridian (astronomy)
Nadir
Satellite pass
Zenith
References
Celestial mechanics
Spherical astronomy | Culmination | [
"Physics"
] | 1,381 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics"
] |
181,983 | https://en.wikipedia.org/wiki/Fermi%20gas | A Fermi gas is an idealized model, an ensemble of many non-interacting fermions. Fermions are particles that obey Fermi–Dirac statistics, like electrons, protons, and neutrons, and, in general, particles with half-integer spin. These statistics determine the energy distribution of fermions in a Fermi gas in thermal equilibrium, and is characterized by their number density, temperature, and the set of available energy states. The model is named after the Italian physicist Enrico Fermi.
This physical model is useful for certain systems with many fermions. Some key examples are the behaviour of charge carriers in a metal, nucleons in an atomic nucleus, neutrons in a neutron star, and electrons in a white dwarf.
Description
An ideal Fermi gas or free Fermi gas is a physical model assuming a collection of non-interacting fermions in a constant potential well. Fermions are elementary or composite particles with half-integer spin, thus follow Fermi–Dirac statistics. The equivalent model for integer spin particles is called the Bose gas (an ensemble of non-interacting bosons). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas.
By the Pauli exclusion principle, no quantum state can be occupied by more than one fermion with an identical set of quantum numbers. Thus a non-interacting Fermi gas, unlike a Bose gas, places only a small number of particles in each energy level. A Fermi gas is therefore prohibited from condensing into a Bose–Einstein condensate, although weakly-interacting Fermi gases might form Cooper pairs and condense (a regime also known as the BCS–BEC crossover). The total energy of the Fermi gas at absolute zero is larger than the sum of the single-particle ground states because the Pauli principle implies a sort of interaction or pressure that keeps fermions separated and moving. For this reason, the pressure of a Fermi gas is non-zero even at zero temperature, in contrast to that of a classical ideal gas. For example, this so-called degeneracy pressure stabilizes a neutron star (a Fermi gas of neutrons) or a white dwarf star (a Fermi gas of electrons) against the inward pull of gravity, which would ostensibly collapse the star into a black hole. Only when a star is sufficiently massive to overcome the degeneracy pressure can it collapse into a singularity.
It is possible to define a Fermi temperature below which the gas can be considered degenerate (its pressure derives almost exclusively from the Pauli principle). This temperature depends on the mass of the fermions and the density of energy states.
The main assumption of the free electron model to describe the delocalized electrons in a metal can be derived from the Fermi gas. Since interactions are neglected due to screening effect, the problem of treating the equilibrium properties and dynamics of an ideal Fermi gas reduces to the study of the behaviour of single independent particles. In these systems the Fermi temperature is generally many thousands of kelvins, so in human applications the electron gas can be considered degenerate. The maximum energy of the fermions at zero temperature is called the Fermi energy. The Fermi energy surface in reciprocal space is known as the Fermi surface.
The nearly free electron model adapts the Fermi gas model to consider the crystal structure of metals and semiconductors, where electrons in a crystal lattice are substituted by Bloch electrons with a corresponding crystal momentum. As such, periodic systems are still relatively tractable and the model forms the starting point for more advanced theories that deal with interactions, e.g. using the perturbation theory.
1D uniform gas
The one-dimensional infinite square well of length L is a model for a one-dimensional box with the potential energy:
It is a standard model-system in quantum mechanics for which the solution for a single particle is well known. Since the potential inside the box is uniform, this model is referred to as 1D uniform gas, even though the actual number density profile of the gas can have nodes and anti-nodes when the total number of particles is small.
The levels are labelled by a single quantum number n and the energies are given by:
where is the zero-point energy (which can be chosen arbitrarily as a form of gauge fixing), the mass of a single fermion, and is the reduced Planck constant.
For N fermions with spin- in the box, no more than two particles can have the same energy, i.e., two particles can have the energy of , two other particles can have energy and so forth. The two particles of the same energy have spin (spin up) or − (spin down), leading to two states for each energy level. In the configuration for which the total energy is lowest (the ground state), all the energy levels up to n = N/2 are occupied and all the higher levels are empty.
Defining the reference for the Fermi energy to be , the Fermi energy is therefore given by
where is the floor function evaluated at n = N/2.
Thermodynamic limit
In the thermodynamic limit, the total number of particles N are so large that the quantum number n may be treated as a continuous variable. In this case, the overall number density profile in the box is indeed uniform.
The number of quantum states in the range is:
Without loss of generality, the zero-point energy is chosen to be zero, with the following result:
Therefore, in the range:
the number of quantum states is:
Here, the degree of degeneracy is:
And the density of states is:
In modern literature, the above is sometimes also called the "density of states". However, differs from by a factor of the system's volume (which is in this 1D case).
Based on the following formula:
the Fermi energy in the thermodynamic limit can be calculated to be:
3D uniform gas
The three-dimensional isotropic and non-relativistic uniform Fermi gas case is known as the Fermi sphere.
A three-dimensional infinite square well, (i.e. a cubical box that has a side length L) has the potential energy
The states are now labelled by three quantum numbers nx, ny, and nz. The single particle energies are
where nx, ny, nz are positive integers. In this case, multiple states have the same energy (known as degenerate energy levels), for example .
Thermodynamic limit
When the box contains N non-interacting fermions of spin-, it is interesting to calculate the energy in the thermodynamic limit, where N is so large that the quantum numbers nx, ny, nz can be treated as continuous variables.
With the vector , each quantum state corresponds to a point in 'n-space' with energy
With denoting the square of the usual Euclidean length .
The number of states with energy less than EF + E0 is equal to the number of states that lie within a sphere of radius in the region of n-space where nx, ny, nz are positive. In the ground state this number equals the number of fermions in the system:
The factor of two expresses the two spin states, and the factor of 1/8 expresses the fraction of the sphere that lies in the region where all n are positive.
The Fermi energy is given by
E_F = \frac{\hbar^2}{2mL^2}\left(3\pi^2 N\right)^{2/3}
which results in a relationship between the Fermi energy and the number of particles per volume (when L² is replaced with V^{2/3}):
E_F = \frac{\hbar^2}{2m}\left(\frac{3\pi^2 N}{V}\right)^{2/3}
This is also the energy of the highest-energy particle (the th particle), above the zero point energy . The th particle has an energy of
The total energy of a Fermi sphere of fermions (which occupy all energy states within the Fermi sphere) is given by:
E_T = N E_0 + \frac{3}{5} N E_F
Therefore, the average energy per particle is given by:
\bar{E} = E_0 + \frac{3}{5} E_F
Density of states
For the 3D uniform Fermi gas, with fermions of spin-, the number of particles as a function of the energy is obtained by substituting the Fermi energy by a variable energy :
from which the density of states (number of energy states per energy per volume) can be obtained. It can be calculated by differentiating the number of particles with respect to the energy:
This result provides an alternative way to calculate the total energy of a Fermi sphere of fermions (which occupy all energy states within the Fermi sphere):
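As a quick numerical check of the density-of-states route, the sketch below integrates g(E)·E up to the Fermi energy and compares the result with the closed-form value (3/5)·n·E_F per unit volume. The electron density and the use of a simple trapezoidal sum are illustrative assumptions, not values taken from the article.
```python
import numpy as np

hbar = 1.054_571_8e-34      # J s
m_e  = 9.109_383_7e-31      # kg
n    = 8.5e28               # electrons per m^3 (roughly copper, illustrative)

E_F = (hbar**2 / (2 * m_e)) * (3 * np.pi**2 * n) ** (2 / 3)

def g(E):
    """3D density of states per unit volume for spin-1/2 fermions."""
    return (1 / (2 * np.pi**2)) * (2 * m_e / hbar**2) ** 1.5 * np.sqrt(E)

E = np.linspace(0.0, E_F, 100_000)
integrand = g(E) * E
dE = E[1] - E[0]
u_numeric = (integrand[:-1] + integrand[1:]).sum() * dE / 2   # trapezoidal rule
u_closed  = 0.6 * n * E_F                                      # (3/5) n E_F

print(u_numeric / u_closed)   # ~1.0
```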
Thermodynamic quantities
Degeneracy pressure
By using the first law of thermodynamics, this internal energy can be expressed as a pressure, that is
P = -\frac{\partial E_T}{\partial V} = \frac{2}{5}\frac{N}{V}E_F
where this expression remains valid for temperatures much smaller than the Fermi temperature. This pressure is known as the degeneracy pressure. In this sense, systems composed of fermions are also referred as degenerate matter.
Standard stars avoid collapse by balancing thermal pressure (plasma and radiation) against gravitational forces. At the end of the star lifetime, when thermal processes are weaker, some stars may become white dwarfs, which are only sustained against gravity by electron degeneracy pressure. Using the Fermi gas as a model, it is possible to calculate the Chandrasekhar limit, i.e. the maximum mass any star may acquire (without significant thermally generated pressure) before collapsing into a black hole or a neutron star. The latter, is a star mainly composed of neutrons, where the collapse is also avoided by neutron degeneracy pressure.
For the case of metals, the electron degeneracy pressure contributes to the compressibility or bulk modulus of the material.
Chemical potential
Assuming that the concentration of fermions does not change with temperature, then the total chemical potential μ (Fermi level) of the three-dimensional ideal Fermi gas is related to the zero temperature Fermi energy EF by a Sommerfeld expansion (assuming k_B T ≪ E_F):
\mu = E_0 + E_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_B T}{E_F}\right)^2 - \cdots\right]
where T is the temperature.
Hence, the internal chemical potential, μ−E₀, is approximately equal to the Fermi energy at temperatures that are much lower than the characteristic Fermi temperature T_F. This characteristic temperature is on the order of 10⁵ K for a metal, hence at room temperature (300 K), the Fermi energy and internal chemical potential are essentially equivalent.
Typical values
Metals
Under the free electron model, the electrons in a metal can be considered to form a uniform Fermi gas. The number density of conduction electrons in metals ranges between approximately 10²⁸ and 10²⁹ electrons per m³, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order:
E_F = \frac{\hbar^2}{2m_e}\left(3\pi^2 \times 10^{28\ldots29}\ \mathrm{m^{-3}}\right)^{2/3} \approx 2\ \text{to}\ 10\ \mathrm{eV}
where m_e is the electron rest mass. This Fermi energy corresponds to a Fermi temperature of the order of 10⁵ kelvins, much higher than the temperature of the Sun's surface. Any metal will boil before reaching this temperature under atmospheric pressure. Thus for any practical purpose, a metal can be considered as a Fermi gas at zero temperature as a first approximation (normal temperatures are small compared to T_F).
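A back-of-the-envelope evaluation of these orders of magnitude can be done directly from the free-electron formula; the two electron densities used below are assumed, typical metallic values rather than measured ones.
```python
import numpy as np

hbar = 1.054_571_8e-34   # J s
m_e  = 9.109_383_7e-31   # kg
k_B  = 1.380_649e-23     # J/K
eV   = 1.602_176_6e-19   # J

for n in (1e28, 1e29):   # conduction-electron densities in m^-3
    E_F = (hbar**2 / (2 * m_e)) * (3 * np.pi**2 * n) ** (2 / 3)
    print(f"n = {n:.0e} m^-3: E_F = {E_F / eV:.1f} eV, T_F = {E_F / k_B:.2e} K")
```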
White dwarfs
Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. The number density of electrons in a white dwarf is of the order of 10³⁶ electrons/m³. This means their Fermi energy is:
E_F = \frac{\hbar^2}{2m_e}\left(3\pi^2 \times 10^{36}\ \mathrm{m^{-3}}\right)^{2/3} \approx 0.3\ \mathrm{MeV}
Nucleus
Another typical example is that of the particles in a nucleus of an atom. The radius of the nucleus is roughly:
R = \left(1.25 \times 10^{-15}\ \mathrm{m}\right) \times A^{1/3}
where A is the number of nucleons.
The number density of nucleons in a nucleus is therefore:
This density must be divided by two, because the Fermi energy only applies to fermions of the same type. The presence of neutrons does not affect the Fermi energy of the protons in the nucleus, and vice versa.
The Fermi energy of a nucleus is approximately:
E_F = \frac{\hbar^2}{2m_p}\left(\frac{3\pi^2 n}{2}\right)^{2/3} \approx 30\ \mathrm{MeV}
where mp is the proton mass.
The radius of the nucleus admits deviations around the value mentioned above, so a typical value for the Fermi energy is usually given as 38 MeV.
Arbitrary-dimensional uniform gas
Density of states
Using a volume integral on dimensions, the density of states is:
The Fermi energy is obtained by looking for the number density of particles:
To get:
where is the corresponding d-dimensional volume, is the dimension for the internal Hilbert space. For the case of spin-, every energy is twice-degenerate, so in this case .
A particular result is obtained for , where the density of states becomes a constant (does not depend on the energy):
Fermi gas in harmonic trap
The harmonic trap potential:
is a model system with many applications in modern physics. The density of states (or more accurately, the degree of degeneracy) for a given spin species is:
where is the harmonic oscillation frequency.
The Fermi energy for a given spin species is:
Related Fermi quantities
Related to the Fermi energy, a few useful quantities also occur often in modern literature.
The Fermi temperature is defined as T_F = E_F/k_B, where k_B is the Boltzmann constant. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature. Other quantities defined in this context are the Fermi momentum p_F = \sqrt{2 m E_F} and the Fermi velocity v_F = p_F/m, which are the momentum and group velocity, respectively, of a fermion at the Fermi surface. The Fermi momentum can also be described as p_F = \hbar k_F, where k_F, the radius of the Fermi sphere, is called the Fermi wave vector.
Note that these quantities are not well-defined in cases where the Fermi surface is non-spherical.
Treatment at finite temperature
Grand canonical ensemble
Most of the calculations above are exact at zero temperature, yet remain as good approximations for temperatures lower than the Fermi temperature. For other thermodynamics variables it is necessary to write a thermodynamic potential. For an ensemble of identical fermions, the best way to derive a potential is from the grand canonical ensemble with fixed temperature, volume and chemical potential μ. The reason is due to Pauli exclusion principle, as the occupation numbers of each quantum state are given by either 1 or 0 (either there is an electron occupying the state or not), so the (grand) partition function can be written as
where , indexes the ensembles of all possible microstates that give the same total energy and number of particles , is the single particle energy of the state (it counts twice if the energy of the state is degenerate) and , its occupancy. Thus the grand potential is written as
The same result can be obtained in the canonical and microcanonical ensemble, as the result of every ensemble must give the same value at thermodynamic limit . The grand canonical ensemble is recommended here as it avoids the use of combinatorics and factorials.
As explored in previous sections, in the macroscopic limit we may use a continuous approximation (Thomas–Fermi approximation) to convert this sum to an integral:
where is the total density of states.
Relation to Fermi–Dirac distribution
The grand potential is related to the number of particles at finite temperature in the following way
where the derivative is taken at fixed temperature and volume, and it appears
also known as the Fermi–Dirac distribution.
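A minimal numerical sketch of this occupancy function is given below; the chemical potential, sample energies, and temperatures are arbitrary illustrative values, and the helper function name is not from any standard library.
```python
import numpy as np

def fermi_dirac(E, mu, T, k_B=8.617_333e-5):
    """Mean occupancy of a single-particle state of energy E (eV); T in kelvin."""
    if T == 0:
        return np.where(E < mu, 1.0, 0.0)   # step function at zero temperature
    return 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)

mu = 5.0                                   # eV, illustrative Fermi level
E = np.array([4.8, 5.0, 5.2])              # eV, states below, at, and above mu
for T in (0, 300, 3000):
    print(T, fermi_dirac(E, mu, T))        # occupancy smears out as T rises
```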
Similarly, the total internal energy is
Exact solution for power-law density-of-states
Many systems of interest have a total density of states with the power-law form:
for some values of , , . The results of preceding sections generalize to dimensions, giving a power law with:
for non-relativistic particles in a -dimensional box,
for non-relativistic particles in a -dimensional harmonic potential well,
for hyper-relativistic particles in a -dimensional box.
For such a power-law density of states, the grand potential integral evaluates exactly to:
where is the complete Fermi–Dirac integral (related to the polylogarithm). From this grand potential and its derivatives, all thermodynamic quantities of interest can be recovered.
Extensions to the model
Relativistic Fermi gas
The article has only treated the case in which particles have a parabolic relation between energy and momentum, as is the case in non-relativistic mechanics. For particles with energies close to their respective rest mass, the equations of special relativity are applicable. Where single-particle energy is given by:
For this system, the Fermi energy is given by:
where the equality is only valid in the ultrarelativistic limit, and
The relativistic Fermi gas model is also used for the description of massive white dwarfs which are close to the Chandrasekhar limit. For the ultrarelativistic case, the degeneracy pressure is proportional to the 4/3 power of the density.
Fermi liquid
In 1956, Lev Landau developed the Fermi liquid theory, where he treated the case of a Fermi liquid, i.e., a system with repulsive, not necessarily small, interactions between fermions. The theory shows that the thermodynamic properties of an ideal Fermi gas and a Fermi liquid do not differ that much. It can be shown that the Fermi liquid is equivalent to a Fermi gas composed of collective excitations or quasiparticles, each with a different effective mass and magnetic moment.
See also
Bose gas
Fermionic condensate
Gas in a box
Jellium
Two-dimensional electron gas
References
Further reading
Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976)
Charles Kittel, Introduction to Solid State Physics, 1st ed. 1953 – 8th ed. 2005,
Quantum models
Fermi–Dirac statistics
Ideal gas
Phases of matter | Fermi gas | [
"Physics",
"Chemistry"
] | 3,687 | [
"Thermodynamic systems",
"Phases of matter",
"Quantum mechanics",
"Physical systems",
"Quantum models",
"Ideal gas",
"Matter"
] |
182,079 | https://en.wikipedia.org/wiki/Hydrogen%20carrier | A hydrogen carrier is an organic macromolecule that transports atoms of hydrogen from one place to another inside a cell or from cell to cell for use in various metabolical processes. Examples include NADPH, NADH, and FADH. The main role of these is to transport hydrogen atom to electron transport chain which will change ADP to ATP by adding one phosphate during metabolic processes (e.g. photosynthesis and respiration). Hydrogen carrier participates in an oxidation-reduction reaction by getting reduced due to the acceptance of a Hydrogen. The enzyme used in Glycolysis, Dehydrogenase is used to attach the hydrogen to one of the hydrogen carrier.
See also
Electron carrier
Light reactions
Photosynthesis
Cellular respiration
References
External links
http://www.biology-online.org/1/3_respiration.htm
https://web.archive.org/web/20100727214925/http://student.ccbcmd.edu/~gkaiser/biotutorials/energy/oxphos.html
Hydrogen biology | Hydrogen carrier | [
"Chemistry",
"Biology"
] | 225 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
182,146 | https://en.wikipedia.org/wiki/Orbital%20mechanics | Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Orbital mechanics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
History
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, formulating his laws of planetary motion by 1605 and publishing them in 1609 and 1619. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
Practical techniques
Rules of thumb
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and actually slow down relative to the leading craft, missing the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings in multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
Laws of astrodynamics
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
Escape velocity
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by
\epsilon_p = -\frac{GM}{r}
where G is the gravitational constant and r is the distance between the two bodies;
while the specific kinetic energy of an object is given by
\epsilon_k = \frac{v^2}{2}
where v is its velocity;
and so the total specific orbital energy is
\epsilon = \epsilon_k + \epsilon_p = \frac{v^2}{2} - \frac{GM}{r}
Since energy is conserved, ε cannot depend on the distance, r, from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach an infinite distance only if this quantity is nonnegative, which implies
v \geq \sqrt{\frac{2GM}{r}}
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
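The two speeds quoted above can be recovered with a few lines of Python; the gravitational parameters, Earth radius, and 1 AU distance below are standard reference values inserted here purely for illustration.
```python
import math

def escape_velocity(mu, r):
    """Escape speed (m/s) at distance r (m) from a body with GM = mu (m^3/s^2)."""
    return math.sqrt(2 * mu / r)

mu_earth = 3.986_004e14       # m^3/s^2
mu_sun   = 1.327_124e20       # m^3/s^2
r_earth  = 6.371e6            # mean Earth radius, m
au       = 1.495_978_707e11   # m

print(escape_velocity(mu_earth, r_earth) / 1e3)   # ~11.2 km/s from Earth's surface
print(escape_velocity(mu_sun, au) / 1e3)          # ~42.1 km/s from the Sun at 1 AU
```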
Formulae for free orbits
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:
r = \frac{p}{1 + e\cos\theta} = \frac{h^2}{\mu}\,\frac{1}{1 + e\cos\theta}
μ = G(m₁ + m₂) is called the gravitational parameter. m₁ and m₂ are the masses of objects 1 and 2, and h is the specific angular momentum of object 2 with respect to object 1. The parameter θ is known as the true anomaly, p is the semi-latus rectum, while e is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
Circular orbits
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity.
So,
\frac{v^2}{r} = \frac{GM}{r^2}
Therefore,
v = \sqrt{\frac{GM}{r}}
where G is the gravitational constant, equal to
6.6743 × 10⁻¹¹ m³/(kg·s²)
To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.
The quantity is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.
Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2:
v_e = \sqrt{2}\sqrt{\frac{GM}{r}} = \sqrt{\frac{2GM}{r}}
To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore,
\frac{1}{2}v_e^2 = \frac{GM}{r}
Elliptical orbits
If , then the denominator of the equation of free orbits varies with the true anomaly , but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis , which is given by:
The maximum value is reached when . This point is called the apoapsis, and its radial coordinate, denoted , is
Let be the distance measured along the apse line from periapsis to apoapsis , as illustrated in the equation below:
Substituting the equations above, we get:
a is the semimajor axis of the ellipse. Solving for , and substituting the result in the conic section curve formula above, we get:
Orbital period
Under standard assumptions the orbital period (T) of a body traveling along an elliptic orbit can be computed as:
T = 2\pi\sqrt{\frac{a^3}{\mu}}
where:
μ is the standard gravitational parameter,
a is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis (r = a),
For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law).
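As a numerical illustration of the relation T = 2π√(a³/μ), the sketch below evaluates the period for two assumed Earth orbits; the gravitational parameter and altitudes are standard illustrative values, not mission data.
```python
import math

MU_EARTH = 3.986_004e14   # m^3/s^2
R_EARTH  = 6.378e6        # equatorial radius, m

def orbital_period(a, mu=MU_EARTH):
    """Period in seconds of an elliptic orbit with semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a**3 / mu)

# ~400 km circular orbit (ISS-like) and a geostationary-radius orbit.
print(orbital_period(R_EARTH + 400e3) / 60)   # ~92.5 minutes
print(orbital_period(42.164e6) / 3600)        # ~23.93 hours
```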
Velocity
Under standard assumptions the orbital speed (v) of a body traveling along an elliptic orbit can be computed from the Vis-viva equation as:
v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)}
where:
μ is the standard gravitational parameter,
r is the distance between the orbiting bodies,
a is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory has the same form, with the semi-major axis taken as negative: v = \sqrt{\mu\left(\frac{2}{r} + \frac{1}{|a|}\right)}.
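The vis-viva relation is easy to evaluate at the two apsides of an elliptical orbit. The sketch below uses an assumed geostationary transfer orbit (perigee radius ≈ 6,678 km, apogee radius ≈ 42,164 km from Earth's centre); the numbers are illustrative only.
```python
import math

MU_EARTH = 3.986_004e14   # m^3/s^2

def vis_viva(r, a, mu=MU_EARTH):
    """Orbital speed (m/s) at radius r (m) on an orbit of semi-major axis a (m)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r_perigee = 6.678e6
r_apogee  = 42.164e6
a = 0.5 * (r_perigee + r_apogee)   # semi-major axis of the ellipse

print(vis_viva(r_perigee, a) / 1e3)   # ~10.2 km/s at perigee
print(vis_viva(r_apogee, a) / 1e3)    # ~1.6 km/s at apogee
```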
Energy
Under standard assumptions, the specific orbital energy (ε) of an elliptic orbit is negative and the orbital energy conservation equation (the Vis-viva equation) for this orbit can take the form:
\frac{v^2}{2} - \frac{\mu}{r} = -\frac{\mu}{2a} = \epsilon
where:
v is the speed of the orbiting body,
r is the distance of the orbiting body from the center of mass of the central body,
a is the semi-major axis,
μ is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:
the time-average of the specific potential energy is equal to
the time-average of is
the time-average of the specific kinetic energy is equal to
Parabolic orbits
If the eccentricity equals 1, then the orbit equation becomes:
where:
is the radial distance of the orbiting body from the mass center of the central body,
is specific angular momentum of the orbiting body,
is the true anomaly of the orbiting body,
is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e=1 is zero, and is given by:
\epsilon = \frac{v^2}{2} - \frac{\mu}{r} = 0
where:
is the speed of the orbiting body.
In other words, the speed anywhere on a parabolic path is:
v = \sqrt{\frac{2\mu}{r}}
Hyperbolic orbits
If e > 1, the orbit formula,
r = \frac{h^2}{\mu}\,\frac{1}{1 + e\cos\theta}
describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when cos θ = −1/e; we denote this value of true anomaly
\theta_\infty = \cos^{-1}\left(-\frac{1}{e}\right)
since the radial distance approaches infinity as the true anomaly approaches θ∞, known as the true anomaly of the asymptote. Observe that θ∞ lies between 90° and 180°. From the trigonometric identity it follows that:
\sin\theta_\infty = \frac{\sqrt{e^2 - 1}}{e}
Energy
Under standard assumptions, the specific orbital energy (ε) of a hyperbolic trajectory is greater than zero and the orbital energy conservation equation for this kind of trajectory takes the form:
\epsilon = \frac{v^2}{2} - \frac{\mu}{r} = -\frac{\mu}{2a}
where:
is the orbital velocity of orbiting body,
is the radial distance of orbiting body from central body,
is the negative semi-major axis of the orbit's hyperbola,
is standard gravitational parameter.
Hyperbolic excess velocity
Under standard assumptions the body traveling along a hyperbolic trajectory will attain at infinity an orbital velocity called hyperbolic excess velocity (v∞) that can be computed as:
v_\infty = \sqrt{\frac{\mu}{-a}}
where:
is standard gravitational parameter,
is the negative semi-major axis of orbit's hyperbola.
The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by
2\epsilon = C_3 = v_\infty^2
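A short sketch relating the (negative) semi-major axis of a departure hyperbola to the hyperbolic excess speed and characteristic energy; the gravitational parameter and the sample value of a are illustrative assumptions.
```python
import math

MU_EARTH = 3.986_004e14   # m^3/s^2

def hyperbolic_excess_speed(a, mu=MU_EARTH):
    """v_infinity (m/s) for a hyperbolic orbit with (negative) semi-major axis a (m)."""
    if a >= 0:
        raise ValueError("a hyperbolic orbit has a negative semi-major axis")
    return math.sqrt(-mu / a)

a = -20.0e6                        # m, illustrative departure hyperbola
v_inf = hyperbolic_excess_speed(a)
C3 = v_inf ** 2                    # characteristic energy, m^2/s^2
print(v_inf / 1e3, C3 / 1e6)       # ~4.5 km/s and ~20 km^2/s^2
```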
Calculating trajectories
Kepler's equation
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:
M = E - e\,\sin E
where M is the mean anomaly, E is the eccentric anomaly, and e is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of from periapsis is broken into two steps:
Compute the eccentric anomaly from true anomaly
Compute the time-of-flight from the eccentric anomaly
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in , meaning it cannot be solved for algebraically. Kepler's equation can be solved for analytically by inversion.
A solution of Kepler's equation, valid for all real values of is:
Evaluating this yields:
Alternatively, Kepler's Equation can be solved numerically. First one must guess a value of and solve for time-of-flight; then adjust as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
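A minimal Newton iteration for Kepler's equation might look like the following sketch; the tolerance, iteration cap, starting guesses, and the sample orbit are arbitrary illustrative choices rather than a standard implementation.
```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E (radians) by Newton's method."""
    E = M if e < 0.8 else math.pi          # common starting guesses
    for _ in range(max_iter):
        f  = E - e * math.sin(E) - M       # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)         # its derivative with respect to E
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Newton iteration did not converge")

E = eccentric_anomaly(M=1.0, e=0.3)
print(E, E - 0.3 * math.sin(E))            # the second value recovers M = 1.0
```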
The main difficulty with this approach is that it can take prohibitively long to converge for the extreme elliptical orbits. For near-parabolic orbits, eccentricity e is nearly 1, and substituting e = 1 into the formula for mean anomaly, M = E − e sin E, we find ourselves subtracting two nearly equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
Conic orbits
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight are far more complicated, especially for near-circular and hyperbolic orbits.
The patched conic approximation
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet and in most cases Hohmann severely overestimates delta-v, and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and to model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travels, in what is known today as a gravity assist.
The size of the "neighborhoods" (or spheres of influence) varies with the radius r_SOI:
r_{SOI} = a_p\left(\frac{m_p}{m_s}\right)^{2/5}
where a_p is the semimajor axis of the planet's orbit relative to the Sun; m_p and m_s are the masses of the planet and Sun, respectively.
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
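Under the r ≈ a(m/M)^{2/5} scaling above, the Earth's sphere-of-influence radius comes out near 0.9 million km; the masses and orbital radius below are standard values inserted for illustration, and the function name is an arbitrary choice.
```python
M_SUN   = 1.989e30            # kg
M_EARTH = 5.972e24            # kg
A_EARTH = 1.495_978_707e11    # semi-major axis of Earth's orbit, m

def sphere_of_influence(a, m_planet, m_sun=M_SUN):
    """Approximate radius (m) of a planet's sphere of influence."""
    return a * (m_planet / m_sun) ** (2.0 / 5.0)

print(sphere_of_influence(A_EARTH, M_EARTH) / 1e9)   # ~0.92 million km
```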
The universal variable formulation
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
Perturbations
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors and at a given epoch . In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as and the velocity element as , indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions and .
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
Orbital maneuver
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
Orbital transfer
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires a minimal delta-v.
A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of orbits is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust.
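For the coplanar circular-to-circular case discussed above, the Hohmann delta-v budget follows directly from the vis-viva equation. The sketch below estimates a transfer from a ~300 km low Earth orbit to geostationary radius; the radii and gravitational parameter are illustrative, and plane-change costs are ignored.
```python
import math

MU_EARTH = 3.986_004e14   # m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Return (dv1, dv2) in m/s for a Hohmann transfer between circular orbits."""
    a_t = 0.5 * (r1 + r2)                           # transfer-ellipse semi-major axis
    v1  = math.sqrt(mu / r1)                        # circular speed at r1
    v2  = math.sqrt(mu / r2)                        # circular speed at r2
    dv1 = math.sqrt(mu * (2 / r1 - 1 / a_t)) - v1   # burn to enter the ellipse
    dv2 = v2 - math.sqrt(mu * (2 / r2 - 1 / a_t))   # burn to circularise at r2
    return dv1, dv2

dv1, dv2 = hohmann_delta_v(6.678e6, 42.164e6)
print(dv1, dv2, dv1 + dv2)   # roughly 2430 + 1470 ≈ 3900 m/s in total
```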
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
Gravity assist and the Oberth effect
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's Third Law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
The Oberth effect can be employed, particularly during a gravity assist operation. This effect is that use of a propulsion system works better at high speeds, and hence course changes are best done when close to a gravitating body; this can multiply the effective delta-v.
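A sketch of the bookkeeping behind the Oberth effect (illustrative numbers only): for an impulsive prograde burn, the gain in specific orbital energy grows with the speed at which the burn is applied.

```python
def specific_energy_gain(speed, delta_v):
    """Change in specific orbital energy (per unit mass) from a prograde impulsive burn."""
    return speed * delta_v + 0.5 * delta_v ** 2

# The same 1 km/s of delta-v buys far more energy deep in a gravity well, where speed is high.
print(specific_energy_gain(11.0, 1.0))  # 11.5 km^2/s^2
print(specific_energy_gain(3.0, 1.0))   # 3.5 km^2/s^2
```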
Interplanetary Transport Network and fuzzy orbits
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's Trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is they can be exceedingly slow, taking many years. In addition launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth–Sun L1 point and returned using very little propellant.
See also
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
References
Further reading
Many of the options, procedures, and supporting theory are covered in standard works such as:
External links
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph | Orbital mechanics | [
"Engineering"
] | 5,270 | [
"Astrodynamics",
"Aerospace engineering"
] |
182,208 | https://en.wikipedia.org/wiki/Heat%20treating | Heat treating (or heat treatment) is a group of industrial, thermal and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve the desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case hardening, precipitation strengthening, tempering, carburizing, normalizing and quenching. Although the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding.
Physical processes
Metallic materials consist of a microstructure of small crystals called "grains" or crystallites. The nature of the grains (i.e. grain size and composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within the microstructure. Heat treating is often used to alter the mechanical properties of a metallic alloy, manipulating properties such as the hardness, strength, toughness, ductility, and elasticity.
There are two mechanisms that may change an alloy's properties during heat treatment: the formation of martensite causes the crystals to deform intrinsically, and the diffusion mechanism causes changes in the homogeneity of the alloy.
The crystal structure consists of atoms that are grouped in a very specific arrangement, called a lattice. In most elements, this order will rearrange itself, depending on conditions like temperature and pressure. This rearrangement, called allotropy or polymorphism, may occur several times, at many different temperatures for a particular metal. In alloys, this rearrangement may cause an element that will not normally dissolve into the base metal to suddenly become soluble, while a reversal of the allotropy will make the elements either partially or completely insoluble.
When in the soluble state, the process of diffusion causes the atoms of the dissolved element to spread out, attempting to form a homogeneous distribution within the crystals of the base metal. If the alloy is cooled to an insoluble state, the atoms of the dissolved constituents (solutes) may migrate out of the solution. This type of diffusion, called precipitation, leads to nucleation, where the migrating atoms group together at the grain-boundaries. This forms a microstructure generally consisting of two or more distinct phases. For instance, steel that has been heated above the austenizing temperature (red to orange-hot, typically around 815 to 900 °C depending on carbon content), and then cooled slowly, forms a laminated structure composed of alternating layers of ferrite and cementite, becoming soft pearlite. After heating the steel to the austenite phase and then quenching it in water, the microstructure will be in the martensitic phase, because the rapid quench leaves no time for diffusional transformation of the austenite. Some pearlite or ferrite may be present if the quench did not rapidly cool off all the steel.
Unlike iron-based alloys, most heat-treatable alloys do not experience a ferrite transformation. In these alloys, the nucleation at the grain-boundaries often reinforces the structure of the crystal matrix. These metals harden by precipitation. Typically a slow process, depending on temperature, this is often referred to as "age hardening".
Many metals and non-metals exhibit a martensite transformation when cooled quickly (with external media like oil, polymer, water, etc.). When a metal is cooled very quickly, the insoluble atoms may not be able to migrate out of the solution in time. This is called a "diffusionless transformation." When the crystal matrix changes to its low-temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms prevent the crystal matrix from completely changing into its low-temperature allotrope, creating shearing stresses within the lattice. When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum, the alloy becomes softer.
Effects of composition
The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to be eutectoid. However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid solution contains more.
Eutectoid alloys
A eutectoid (eutectic-like) alloy is similar in behavior to a eutectic alloy. A eutectic alloy is characterized by having a single melting point. This melting point is lower than that of any of the constituents, and no change in the mixture will lower the melting point any further. When a molten eutectic alloy is cooled, all of the constituents will crystallize into their respective phases at the same temperature.
A eutectoid alloy is similar, but the phase change occurs, not from a liquid, but from a solid solution. Upon cooling a eutectoid alloy from the solution temperature, the constituents will separate into different crystal phases, forming a single microstructure. A eutectoid steel, for example, contains 0.77% carbon. Upon cooling slowly, the solution of iron and carbon (a single phase called austenite) will separate into platelets of the phases ferrite and cementite. This forms a layered microstructure called pearlite.
Since pearlite is harder than iron, the degree of softness achievable is typically limited to that produced by the pearlite. Similarly, the hardenability is limited by the continuous martensitic microstructure formed when cooled very fast.
Hypoeutectoid alloys
A hypoeutectic alloy has two separate melting points. Both are above the eutectic melting point for the system but are below the melting points of any constituent forming the system. Between these two melting points, the alloy will exist as part solid and part liquid. The constituent with the higher melting point will solidify first. When completely solidified, a hypoeutectic alloy will often be in a solid solution.
Similarly, a hypoeutectoid alloy has two critical temperatures, called "arrests". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the "pro eutectoid phase". These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to "crystallize-out", becoming the pro eutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.
For example, a hypoeutectoid steel contains less than 0.77% carbon. Upon cooling a hypoeutectoid steel from the austenite transformation temperature, small islands of proeutectoid-ferrite will form. These will continue to grow and the carbon will recede until the eutectoid concentration in the rest of the steel is reached. This eutectoid mixture will then crystallize as a microstructure of pearlite. Since ferrite is softer than pearlite, the two microstructures combine to increase the ductility of the alloy. Consequently, the hardenability of the alloy is lowered.
Hypereutectoid alloys
A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize-out first, forming the pro-eutectoid. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
A hypereutectoid steel contains more than 0.77% carbon. When slowly cooling hypereutectoid steel, the cementite will begin to crystallize first. When the remaining steel becomes eutectoid in composition, it will crystallize into pearlite. Since cementite is much harder than pearlite, the alloy has greater hardenability at a cost in ductility.
Effects of time and temperature
Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate.
With the exception of stress-relieving, tempering, and aging, most heat treatments begin by heating an alloy beyond a certain transformation, or arrest (A), temperature. This temperature is referred to as an "arrest" because at the A temperature the metal experiences a period of hysteresis. At this point, all of the heat energy is used to cause the crystal change, so the temperature stops rising for a short time (arrests) and then continues climbing once the change is complete. Therefore, the alloy must be heated above the critical temperature for a transformation to occur. The alloy will usually be held at this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Iron, for example, has four critical-temperatures, depending on carbon content. Pure iron in its alpha (room temperature) state changes to nonmagnetic gamma-iron at its A2 temperature, and weldable delta-iron at its A4 temperature. However, as carbon is added, becoming steel, the A2 temperature splits into the A3 temperature, also called the austenizing temperature (all phases become austenite, a solution of gamma iron and carbon) and its A1 temperature (austenite changes into pearlite upon cooling). Between these upper and lower temperatures the pro eutectoid phase forms upon cooling.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage.
The diffusion transformation is very time-dependent. Cooling a metal will usually suppress the precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound.
When austenite is cooled but kept above the martensite start temperature Ms so that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as "spheroidite". If cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form, with more complete bainite transformation occurring depending on the time held above martensite start Ms. Similarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time.
Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked. This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitation.
Types of heat treatment
Complex heat treating schedules, or "cycles", are often devised by metallurgists to optimize an alloy's mechanical properties. In the aerospace industry, a superalloy may undergo five or more different heat treating operations to develop the desired properties. This can lead to quality problems depending on the accuracy of the furnace's temperature controls and timer. These operations can usually be divided into several basic techniques.
Annealing
Annealing consists of heating a metal to a specific temperature and then cooling at a rate that will produce a refined microstructure, either fully or partially separating the constituents. The rate of cooling is generally slow. Annealing is most often used to soften a metal for cold working, to improve machinability, or to enhance properties like electrical conductivity.
In ferrous alloys, annealing is usually accomplished by heating the metal beyond the upper critical temperature and then cooling very slowly, resulting in the formation of pearlite. In both pure metals and many alloys that cannot be heat treated, annealing is used to remove the hardness caused by cold working. The metal is heated to a temperature where recrystallization can occur, thereby repairing the defects caused by plastic deformation. In these metals, the rate of cooling will usually have little effect. Most non-ferrous alloys that are heat-treatable are also annealed to relieve the hardness of cold working. These may be slowly cooled to allow full precipitation of the constituents and produce a refined microstructure.
Ferrous alloys are usually either "full annealed" or "process annealed". Full annealing requires very slow cooling rates, in order to form coarse pearlite. In process annealing, the cooling rate may be faster; up to, and including normalizing. The main goal of process annealing is to produce a uniform microstructure. Non-ferrous alloys are often subjected to a variety of annealing techniques, including "recrystallization annealing", "partial annealing", "full annealing", and "final annealing". Not all annealing techniques involve recrystallization, such as stress relieving.
Normalizing
Normalizing is a technique used to provide uniformity in grain size and composition (equiaxed crystals) throughout an alloy. The term is often used for ferrous alloys that have been austenitized and then cooled in the open air. Normalizing produces not only pearlite but also martensite and sometimes bainite, which gives a harder and stronger steel, but with less ductility for the same composition, than full annealing.
In the normalizing process the steel is heated to about 40 degrees Celsius above its upper critical temperature limit, held at this temperature for some time, and then cooled in air.
Stress relieving
Stress-relieving is a technique to remove or reduce the internal stresses created in metal. These stresses may be caused in a number of ways, ranging from cold working to non-uniform cooling. Stress-relieving is usually accomplished by heating a metal below the lower critical temperature and then cooling uniformly. Stress relieving is commonly used on items like air tanks, boilers and other pressure vessels, to remove a portion of the stresses created during the welding process.
Aging
Some metals are classified as precipitation hardening metals. When a precipitation hardening alloy is quenched, its alloying elements will be trapped in solution, resulting in a soft metal. Aging a "solutionized" metal will allow the alloying elements to diffuse through the microstructure and form intermetallic particles. These intermetallic particles will nucleate and fall out of the solution and act as a reinforcing phase, thereby increasing the strength of the alloy. Alloys may age "naturally", meaning that the precipitates form at room temperature, or they may age "artificially", when precipitates only form at elevated temperatures. In some applications, naturally aging alloys may be stored in a freezer to prevent hardening until after further operations - assembly of rivets, for example, may be easier with a softer part.
Examples of precipitation hardening alloys include 2000 series, 6000 series, and 7000 series aluminium alloys, as well as some superalloys and some stainless steels. Steels that harden by aging are typically referred to as maraging steels, a contraction of "martensite aging".
Quenching
Quenching is a process of cooling a metal at a rapid rate. This is most often done to produce a martensite transformation. In ferrous alloys, this will often produce a harder metal, while non-ferrous alloys will usually become softer than normal.
To harden by quenching, a metal (usually steel or cast iron) must be heated above the upper critical temperature (Steel: above 815~900 degrees Celsius) and then quickly cooled. Depending on the alloy and other considerations (such as concern for maximum hardness vs. cracking and distortion), cooling may be done with forced air or other gases, (such as nitrogen). Liquids may be used, due to their better thermal conductivity, such as oil, water, a polymer dissolved in water, or a brine. Upon being rapidly cooled, a portion of austenite (dependent on alloy composition) will transform to martensite, a hard, brittle crystalline structure. The quenched hardness of a metal depends on its chemical composition and quenching method. Cooling speeds, from fastest to slowest, go from brine, polymer (i.e. mixtures of water + glycol polymers), freshwater, oil, and forced air. However, quenching certain steel too fast can result in cracking, which is why high-tensile steels such as AISI 4140 should be quenched in oil, tool steels such as ISO 1.2767 or H13 hot work tool steel should be quenched in forced air, and low alloy or medium-tensile steels such as XK1320 or AISI 1040 should be quenched in brine.
Some Beta titanium based alloys have also shown similar trends of increased strength through rapid cooling. However, most non-ferrous metals, like alloys of copper, aluminum, or nickel, and some high alloy steels such as austenitic stainless steel (304, 316), produce an opposite effect when these are quenched: they soften. Austenitic stainless steels must be quenched to become fully corrosion resistant, as they work-harden significantly.
Tempering
Untempered martensitic steel, while very hard, is too brittle to be useful for most applications. A method for alleviating this problem is called tempering. Most applications require that quenched parts be tempered. Tempering consists of heating steel below the lower critical temperature, (often from 400˚F to 1105˚F or 205˚C to 595˚C, depending on the desired results), to impart some toughness. Higher tempering temperatures (maybe up to 1,300˚F or 700˚C, depending on the alloy and application) are sometimes used to impart further ductility, although some yield strength is lost.
Tempering may also be performed on normalized steels. Other methods of tempering consist of quenching to a specific temperature, which is above the martensite start temperature, and then holding it there until pure bainite can form or internal stresses can be relieved. These include austempering and martempering.
Tempering colors
Steel that has been freshly ground or polished will form oxide layers when heated. At a very specific temperature, the iron oxide will form a layer with a very specific thickness, causing thin-film interference. This causes colors to appear on the surface of the steel. As the temperature is increased, the iron oxide layer grows in thickness, changing the color. These colors, called tempering colors, have been used for centuries to gauge the temperature of the metal.
350˚F (176˚C), light yellowish
400˚F (204˚C), light-straw
440˚F (226˚C), dark-straw
500˚F (260˚C), brown
540˚F (282˚C), purple
590˚F (310˚C), deep blue
640˚F (337˚C), light blue
The tempering colors can be used to judge the final properties of the tempered steel. Very hard tools are often tempered in the light to the dark straw range, whereas springs are often tempered to the blue. However, the final hardness of the tempered steel will vary, depending on the composition of the steel. Higher-carbon tool steel will remain much harder after tempering than spring steel (of slightly less carbon) when tempered at the same temperature. The oxide film will also increase in thickness over time. Therefore, steel that has been held at 400˚F for a very long time may turn brown or purple, even though the temperature never exceeded that needed to produce a light straw color. Other factors affecting the final outcome are oil films on the surface and the type of heat source used.
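The band temperatures listed above can be turned into a simple lookup; this is only a sketch built from the published bands and ignores the time, composition, and surface effects just described:

```python
# (temperature in deg F, color) pairs taken from the list above, in increasing order.
TEMPER_COLORS = [
    (350, "light yellowish"),
    (400, "light-straw"),
    (440, "dark-straw"),
    (500, "brown"),
    (540, "purple"),
    (590, "deep blue"),
    (640, "light blue"),
]

def temper_color(temp_f):
    """Return the last color band reached at or below temp_f, if any."""
    reached = [color for threshold, color in TEMPER_COLORS if temp_f >= threshold]
    return reached[-1] if reached else "no visible temper color"

print(temper_color(460))  # dark-straw
```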
Selective heat treating
Many heat treating methods have been developed to alter the properties of only a portion of an object. These tend to consist of either cooling different areas of an alloy at different rates, by quickly heating in a localized area and then quenching, by thermochemical diffusion, or by tempering different areas of an object at different temperatures, such as in differential tempering.
Differential hardening
Some techniques allow different areas of a single object to receive different heat treatments. This is called differential hardening. It is common in high quality knives and swords. The Chinese jian is one of the earliest known examples of this, and the Japanese katana may be the most widely known. The Nepalese Khukuri is another example. This technique uses an insulating layer, like layers of clay, to cover the areas that are to remain soft. The areas to be hardened are left exposed, allowing only certain parts of the steel to fully harden when quenched.
Flame hardening
Flame hardening is used to harden only a portion of the metal. Unlike differential hardening, where the entire piece is heated and then cooled at different rates, in flame hardening, only a portion of the metal is heated before quenching. This is usually easier than differential hardening, but often produces an extremely brittle zone between the heated metal and the unheated metal, as cooling at the edge of this heat-affected zone is extremely rapid.
Induction hardening
Induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly, using a non-contact method of induction heating. The alloy is then quenched, producing a martensite transformation at the surface while leaving the underlying metal unchanged. This creates a very hard, wear-resistant surface while maintaining the proper toughness in the majority of the object. Crankshaft journals are a good example of an induction hardened surface.
Case hardening
Case hardening is a thermochemical diffusion process in which an alloying element, most commonly carbon or nitrogen, diffuses into the surface of a monolithic metal. The resulting interstitial solid solution is harder than the base material, which improves wear resistance without sacrificing toughness.
Laser surface engineering is a surface treatment with high versatility and selectivity that can produce novel properties. Since the cooling rate in laser treatment is very high, metastable phases and even metallic glass can be obtained by this method.
Cold and cryogenic treating
Although quenching steel causes the austenite to transform into martensite, all of the austenite usually does not transform. Some austenite crystals will remain unchanged even after quenching below the martensite finish (Mf) temperature. Further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures. Cold treating generally consists of cooling the steel to around -115˚F (-81˚C), but does not eliminate all of the austenite. Cryogenic treating usually consists of cooling to much lower temperatures, often in the range of -315˚F (-192˚C), to transform most of the austenite into martensite.
Cold and cryogenic treatments are typically done immediately after quenching, before any tempering, and will increase the hardness, wear resistance, and reduce the internal stresses in the metal but, because it is really an extension of the quenching process, it may increase the chances of cracking during the procedure. The process is often used for tools, bearings, or other items that require good wear resistance. However, it is usually only effective in high-carbon or high-alloy steels in which more than 10% austenite is retained after quenching.
Decarburization
The heating of steel is sometimes used as a method to alter the carbon content. When steel is heated in an oxidizing environment, the oxygen combines with the iron to form an iron-oxide layer, which protects the steel from decarburization. When the steel turns to austenite, however, the oxygen combines with iron to form a slag, which provides no protection from decarburization. The formation of slag and scale actually increases decarburization, because the iron oxide keeps oxygen in contact with the decarburization zone even after the steel is moved into an oxygen-free environment, such as the coals of a forge. Thus, the carbon atoms begin combining with the surrounding scale and slag to form both carbon monoxide and carbon dioxide, which is released into the air.
Steel contains a relatively small percentage of carbon, which can migrate freely within the gamma iron. When austenitized steel is exposed to air for long periods of time, the carbon content in the steel can be lowered. This is the opposite from what happens when steel is heated in a reducing environment, in which carbon slowly diffuses further into the metal. In an oxidizing environment, the carbon can readily diffuse outwardly, so austenitized steel is very susceptible to decarburization. This is often used for cast steel, where a high carbon-content is needed for casting, but a lower carbon-content is desired in the finished product. It is often used on cast-irons to produce malleable cast iron, in a process called "white tempering". This tendency to decarburize is often a problem in other operations, such as blacksmithing, where it becomes more desirable to austenize the steel for the shortest amount of time possible to prevent too much decarburization.
Specification of heat treatment
Usually the end condition is specified instead of the process used in heat treatment.
Case hardening
Case hardening is specified by "hardness" and "case depth". The case depth can be specified in two ways: total case depth or effective case depth. The total case depth is the true depth of the case. For most alloys, the effective case depth is the depth of the case that has a hardness equivalent of HRC50; however, some alloys specify a different hardness (40-60 HRC) at effective case depth; this is checked on a Tukon microhardness tester. This value can be roughly approximated as 65% of the total case depth; however, the chemical composition and hardenability can affect this approximation. If neither type of case depth is specified the total case depth is assumed.
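As a tiny worked example of the 65% rule of thumb quoted above (the function name and numbers are hypothetical, and as noted the chemical composition and hardenability can shift the ratio):

```python
def approx_effective_case_depth(total_case_depth):
    """Rough estimate: effective case depth is about 65% of the total case depth."""
    return 0.65 * total_case_depth

print(approx_effective_case_depth(1.2))  # 1.2 mm total -> roughly 0.78 mm effective
```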
For case hardened parts the specification should have a tolerance of at least ±. If the part is to be ground after heat treatment, the case depth is assumed to be after grinding.
The Rockwell hardness scale used for the specification depends on the depth of the total case depth, as shown in the table below. Usually, hardness is measured on the Rockwell "C" scale, but the load used on the scale will penetrate through the case if the case is less than . Using Rockwell "C" for a thinner case will result in a false reading.
For cases that are too thin for a Rockwell scale to be used reliably, "file hard" is specified instead. File hard is approximately equivalent to 58 HRC.
When specifying the hardness either a range should be given or the minimum hardness specified. If a range is specified at least 5 points should be given.
Through hardening
Only hardness is listed for through hardening. It is usually in the form of HRC with at least a five-point range.
Annealing
The hardness for an annealing process is usually listed on the HRB scale as a maximum value. It is a process to refine grain size, improve strength, remove residual stress, and affect the electromagnetic properties.
Types of furnaces
Furnaces used for heat treatment can be split into two broad categories: batch furnaces and continuous furnaces. Batch furnaces are usually manually loaded and unloaded, whereas continuous furnaces have an automatic conveying system to provide a constant load into the furnace chamber.
Batch furnaces
Batch systems usually consist of an insulated chamber with a steel shell, a heating system, and an access door to the chamber.
Box-type furnace
Many basic box-type furnaces have been upgraded to a semi-continuous batch furnace with the addition of integrated quench tanks and slow-cool chambers. These upgraded furnaces are a very commonly used piece of equipment for heat-treating.
Car-type furnace
Also known as a " bogie hearth", the car furnace is an extremely large batch furnace. The floor is constructed as an insulated movable car that is moved in and out of the furnace for loading and unloading. The car is usually sealed using sand seals or solid seals when in position. Due to the difficulty in getting a sufficient seal, car furnaces are usually used for non-atmosphere processes.
Elevator-type furnace
Similar in type to the car furnace, except that the car and hearth are rolled into position beneath the furnace and raised by means of a motor-driven mechanism, elevator furnaces can handle large heavy loads and often eliminate the need for any external cranes and transfer mechanisms.
Bell-type furnace
Bell furnaces have removable covers called bells, which are lowered over the load and hearth by crane. An inner bell is placed over the hearth and sealed to supply a protective atmosphere. An outer bell is lowered to provide the heat supply.
Pit furnaces
Furnaces that are constructed in a pit and extend to floor level or slightly above are called pit furnaces. Workpieces can be suspended from fixtures, held in baskets, or placed on bases in the furnace. Pit furnaces are suited to heating long tubes, shafts, and rods by holding them in a vertical position. This manner of loading provides minimal distortion.
Salt bath furnaces
Salt baths are used in a wide variety of heat treatment processes including neutral hardening, liquid carburising, liquid nitriding, austempering, martempering and tempering.
Parts are loaded into a pot of molten salt where they are heated by conduction, giving a very readily available source of heat. The core temperature of a part rises in temperature at approximately the same rate as its surface in a salt bath.
Salt baths utilize a variety of salts for heat treatment, with cyanide salts being the most extensively used. Concerns about associated occupational health and safety, and about expensive waste management and disposal due to their environmental effects, have made the use of salt baths less attractive in recent years. Consequently, many salt baths are being replaced by more environmentally friendly fluidized bed furnaces.
Fluidised bed furnaces
A fluidised bed consists of a cylindrical retort made from high-temperature alloy, filled with sand-like aluminum oxide particulate. Gas (air or nitrogen) is bubbled through the oxide and the sand moves in such a way that it exhibits fluid-like behavior, hence the term fluidized. The solid-solid contact of the oxide gives very high thermal conductivity and excellent temperature uniformity throughout the furnace, comparable to those seen in a salt bath.
See also
Carbon steel
Carbonizing
Diffusion hardening
Induction hardening
Retrogression heat treatment
Nitriding
References
Further reading
International Heat Treatment Magazine in English
Metallurgy
Metalworking
Physical phenomena | Heat treating | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 6,933 | [
"Physical phenomena",
"Metallurgical processes",
"Metallurgy",
"Materials science",
"nan",
"Metal heat treatments"
] |
182,445 | https://en.wikipedia.org/wiki/Fermi%20liquid%20theory | Fermi liquid theory (also known as Landau's Fermi-liquid theory) is a theoretical model of interacting fermions that describes the normal state of the conduction electrons in most metals at sufficiently low temperatures. The theory describes the behavior of many-body systems of particles in which the interactions between particles may be strong. The phenomenological theory of Fermi liquids was introduced by the Soviet physicist Lev Davidovich Landau in 1956, and later developed by Alexei Abrikosov and Isaak Khalatnikov using diagrammatic perturbation theory. The theory explains why some of the properties of an interacting fermion system are very similar to those of the ideal Fermi gas (collection of non-interacting fermions), and why other properties differ.
Fermi liquid theory applies most notably to conduction electrons in normal (non-superconducting) metals, and to liquid helium-3. Liquid helium-3 is a Fermi liquid at low temperatures (but not low enough to be in its superfluid phase). An atom of helium-3 has two protons, one neutron and two electrons, giving an odd number of fermions, so the atom itself is a fermion. Fermi liquid theory also describes the low-temperature behavior of electrons in heavy fermion materials, which are metallic rare-earth alloys having partially filled f orbitals. The effective mass of electrons in these materials is much larger than the free-electron mass because of interactions with other electrons, so these systems are known as heavy Fermi liquids. Strontium ruthenate displays some key properties of Fermi liquids, despite being a strongly correlated material that is similar to high temperature superconductors such as the cuprates. The low-momentum interactions of nucleons (protons and neutrons) in atomic nuclei are also described by Fermi liquid theory.
Description
The key ideas behind Landau's theory are the notion of adiabaticity and the Pauli exclusion principle. Consider a non-interacting fermion system (a Fermi gas), and suppose we "turn on" the interaction slowly. Landau argued that in this situation, the ground state of the Fermi gas would adiabatically transform into the ground state of the interacting system.
By Pauli's exclusion principle, the ground state of a Fermi gas consists of fermions occupying all momentum states corresponding to momentum with all higher momentum states unoccupied. As the interaction is turned on, the spin, charge and momentum of the fermions corresponding to the occupied states remain unchanged, while their dynamical properties, such as their mass, magnetic moment etc. are renormalized to new values. Thus, there is a one-to-one correspondence between the elementary excitations of a Fermi gas system and a Fermi liquid system. In the context of Fermi liquids, these excitations are called "quasiparticles".
Landau quasiparticles are long-lived excitations with a lifetime τ that satisfies ħ/τ ≪ ε, where ε is the quasiparticle energy (measured from the Fermi energy). At finite temperature, ε is on the order of the thermal energy k_B T, and the condition for Landau quasiparticles can be reformulated as ħ/τ ≪ k_B T.
For this system, the many-body Green's function can be written (near its poles) in the form
G(p, ω) ≈ Z / (ω + μ − ε(p)),
where μ is the chemical potential, ε(p) is the energy corresponding to the given momentum state, and Z is called the quasiparticle residue or renormalisation constant, which is very characteristic of Fermi liquid theory. The spectral function for the system can be directly observed via angle-resolved photoemission spectroscopy (ARPES), and can be written (in the limit of low-lying excitations) in the form
A(k, ω) ≈ Z δ(ω − v_F δk),
where v_F is the Fermi velocity and δk is the deviation of the momentum from the Fermi surface.
Physically, we can say that a propagating fermion interacts with its surrounding in such a way that the net effect of the interactions is to make the fermion behave as a "dressed" fermion, altering its effective mass and other dynamical properties. These "dressed" fermions are what we think of as "quasiparticles".
Another important property of Fermi liquids is related to the scattering cross section for electrons. Suppose we have an electron with energy ε1 above the Fermi surface, and suppose it scatters with a particle in the Fermi sea with energy ε2. By Pauli's exclusion principle, both the particles after scattering have to lie above the Fermi surface, with energies ε3, ε4 > εF. Now, suppose the initial electron has energy very close to the Fermi surface, ε1 ≈ εF. Then, we have that ε2, ε3, ε4 also have to be very close to the Fermi surface. This reduces the phase space volume of the possible states after scattering, and hence, by Fermi's golden rule, the scattering cross section goes to zero. Thus we can say that the lifetime of particles at the Fermi surface goes to infinity.
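The phase-space argument above is usually summarized by a quadratic decay rate; a standard textbook form (quoted here only up to a material-dependent prefactor, not taken from this article) is:

```latex
\frac{\hbar}{\tau} \;\propto\; (\varepsilon - \varepsilon_F)^2 + (\pi k_B T)^2
```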
Similarities to Fermi gas
The Fermi liquid is qualitatively analogous to the non-interacting Fermi gas, in the following sense: The system's dynamics and thermodynamics at low excitation energies and temperatures may be described by substituting the non-interacting fermions with interacting quasiparticles, each of which carries the same spin, charge and momentum as the original particles. Physically these may be thought of as being particles whose motion is disturbed by the surrounding particles and which themselves perturb the particles in their vicinity. Each many-particle excited state of the interacting system may be described by listing all occupied momentum states, just as in the non-interacting system. As a consequence, quantities such as the heat capacity of the Fermi liquid behave qualitatively in the same way as in the Fermi gas (e.g. the heat capacity rises linearly with temperature).
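For instance, the low-temperature heat capacity keeps the familiar linear form; in a common textbook convention (not spelled out in the article) the interactions enter only through the renormalized effective mass m*, which sets the density of states at the Fermi energy:

```latex
C_V \;\simeq\; \gamma T, \qquad
\gamma = \frac{\pi^2}{3}\, k_B^{2}\, g(E_F), \qquad
g(E_F) \propto m^{*}
```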
Differences from Fermi gas
The following differences to the non-interacting Fermi gas arise:
Energy
The energy of a many-particle state is not simply a sum of the single-particle energies of all occupied states. Instead, the change in energy for a given change δn_k in the occupation of states contains terms both linear and quadratic in δn_k (for the Fermi gas, it would only be linear, δE = Σ_k ε_k δn_k, where ε_k denotes the single-particle energies). The linear contribution corresponds to renormalized single-particle energies, which involve, e.g., a change in the effective mass of particles. The quadratic terms correspond to a sort of "mean-field" interaction between quasiparticles, which is parametrized by so-called Landau Fermi liquid parameters and determines the behaviour of density oscillations (and spin-density oscillations) in the Fermi liquid. Still, these mean-field interactions do not lead to a scattering of quasi-particles with a transfer of particles between different momentum states.
The renormalization of the mass of a fluid of interacting fermions can be calculated from first principles using many-body computational techniques. For the two-dimensional homogeneous electron gas, GW calculations and quantum Monte Carlo methods have been used to calculate renormalized quasiparticle effective masses.
Specific heat and compressibility
Specific heat, compressibility, spin-susceptibility and other quantities show the same qualitative behaviour (e.g. dependence on temperature) as in the Fermi gas, but the magnitude is (sometimes strongly) changed.
Interactions
In addition to the mean-field interactions, some weak interactions between quasiparticles remain, which lead to scattering of quasiparticles off each other. Therefore, quasiparticles acquire a finite lifetime. However, at low enough energies above the Fermi surface, this lifetime becomes very long, such that the product of excitation energy (expressed in frequency) and lifetime is much larger than one. In this sense, the quasiparticle energy is still well-defined (in the opposite limit, Heisenberg's uncertainty relation would prevent an accurate definition of the energy).
Structure
The structure of the "bare" particles (as opposed to quasiparticle) many-body Green's function is similar to that in the Fermi gas (where, for a given momentum, the Green's function in frequency space is a delta peak at the respective single-particle energy). The delta peak in the density-of-states is broadened (with a width given by the quasiparticle lifetime). In addition (and in contrast to the quasiparticle Green's function), its weight (integral over frequency) is suppressed by a quasiparticle weight factor . The remainder of the total weight is in a broad "incoherent background", corresponding to the strong effects of interactions on the fermions at short time scales.
Distribution
The distribution of particles (as opposed to quasiparticles) over momentum states at zero temperature still shows a discontinuous jump at the Fermi surface (as in the Fermi gas), but it does not drop from 1 to 0: the step is only of size Z.
Electrical resistivity
In a metal the resistivity at low temperatures is dominated by electron–electron scattering in combination with umklapp scattering. For a Fermi liquid, the resistivity from this mechanism varies as T², which is often taken as an experimental check for Fermi liquid behaviour (in addition to the linear temperature-dependence of the specific heat), although it only arises in combination with the lattice. In certain cases, umklapp scattering is not required. For example, the resistivity of compensated semimetals scales as T² because of mutual scattering of electron and hole. This is known as the Baber mechanism.
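The low-temperature check mentioned above is usually written in the form below, where ρ₀ is the residual, impurity-dominated term and the coefficient A is material dependent (a standard convention, not from the article):

```latex
\rho(T) \;\approx\; \rho_0 + A\,T^{2}
```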
Optical response
Fermi liquid theory predicts that the scattering rate, which governs the optical response of metals, not only depends quadratically on temperature (thus causing the T² dependence of the DC resistance), but it also depends quadratically on frequency. This is in contrast to the Drude prediction for non-interacting metallic electrons, where the scattering rate is a constant as a function of frequency.
One material in which optical Fermi liquid behavior was experimentally observed is the low-temperature metallic phase of Sr2RuO4.
Instabilities
The experimental observation of exotic phases in strongly correlated systems has triggered an enormous effort from the theoretical community to try to understand their microscopic origin. One possible route to detect instabilities of a Fermi liquid is precisely the analysis done by Isaak Pomeranchuk. Due to that, the Pomeranchuk instability has been studied by several authors with different techniques in the last few years and in particular, the instability of the Fermi liquid towards the nematic phase was investigated for several models.
Non-Fermi liquids
Non-Fermi liquids are systems in which the Fermi-liquid behaviour breaks down. The simplest example is a system of interacting fermions in one dimension, called the Luttinger liquid. Although Luttinger liquids are physically similar to Fermi liquids, the restriction to one dimension gives rise to several qualitative differences such as the absence of a quasiparticle peak in the momentum dependent spectral function, and the presence of spin-charge separation and of spin-density waves. One cannot ignore the existence of interactions in one dimension and has to describe the problem with a non-Fermi theory, where Luttinger liquid is one of them. At small finite spin temperatures in one dimension the ground state of the system is described by spin-incoherent Luttinger liquid (SILL).
Another example of non-Fermi-liquid behaviour is observed at quantum critical points of certain second-order phase transitions, such as heavy fermion criticality, Mott criticality and high-Tc cuprate phase transitions. The ground state of such transitions is characterized by the presence of a sharp Fermi surface, although there may not be well-defined quasiparticles. That is, on approaching the critical point, it is observed that the quasiparticle residue Z → 0.
In optimally doped cuprates and iron-based superconductors, the normal state above the critical temperature shows signs of non-Fermi liquid behaviour, and is often called a strange metal. In this region of phase diagram, resistivity increases linearly in temperature and the Hall coefficient is found to depend on temperature.
Understanding the behaviour of non-Fermi liquids is an important problem in condensed matter physics. Approaches towards explaining these phenomena include the treatment of marginal Fermi liquids; attempts to understand critical points and derive scaling relations; and descriptions using emergent gauge theories with techniques of holographic gauge/gravity duality.
See also
Classical fluid
Fermionic condensate
Luttinger liquid
Luttinger's theorem
Strongly correlated quantum spin liquid
References
Further reading
Condensed matter physics
Fermions
Electronic band structures
Lev Landau | Fermi liquid theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,624 | [
"Electron",
"Matter",
"Fermions",
"Phases of matter",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"Subatomic particles"
] |
182,455 | https://en.wikipedia.org/wiki/Corrin | Corrin is a heterocyclic compound. Although not known to exist on its own, the molecule is of interest as the parent macrocycle related to the cofactor and chromophore in vitamin B12. Its name reflects that it is the "core" of vitamin B12 (cobalamins). Compounds with a corrin core are known as "corrins".
There are two chiral centres, which in natural compounds like cobalamin have the same stereochemistry.
Coordination chemistry
Upon deprotonation, the corrinoid ring is capable of binding cobalt. In vitamin B12, the resulting complex also features a benzimidazole-derived ligand, and the sixth site on the octahedron serves as the catalytic center.
The corrin ring resembles the porphyrin ring. Both feature four pyrrole-like subunits organized into rings. Corrins have a central 15-membered ring whereas porphyrins have an interior 16-membered ring. All four nitrogen centers are linked by a conjugated structure, with alternating double and single bonds. In contrast to porphyrins, corrins lack one of the carbon groups that link the pyrrole-like units into a fully conjugated structure. With a conjugated system that extends only 3/4 of the way around the ring, and does not include any of the outer edge carbons, corrins have a number of non-conjugated sp3 carbons, making them more flexible than porphyrins and not as flat. A third closely related biological structure, the chlorin ring system found in chlorophyll, is intermediate between porphyrin and corrin, having 20 carbons like the porphyrins and a conjugated structure extending all the way around the central atom, but with only 6 of the 8 edge carbons participating.
Corroles (octadehydrocorrins) are fully aromatic derivatives of corrins.
References
Further reading
Biomolecules
Tetrapyrroles
Metabolism
Macrocycles
Schiff bases | Corrin | [
"Chemistry",
"Biology"
] | 441 | [
"Natural products",
"Biochemistry",
"Organic compounds",
"Macrocycles",
"Cellular processes",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Metabolism"
] |
182,499 | https://en.wikipedia.org/wiki/Porphyrin | Porphyrins ( ) are a group of heterocyclic, macrocyclic, organic compounds, composed of four modified pyrrole subunits interconnected at their α carbon atoms via methine bridges (). In vertebrates, an essential member of the porphyrin group is heme, which is a component of hemoproteins, whose functions include carrying oxygen in the bloodstream. In plants, an essential porphyrin derivative is chlorophyll, which is involved in light harvesting and electron transfer in photosynthesis.
The parent of porphyrins is porphine, a rare chemical compound of exclusively theoretical interest. Substituted porphines are called porphyrins. With a total of 26 π-electrons, of which 18 π-electrons form a planar, continuous cycle, the porphyrin ring structure is often described as aromatic. One result of the large conjugated system is that porphyrins typically absorb strongly in the visible region of the electromagnetic spectrum, i.e. they are deeply colored. The name "porphyrin" derives from the Greek word for purple.
Structure
Porphyrin complexes consist of a square planar MN4 core. The periphery of the porphyrins, consisting of sp2-hybridized carbons, generally displays small deviations from planarity. "Ruffled" or saddle-shaped distortion of porphyrins is attributed to interactions of the ring system with its environment. Additionally, the metal is often not centered in the N4 plane. For free porphyrins, the two pyrrole protons are mutually trans and project out of the N4 plane. These nonplanar distortions are associated with altered chemical and physical properties. Chlorophyll-rings are more distinctly nonplanar, but they are more saturated than porphyrins.
Complexes of porphyrins
Concomitant with the displacement of two N-H protons, porphyrins bind metal ions in the N4 "pocket". The metal ion usually has a charge of 2+ or 3+. A schematic equation for these syntheses, where M = metal ion and L = a ligand, is: H2(porphyrin) + ML2 → M(porphyrin) + 2 HL
Ancient porphyrins
A geoporphyrin, also known as a petroporphyrin, is a porphyrin of geologic origin. They can occur in crude oil, oil shale, coal, or sedimentary rocks. Abelsonite is possibly the only geoporphyrin mineral, as it is rare for porphyrins to occur in isolation and form crystals.
The field of organic geochemistry had its origins in the isolation of porphyrins from petroleum. This finding helped establish the biological origins of petroleum. Petroleum is sometimes "fingerprinted" by analysis of trace amounts of nickel and vanadyl porphyrins.
Biosynthesis
In non-photosynthetic eukaryotes such as animals, insects, fungi, and protozoa, as well as the α-proteobacteria group of bacteria, the committed step for porphyrin biosynthesis is the formation of δ-aminolevulinic acid (δ-ALA, 5-ALA or dALA) by the reaction of the amino acid glycine with succinyl-CoA from the citric acid cycle. In plants, algae, bacteria (except for the α-proteobacteria group) and archaea, it is produced from glutamic acid via glutamyl-tRNA and glutamate-1-semialdehyde. The enzymes involved in this pathway are glutamyl-tRNA synthetase, glutamyl-tRNA reductase, and glutamate-1-semialdehyde 2,1-aminomutase. This pathway is known as the C5 or Beale pathway.
Two molecules of dALA are then combined by porphobilinogen synthase to give porphobilinogen (PBG), which contains a pyrrole ring. Four PBGs are then combined through deamination into hydroxymethyl bilane (HMB), which is hydrolysed to form the circular tetrapyrrole uroporphyrinogen III. This molecule undergoes a number of further modifications. Intermediates are used in different species to form particular substances, but, in humans, the main end-product protoporphyrin IX is combined with iron to form heme. Bile pigments are the breakdown products of heme.
A scheme summarizing the biosynthesis of porphyrins gives references by EC number and the OMIM database, and also shows the porphyria associated with the deficiency of each enzyme.
Laboratory synthesis
A common synthesis for porphyrins is the Rothemund reaction, first reported in 1936, which is also the basis for more recent methods described by Adler and Longo. The general scheme is a condensation and oxidation process starting with pyrrole and an aldehyde.
Potential applications
Photodynamic therapy
Porphyrins have been evaluated in the context of photodynamic therapy (PDT) since they strongly absorb light, which is then converted to heat in the illuminated areas. This technique has been applied in macular degeneration using verteporfin.
PDT is considered a noninvasive cancer treatment, involving the interaction between light of a determined frequency, a photosensitizer, and oxygen. This interaction leads to the formation of highly reactive oxygen species (ROS), usually singlet oxygen, as well as superoxide anion, free hydroxyl radical, or hydrogen peroxide. These highly reactive oxygen species react with susceptible cellular organic biomolecules such as lipids, aromatic amino acids, and nucleic acid heterocyclic bases, to produce oxidative radicals that damage the cell, possibly inducing apoptosis or even necrosis.
Molecular electronics and sensors
Porphyrin-based compounds are of interest as possible components of molecular electronics and photonics. Synthetic porphyrin dyes have been incorporated in prototype dye-sensitized solar cells.
Biological applications
Porphyrins have been investigated as possible anti-inflammatory agents and evaluated on their anti-cancer and anti-oxidant activity. Several porphyrin-peptide conjugates were found to have antiviral activity against HIV in vitro.
Toxicology
Heme biosynthesis is used as a biomarker in environmental toxicology studies. While excess production of porphyrins indicates organochlorine exposure, lead inhibits the enzyme ALA dehydratase.
Gallery
Related species
In nature
Several heterocycles related to porphyrins are found in nature, almost always bound to metal ions. These include
Synthetic
A benzoporphyrin is a porphyrin with a benzene ring fused to one of the pyrrole units. e.g. verteporfin is a benzoporphyrin derivative.
Non-natural porphyrin isomers
The first synthetic porphyrin isomer was reported by Emanuel Vogel and coworkers in 1986. This isomer, [18]porphyrin-(2.0.2.0), is named porphycene, and its central N4 cavity forms a rectangular shape. Porphycenes show interesting photophysical behavior and have proved to be versatile compounds for photodynamic therapy. This inspired Vogel and Sessler to take up the challenge of preparing [18]porphyrin-(2.1.0.1), which was named corrphycene or porphycerin. The third isomer, [18]porphyrin-(2.1.1.0), was reported by Callot and by Vogel and Sessler. Vogel and coworkers also reported the successful isolation of [18]porphyrin-(3.0.1.0), or isoporphycene. The Japanese scientist Furuta and the Polish scientist Latos-Grażyński almost simultaneously reported the N-confused porphyrins, in which the inversion of one of the pyrrolic subunits in the macrocyclic ring results in one of the nitrogen atoms facing outwards from the core of the macrocycle.
See also
A porphyrin-related disease: porphyria
Porphyrin coordinated to iron: heme
A heme-containing group of enzymes: Cytochrome P450
Porphyrin coordinated to magnesium: chlorophyll
The one-carbon-shorter analogues: corroles, including vitamin B12, which is coordinated to a cobalt
Corphins, the highly reduced porphyrin coordinated to nickel that binds the Cofactor F430 active site in methyl coenzyme M reductase (MCR)
Nitrogen-substituted porphyrins: phthalocyanine
References
External links
Journal of Porphyrins and Phthalocyanines
Handbook of Porphyrin Science
Porphynet – an informative site about porphyrins and related structures
Biomolecules
Metabolism
Photosynthetic pigments
Chelating agents | Porphyrin | [
"Chemistry",
"Biology"
] | 1,889 | [
"Photosynthetic pigments",
"Natural products",
"Photosynthesis",
"Organic compounds",
"Cellular processes",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Chelating agents",
"Porphyrins",
"Metabolism",
"Process chemicals",
"Molecular biology"
] |
182,650 | https://en.wikipedia.org/wiki/Winkler%20titration | The Winkler test is used to determine the concentration of dissolved oxygen in water samples. Dissolved oxygen (D.O.) is widely used in water quality studies and routine operation of water reclamation facilities to analyze its level of oxygen saturation.
In the test, an excess of manganese(II) salt, iodide (I−) and hydroxide (OH−) ions are added to a water sample causing a white precipitate of Mn(OH)2 to form. This precipitate is then oxidized by the oxygen that is present in the water sample into a brown manganese-containing precipitate with manganese in a more highly oxidized state (either Mn(III) or Mn(IV)).
In the next step, a strong acid (either hydrochloric acid or sulfuric acid) is added to acidify the solution. The brown precipitate then converts the iodide ion (I−) to iodine. The amount of dissolved oxygen is directly proportional to the amount of iodine formed, which is determined by titration with a thiosulfate solution. Today, the method is often used in its colorimetric modification, in which the trivalent manganese produced on acidifying the brown suspension is reacted directly with ethylenediaminetetraacetic acid to give a pink color. As manganese is the only common metal giving a color reaction with ethylenediaminetetraacetic acid, the reagent has the added effect of masking other metals as colorless complexes.
History
The test was originally developed by Ludwig Wilhelm Winkler, in later literature referred to as Lajos Winkler, while working at Budapest University on his doctoral dissertation in 1888. The amount of dissolved oxygen is a measure of the biological activity of a water mass. Phytoplankton and macroalgae present in the water mass produce oxygen by way of photosynthesis. Bacteria and eukaryotic organisms (zooplankton, fish) consume this oxygen through cellular respiration. The result of these two mechanisms determines the concentration of dissolved oxygen, which in turn indicates the production of biomass. The difference between the physical concentration of oxygen in the water (or the theoretical concentration if there were no living organisms) and the actual concentration of oxygen is called the biochemical oxygen demand. Results of the Winkler test can be contentious, as the method is not perfectly accurate and measured oxygen levels may vary from test to test even when replicate portions of the same sample are analyzed.
Chemical processes
In the first step, manganese(II) sulphate (at 48% of the total volume) is added to an environmental water sample. Next, potassium iodide (15% in potassium hydroxide 70%) is added to create a pinkish-brown precipitate. In the alkaline solution, dissolved oxygen will oxidize manganese(II) ions to the tetravalent state.
2 Mn2+ + O2 + 4 OH− → 2 MnO(OH)2
Mn has been oxidised to 4+, and MnO(OH)2 appears as a brown precipitate. There is some uncertainty about whether the oxidised manganese is tetravalent or trivalent. Some sources claim that Mn(OH)3 is the brown precipitate, but hydrated MnO2 may also give the brown colour.
4 Mn(OH)2 + O2 + 2 H2O → 4 Mn(OH)3
The second part of the Winkler test reduces (acidifies) the solution. The precipitate will dissolve back into solution as the H+ reacts with the O2− and OH− to form water.
MnO(OH)2 + 4 H+ → Mn4+ + 3 H2O
The acid facilitates the conversion of the iodide ion into elemental iodine by the brown, manganese-containing precipitate.
The Mn(SO4)2 formed by the acid converts the iodide ions into iodine, itself being reduced back to manganese(II) ions in an acidic medium.
Mn(SO4)2 + 2 I− → Mn2+ + I2 + 2 SO42−
Thiosulfate is used, with a starch indicator, to titrate the iodine.
2 S2O32− + I2 → S4O62− + 2 I−
Analysis
From the above stoichiometric equations, we can find that:
1 mole of O2 → 2 moles of MnO(OH)2 → 2 moles of I2 → 4 moles of S2O32−
Therefore, after determining the number of moles of iodine produced, we can work out the number of moles of oxygen molecules present in the original water sample. The oxygen content is usually presented in milligrams per liter (mg/L).
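As a rough, self-contained illustration of this end calculation (not part of the standard method description), the following Python sketch assumes an illustrative 0.025 M sodium thiosulfate titrant and a 200 mL sample; only the 1-to-4 O2-to-thiosulfate stoichiometry is taken from the scheme above.

```python
MOLAR_MASS_O2 = 32.00  # g/mol

def dissolved_oxygen_mg_per_l(titre_ml, titrant_molarity, sample_volume_ml):
    """Convert a thiosulfate titre into dissolved oxygen in mg/L.

    Uses the stoichiometry above: 4 mol thiosulfate per 1 mol O2.
    All argument values in the example below are assumptions.
    """
    moles_thiosulfate = titrant_molarity * titre_ml / 1000.0
    moles_o2 = moles_thiosulfate / 4.0
    mass_o2_mg = moles_o2 * MOLAR_MASS_O2 * 1000.0
    return mass_o2_mg / (sample_volume_ml / 1000.0)

# Hypothetical example: 8.2 mL of 0.025 M thiosulfate for a 200 mL sample
print(dissolved_oxygen_mg_per_l(8.2, 0.025, 200.0))  # 8.2 mg/L
```

With a 0.025 M (N/40) titrant and a 200 mL sample, the millilitres of titrant used equal the dissolved oxygen in mg/L, which is why these particular quantities are traditionally chosen.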
Limitations
The success of this method is critically dependent upon the manner in which the sample is manipulated. At all stages, steps must be taken to ensure that oxygen is neither introduced to nor lost from the sample. Furthermore, the water sample must be free of any solutes that will oxidize or reduce iodine.
Instrumental methods for measurement of dissolved oxygen have widely supplanted the routine use of the Winkler test, although the test is still used to check instrument calibration.
BOD5
To determine five-day biochemical oxygen demand (BOD5), several dilutions of a sample are analyzed for dissolved oxygen before and after a five-day incubation period at 20 °C in the dark. In some cases, bacteria are added to provide a standardized community that takes up oxygen while consuming the organic matter in the sample; these bacteria are known as "seed". The difference in DO and the dilution factor are used to calculate BOD5. The resulting number (usually reported in parts per million or milligrams per liter) is useful in determining the relative organic strength of sewage or other polluted waters.
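A minimal sketch of the dilution arithmetic, following common water-quality practice; the seed-correction form and the example numbers are illustrative assumptions rather than values from the references cited here.

```python
def bod5_mg_per_l(d1, d2, p, b1=None, b2=None, f=0.0):
    """Five-day BOD (mg/L) of a diluted sample.

    d1, d2 : dissolved oxygen of the diluted sample before and after incubation (mg/L)
    p      : decimal fraction of sample in the dilution
    b1, b2 : dissolved oxygen of the seed blank before and after incubation (optional)
    f      : ratio of seed volume in the sample bottle to seed volume in the blank
    """
    seed_correction = (b1 - b2) * f if (b1 is not None and b2 is not None) else 0.0
    return ((d1 - d2) - seed_correction) / p

# Unseeded example: 30 mL of sample made up to 300 mL (p = 0.1)
print(bod5_mg_per_l(d1=8.8, d2=4.1, p=0.1))  # 47.0 mg/L
```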
The BOD5 test is an example of analysis that determines classes of materials in a sample.
Winkler bottle
A Winkler bottle is a piece of laboratory glassware specifically made for carrying out the Winkler test. These bottles have conical tops and a close fitting stopper to aid in the exclusion of air bubbles when the top is sealed. This is important because oxygen in trapped air would be included in the measurement and would affect the accuracy of the test.
References
Further reading
Moran, Joseph M.; Morgan, Michael D., & Wiersma, James H. (1980). Introduction to Environmental Science (2nd ed.). W.H. Freeman and Company, New York, NY
Y.C. Wong & C.T. Wong. New Way Chemistry for Hong Kong A-Level Volume 4, p. 248.
Manganese (III) consistently claimed (NB: Gives unbalanced equation for formation of MnO(OH)2). Claims manganese (III) gives manganese (IV) consistently.
Aquatic ecology
Water quality indicators
Oxygen | Winkler titration | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,420 | [
"Aquatic ecology",
"Water quality indicators",
"Ecosystems",
"Water pollution"
] |
182,727 | https://en.wikipedia.org/wiki/Mach%27s%20principle | In theoretical physics, particularly in discussions of gravitation theories, Mach's principle (or Mach's conjecture) is the name given by Albert Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach. The hypothesis attempted to explain how rotating objects, such as gyroscopes and spinning celestial bodies, maintain a frame of reference.
The proposition is that the existence of absolute rotation (the distinction of local inertial frames vs. rotating reference frames) is determined by the large-scale distribution of matter, as exemplified by this anecdote:
You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?
Mach's principle says that this is not a coincidence—that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force. There are a number of rival formulations of the principle, often stated in vague ways like "mass out there influences inertia here". A very general statement of Mach's principle is "local physical laws are determined by the large-scale structure of the universe".
Mach's concept was a guiding factor in Einstein's development of the general theory of relativity. Einstein realized that the overall distribution of matter would determine the metric tensor which indicates which frame is stationary with respect to rotation. Frame-dragging and conservation of gravitational angular momentum makes this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements have been made which would qualify as a Mach principle, some of which are false. The Gödel rotating universe is a solution of the field equations that is designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to be revolving faster and faster as one moves further away. This example does not completely settle the question of the physical relevance of the principle because it has closed timelike curves.
History
Mach put forth the idea in his book The Science of Mechanics (1883 in German, 1893 in English). Before Mach's time, the basic idea also appears in the writings of George Berkeley. After Mach, the book Absolute or Relative Motion? (1896) by Benedict Friedlaender and his brother Immanuel contained ideas similar to Mach's principle.
Einstein's use of the principle
There is a fundamental issue in relativity theory: if all motion is relative, how can we measure the inertia of a body? We must measure the inertia with respect to something else. But what if we imagine a particle completely on its own in the universe? We might hope to still have some notion of its state of motion. Mach's principle is sometimes interpreted as the statement that such a particle's state of motion has no meaning in that case.
In Mach's words, the principle is embodied as follows:
Albert Einstein seemed to view Mach's principle as something along the lines of:
In this sense, at least some of Mach's principles are related to philosophical holism. Mach's suggestion can be taken as the injunction that gravitation theories should be relational theories. Einstein brought the principle into mainstream physics while working on general relativity. Indeed, it was Einstein who first coined the phrase Mach's principle. There is much debate as to whether Mach really intended to suggest a new physical law since he never states it explicitly.
The writing in which Einstein found inspiration was Mach's book The Science of Mechanics (1883, tr. 1893), where the philosopher criticized Newton's idea of absolute space, in particular the argument that Newton gave sustaining the existence of an advantaged reference system: what is commonly called "Newton's bucket argument".
In his Philosophiae Naturalis Principia Mathematica, Newton tried to demonstrate that one can always decide if one is rotating with respect to the absolute space, by measuring the apparent forces that arise only when an absolute rotation is performed. If a bucket is filled with water, and made to rotate, initially the water remains still, but then, gradually, the walls of the vessel communicate their motion to the water, making it curve and climb up the borders of the bucket, because of the centrifugal forces produced by the rotation. This experiment demonstrates that the centrifugal forces arise only when the water is in rotation with respect to the absolute space (represented here by the earth's reference frame, or better, the distant stars); when, instead, the bucket was rotating with respect to the water, no centrifugal forces were produced, indicating that the water was still with respect to the absolute space.
Mach, in his book, says that the bucket experiment only demonstrates that when the water is in rotation with respect to the bucket no centrifugal forces are produced, and that we cannot know how the water would behave if in the experiment the bucket's walls were increased in depth and width until they became leagues big. In Mach's idea this concept of absolute motion should be substituted with a total relativism in which every motion, uniform or accelerated, has sense only in reference to other bodies (i.e., one cannot simply say that the water is rotating, but must specify if it's rotating with respect to the vessel or to the earth). In this view, the apparent forces that seem to permit discrimination between relative and "absolute" motions should only be considered as an effect of the particular asymmetry that there is in our reference system between the bodies which we consider in motion, that are small (like buckets), and the bodies that we believe are still (the earth and distant stars), that are overwhelmingly bigger and heavier than the former.
This same thought had been expressed by the philosopher George Berkeley in his De Motu. It is then not clear, in the passages from Mach just mentioned, if the philosopher intended to formulate a new kind of physical action between heavy bodies. This physical mechanism should determine the inertia of bodies, in a way that the heavy and distant bodies of our universe should contribute the most to the inertial forces. More likely, Mach only suggested a mere "redescription of motion in space as experiences that do not invoke the term space". What is certain is that Einstein interpreted Mach's passage in the former way, originating a long-lasting debate.
Most physicists believe Mach's principle was never developed into a quantitative physical theory that would explain a mechanism by which the stars can have such an effect. Mach himself never made his principle exactly clear. Although Einstein was intrigued and inspired by Mach's principle, Einstein's formulation of the principle is not a fundamental assumption of general relativity, although the principle of equivalence of gravitational and inertial mass is most certainly fundamental.
Mach's principle in general relativity
Because intuitive notions of distance and time no longer apply, what exactly is meant by "Mach's principle" in general relativity is even less clear than in Newtonian physics and at least 21 formulations of Mach's principle are possible, some being considered more strongly Machian than others. A relatively weak formulation is the assertion that the motion of matter in one place should affect which frames are inertial in another.
Einstein, before completing his development of the general theory of relativity, found an effect which he interpreted as being evidence of Mach's principle. We assume a fixed background for conceptual simplicity, construct a large spherical shell of mass, and set it spinning in that background. The reference frame in the interior of this shell will precess with respect to the fixed background. This effect is known as the Lense–Thirring effect. Einstein was so satisfied with this manifestation of Mach's principle that he wrote a letter to Mach expressing this:
The Lense–Thirring effect certainly satisfies the very basic and broad notion that "matter there influences inertia here". The plane of the pendulum would not be dragged around if the shell of matter were not present, or if it were not spinning. As for the statement that "inertia originates in a kind of interaction between bodies", this, too, could be interpreted as true in the context of the effect.
More fundamental to the problem, however, is the very existence of a fixed background, which Einstein describes as "the fixed stars". Modern relativists see the imprints of Mach's principle in the initial-value problem. Essentially, we humans seem to wish to separate spacetime into slices of constant time. When we do this, Einstein's equations can be decomposed into one set of equations, which must be satisfied on each slice, and another set, which describe how to move between slices. The equations for an individual slice are elliptic partial differential equations. In general, this means that only part of the geometry of the slice can be given by the scientist, while the geometry everywhere else will then be dictated by Einstein's equations on the slice.
In the context of an asymptotically flat spacetime, the boundary conditions are given at infinity. Heuristically, the boundary conditions for an asymptotically flat universe define a frame with respect to which inertia has meaning. By performing a Lorentz transformation on the distant universe, of course, this inertia can also be transformed.
A stronger form of Mach's principle applies in Wheeler–Mach–Einstein spacetimes, which require spacetime to be spatially compact and globally hyperbolic. In such universes Mach's principle can be stated as the distribution of matter and field energy-momentum (and possibly other information) at a particular moment in the universe determines the inertial frame at each point in the universe (where "a particular moment in the universe" refers to a chosen Cauchy surface).
There have been other attempts to formulate a theory that is more fully Machian, such as the Brans–Dicke theory and the Hoyle–Narlikar theory of gravity, but most physicists argue that none have been fully successful. At an exit poll of experts, held in Tübingen in 1993, when asked the question "Is general relativity perfectly Machian?", 3 respondents replied "yes", and 22 replied "no". To the question "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?" the result was 14 "yes" and 7 "no".
However, Einstein was convinced that a valid theory of gravity would necessarily have to include the relativity of inertia:
Inertial induction
In 1953, in order to express Mach's Principle in quantitative terms, the Cambridge University physicist Dennis W. Sciama proposed the addition of an acceleration-dependent term to the Newtonian gravitation equation. Sciama's acceleration-dependent term was proportional to Ga/(c²r), where r is the distance between the particles, G is the gravitational constant, a is the relative acceleration and c represents the speed of light in vacuum. Sciama referred to the effect of the acceleration-dependent term as Inertial Induction.
Variations in the statement of the principle
The broad notion that "mass there influences inertia here" has been expressed in several forms.
Hermann Bondi and Joseph Samuel have listed eleven distinct statements that can be called Mach principles, labelled Mach0 through Mach10 (taking inspiration from the Mach number). Though their list is not necessarily exhaustive, it does give a flavor for the variety possible.
The universe, as represented by the average motion of distant galaxies, does not appear to rotate relative to local inertial frames.
Newton's gravitational constant G is a dynamical field.
An isolated body in otherwise empty space has no inertia.
Local inertial frames are affected by the cosmic motion and distribution of matter.
The universe is spatially closed.
The total energy, angular and linear momentum of the universe are zero.
Inertial mass is affected by the global distribution of matter.
If you take away all matter, there is no more space.
Ω = 4πρGT² is a definite number, of order unity, where ρ is the mean density of matter in the universe, and T is the Hubble time.
The theory contains no absolute elements.
Overall rigid rotations and translations of a system are unobservable.
See also
Notes
References
Further reading
This textbook, among other writings by Sciama, helped revive interest in Mach's principle.
External links
Ernst Mach, The Science of Mechanics (tr. 1893) at Archive.org
"Mach's Principle" (1995) from Einstein Studies vol. 6 (13MB PDF)
(originally published in Italian as Gasco E. "Il contributo di mach sull'origine dell'inerzia." Quaderni di Storia della Fisica, 2004.)
Ernst Mach
Theories of gravity
Principles
Rotation
Philosophy of astronomy
Thought experiments in physics | Mach's principle | [
"Physics",
"Astronomy"
] | 2,714 | [
"Physical phenomena",
"Philosophy of astronomy",
"Theoretical physics",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Theories of gravity"
] |
182,734 | https://en.wikipedia.org/wiki/Magnetohydrodynamic%20drive | A magnetohydrodynamic drive or MHD accelerator is a method for propelling vehicles using only electric and magnetic fields with no moving parts, accelerating an electrically conductive propellant (liquid or gas) with magnetohydrodynamics. The fluid is directed to the rear and as a reaction, the vehicle accelerates forward.
Studies examining MHD in the field of marine propulsion began in the late 1950s.
Few large-scale marine prototypes have been built, limited by the low electrical conductivity of seawater. Increasing current density is limited by Joule heating and water electrolysis in the vicinity of electrodes, and increasing the magnetic field strength is limited by the cost, size and weight (as well as technological limitations) of electromagnets and the power available to feed them. In 2023 DARPA launched the PUMP program to build a marine engine using superconducting magnets expected to reach a field strength of 20 Tesla.
Stronger technical limitations apply to air-breathing MHD propulsion (where ambient air is ionized) that is still limited to theoretical concepts and early experiments.
Plasma propulsion engines using magnetohydrodynamics for space exploration have also been actively studied as such electromagnetic propulsion offers high thrust and high specific impulse at the same time, and the propellant would last much longer than in chemical rockets.
Principle
The working principle involves the acceleration of an electrically conductive fluid (which can be a liquid or an ionized gas called a plasma) by the Lorentz force, resulting from the cross product of an electric current (motion of charge carriers accelerated by an electric field applied between two electrodes) with a perpendicular magnetic field. The Lorentz force accelerates all charged particles, positive and negative species (in opposite directions). If either positive or negative species dominate the vehicle is put in motion in the opposite direction from the net charge.
This is the same working principle as an electric motor (more exactly a linear motor) except that in an MHD drive, the solid moving rotor is replaced by the fluid acting directly as the propellant. As with all electromagnetic devices, an MHD accelerator is reversible: if the ambient working fluid is moving relatively to the magnetic field, charge separation induces an electric potential difference that can be harnessed with electrodes: the device then acts as a power source with no moving parts, transforming the kinetic energy of the incoming fluid into electricity, called an MHD generator.
As the Lorentz force in an MHD converter does not act on a single isolated charged particle nor on electrons in a solid electrical wire, but on a continuous charge distribution in motion, it is a "volumetric" (body) force, a force per unit volume:
f = ρE + J × B
where f is the force density (force per unit volume), ρ the charge density (charge per unit volume), E the electric field, J the current density (current per unit area) and B the magnetic field.
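As a rough numerical illustration of this body force for a conduction-type seawater thruster (all values below are assumptions chosen only for the order of magnitude, not measured data):

```python
import numpy as np

sigma = 5.0                      # S/m, typical electrical conductivity of seawater
E = np.array([0.0, 100.0, 0.0])  # V/m, assumed electric field between the electrodes
B = np.array([0.0, 0.0, 20.0])   # T, assumed field of a superconducting magnet

J = sigma * E                         # Ohm's law: conduction current density, A/m^2
rho_charge = 0.0                      # bulk quasi-neutrality: the rho*E term is negligible
f = rho_charge * E + np.cross(J, B)   # Lorentz force per unit volume, N/m^3

print(J)  # [  0. 500.   0.]  A/m^2
print(f)  # [10000.     0.     0.]  N/m^3, i.e. about 10 kN of thrust per cubic metre of fluid
```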
Typology
MHD thrusters are classified in two categories according to the way the electromagnetic fields operate:
Conduction devices when a direct current flows in the fluid due to an applied voltage between pairs of electrodes, the magnetic field being steady.
Induction devices when alternating currents are induced by a rapidly varying magnetic field, as eddy currents. No electrodes are required in this case.
As induction MHD accelerators are electrodeless, they do not exhibit the common issues related to conduction systems (especially Joule heating, bubbles and redox from electrolysis) but need much more intense peak magnetic fields to operate. Since one of the biggest issues with such thrusters is the limited energy available on-board, induction MHD drives have not been developed out of the laboratory.
Both systems can put the working fluid in motion according to two main designs:
Internal flow when the fluid is accelerated within and propelled back out of a nozzle of tubular or ring-shaped cross-section, the MHD interaction being concentrated within the pipe (similarly to rocket or jet engines).
External flow when the fluid is accelerated around the whole wetted area of the vehicle, the electromagnetic fields extending around the body of the vehicle. The propulsion force results from the pressure distribution on the shell (as lift on a wing, or how ciliate microorganisms such as Paramecium move water around them).
Internal flow systems concentrate the MHD interaction in a limited volume, preserving stealth characteristics. External field systems on the contrary have the ability to act on a very large expanse of surrounding water volume with higher efficiency and the ability to decrease drag, increasing the efficiency even further.
Marine propulsion
MHD has no moving parts, which means that a good design might be silent, reliable, and efficient. Additionally, the MHD design eliminates many of the wear and friction pieces of the drivetrain with a directly driven propeller by an engine. Problems with current technologies include expense and slow speed compared to a propeller driven by an engine. The extra expense is from the large generator that must be driven by an engine. Such a large generator is not required when an engine directly drives a propeller.
The first prototype, a 3-meter (10-feet) long submarine called EMS-1, was designed and tested in 1966 by Stewart Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior year undergraduate students to build the operational unit. This MHD submarine operated on batteries delivering power to electrodes and electromagnets, which produced a magnetic field of 0.015 tesla. The cruise speed was about 0.4 meter per second (15 inches per second) during the test in the bay of Santa Barbara, California, in accordance with theoretical predictions.
Later, a Japanese prototype, the 3.6-meter long "ST-500", achieved speeds of up to 0.6 m/s in 1979.
In 1991, the world's first full-size prototype Yamato 1 was completed in Japan after 6 years of research and development (R&D) by the Ship & Ocean Foundation (later known as the Ocean Policy Research Foundation). The ship successfully carried a crew of ten plus passengers at speeds of up to in Kobe Harbour in June 1992.
Small-scale ship models were later built and studied extensively in the laboratory, leading to successful comparisons between the measurements and the theoretical prediction of ship terminal speeds.
Military research about underwater MHD propulsion included high-speed torpedoes, remotely operated underwater vehicles (ROV), autonomous underwater vehicles (AUV), up to larger ones such as submarines.
Aircraft propulsion
Passive flow control
First studies of the interaction of plasmas with hypersonic flows around vehicles date back to the late 1950s, with the concept of a new kind of thermal protection system for space capsules during high-speed reentry. As low-pressure air is naturally ionized at such very high velocities and altitude, it was thought to use the effect of a magnetic field produced by an electromagnet to replace thermal ablative shields by a "magnetic shield". Hypersonic ionized flow interacts with the magnetic field, inducing eddy currents in the plasma. The current combines with the magnetic field to give Lorentz forces that oppose the flow and detach the bow shock wave further ahead of the vehicle, lowering the heat flux which is due to the brutal recompression of air behind the stagnation point. Such passive flow control studies are still ongoing, but a large-scale demonstrator has yet to be built.
Active flow control
Active flow control by MHD force fields on the contrary involves a direct and imperious action of forces to locally accelerate or slow down the airflow, modifying its velocity, direction, pressure, friction, heat flux parameters, in order to preserve materials and engines from stress, allowing hypersonic flight. It is a field of magnetohydrodynamics also called magnetogasdynamics, magnetoaerodynamics or magnetoplasma aerodynamics, as the working fluid is the air (a gas instead of a liquid) ionized to become electrically conductive (a plasma).
Air ionization is achieved at high altitude (electrical conductivity of air increases as atmospheric pressure reduces according to Paschen's law) using various techniques: high voltage electric arc discharge, RF (microwaves) electromagnetic glow discharge, laser, e-beam or betatron, radioactive source… with or without seeding of low ionization potential alkali substances (like caesium) into the flow.
MHD studies applied to aeronautics try to extend the domain of hypersonic planes to higher Mach regimes:
Action on the boundary layer to prevent laminar flow from becoming turbulent.
Shock wave mitigation for thermal control and reduction of the wave drag and form drag. Some theoretical studies suggest the flow velocity could be controlled everywhere on the wetted area of an aircraft, so shock waves could be totally cancelled when using enough power.
Inlet flow control.
Airflow velocity reduction upstream to feed a scramjet by the use of an MHD generator section combined with an MHD accelerator downstream at the exhaust nozzle, powered by the generator through an MHD bypass system.
The Russian project Ayaks (Ajax) is an example of MHD-controlled hypersonic aircraft concept. A US program also exists to design a hypersonic MHD bypass system, the Hypersonic Vehicle Electric Power System (HVEPS). A working prototype was completed in 2017 under development by General Atomics and the University of Tennessee Space Institute, sponsored by the US Air Force Research Laboratory. These projects aim to develop MHD generators feeding MHD accelerators for a new generation of high-speed vehicles. Such MHD bypass systems are often designed around a scramjet engine, but easier to design turbojets are also considered, as well as subsonic ramjets.
Such studies cover a field of resistive MHD with magnetic Reynolds number ≪ 1, using nonthermal weakly ionized gases, which makes the development of demonstrators much more difficult to realize than for MHD in liquids. "Cold plasmas" with magnetic fields are subject to the electrothermal instability occurring at a critical Hall parameter, which makes full-scale developments difficult.
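For a sense of scale, the magnetic Reynolds number can be estimated as Rm = μ0σvL; the conductivity, speed and length below are assumed, illustrative values for a weakly ionized hypersonic airflow:

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def magnetic_reynolds_number(conductivity_s_m, velocity_m_s, length_m):
    """Rm = mu0 * sigma * v * L, the ratio of magnetic advection to magnetic diffusion."""
    return MU_0 * conductivity_s_m * velocity_m_s * length_m

# Assumed values: ~1 S/m for seeded ionized air, ~2 km/s flow, ~1 m body length
print(magnetic_reynolds_number(1.0, 2000.0, 1.0))  # ~2.5e-3, i.e. Rm << 1 (resistive regime)
```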
Prospects
MHD propulsion has been considered as the main propulsion system for both marine and space ships since there is no need to produce lift to counter the gravity of Earth in water (due to buoyancy) nor in space (due to weightlessness), which is ruled out in the case of flight in the atmosphere.
Nonetheless, if the problem of the electric power source were solved (for example by the availability of a still-missing multi-megawatt compact fusion reactor), one could imagine future aircraft of a new kind, silently powered by MHD accelerators and able to ionize and direct enough air downward to lift several tonnes. As external flow systems can control the flow over the whole wetted area, limiting thermal issues at high speeds, ambient air would be ionized and radially accelerated by Lorentz forces around an axisymmetric body (shaped as a cylinder, a cone, a sphere…), the entire airframe being the engine. Lift and thrust would arise as a consequence of a pressure difference between the upper and lower surfaces, induced by the Coandă effect. In order to maximize such a pressure difference between the two opposite sides, and since the most efficient MHD converters (with a high Hall effect) are disk-shaped, such an MHD aircraft would preferably be flattened to take the shape of a biconvex lens. Having neither wings nor airbreathing jet engines, it would share no similarities with conventional aircraft, but it would behave like a helicopter whose rotor blades had been replaced by a "purely electromagnetic rotor" with no moving parts, sucking the air downward. Such concepts of flying MHD disks have been developed in the peer-reviewed literature from the mid-1970s, mainly by the physicists Leik Myrabo with the Lightcraft and Subrata Roy with the Wingless Electromagnetic Air Vehicle (WEAV).
These futuristic visions have been advertised in the media although they still remain beyond the reach of modern technology.
Spacecraft propulsion
A number of experimental methods of spacecraft propulsion are based on magnetohydrodynamics. As this kind of MHD propulsion involves compressible fluids in the form of plasmas (ionized gases) it is also referred to as magnetogasdynamics or magnetoplasmadynamics.
In such electromagnetic thrusters, the working fluid is most of the time ionized hydrazine, xenon or lithium. Depending on the propellant used, it can be seeded with alkali such as potassium or caesium to improve its electrical conductivity. All charged species within the plasma, from positive and negative ions to free electrons, as well as neutral atoms by the effect of collisions, are accelerated in the same direction by the Lorentz "body" force, which results from the combination of a magnetic field with an orthogonal electric field (hence the name of "cross-field accelerator"), these fields not being in the direction of the acceleration. This is a fundamental difference with ion thrusters which rely on electrostatics to accelerate only positive ions using the Coulomb force along a high voltage electric field.
First experimental studies involving cross-field plasma accelerators (square channels and rocket nozzles) date back to the late 1950s. Such systems provide greater thrust and higher specific impulse than conventional chemical rockets and even modern ion drives, at the cost of a higher required energy density.
Some devices also studied nowadays besides cross-field accelerators include the magnetoplasmadynamic thruster sometimes referred to as the Lorentz force accelerator (LFA), and the electrodeless pulsed inductive thruster (PIT).
Even today, these systems are not ready to be launched in space as they still lack a suitable compact power source offering enough energy density (such as hypothetical fusion reactors) to feed the power-greedy electromagnets, especially pulsed inductive ones. The rapid ablation of electrodes under the intense thermal flow is also a concern. For these reasons, studies remain largely theoretical and experiments are still conducted in the laboratory, although over 60 years have passed since the first research in this kind of thrusters.
Fiction
Oregon, a ship in the Oregon Files series of books by author Clive Cussler, has a magnetohydrodynamic drive. This allows the ship to turn very sharply and brake instantly, instead of gliding for a few miles. In Valhalla Rising, Clive Cussler writes the same drive into the powering of Captain Nemo's Nautilus.
The film adaptation of The Hunt for Red October popularized the magnetohydrodynamic drive as a "caterpillar drive" for submarines, a nearly undetectable "silent drive" intended to achieve stealth in submarine warfare. In reality, the current traveling through the water would create gases and noise, and the magnetic fields would induce a detectable magnetic signature. In the film, it was suggested that this sound could be confused with geological activity. In the novel from which the film was adapted, the caterpillar that Red October used was actually a pump-jet of the so-called "tunnel drive" type (the tunnels provided acoustic camouflage for the cavitation from the propellers).
In the Ben Bova novel The Precipice, the ship where some of the action took place, Starpower 1, built to prove that exploration and mining of the Asteroid Belt was feasible and potentially profitable, had a magnetohydrodynamic drive mated to a fusion power plant.
See also
Electrohydrodynamics
Lorentz force, relates electric and magnetic fields to propulsion force
References
External links
Demonstrate Magnetohydrodynamic Propulsion in a Minute
Marine propulsion
Fluid dynamics
Plasma technology and applications
Magnetic propulsion devices | Magnetohydrodynamic drive | [
"Physics",
"Chemistry",
"Engineering"
] | 3,219 | [
"Plasma physics",
"Plasma technology and applications",
"Chemical engineering",
"Marine engineering",
"Piping",
"Marine propulsion",
"Fluid dynamics"
] |
183,083 | https://en.wikipedia.org/wiki/Galaxy%20rotation%20curve | The rotation curve of a disc galaxy (also called a velocity curve) is a plot of the orbital speeds of visible stars or gas in that galaxy versus their radial distance from that galaxy's centre. It is typically rendered graphically as a plot, and the data observed from each side of a spiral galaxy are generally asymmetric, so that data from each side are averaged to create the curve. A significant discrepancy exists between the experimental curves observed, and a curve derived by applying gravity theory to the matter observed in a galaxy. Theories involving dark matter are the main postulated solutions to account for the variance.
The rotational/orbital speeds of galaxies/stars do not follow the rules found in other orbital systems such as stars/planets and planets/moons that have most of their mass at the centre. Stars revolve around their galaxy's centre at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in planetary systems and moons orbiting planets decline with distance according to Kepler’s third law. This reflects the mass distributions within those systems. The mass estimations for galaxies based on the light they emit are far too low to explain the velocity observations.
The galaxy rotation problem is the discrepancy between observed galaxy rotation curves and the theoretical prediction, assuming a centrally dominated mass associated with the observed luminous material. When mass profiles of galaxies are calculated from the distribution of stars in spirals and mass-to-light ratios in the stellar disks, they do not match with the masses derived from the observed rotation curves and the law of gravity. A solution to this conundrum is to hypothesize the existence of dark matter and to assume its distribution from the galaxy's center out to its halo. Thus the discrepancy between the two curves can be accounted for by adding a dark matter halo surrounding the galaxy.
Though dark matter is by far the most accepted explanation of the rotation problem, other proposals have been offered with varying degrees of success. Of the possible alternatives, one of the most notable is modified Newtonian dynamics (MOND), which involves modifying the laws of gravity.
History
In 1932, Jan Hendrik Oort became the first to report that measurements of the stars in the solar neighborhood indicated that they moved faster than expected when a mass distribution based upon visible matter was assumed, but these measurements were later determined to be essentially erroneous. In 1939, Horace Babcock reported in his PhD thesis measurements of the rotation curve for Andromeda which suggested that the mass-to-luminosity ratio increases radially. He attributed that to either the absorption of light within the galaxy or to modified dynamics in the outer portions of the spiral and not to any form of missing matter. Babcock's measurements turned out to disagree substantially with those found later, and the first measurement of an extended rotation curve in good agreement with modern data was published in 1957 by Henk van de Hulst and collaborators, who studied M31 with the Dwingeloo Radio Observatory's newly commissioned 25-meter radio telescope. A companion paper by Maarten Schmidt showed that this rotation curve could be fit by a flattened mass distribution more extensive than the light. In 1959, Louise Volders used the same telescope to demonstrate that the spiral galaxy M33 also does not spin as expected according to Keplerian dynamics.
Reporting on NGC 3115, Jan Oort wrote that "the distribution of mass in the system appears to bear almost no relation to that of light... one finds the ratio of mass to light in the outer parts of NGC 3115 to be about 250". On page 302–303 of his journal article, he wrote that "The strongly condensed luminous system appears imbedded in a large and more or less homogeneous mass of great density" and although he went on to speculate that this mass may be either extremely faint dwarf stars or interstellar gas and dust, he had clearly detected the dark matter halo of this galaxy.
The Carnegie telescope (Carnegie Double Astrograph) was intended to study this problem of Galactic rotation.
In the late 1960s and early 1970s, Vera Rubin, an astronomer at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, worked with a new sensitive spectrograph that could measure the velocity curve of edge-on spiral galaxies to a greater degree of accuracy than had ever before been achieved. Together with fellow staff-member Kent Ford, Rubin announced at a 1975 meeting of the American Astronomical Society the discovery that most stars in spiral galaxies orbit at roughly the same speed, and that this implied that galaxy masses grow approximately linearly with radius well beyond the location of most of the stars (the galactic bulge). Rubin presented her results in an influential paper in 1980. These results suggested either that Newtonian gravity does not apply universally or that, conservatively, upwards of 50% of the mass of galaxies was contained in the relatively dark galactic halo. Although initially met with skepticism, Rubin's results have been confirmed over the subsequent decades.
If Newtonian mechanics is assumed to be correct, it would follow that most of the mass of the galaxy had to be in the galactic bulge near the center and that the stars and gas in the disk portion should orbit the center at decreasing velocities with radial distance from the galactic center (the dashed line in Fig. 1).
Observations of the rotation curve of spirals, however, do not bear this out. Rather, the curves do not decrease in the expected inverse square root relationship but are "flat", i.e. outside of the central bulge the speed is nearly a constant (the solid line in Fig. 1). It is also observed that galaxies with a uniform distribution of luminous matter have a rotation curve that rises from the center to the edge, and most low-surface-brightness galaxies (LSB galaxies) have the same anomalous rotation curve.
The rotation curves might be explained by hypothesizing the existence of a substantial amount of matter permeating the galaxy outside of the central bulge that is not emitting light in the mass-to-light ratio of the central bulge. The material responsible for the extra mass was dubbed dark matter, the existence of which was first posited in the 1930s by Jan Oort in his measurements of the Oort constants and Fritz Zwicky in his studies of the masses of galaxy clusters.
Dark matter
While the observed galaxy rotation curves were one of the first indications that some mass in the universe may not be visible, many different lines of evidence now support the concept of cold dark matter as the dominant form of matter in the universe. Among the lines of evidence are mass-to-light ratios which are much too low without a dark matter component, the amount of hot gas detected in galactic clusters by x-ray astronomy, measurements of cluster mass with the Sunyaev–Zeldovich effect and with gravitational lensing. Models of the formation of galaxies are based on their dark matter halos. The existence of non-baryonic cold dark matter (CDM) is today a major feature of the Lambda-CDM model that describes the cosmology of the universe and matches high precision astrophysical observations.
Further investigations
The rotational dynamics of galaxies are well characterized by their position on the Tully–Fisher relation, which shows that for spiral galaxies the rotational velocity is uniquely related to their total luminosity. A consistent way to predict the rotational velocity of a spiral galaxy is to measure its bolometric luminosity and then read its rotation rate from its location on the Tully–Fisher diagram. Conversely, knowing the rotational velocity of a spiral galaxy gives its luminosity. Thus the magnitude of the galaxy rotation is related to the galaxy's visible mass.
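As a rough numerical illustration of this kind of scaling, here is a sketch using the baryonic form of the relation; the slope of 4 and the normalization of about 50 solar masses per (km/s)⁴ are approximate literature values treated here as assumptions, not the calibration of any particular survey:

```python
def baryonic_tully_fisher_mass(v_flat_km_s, normalization_msun=50.0, slope=4.0):
    """Estimate the baryonic mass (solar masses) from the flat rotation speed.

    M_b = A * v**slope; A ~ 50 M_sun (km/s)^-4 and slope ~ 4 are illustrative values only.
    """
    return normalization_msun * v_flat_km_s ** slope

print(baryonic_tully_fisher_mass(200.0))  # ~8e10 M_sun for a 200 km/s flat rotation speed
```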
While precise fitting of the bulge, disk, and halo density profiles is a rather complicated process, it is straightforward to model the observables of rotating galaxies through this relationship. So, while state-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included can be matched to galaxy observations, there is not yet any straightforward explanation as to why the observed scaling relationship exists. Additionally, detailed investigations of the rotation curves of low-surface-brightness galaxies (LSB galaxies) in the 1990s and of their position on the Tully–Fisher relation showed that LSB galaxies had to have dark matter haloes that are more extended and less dense than those of galaxies with high surface brightness, and thus surface brightness is related to the halo properties. Such dark-matter-dominated dwarf galaxies may hold the key to solving the dwarf galaxy problem of structure formation.
Very importantly, the analysis of the inner parts of low and high surface brightness galaxies showed that the shape of the rotation curves in the centre of dark-matter dominated systems indicates a profile different from the NFW spatial mass distribution profile. This so-called cuspy halo problem is a persistent problem for the standard cold dark matter theory. Simulations involving the feedback of stellar energy into the interstellar medium in order to alter the predicted dark matter distribution in the innermost regions of galaxies are frequently invoked in this context.
Halo density profiles
In order to accommodate a flat rotation curve, a density profile for a galaxy and its environs must be different than one that is centrally concentrated. Newton's version of Kepler's Third Law implies that the spherically symmetric, radial density profile is:
ρ(r) = v(r)² / (4πGr²)
where v(r) is the radial orbital velocity profile and G is the gravitational constant. This profile closely matches the expectations of a singular isothermal sphere profile: if v(r) is approximately constant then the density falls off as r⁻² down to some inner "core radius" where the density is then assumed constant. Observations do not comport with such a simple profile, as reported by Navarro, Frenk, and White in a seminal 1996 paper.
The authors then remarked that a "gently changing logarithmic slope" for a density profile function could also accommodate approximately flat rotation curves over large scales. They found the famous Navarro–Frenk–White profile, which is consistent both with N-body simulations and observations given by
ρ(r) = ρ0 / [(r/Rs)(1 + r/Rs)²]
where the central density, ρ0, and the scale radius, Rs, are parameters that vary from halo to halo. Because the slope of the density profile diverges at the center, other alternative profiles have been proposed, for example the Einasto profile, which has exhibited better agreement with certain dark matter halo simulations.
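A short sketch of how a nearly flat rotation curve follows from such a halo, using the standard enclosed-mass expression for the NFW profile; the halo parameters below are purely illustrative and not fitted to any galaxy:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def nfw_circular_velocity(r_kpc, rho0_msun_kpc3=1.0e7, rs_kpc=20.0):
    """Circular velocity (km/s) of an NFW halo at radius r (kpc).

    Enclosed mass: M(r) = 4*pi*rho0*Rs^3 * [ln(1 + x) - x/(1 + x)], with x = r/Rs.
    rho0 and Rs are assumed, illustrative halo parameters.
    """
    x = r_kpc / rs_kpc
    m_enclosed = 4.0 * np.pi * rho0_msun_kpc3 * rs_kpc**3 * (np.log1p(x) - x / (1.0 + x))
    return np.sqrt(G * m_enclosed / r_kpc)

radii = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
print(nfw_circular_velocity(radii))  # roughly 98, 141, 177, 204, 216 km/s: rising, then nearly flat
```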
Observations of orbit velocities in spiral galaxies suggest a mass structure according to:
v(r)² = r dΦ/dr, with Φ the galaxy gravitational potential.
Since observations of galaxy rotation do not match the distribution expected from application of Kepler's laws, they do not match the distribution of luminous matter. This implies that spiral galaxies contain large amounts of dark matter or, alternatively, the existence of exotic physics in action on galactic scales. The additional invisible component becomes progressively more conspicuous in each galaxy at outer radii and among galaxies in the less luminous ones.
A popular interpretation of these observations is that about 26% of the mass of the Universe is composed of dark matter, a hypothetical type of matter which does not emit or interact with electromagnetic radiation. Dark matter is believed to dominate the gravitational potential of galaxies and clusters of galaxies. Under this theory, galaxies are baryonic condensations of stars and gas (namely hydrogen and helium) that lie at the centers of much larger haloes of dark matter, affected by a gravitational instability caused by primordial density fluctuations.
Many cosmologists strive to understand the nature and the history of these ubiquitous dark haloes by investigating the properties of the galaxies they contain (i.e. their luminosities, kinematics, sizes, and morphologies). The measurement of the kinematics (their positions, velocities and accelerations) of the observable stars and gas has become a tool to investigate the nature of dark matter, as to its content and distribution relative to that of the various baryonic components of those galaxies.
Alternatives to dark matter
There have been a number of attempts to solve the problem of galaxy rotation by modifying gravity without invoking dark matter. One of the most discussed is modified Newtonian dynamics (MOND), originally proposed by Mordehai Milgrom in 1983, which modifies the Newtonian force law at low accelerations to enhance the effective gravitational attraction. MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, matching the baryonic Tully–Fisher relation, and the velocity dispersions of the small satellite galaxies of the Local Group.
Using data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) database, a group has found that the radial acceleration traced by rotation curves (an effect given the name "radial acceleration relation") could be predicted just from the observed baryon distribution (that is, including stars and gas but not dark matter). This so-called radial acceleration relation (RAR) might be fundamental for understanding the dynamics of galaxies. The same relation provided a good fit for 2693 samples in 153 rotating galaxies, with diverse shapes, masses, sizes, and gas fractions. Brightness in the near infrared, where the more stable light from red giants dominates, was used to estimate the density contribution due to stars more consistently. The results are consistent with MOND, and place limits on alternative explanations involving dark matter alone. However, cosmological simulations within a Lambda-CDM framework that include baryonic feedback effects reproduce the same relation, without the need to invoke new dynamics (such as MOND). Thus, a contribution due to dark matter itself can be fully predictable from that of the baryons, once the feedback effects due to the dissipative collapse of baryons are taken into account. MOND is not a relativistic theory, although relativistic theories which reduce to MOND have been proposed, such as tensor–vector–scalar gravity (TeVeS), scalar–tensor–vector gravity (STVG), and the f(R) theory of Capozziello and De Laurentis.
Attempts to model galaxy rotation with a general-relativistic metric, claiming that the rotation curves for the Milky Way, NGC 3031, NGC 3198 and NGC 7331 are consistent with the mass density distributions of the visible matter, as well as other similar work, have been disputed.
According to recent analysis of the data produced by the Gaia spacecraft, it would seem possible to explain at least the Milky Way's rotation curve without requiring any dark matter if instead of a Newtonian approximation the entire set of equations of general relativity is adopted.
See also
List of unsolved problems in physics
Long-slit spectroscopy
Nonsymmetric gravitational theory
Footnotes
Further reading
Primary research report discussing Oort limit, and citing original Oort 1932 study.
This 1991 data analysis concludes "that MOND is currently the best phenomenological description of the systematics of the discrepancy in galaxies."
Bibliography
Galactic Astronomy, Dmitri Mihalas and Paul McRae. W. H. Freeman 1968.
External links
The Case Against Dark Matter. About Erik Verlinde's approach to the problem. (November 2016)
Concepts in astrophysics
Rotation curve
Articles containing video clips
Physics beyond the Standard Model
Rotation | Galaxy rotation curve | [
"Physics",
"Astronomy"
] | 3,100 | [
"Physical phenomena",
"Concepts in astrophysics",
"Unsolved problems in physics",
"Classical mechanics",
"Rotation",
"Astrophysics",
"Galactic astronomy",
"Motion (physics)",
"Particle physics",
"Physics beyond the Standard Model",
"Astronomical sub-disciplines"
] |
183,120 | https://en.wikipedia.org/wiki/Overpressure | Overpressure (or blast overpressure) is the pressure caused by a shock wave over and above normal atmospheric pressure. The shock wave may be caused by sonic boom or by explosion, and the resulting overpressure receives particular attention when measuring the effects of nuclear weapons or thermobaric bombs.
Effects
According to an article in the journal Toxicological Sciences,
Blast overpressure (BOP), also known as high energy impulse noise, is a damaging outcome of explosive detonations and firing of weapons. Exposure to BOP shock waves alone results in injury predominantly to the hollow organ systems such as auditory, respiratory, and gastrointestinal systems.
An EOD suit worn by bomb disposal experts can protect against the effects of BOP.
The journal article also tabulates the effects of overpressure on the human body inside a building affected by a blast of overpressure waves.
According to documents released by the United States Military Defense Technical Information Center (DTIC),
Calculation for an enclosed space
Overpressure in an enclosed space is determined using "Weibull's formula":
ΔP = 22.5 · (W / V)^0.72
where:
ΔP = (bars) quasi-static overpressure
22.5 is a constant based on experimentation
W = (kilograms) net explosive mass calculated using all explosive materials and their relative effectiveness
V = (cubic meters) volume of given area (primarily used to determine volume within an enclosed space)
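A minimal sketch of this calculation; the 0.72 exponent and the bar units follow the commonly cited form of Weibull's relation, and the example charge mass and room volume are hypothetical:

```python
def enclosed_overpressure_bar(net_explosive_mass_kg, volume_m3):
    """Quasi-static overpressure (bar) in an enclosed space per Weibull's formula.

    delta_p = 22.5 * (W / V) ** 0.72, with W in kilograms and V in cubic meters.
    The 0.72 exponent and bar units are assumed from the common form of the relation.
    """
    return 22.5 * (net_explosive_mass_kg / volume_m3) ** 0.72

# Hypothetical example: 0.5 kg net explosive mass detonated in a 40 m^3 room
print(enclosed_overpressure_bar(0.5, 40.0))  # ~0.96 bar above ambient
```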
See also
Bomb disposal
References
Pressure
Explosives
Shock waves | Overpressure | [
"Physics",
"Chemistry"
] | 283 | [
"Scalar physical quantities",
"Physical phenomena",
"Mechanical quantities",
"Shock waves",
"Physical quantities",
"Pressure",
"Waves",
"Explosives",
"Explosions",
"Wikipedia categories named after physical quantities"
] |
183,132 | https://en.wikipedia.org/wiki/Connate%20fluids | In geology and sedimentology, connate fluids are liquids that were trapped in the pores of sedimentary rocks as they were deposited. These liquids are largely composed of water, but also contain many mineral components as ions in solution.
As rocks are buried, they undergo lithification and the connate fluids are usually expelled. If the escape route for these fluids is blocked, the pore fluid pressure can build up, leading to overpressure.
Significance
An understanding of the geochemistry of connate fluids is important if the diagenesis of the rock is to be quantified. The solutes in the connate fluids often precipitate and reduce the porosity and permeability of the host rock, which can have important implications for its hydrocarbon prospectivity. The chemical components of the connate fluid can also yield information on the provenance of aquifers and of the thermal history of the host rock. Minute bubbles of fluid are often trapped within the crystals of the cementing material. These fluid inclusions provide direct information about the composition of the fluid and the pressure-temperature conditions that existed during diagenesis of the sediments.
Some analyses of connate water samples from Louisiana (USA) compared to seawater
Similar, but different in origin, is the concept of fossil water, which describes very old groundwater found in deep aquifers or bedrock. Such water was typically recharged during a different climatic period (e.g., the last ice age), so it is also very old, but it is possibly not of the same genesis as the host rock.
See also
Petroleum geology
References
Petroleum
Sedimentology
Soil mechanics | Connate fluids | [
"Physics",
"Chemistry"
] | 329 | [
"Soil mechanics",
"Petroleum",
"Chemical mixtures",
"Applied and interdisciplinary physics"
] |
183,194 | https://en.wikipedia.org/wiki/Thermal%20history%20modelling | Thermal history modelling is an exercise undertaken during basin modelling to evaluate the temperature history of stratigraphic layers in a sedimentary basin.
The thermal history of a basin is usually calibrated using thermal indicator data, including vitrinite reflectance and fission tracks in the minerals apatite and zircon.
The temperatures undergone by rocks in a sedimentary basin are crucial when attempting to evaluate the quantity, nature and volume of hydrocarbons (fossil fuels) produced by diagenesis of kerogens (a group of chemicals formed from the decay of organic matter).
Fourier's law provides a simplified one-dimensional description of the variation in heat flow Q as a function of thermal conductivity k and thermal gradient dT/dz:

Q = −k dT/dz
(The minus sign indicates that heat flows in the opposite direction to increasing depth, that is, towards the Earth's surface.)
If the assumptions used to justify this simplified approximation (i.e. steady-state heat conduction, no convection or advection) are accepted, we define the simple 1-dimensional heat equation in which the temperature T at a depth z and time t is given by:

T(z,t) = T(0,t) + Q(t) ∫₀ᶻ dz′/k(z′)

where T(0,t) is the surface temperature history, Q(t) is the heat flow history and k is thermal conductivity. The integral thus represents the integrated thermal conductivity history of a 1-dimensional column of rock.
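A minimal Python sketch of this relation follows; the layer thicknesses, conductivities, surface temperature and heat flow used are illustrative values only, not data from any real basin.

def temperature_at_depth(layers, surface_temp_c, heat_flow_w_m2):
    """Steady-state 1-D temperature at the base of a stack of layers.

    layers: list of (thickness_m, conductivity_W_per_mK) from top to bottom.
    Implements T(z,t) = T(0,t) + Q(t) * integral of dz/k, with the integral
    evaluated as a sum of thickness/conductivity over the layers.
    """
    integrated_resistance = sum(h / k for h, k in layers)
    return surface_temp_c + heat_flow_w_m2 * integrated_resistance

# Illustrative two-layer column: 1500 m at k = 2.5 W/m·K over 2000 m at k = 3.0 W/m·K
print(temperature_at_depth([(1500.0, 2.5), (2000.0, 3.0)], 15.0, 0.065))  # ~97 °C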
Thermal history modelling attempts to describe the temperature history T(z,t) and therefore requires a knowledge of the burial history of the stratigraphic layers, which is obtained through the process of back-stripping.
References
See also
Petroleum geology
Petroleum geology
Sedimentology | Thermal history modelling | [
"Chemistry"
] | 325 | [
"Petroleum",
"Petroleum geology"
] |
183,241 | https://en.wikipedia.org/wiki/Satellite%20modem | A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa.
There are some devices that include only a demodulator (and no modulator, thus only allowing data to be downloaded by satellite) that are also referred to as "satellite modems." These devices are used in satellite Internet access (in this case uploaded data is transferred through a conventional PSTN modem or an ADSL modem).
Satellite link
A satellite modem is not the only device needed to establish a communication channel. Other equipment that is essential for creating a satellite link includes satellite antennas and frequency converters.
Data to be transmitted are transferred to a modem from data terminal equipment (e.g. a computer). The modem usually has intermediate frequency (IF) output (that is, 50-200 MHz), however, sometimes the signal is modulated directly to L band. In most cases, frequency has to be converted using an upconverter before amplification and transmission.
A modulated signal is a sequence of symbols, pieces of data represented by a corresponding signal state, e.g. a bit or a few bits, depending upon the modulation scheme being used. Recovering a symbol clock (making a local symbol clock generator synchronous with the remote one) is one of the most important tasks of a demodulator.
Similarly, a signal received from a satellite is firstly downconverted (this is done by a Low-noise block converter - LNB), then demodulated by a modem, and at last handled by data terminal equipment. The LNB is usually powered by the modem through the signal cable with 13 or 18 V DC.
Features
The main functions of a satellite modem are modulation and demodulation. Satellite communication standards also define error correction codes and framing formats.
Popular modulation types used for satellite communications (a minimal QPSK mapping sketch follows this list):
Binary phase-shift keying (BPSK);
Quadrature phase-shift keying (QPSK);
Offset quadrature phase-shift keying (OQPSK);
8PSK;
Quadrature amplitude modulation (QAM), especially 16QAM.
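As an illustration of the simplest of these phase-shift schemes, the sketch below (not tied to any particular satellite standard) shows a Gray-coded QPSK mapping in Python: each pair of bits selects one of four unit-energy phases, so neighbouring constellation points differ by a single bit.

import cmath

# Gray-coded QPSK constellation: bit pair -> unit-energy complex symbol.
QPSK_MAP = {
    (0, 0): cmath.exp(1j * cmath.pi / 4),
    (0, 1): cmath.exp(3j * cmath.pi / 4),
    (1, 1): cmath.exp(-3j * cmath.pi / 4),
    (1, 0): cmath.exp(-1j * cmath.pi / 4),
}

def qpsk_modulate(bits):
    # Group the bit stream into pairs and map each pair to a symbol.
    return [QPSK_MAP[pair] for pair in zip(bits[0::2], bits[1::2])]

print(qpsk_modulate([0, 0, 1, 1, 0, 1]))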
The popular satellite error correction codes include:
Convolutional codes:
with constraint length less than 10, usually decoded using a Viterbi algorithm (see Viterbi decoder);
with constraint length more than 10, usually decoded using a Fano algorithm (see Sequential decoder);
Reed–Solomon codes usually concatenated with convolutional codes with an interleaving;
New modems support superior error correction codes (turbo codes and LDPC codes).
Frame formats that are supported by various satellite modems include:
Intelsat business service (IBS) framing
Intermediate data rate (IDR) framing
MPEG-2 transport framing (used in DVB)
E1 and T1 framing
High-end modems also incorporate some additional features:
Multiple data interfaces (like RS-232, RS-422, V.35, G.703, LVDS, Ethernet);
Embedded Distant-end Monitor and Control (EDMAC), allowing the local modem to control the distant-end modem;
Automatic Uplink Power Control (AUPC), that is, adjusting the output power to maintain a constant signal-to-noise ratio at the remote end;
Drop and insert feature for a multiplexed stream, allowing some channels within it to be replaced.
Internal structure
Probably the best way to understand how a modem works is to look at its internal structure, as illustrated by a block diagram of a generic satellite modem.
Analog tract
After a digital-to-analog conversion in the transmitter, the signal passes through a reconstruction filter. Then, if needed, frequency conversion is performed.
The purpose of the analog tract in the receiver is to convert signal's frequency, to adjust its power via an automatic gain control circuit and to get its complex envelope components.
The input signal for the analog tract is at the intermediate frequency, sometimes, in the L band, in which case it must be converted to an IF. Then the signal is either sampled or processed by the four-quadrant multiplier which produces the complex envelope components (I, Q) through multiplying it by the heterodyne frequency (see superheterodyne receiver).
At last the signal passes through an anti-aliasing filter and is sampled or (digitized).
Modulator and demodulator
A digital modulator transforms a digital stream into a radio signal at the intermediate frequency (IF). A modulator is generally simpler than a demodulator because it doesn't have to recover symbol and carrier frequencies.
A demodulator is one of the most important parts of the receiver. The exact structure of the demodulator is defined by a modulation type. However, the fundamental concepts are similar. Moreover, it is possible to develop a demodulator that can process signals with different modulation types.
Digital demodulation implies that a symbol clock (and, in most cases, an intermediate frequency generator) at the receiving side has to be synchronous with those at the transmitting side. This is achieved by the following two circuits:
timing recovery circuit, determining the borders of symbols;
carrier recovery circuit, which determines the actual meaning of each symbol. There are modulation types (like frequency-shift keying) that can be demodulated without carrier recovery; however, this method, known as noncoherent demodulation, generally gives poorer error performance.
There are also additional components in the demodulator such as the intersymbol interference equalizer.
If the analog signal was digitized without a four-quadrant multiplier, the complex envelope has to be calculated by a digital complex mixer.
Sometimes a digital automatic gain control circuit is implemented in the demodulator.
FEC coding
Error correction techniques are essential for satellite communications, because, due to satellite's limited power a signal-to-noise ratio at the receiver is usually rather poor. Error correction works by adding an artificial redundancy to a data stream at the transmitting side and using this redundancy to correct errors caused by noise and interference. This is performed by an FEC encoder. The encoder applies an error correction code to the digital stream, thereby adding redundancy.
An FEC decoder decodes the forward error correction code used within the signal. For example, the Digital Video Broadcasting standard defines a concatenated code consisting of an inner convolutional code (standard NASA code, punctured, with rates 1/2, 2/3, 3/4, 5/6 and 7/8), interleaving and an outer Reed–Solomon code (block length: 204 bytes, information block: 188 bytes, able to correct up to 8 bytes in the block).
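The overhead of such a concatenated scheme can be checked with a little arithmetic; the Python sketch below simply multiplies the outer RS(204,188) rate by each inner convolutional rate listed above (a back-of-the-envelope illustration, not an encoder).

# Net code rate of the DVB-style concatenated code: outer RS(204,188)
# around an inner punctured convolutional code.
rs_rate = 188 / 204                 # outer Reed-Solomon code rate
for conv_rate in (1/2, 2/3, 3/4, 5/6, 7/8):
    net_rate = rs_rate * conv_rate  # fraction of channel bits carrying user data
    print(f"inner rate {conv_rate:.3f} -> net rate {net_rate:.3f}")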
Differential coding
There are several modulation types (such as PSK and QAM) that have a phase ambiguity, that is, a carrier can be restored in different ways. Differential coding is used to resolve this ambiguity.
When differential coding is used, the data are deliberately made to depend not only on the current symbol, but also on the previous one.
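A minimal sketch of the idea, using QPSK phase indices 0–3 (an illustrative scheme, not taken from any particular standard), is shown below: because the data ride on phase changes, a constant carrier-phase ambiguity introduced by the carrier recovery circuit cancels out on decoding.

def diff_encode(data, ref=0):
    # Transmit a reference symbol first, then accumulate phase changes.
    phases, state = [ref], ref
    for d in data:
        state = (state + d) % 4
        phases.append(state)
    return phases

def diff_decode(phases):
    # Each decoded symbol is the change between consecutive received phases.
    return [(b - a) % 4 for a, b in zip(phases, phases[1:])]

data = [1, 3, 0, 2]
received = [(p + 2) % 4 for p in diff_encode(data)]  # constant 180-degree offset
print(diff_decode(received) == data)                 # True: the offset cancels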
Scrambling
Scrambling is a technique used to randomize a data stream to eliminate long '0'-only and '1'-only sequences and to assure energy dispersal. Long '0'-only and '1'-only sequences create difficulties for timing recovery circuit. Scramblers and descramblers are usually based on linear-feedback shift registers.
A scrambler randomizes the transmitted data stream. A descrambler restores the original stream from the scrambled one.
Scrambling shouldn't be confused with encryption, since it doesn't protect information from intruders.
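A minimal sketch of such an additive scrambler is shown below. The 15-bit register length, feedback taps and seed are illustrative choices (they follow the commonly published DVB energy-dispersal generator, which should be treated as an assumption here), and because the keystream does not depend on the data, running the same function twice restores the original bits.

def lfsr_scramble(bits, seed=0b100101010000000, taps=(14, 15), length=15):
    # Free-running LFSR keystream XORed onto the data (additive scrambler).
    state = seed
    out = []
    for b in bits:
        feedback = 0
        for t in taps:
            feedback ^= (state >> (length - t)) & 1
        out.append(b ^ feedback)
        state = ((state << 1) | feedback) & ((1 << length) - 1)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = lfsr_scramble(data)
print(lfsr_scramble(scrambled) == data)   # True: descrambling is the same operation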
Multiplexing
A multiplexer transforms several digital streams into one stream. This is often referred to as 'muxing.'
Generally, a demultiplexer is a device that transforms one multiplexed data stream into several. Satellite modems don't have many outputs, so a demultiplexer here performs a drop operation, allowing the modem to choose the channels that will be transferred to the output.
A demultiplexer achieves this goal by maintaining frame synchronization.
Applications
Satellite modems are often used for home internet access.
There are two different types, both employing the Digital Video Broadcasting (DVB) standard as their basis:
One-way satmodems (DVB-IP modems) use a return channel not based on communication with the satellite, such as telephone or cable.
Two-way satmodems (DVB-RCS modems, also called astromodems) employ a satellite-based return channel as well; they do not need another connection. DVB-RCS is ETSI standard EN 301 790.
There are also industrial satellite modems intended to provide a permanent link. They are used, for example, in the Steel shankar network.
See also
Communications satellite
Data collection satellite
Yahsat
Intelsat
Satellite Internet access
VSAT
External links
ITU Radio Regulations, Section IV. Radio Stations and Systems – Article 1.113, definition: satellite link International Telecommunication Union (ITU)
Satellite broadcasting
Modems
Telecommunications equipment
Telecommunications infrastructure | Satellite modem | [
"Engineering"
] | 1,914 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
183,256 | https://en.wikipedia.org/wiki/Nuclear%20isomer | A nuclear isomer is a metastable state of an atomic nucleus, in which one or more nucleons (protons or neutrons) occupy excited state levels (higher energy levels). "Metastable" describes nuclei whose excited states have half-lives 100 to 1000 times longer than the half-lives of the excited nuclear states that decay with a "prompt" half life (ordinarily on the order of 10^−12 seconds). The term "metastable" is usually restricted to isomers with half-lives of 10^−9 seconds or longer. Some references recommend 5 × 10^−9 seconds to distinguish the metastable half life from the normal "prompt" gamma-emission half-life. Occasionally the half-lives are far longer than this and can last minutes, hours, or years. For example, the nuclear isomer survives so long (at least 10^15 years) that it has never been observed to decay spontaneously. The half-life of a nuclear isomer can even exceed that of the ground state of the same nuclide, as shown by as well as , , , , and multiple holmium isomers.
Sometimes, the gamma decay from a metastable state is referred to as isomeric transition, but this process typically resembles shorter-lived gamma decays in all external aspects with the exception of the long-lived nature of the meta-stable parent nuclear isomer. The longer lives of nuclear isomers' metastable states are often due to the larger degree of nuclear spin change which must be involved in their gamma emission to reach the ground state. This high spin change causes these decays to be forbidden transitions and delayed. Delays in emission are caused by low or high available decay energy.
The first nuclear isomer and decay-daughter system (uranium X2/uranium Z, now known as /) was discovered by Otto Hahn in 1921.
Nuclei of nuclear isomers
The nucleus of a nuclear isomer occupies a higher energy state than the non-excited nucleus existing in the ground state. In an excited state, one or more of the protons or neutrons in a nucleus occupy a nuclear orbital of higher energy than an available nuclear orbital. These states are analogous to excited states of electrons in atoms.
When excited atomic states decay, energy is released by fluorescence. In electronic transitions, this process usually involves emission of light near the visible range. The amount of energy released is related to bond-dissociation energy or ionization energy and is usually in the range of a few to few tens of eV per bond. However, a much stronger type of binding energy, the nuclear binding energy, is involved in nuclear processes. Due to this, most nuclear excited states decay by gamma ray emission. For example, a well-known nuclear isomer used in various medical procedures is technetium-99m, which decays with a half-life of about 6 hours by emitting a gamma ray of 140 keV of energy; this is close to the energy of medical diagnostic X-rays.
Nuclear isomers have long half-lives because their gamma decay is "forbidden" by the large change in nuclear spin needed to emit a gamma ray. For example, 180mTa has a spin of 9 and must gamma-decay to 180Ta with a spin of 1. Similarly, 99mTc has a spin of 1/2 and must gamma-decay to 99Tc with a spin of 9/2.
While most metastable isomers decay through gamma-ray emission, they can also decay through internal conversion. During internal conversion, energy of nuclear de-excitation is not emitted as a gamma ray, but is instead used to accelerate one of the inner electrons of the atom. These excited electrons then leave at a high speed. This occurs because inner atomic electrons penetrate the nucleus where they are subject to the intense electric fields created when the protons of the nucleus re-arrange in a different way.
In nuclei that are far from stability in energy, even more decay modes are known.
After fission, several of the fission fragments that may be produced have a metastable isomeric state. These fragments are usually produced in a highly excited state, in terms of energy and angular momentum, and go through a prompt de-excitation. At the end of this process, the nuclei can populate both the ground and the isomeric states. If the half-life of the isomers is long enough, it is possible to measure their production rate and compare it to that of the ground state, calculating the so-called isomeric yield ratio.
Metastable isomers
Metastable isomers can be produced through nuclear fusion or other nuclear reactions. A nucleus produced this way generally starts its existence in an excited state that relaxes through the emission of one or more gamma rays or conversion electrons. Sometimes the de-excitation does not completely proceed rapidly to the nuclear ground state. This usually occurs as a spin isomer when the formation of an intermediate excited state has a spin far different from that of the ground state. Gamma-ray emission is hindered if the spin of the post-emission state differs greatly from that of the emitting state, especially if the excitation energy is low. The excited state in this situation is a good candidate to be metastable if there are no other states of intermediate spin with excitation energies less than that of the metastable state.
Metastable isomers of a particular isotope are usually designated with an "m". This designation is placed after the mass number of the atom; for example, cobalt-58m1 is abbreviated , where 27 is the atomic number of cobalt. For isotopes with more than one metastable isomer, "indices" are placed after the designation, and the labeling becomes m1, m2, m3, and so on. Increasing indices, m1, m2, etc., correlate with increasing levels of excitation energy stored in each of the isomeric states (e.g., hafnium-178m2, or ).
A different kind of metastable nuclear state (isomer) is the fission isomer or shape isomer. Most actinide nuclei in their ground states are not spherical, but rather prolate spheroidal, with an axis of symmetry longer than the other axes, similar to an American football or rugby ball. This geometry can result in quantum-mechanical states where the distribution of protons and neutrons is so much further from spherical geometry that de-excitation to the nuclear ground state is strongly hindered. In general, these states either de-excite to the ground state far more slowly than a "usual" excited state, or they undergo spontaneous fission with half-lives of the order of nanoseconds or microseconds—a very short time, but many orders of magnitude longer than the half-life of a more usual nuclear excited state. Fission isomers may be denoted with a postscript or superscript "f" rather than "m", so that a fission isomer, e.g. of plutonium-240, can be denoted as plutonium-240f or .
Nearly stable isomers
Most nuclear excited states are very unstable and "immediately" radiate away the extra energy after existing on the order of 10−12 seconds. As a result, the characterization "nuclear isomer" is usually applied only to configurations with half-lives of 10−9 seconds or longer. Quantum mechanics predicts that certain atomic species should possess isomers with unusually long lifetimes even by this stricter standard and have interesting properties. Some nuclear isomers are so long-lived that they are relatively stable and can be produced and observed in quantity.
The most stable nuclear isomer occurring in nature is tantalum-180m (180mTa), which is present in all tantalum samples at about 1 part in 8,300. Its half-life is at least 10^15 years, markedly longer than the age of the universe. The low excitation energy of the isomeric state causes both gamma de-excitation to the ground state (which itself is radioactive by beta decay, with a half-life of only 8 hours) and direct electron capture to hafnium or beta decay to tungsten to be suppressed due to spin mismatches. The origin of this isomer is mysterious, though it is believed to have been formed in supernovae (as are most other heavy elements). Were it to relax to its ground state, it would release a photon with a photon energy of 75 keV.
It was first reported in 1988 by C. B. Collins that 180mTa can theoretically be forced to release its energy by weaker X-rays, although at that time this de-excitation mechanism had never been observed. However, the de-excitation of 180mTa by resonant photo-excitation of intermediate high levels of this nucleus (E ≈ 1 MeV) was observed in 1999 by Belic and co-workers in the Stuttgart nuclear physics group.
Hafnium-178m2 (178m2Hf) is another reasonably stable nuclear isomer. It possesses a half-life of 31 years and the highest excitation energy of any comparably long-lived isomer. One gram of pure 178m2Hf contains approximately 1.33 gigajoules of energy, the equivalent of exploding roughly 320 kilograms of TNT. In the natural decay of 178m2Hf, the energy is released as gamma rays with a total energy of 2.45 MeV. As with 180mTa, there are disputed reports that 178m2Hf can be stimulated into releasing its energy. Due to this, the substance is being studied as a possible source for gamma-ray lasers. These reports indicate that the energy is released very quickly, so that 178m2Hf can produce extremely high powers (on the order of exawatts). Other isomers have also been investigated as possible media for gamma-ray stimulated emission.
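The quoted figure can be checked with straightforward arithmetic, as in the Python sketch below, which takes 2.45 MeV released per decaying nucleus and a molar mass of roughly 178 g/mol.

AVOGADRO = 6.022e23      # nuclei per mole
MEV_TO_J = 1.602e-13     # joules per MeV

nuclei_per_gram = AVOGADRO / 178.0
energy_per_gram_j = nuclei_per_gram * 2.45 * MEV_TO_J
print(f"{energy_per_gram_j:.2e} J per gram")   # about 1.3e9 J, i.e. roughly 1.33 GJ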
Holmium's nuclear isomer has a half-life of 1,200 years, which is nearly the longest half-life of any holmium radionuclide. Only , with a half-life of 4,570 years, is more stable.
Thorium-229 has a remarkably low-lying metastable isomer only about 8 eV above the ground state. This low energy produces "gamma rays" at a wavelength of , in the far ultraviolet, which allows for direct nuclear laser spectroscopy. Such ultra-precise spectroscopy, however, could not begin without a sufficiently precise initial estimate of the wavelength, something that was only achieved in 2024 after two decades of effort. The energy is so low that the ionization state of the atom affects its half-life. Neutral thorium-229m decays by internal conversion with a half-life of , but because the isomeric energy is less than thorium's second ionization energy of , this channel is forbidden in thorium cations and the isomer decays by gamma emission with a half-life of . This conveniently moderate lifetime allows the development of a nuclear clock of unprecedented accuracy.
High-spin suppression of decay
The most common mechanism for suppression of gamma decay of excited nuclei, and thus the existence of a metastable isomer, is lack of a decay route for the excited state that will change nuclear angular momentum along any given direction by the most common amount of 1 quantum unit ħ in the spin angular momentum. This change is necessary to emit a gamma photon, which has a spin of 1 unit in this system. Integral changes of 2 and more units in angular momentum are possible, but the emitted photons carry off the additional angular momentum. Changes of more than 1 unit are known as forbidden transitions. Each additional unit of spin change larger than 1 that the emitted gamma ray must carry inhibits decay rate by about 5 orders of magnitude. The highest known spin change of 8 units occurs in the decay of 180mTa, which suppresses its decay by a factor of 10^35 from that associated with 1 unit. Instead of a natural gamma-decay half-life of 10^−12 seconds, it has a half-life of more than 10^23 seconds, or at least 3 × 10^15 years, and thus has yet to be observed to decay.
Gamma emission is impossible when the nucleus begins in a zero-spin state, as such an emission would not conserve angular momentum.
Applications
Hafnium isomers (mainly 178m2Hf) have been considered as weapons that could be used to circumvent the Nuclear Non-Proliferation Treaty, since it is claimed that they can be induced to emit very strong gamma radiation. This claim is generally discounted. DARPA had a program to investigate this use of both nuclear isomers. The potential to trigger an abrupt release of energy from nuclear isotopes, a prerequisite to their use in such weapons, is disputed. Nonetheless a 12-member Hafnium Isomer Production Panel (HIPP) was created in 2003 to assess means of mass-producing the isotope.
Technetium isomers 99mTc (with a half-life of 6.01 hours) and 95mTc (with a half-life of 61 days) are used in medical and industrial applications.
Nuclear batteries
Nuclear batteries use small amounts (milligrams and microcuries) of radioisotopes with high energy densities. In one betavoltaic device design, radioactive material sits atop a device with adjacent layers of P-type and N-type silicon. Ionizing radiation directly penetrates the junction and creates electron–hole pairs. Nuclear isomers could replace other isotopes, and with further development, it may be possible to turn them on and off by triggering decay as needed. Current candidates for such use include 108Ag, 166Ho, 177Lu, and 242Am. As of 2004, the only successfully triggered isomer was 180mTa, which required more photon energy to trigger than was released.
An isotope such as 177Lu releases gamma rays by decay through a series of internal energy levels within the nucleus, and it is thought that by learning the triggering cross sections with sufficient accuracy, it may be possible to create energy stores that are 10^6 times more concentrated than high explosive or other traditional chemical energy storage.
Decay processes
An isomeric transition or internal transition (IT) is the decay of a nuclear isomer to a lower-energy nuclear state. The actual process has two types (modes):
γ (gamma ray) emission (emission of a high-energy photon),
internal conversion (the energy is used to eject one of the atom's electrons).
Isomers may decay into other elements, though the rate of decay may differ between isomers. For example, 177mLu can beta-decay to 177Hf with a half-life of 160.4 d, or it can undergo isomeric transition to 177Lu with a half-life of 160.4 d, which then beta-decays to 177Hf with a half-life of 6.68 d.
The emission of a gamma ray from an excited nuclear state allows the nucleus to lose energy and reach a lower-energy state, sometimes its ground state. In certain cases, the excited nuclear state following a nuclear reaction or other type of radioactive decay can become a metastable nuclear excited state. Some nuclei are able to stay in this metastable excited state for minutes, hours, days, or occasionally far longer.
The process of isomeric transition is similar to gamma emission from any excited nuclear state, but differs by involving excited metastable states of nuclei with longer half-lives. As with other excited states, the nucleus can be left in an isomeric state following the emission of an alpha particle, beta particle, or some other type of particle.
The gamma ray may transfer its energy directly to one of the most tightly bound electrons, causing that electron to be ejected from the atom, a process termed the photoelectric effect. This should not be confused with the internal conversion process, in which no gamma-ray photon is produced as an intermediate particle.
See also
Induced gamma emission
Isomeric shift
Mössbauer effect
References
External links
Research group which presented initial claims of hafnium nuclear isomer de-excitation control. – The Center for Quantum Electronics, The University of Texas at Dallas.
JASON Defense Advisory Group report on high energy nuclear materials mentioned in the Washington Post story above
login required?
Confidence for Hafnium Isomer Triggering in 2006. – The Center for Quantum Electronics, The University of Texas at Dallas.
Reprints of articles about nuclear isomers in peer reviewed journals. – The Center for Quantum Electronics, The University of Texas at Dallas.
Isomer, nuclear | Nuclear isomer | [
"Physics"
] | 3,338 | [
"Nuclear physics"
] |
183,350 | https://en.wikipedia.org/wiki/Ambidexterity | Ambidexterity is the ability to use both the right and left hand equally well. When referring to objects, the term indicates that the object is equally suitable for right-handed and left-handed people. When referring to humans, it indicates that a person has no marked preference for the use of the right or left hand.
Only about one percent of people are naturally ambidextrous, which equates to about 80,000,000 people in the world today. In modern times, it is common to find some people considered ambidextrous who were originally left-handed and who learned to be ambidextrous, either by choice or as a result of training in schools or in jobs where right-handedness is often emphasized or required. Since many everyday devices such as can openers and scissors are asymmetrical and designed for right-handed people, many left-handers learn to use them right-handedly due to the rarity or lack of left-handed models. Thus, left-handed people are more likely to develop motor skills in their non-dominant hand than right-handed people.
Etymology
The word "ambidextrous" is derived from the Latin roots ambi-, meaning "both", and dexter, meaning "right" or "favorable". Thus, ambidextrous is literally "both right" or "both favorable". The term ambidexter in English was originally used in a legal sense of jurors who accepted bribes from both parties for their verdict.
Writing
Some people can write with both hands. Famous examples include Albert Einstein, Benjamin Franklin, Nikola Tesla, James A. Garfield, and Leonardo da Vinci.
In India's Singrauli district there is a unique ambidextrous school named Veena Vadini School in Budhela village, where students are taught to write simultaneously with both hands.
Sports
Baseball
Ambidexterity is highly prized in the sport of baseball. "Switch hitting" is the most common phenomenon, and is highly prized because a batter usually has a higher statistical chance of successfully hitting the baseball when it is thrown by an opposite-handed pitcher. Therefore, an ambidextrous hitter can bat from whichever side is more advantageous to them in that situation. Pete Rose, the record holder for most hits in Major League Baseball, was a switch hitter.
Switch pitchers, comparatively rare in contrast to switch hitters, also exist. Tony Mullane won 284 games in the 19th century. Elton Chamberlain and Larry Corcoran were also notable ambidextrous pitchers. In the 20th century, Greg A. Harris was the only major league pitcher to pitch with both his left and his right arm. A natural right-hander, by 1986 he could throw well enough with his left hand that he felt capable of pitching with either hand in a game. Harris was not allowed to throw left-handed in a regular-season game until September 1995 in the penultimate game of his career. Against the Cincinnati Reds in the ninth inning, Harris (then a member of the Montreal Expos) retired Reggie Sanders pitching right-handed, then switched to his left hand for the next two hitters, Hal Morris and Ed Taubensee, who both batted left-handed. Harris walked Morris but got Taubensee to ground out. He then went back to his right hand to retire Bret Boone to end the inning.
In the 21st century there is only one major league pitcher, Pat Venditte of the Seattle Mariners, who regularly pitches with both arms. Venditte became the 21st century's first switch pitcher in the major leagues with his debut on June 5, 2015, against the Boston Red Sox, pitching two innings, allowing only one hit and recording five outs right-handed and one out left-handed. During his career, an eponymous "Venditte Rule" was created restricting the ability of a pitcher to change arms in the middle of an at-bat.
Billy Wagner was a natural right-handed pitcher in his youth, but after breaking his throwing arm twice, he taught himself how to use his left arm by throwing nothing but fastballs against a barn wall. He became a dominant left-handed relief pitcher, most known for his 100+ mph fastball. In his 1999 season, Wagner captured the National League Relief Man of the Year Award as a Houston Astro.
St. Louis Cardinals pitcher Brett Cecil is naturally right-handed, but starting from a very early age, threw with his left. As such, he writes and performs most tasks with the right side of his body, but throws with his left.
Basketball
In basketball a player may choose to make a pass or shot with the weaker hand. NBA stars LeBron James, Larry Bird, Kyrie Irving, Carlos Boozer, David Lee, John Wall, Derrick Rose, Chandler Parsons, Andrew Bogut, John Henson, Michael Beasley, and Jerryd Bayless are ambidextrous players, as was Kobe Bryant. Bogut and Henson are both stronger in the post with their left-handed hook shot than they are with their natural right hands. Brothers Marc and Pau Gasol can make hook shots with either hand while the right hand is dominant for each. Bob Cousy, a Boston Celtics legend was forced to play with his left hand in high school when he injured his right hand, thus making him effectively ambidextrous. Mike Conley shoots left-handed, but has preferred to shoot floaters right handed, as he does everything else right-handed off the court. Ben Simmons and Luke Kennard are also natural right-handers shooting left-handed. Tristan Thompson is a natural left-hander, and was a left-handed shooter, but has shot right-handed since the 2013–2014 season. He does perform left-handed hook shots more often. Los Angeles Lakers center DeAndre Jordan who is left-handed, shoots with his left hand but has been known to dunk with his right hand, spin clockwise in his 360 dunks, and shoot right handed hook shots more accurately and from further out. Charlotte Hornets power forward Miles Bridges is a left-handed shooter; however, he dunks the ball and blocks shots more frequently with his right hand. Former Los Angeles Lakers center Roy Hibbert shoots his hook shots equally well with either hand. Former Oklahoma City Thunder left-handed point guard Derek Fisher used to dunk with his right hand in his early years. Candace Parker, forward for the Chicago Sky, also has equal dominance with either hand. Los Angeles Lakers superstar Kobe Bryant shot with either hand, although his right hand was dominant: due to an injury to the right hand, he was forced to shoot with his left. Paul George, Tracy McGrady and Vince Carter are all noted to be right-handed, but rotates clockwise for dunks, but Carter is able to also spin counterclockwise, as he did during high school. McGrady also spins anti-clockwise for his baseline dunks. Larry Bird, LeBron James, Paul Millsap, Russell Westbrook, Danny Ainge and Gary Payton shoot right-handed, but do almost everything left-handed off the courts, but Bird once had a game in which he only shot left-handed running hook shots, cross passes and layups. Ronnie Price, however has a tendency to dunk with his left hand, but he is a right-handed shooter. Josh McRoberts is known to be a left handed shooter but does everything with his right hand such as his famous dunks. Ivica Zubac is a right handed shooter, but can shoot hook shots with both hands, and is more accurate with his left handed hooks. Greg Monroe is also a left-handed shooter but does right handed jump hooks and everything else right handed off the court.
Trevor Booker is left handed for shooting a basketball but writes with his right hand. Ben Simmons shoots jumpers and free throws left-handed, but does everything else right-handed, including dunking, throwing long passes and writing. He also shoots more right-handed non-jumpers (layups, floaters and hook shots).
Board sports
In skateboarding, being able to skate successfully with not only one's dominant foot forward but also the less dominant one is called "switch skating", or "skating goofy", and is a prized ability. To illustrate the stances further; there is "Regular" which is left shoulder and foot towards the front of the board and the opposite (right shoulder foot towards the front) is referred to as goofy. These terms hold true to surfing and snowboarding. With skateboarding, whether one pushes with their front or back foot determines whether they are considered regular v. regular-mongo or goofy v. goofy-mongo. The ability to ride both regular and goofy is considered to be "switch stance". Notable switch skateboarders include Rodney Mullen, Eric Koston, Guy Mariano, Paul Rodriguez Jr., Mike Mo Capaldi, and Bob Burnquist. Similarly, surfers who ride equally well in either stance are said to be surfing "switch”. Also, snowboarding at the advanced level requires the ability to ride equally well in either.
Combat sports
In combat sports fighters may choose to face their opponent with either the left shoulder forward in a right-handed stance ("orthodox") or the right shoulder forward in a left-handed stance ("south-paw"), thus a degree of cross dominance is useful. In boxing, Manny Pacquiao has a southpaw stance in the ring even though he is really ambidextrous outside the ring. Also, in mixed martial arts, many naturally left-handed strikers like Lyoto Machida and Anderson Silva will switch stances in order to counter opponent's strikes or takedown attempts to stay standing. Additionally, some fighters actually choose to fight in a southpaw stance despite their dominant hand being their right, one such fighter being Vasyl Lomachenko. This is done as it gives access to a strong and precise jab from the lead hand, which is arguably the most important strike in boxing for setting up combos and interrupting your opponent during their attacks. Bruce Lee also practiced this same method of fighting with his dominant hand forward. Left handed fighters such as Oscar De La Hoya, Miguel Cotto, Andre Ward, and Gerry Cooney fought in orthodox. This made their left hooks their most powerful weapons, along with enhancing the strength of their jab.
Cricket
In cricket, it is also beneficial to be able to use both arms. Ambidextrous fielders can make one-handed catches or throws with either hand. Sachin Tendulkar uses his left hand for writing, but bats and bowls with his right hand, it is the same with Ajinkya Rahane, Kane Williamson, and Shane Watson. There are many players who are naturally right-handed but bat left and vice versa. Sourav Ganguly, Thisara Perera uses his right hand for writing and bowls with the right hand, too, but bats with his left hand. Players due to injuries may also switch arms for fielding. Zaheer Khan bowls left-arm fast-medium but bats right handed. Phillip Hughes batted, bowled, and fielded left-handed before a shoulder injury. Australian batsman George Bailey also due to sustaining an injury, taught himself to throw with his weaker left arm. He is now often seen throughout matches switching between arms as he throws the ball. See also reverse sweep and switch hitting. David Warner has batted right-handed in high school, and has practiced right-handed as well, when he is normally a left-handed switch-hitter. Alastair Cook, Jimmy Anderson, Stuart Broad, Ben Stokes, Eoin Morgan, Ben Dunk, Adam Gilchrist, Matthew Hayden, Travis Head, Chris Gayle, Gautam Gambhir, Rishabh Pant, Ishan Kishan, Devdutt Padikkal, Yashasvi Jaiswal, Smriti Mandhana and Kagiso Rabada are natural right-handers, but bat left-handed.
Michael Clarke is naturally a left handed person who bowls left handed but bats right handed.
Akshay Karnewar is an ambidextrous bowler. Originally, he only bowled with his right hand, but since he does everything else with his left hand, he was taught to bowl left-handed as well but needs to signal to the umpire when he switches hands when bowling to allow for the field to change. He is a left-handed batsman. As an off-spinner and left-arm orthodox spin, the ball will always spin towards the batsman (OB vs. RHB; SLO vs. LHB), or away from opposite-handed batsmen, which is the predominant role of switch-handed spinners.
Sri Lankan Kusal Perera started his cricket as a right hand batsman, until he changed to left hand to mimic his favourite cricketer Sanath Jayasuriya. Jayasuriya bats and bowls left handed but writes with his right hand. Another Sri Lankan Kamindu Mendis is also a handy ambidextrous bowler. He can bowl orthodox left-arm spin and he can bowl right-arm offspin as well. Yasir Jan, however is a fast bowler both right and left handed and tops over 140 km/h with both hands, with his right arm being faster.
Jofra Archer warms up with slow orthodox left-arm spin, and Jos Buttler practiced left-handed as a club cricketer.
Cue sports
In cue sports, players can reach farther across the table if they are able to play with either hand, since the cue must either be placed on the left or the right side of the body. English snooker player Ronnie O'Sullivan is a rarity amongst top snooker professionals, in that he is able to play to world-class standard with either hand. While he lacks power in his left arm, his ability to alternate hands allows him to take shots that would otherwise require awkward cueing or the use of a rest. When he first displayed this ability in the 1996 World Championship against the Canadian player Alain Robidoux, Robidoux accused him of disrespect. O'Sullivan responded that he played better with his left hand than Robidoux could with his right. O'Sullivan was summoned to a disciplinary hearing in response to Robidoux's formal complaint, where he had to prove that he could play to a high level with his left hand.
Figure skating
In figure skating, most skaters who are right-handed spin and jump to the left, and vice versa for left-handed individuals, but it is also partly down to habit, as with ballet dancers. Olympic Champion figure skater John Curry notably performed his jumps in one direction (anti-clockwise) while spinning predominantly in the other. Very few skaters have such an ability to perform jumps and spins in both directions, and it is now considered a "difficult variation" in spins under the ISU Judging System to rotate in the non-dominant direction. Michelle Kwan used an opposite-rotating camel spin in some of her programs as a signature move. No point bonus exists for opposite direction jumps or bi-directional combination jumps, despite their being much harder to perfect. Nobody can perform a jump sequence from clockwise to anti-clockwise, or vice versa, because it requires a change of edge, whereas a combination is maintained on the same edge.
Football codes
American football
In American football, it is especially advantageous to be able to use either arm to perform various tasks. Ambidextrous receivers can make one-handed catches with either hand; linemen can hold their shoulders square and produce an equal amount of power with both arms; and punters can handle a bad snap and roll out and punt with either leg, limiting the chance of a block. Naturally right-handed quarterbacks may have to perform left-handed passes to avoid sacks. Chris Jones is cross-dominant. Although he is a left-footed punter, he throws with his right. Chris Hanson was dual-footed, able to punt with either foot.
Golf
Some players find cross-dominance advantageous in golf, especially if a left-handed player utilizes right-handed clubs. Having more precise coordination with the left hand is believed to allow better-controlled, and stronger drives. Mac O'Grady was a touring pro who played right-handed, yet could play "scratch" (no handicap) golf left-handed. He lobbied the USGA for years to be certified as an amateur "lefty" and a pro "righty" to no avail. Although not ambidextrous, Phil Mickelson and Mike Weir are both right-handers who golf left-handed; Ben Hogan was the opposite, being a natural left-hander who played golf right-handed, as is Cristie Kerr. This is known as cross-dominance or mixed-handedness.
Hockey
Ice hockey players may shoot from the left or right side of the body. For the most part, right-handed players shoot left and, likewise, most left-handed players shoot right as the player will often wield the stick one-handed. The dominant hand is typically placed on the top of the stick to allow for better stickhandling and control of the puck. Gordie Howe was one of few players capable of doing both, although this was at a time when the blade of the stick was not curved.
Another ice hockey goaltender Bill Durnan, had the ability to catch the puck with either hand. He won the Vezina Trophy, then for the National Hockey League's goalie with the fewest goals allowed six times out of only seven seasons. He had developed this ability playing for church-league teams in Toronto and Montreal to make up for his poor lateral movement. He wore custom gloves that permitted him to hold his stick with either hand. Most goaltenders nowadays choose to catch with their non-dominant hand.
Field hockey players are forced to play right-handed. The rules of the game denote that the ball can only be struck with the flat side of the stick. Only one player, Laeeq Ahmed of the Pakistan national hockey team, played with full command using an unorthodox grip, with the left hand below and the right hand at the top of the stick. He played for the national team from 1991 to 1992. Perhaps to avoid confusing referees, there are no left-handed sticks. In floorball, like ice hockey, right-handed players shoot left and, likewise, most left-handed players shoot right as the player will often wield the stick one-handed. Floorball goalkeepers do not use a stick, so they have two glove hands, and act much like a soccer goalkeeper, but with an ice hockey helmet. When they venture out of the goal box, they act just like an outfield soccer player.
Lacrosse
In field lacrosse, which is more popular in the United States, it is extremely advantageous to be able to use both hands, as players can play on both sides of the field and are harder to defend against. Usually in field lacrosse, all players except goalies, but especially offensive players, are expected to be able to catch and throw with their weak hand. However, in box lacrosse, which is more popular in Canada, players often only use their dominant hand, like in hockey.
Martial arts
The traditional martial arts tend to feature a larger number of practitioners who have intentionally developed ambidexterity to a high degree, compared to athletes in combat sports. This is because unlike sports, which have structured rules and common player preferences, traditional martial arts are intended for situations such as self-defense, in which a wider array of physical challenges may occur.
Some arts and schools practice all or most techniques and movements with both sides, while others emphasize that some techniques should only be trained on the right or the left (though both sides tend to eventually receive nearly equal attention). This may be for a number of reasons. Some of these arts rely on the tendency of right-handed people to move differently with the left side than with the right, and attempt to take advantage of this. Similarly, certain weapons are more often carried on one side. For instance, most weapons in ancient China were wielded primarily with the right hand and on the right side; this habit has carried on to the practice of those weapons in modern times. As an example, in xingyiquan, most schools that teach spear-fighting only practice on the right side, although much of the rest of the art is ambidextrous in practice.
Professional wrestling
Shawn Michaels is ambidextrous. He typically kicks with his right leg in Sweet Chin Music, but uses either arm for his signature elbow drop, depending on the position.
Racing
In professional sports car racing, drivers who participate in various events in both the United States and Europe will sometimes encounter machines with the steering wheel mounted on different sides of the car. While steering ability is largely unaffected, the hand used for shifting changes is, due to the shift pattern relative to the driver changing, i.e., a gear change that requires moving the lever toward the driver in a left-hand-drive vehicle becomes a movement away from the driver in a right-hand-drive vehicle. A driver skilled in shifting with the opposite hand is at an advantage.
Racket sports
In tennis, a player may be able to reach balls on the backhand side more easily if they're able to use the weaker hand. An example of a player who is ambidextrous is Luke Jensen. Due to a physical advantage in the time needed to match the ball with the racket while simultaneously tracking the opponent's movement, being laterality-crossed in eyedness and handedness may be a decisive factor for outstanding performance, since the hand which strikes the ball can do so while the dominant eye matching that hand tracks the opponent's movement decisions. Such is the case with Rafael Nadal, who uses his right hand for writing but plays tennis with his left. There are many players who are naturally right handed, but play lefty and vice versa. Evgenia Kulikovskaya is also an ambidextrous player; she played with two forehands and no backhand, switching her racket hand depending on where the ball was coming. Jan-Michael Gambill is the opposite case of Kulikovskaya, since he played with a two-handed forehand and backhand, although he served with his right hand. Other famous examples of a two-handed forehand are Fabrice Santoro and Monica Seles. Seles' playing style was unusual in that she hit with two hands on both sides and, at the same time, always kept her (dominant) left hand at the base of her racket. This meant that she hit her forehand cross-handed. Maria Sharapova is also known to be ambidextrous. Cheong-eui Kim is a truly ambidextrous player with no backhand, and can serve left-handed as well as right-handed.
Some table tennis players have used their ability to hit with their non-dominant hand to return balls out of reach of their dominant hand's backhand, most notably Timo Boll, a former world #1 player.
Although it is quite uncommon, in badminton, ambidextrous players are able to switch the racquet between their hands, often to get to the awkward backhand corner quickly. As badminton can be a very fast sport at professional levels of play, players might not have time to switch the racquet, as this disrupts their reaction time.
Rugby
In rugby league and rugby union being ambidextrous is an advantage when it comes to passing the ball between teammates as well as being able to use both feet by the halves is an advantage in gaining field position by kicking the ball ahead. Jonny Wilkinson is a prime example of a union player who is good at kicking with both feet, he is left handed and normally place kicks using his left, but he dropped the goal that won the Rugby World Cup in 2003 with his right. Dan Carter is actually right handed, but kicks predominantly with his left, sometimes with his right.
Volleyball
A volleyball player has to be ambidextrous to control the ball in either direction and to perform basic digs. The setter, in turn, has to be proficient in performing dump sets with either hand to throw off blockers. Wing spikers that can spike with either hand can alter trajectories to throw off receivers' timing.
In art
Although most artists have a favored hand, some artists use both of their hands for arts such as drawing and sculpting. It is believed that Leonardo da Vinci utilized both of his hands after an injury to his right hand during his early childhood.
A contemporary artist, Gur Keren, can draw with both his hands and even feet. Thea Alba was a well-known German who could write with all ten fingers.
In music
In drum and bugle corps (and drum and bell corps), snare drummers, quads (tenors), and bass drummers need to be somewhat ambidextrous. Since they have to abide by what the composer/arranger has written, they have to learn to play evenly in terms of dynamics and speed with their right and left hands. Former Beatles member Paul McCartney is left-handed (guitar and bass guitar) and played left-handed when performing (as can be seen in many photos and videos throughout his musical career). The drummer of The Beatles, Ringo Starr, is left-handed as well, but he plays a right-handed drum kit. American instrumental guitarist Michael Angelo Batio is known for being able to play both right-handed and left-handed guitar proficiently.
The ambidexterity of Jimi Hendrix has been explored in psychological research, but he was known for playing a standard right-handed guitar with his left hand. The guitarist Duane Allman was the reverse of Hendrix, playing guitar right-handed but using his left hand for all other tasks. Shara Lin is naturally left-handed, but plays the violin and guitar right-handed. She can also play the piano with her left hand while playing the zither with her right. More generally, naturally left-handed musicians often have to play instruments that are only built right-handed (violin, viola, cello).
Kurt Cobain, frontman of Nirvana, was naturally ambidextrous. He grew up having a slight preference for his left-hand (as can be seen in many of his childhood photographies), but as an adult he wrote right-handed. He played guitar exclusively left-handed.
Tools
With respect to tools, ambidextrous may be used to mean that the tool may be used equally well with either hand; an "ambidextrous knife" refers to the opening mechanism and locking mechanism on a folding knife. It can also mean that the tool can be interchanged between left and right in some other way, such as an "ambidextrous headset," which can be worn on either the left or right ear. Many tools and implements are made specifically for use in the right hand, and will not work properly if used in the other hand. There exist shops dedicated to selling implements and tools made specifically for left-handed use. For example, left-handed, and ambidextrous, scissors are available.
Many knives are sold sharpened asymmetrically for right-hand use, and resharpened in the same way. It is possible to buy knives sharpened for left-handed use, and to sharpen any knife in that way.
Medicine and surgery
A degree of ambidexterity is required in surgery because surgeons must be able to tie with their left and right hands in either single or double knots. This is usually due to factors like the positioning of the surgeon, whether they have an assistant and the angle required to throw and secure the knot.
Ambidexterity is also useful after surgery on a dominant hand or arm, as it allows the patient to use their non-dominant hand with equal facility as the limb which is recovering from surgery.
Ambisinistrality
A related variation on ambidexterity is a person who displays "ambisinistrality" or is "ambisinistrous". The term is a near inverse of ambidexterity, as the Latin root ambi- means "both" and the Latin root -sinistral means "left", being derived from the word sinister. The term "ambisinistral" can be directly interpreted as "both left" or "both sinister".
The term is used in non-scientific manners to describe individuals who have two non-dominant hands, as both hands are either clumsy or insufficient in motor skill and are therefore used equally as much. In a 1992 New York Times Q&A article on ambidexterity, the term was used to describe people "...with both hands as skilled as a right-hander's left hand."
See also
Brain asymmetry
Cross-dominance
Dual brain theory
Dual wield
Handedness
Laterality
Lateralization of brain function
Note
References
Further reading
Handedness
Mental processes | Ambidexterity | [
"Physics",
"Chemistry",
"Biology"
] | 6,017 | [
"Behavior",
"Motor control",
"Chirality",
"Asymmetry",
"Handedness",
"Symmetry"
] |
184,011 | https://en.wikipedia.org/wiki/Infinitesimal%20strain%20theory | In continuum mechanics, the infinitesimal strain theory is a mathematical approach to the description of the deformation of a solid body in which the displacements of the material particles are assumed to be much smaller (indeed, infinitesimally smaller) than any relevant dimension of the body; so that its geometry and the constitutive properties of the material (such as density and stiffness) at each point of space can be assumed to be unchanged by the deformation.
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory, small displacement theory, or small displacement-gradient theory. It is contrasted with the finite strain theory where the opposite assumption is made.
The infinitesimal strain theory is commonly adopted in civil and mechanical engineering for the stress analysis of structures built from relatively stiff elastic materials like concrete and steel, since a common goal in the design of such structures is to minimize their deformation under typical loads. However, this approximation demands caution in the case of thin flexible bodies, such as rods, plates, and shells which are susceptible to significant rotations, thus making the results unreliable.
Infinitesimal strain tensor
For infinitesimal deformations of a continuum body, in which the displacement gradient tensor (2nd order tensor) is small compared to unity, i.e. ‖∇u‖ ≪ 1,
it is possible to perform a geometric linearization of any one of the finite strain tensors used in finite strain theory, e.g. the Lagrangian finite strain tensor E, and the Eulerian finite strain tensor e. In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. Thus we have

E = ½[∇_X u + (∇_X u)ᵀ + (∇_X u)ᵀ(∇_X u)] ≈ ½[∇_X u + (∇_X u)ᵀ]

or

E_KL = ½(∂U_K/∂X_L + ∂U_L/∂X_K + ∂U_M/∂X_K ∂U_M/∂X_L) ≈ ½(∂U_K/∂X_L + ∂U_L/∂X_K)

and

e = ½[∇_x u + (∇_x u)ᵀ − (∇_x u)ᵀ(∇_x u)] ≈ ½[∇_x u + (∇_x u)ᵀ]

or

e_rs = ½(∂u_r/∂x_s + ∂u_s/∂x_r − ∂u_k/∂x_r ∂u_k/∂x_s) ≈ ½(∂u_r/∂x_s + ∂u_s/∂x_r)
This linearization implies that the Lagrangian description and the Eulerian description are approximately the same as there is little difference in the material and spatial coordinates of a given material point in the continuum. Therefore, the material displacement gradient tensor components and the spatial displacement gradient tensor components are approximately equal. Thus we have

E ≈ e ≈ ε = ½[∇u + (∇u)ᵀ]

or

ε_ij = ½(∂u_i/∂x_j + ∂u_j/∂x_i)

where ε_ij are the components of the infinitesimal strain tensor ε, also called Cauchy's strain tensor, linear strain tensor, or small strain tensor.
or using different notation:
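The display equations referred to above are not reproduced in this text. As a reading aid only, and assuming the standard textbook convention rather than reconstructing the missing expressions, the infinitesimal strain tensor can be written in components and in direct tensor notation as
\[ \varepsilon_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right), \qquad \boldsymbol{\varepsilon} = \frac{1}{2}\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf{T}}\right]. \]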
Furthermore, since the deformation gradient can be expressed as where is the second-order identity tensor, we have
Also, from the general expression for the Lagrangian and Eulerian finite strain tensors we have
Geometric derivation
Consider a two-dimensional deformation of an infinitesimal rectangular material element with dimensions by (Figure 1), which after deformation, takes the form of a rhombus. From the geometry of Figure 1 we have
For very small displacement gradients, i.e., , we have
The normal strain in the -direction of the rectangular element is defined by
and knowing that , we have
Similarly, the normal strain in the and becomes
The engineering shear strain, or the change in angle between two originally orthogonal material lines, in this case line and , is defined as
From the geometry of Figure 1 we have
For small rotations, i.e., when both rotation angles are much smaller than unity, we have
and, again, for small displacement gradients, we have
thus
By interchanging and and and , it can be shown that .
Similarly, for the - and - planes, we have
It can be seen that the tensorial shear strain components of the infinitesimal strain tensor can then be expressed using the engineering strain definition, as
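The display equation is omitted here; assuming the standard small-strain conventions, the tensorial shear components are half the engineering shear strains,
\[ \varepsilon_{xy} = \tfrac{1}{2}\gamma_{xy} = \tfrac{1}{2}\left(\frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}\right), \]
with analogous expressions for the yz- and zx-components.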
Physical interpretation
From finite strain theory we have
For infinitesimal strains then we have
Dividing by we have
For small deformations we assume that , thus the second term of the left hand side becomes: .
Then we have
where , is the unit vector in the direction of , and the left-hand-side expression is the normal strain in the direction of . For the particular case of in the direction, i.e., , we have
Similarly, for and we can find the normal strains and , respectively. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions.
Strain transformation rules
If we choose an orthonormal coordinate system () we can write the tensor in terms of components with respect to those base vectors as
In matrix form,
We can easily choose to use another orthonormal coordinate system () instead. In that case the components of the tensor are different, say
The components of the strain in the two coordinate systems are related by
where the Einstein summation convention for repeated indices has been used and . In matrix form
or
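The matrix equations above are not reproduced in this text. As an illustration of the second-order tensor transformation rule described in this section, the following Python sketch (with arbitrary, purely illustrative strain values and rotation angle, not taken from the article) rotates a strain tensor into a second orthonormal basis and checks that the trace is unchanged:

import numpy as np

# Symmetric infinitesimal strain tensor in the original basis (illustrative values).
eps = np.array([[ 1.0e-3,  4.0e-4, 0.0   ],
                [ 4.0e-4, -2.0e-4, 1.0e-4],
                [ 0.0,     1.0e-4, 5.0e-4]])

# Direction-cosine matrix L for a rotation of 30 degrees about the x3-axis;
# row i holds the components of the new base vector e'_i in the old basis.
theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
L = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# Transformation rule for a second-order tensor: eps'_{ij} = L_{ip} L_{jq} eps_{pq}.
eps_new = L @ eps @ L.T

print(eps_new)
print(np.isclose(np.trace(eps), np.trace(eps_new)))  # the trace is invariant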
Strain invariants
Certain operations on the strain tensor give the same result without regard to which orthonormal coordinate system is used to represent the components of strain. The results of these operations are called strain invariants. The most commonly used strain invariants are
In terms of components
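The expressions themselves are not shown in this text; in standard notation (given here as a reading aid, not as a reconstruction of the omitted display equations) the three principal invariants are
\[ I_1 = \operatorname{tr}\boldsymbol{\varepsilon}, \qquad I_2 = \tfrac{1}{2}\left[(\operatorname{tr}\boldsymbol{\varepsilon})^2 - \operatorname{tr}(\boldsymbol{\varepsilon}^2)\right], \qquad I_3 = \det\boldsymbol{\varepsilon}. \]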
Principal strains
It can be shown that it is possible to find a coordinate system () in which the components of the strain tensor are
The components of the strain tensor in the () coordinate system are called the principal strains and the directions are called the directions of principal strain. Since there are no shear strain components in this coordinate system, the principal strains represent the maximum and minimum stretches of an elemental volume.
If we are given the components of the strain tensor in an arbitrary orthonormal coordinate system, we can find the principal strains using an eigenvalue decomposition determined by solving the system of equations
This system of equations is equivalent to finding the vector along which the strain tensor becomes a pure stretch with no shear component.
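Because the strain tensor is symmetric, the principal strains and directions can be computed numerically as its eigenvalues and eigenvectors. The following Python sketch uses the same kind of arbitrary illustrative tensor as above (the numbers are not from the article) and also checks the invariants against the principal strains:

import numpy as np

eps = np.array([[ 1.0e-3,  4.0e-4, 0.0   ],
                [ 4.0e-4, -2.0e-4, 1.0e-4],
                [ 0.0,     1.0e-4, 5.0e-4]])

# Principal strains = eigenvalues; principal directions = eigenvectors.
# numpy.linalg.eigh is appropriate because eps is symmetric.
principal_strains, principal_dirs = np.linalg.eigh(eps)

# The three principal invariants, identical in any orthonormal basis.
I1 = np.trace(eps)
I2 = 0.5 * (np.trace(eps) ** 2 - np.trace(eps @ eps))
I3 = np.linalg.det(eps)

print(principal_strains)                          # sorted in ascending order
print(I1, I2, I3)
print(np.isclose(I1, principal_strains.sum()))    # I1 equals the sum of principal strains
print(np.isclose(I3, principal_strains.prod()))   # I3 equals their product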
Volumetric strain
The volumetric strain, also called bulk strain, is the relative variation of the volume, as arising from dilation or compression; it is the first strain invariant or trace of the tensor:
Actually, if we consider a cube with an edge length a, it is a quasi-cube after the deformation (the variations of the angles do not change the volume) with the dimensions and V0 = a3, thus
as we consider small deformations,
therefore the formula.
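The omitted algebra can be sketched as follows, assuming the standard small-strain argument: the deformed volume of the cube is V = a3(1 + ε11)(1 + ε22)(1 + ε33), so
\[ \frac{\Delta V}{V_0} = (1+\varepsilon_{11})(1+\varepsilon_{22})(1+\varepsilon_{33}) - 1 \approx \varepsilon_{11} + \varepsilon_{22} + \varepsilon_{33}, \]
where products of strain components are neglected because the strains are small.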
In case of pure shear, we can see that there is no change of the volume.
Strain deviator tensor
The infinitesimal strain tensor , similarly to the Cauchy stress tensor, can be expressed as the sum of two other tensors:
a mean strain tensor or volumetric strain tensor or spherical strain tensor, , related to dilation or volume change; and
a deviatoric component called the strain deviator tensor, , related to distortion.
where is the mean strain given by
The deviatoric strain tensor can be obtained by subtracting the mean strain tensor from the infinitesimal strain tensor:
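A minimal Python sketch of this split, using the same kind of arbitrary illustrative tensor as above:

import numpy as np

eps = np.array([[ 1.0e-3,  4.0e-4, 0.0   ],
                [ 4.0e-4, -2.0e-4, 1.0e-4],
                [ 0.0,     1.0e-4, 5.0e-4]])

mean_strain = np.trace(eps) / 3.0           # one third of the volumetric strain
eps_vol = mean_strain * np.eye(3)           # spherical (volumetric) part
eps_dev = eps - eps_vol                     # deviatoric part

print(np.trace(eps_dev))                    # ~0: the deviator is traceless
print(np.allclose(eps, eps_vol + eps_dev))  # the two parts sum back to eps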
Octahedral strains
Let () be the directions of the three principal strains. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by
where are the principal strains.
The normal strain on an octahedral plane is given by
Equivalent strain
A scalar quantity called the equivalent strain, or the von Mises equivalent strain, is often used to describe the state of strain in solids. Several definitions of equivalent strain can be found in the literature. A definition that is commonly used in the literature on plasticity is
This quantity is work conjugate to the equivalent stress defined as
Compatibility equations
For prescribed strain components the strain tensor equation represents a system of six differential equations for the determination of three displacement components , giving an over-determined system. Thus, a solution does not generally exist for an arbitrary choice of strain components. Therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the "Saint Venant compatibility equations".
The compatibility functions serve to assure a single-valued continuous displacement function . If the elastic medium is visualised as a set of infinitesimal cubes in the unstrained state, after the medium is strained, an arbitrary strain tensor may not yield a situation in which the distorted cubes still fit together without overlapping.
In index notation, the compatibility equations are expressed as
In engineering notation,
Special cases
Plane strain
In real engineering components, stress (and strain) are 3-D tensors but in prismatic structures such as a long metal billet, the length of the structure is much greater than the other two dimensions. The strains associated with length, i.e., the normal strain and the shear strains and (if the length is the 3-direction) are constrained by nearby material and are small compared to the cross-sectional strains. Plane strain is then an acceptable approximation. The strain tensor for plane strain is written as:
in which the double underline indicates a second order tensor. This strain state is called plane strain. The corresponding stress tensor is:
in which the non-zero is needed to maintain the constraint . This stress term can be temporarily removed from the analysis to leave only the in-plane terms, effectively reducing the 3-D problem to a much simpler 2-D problem.
Antiplane strain
Antiplane strain is another special state of strain that can occur in a body, for instance in a region close to a screw dislocation. The strain tensor for antiplane strain is given by
Relation to infinitesimal rotation tensor
The infinitesimal strain tensor is defined as
Therefore the displacement gradient can be expressed as
where
The quantity is the infinitesimal rotation tensor or infinitesimal angular displacement tensor (related to the infinitesimal rotation matrix). This tensor is skew symmetric. For infinitesimal deformations the scalar components of satisfy the condition . Note that the displacement gradient is small only if the strain tensor and the rotation tensor are infinitesimal.
The axial vector
A skew symmetric second-order tensor has three independent scalar components. These three components are used to define an axial vector, , as follows
where is the permutation symbol. In matrix form
The axial vector is also called the infinitesimal rotation vector. The rotation vector is related to the displacement gradient by the relation
In index notation
If and then the material undergoes an approximate rigid body rotation of magnitude around the vector .
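The additive split of the displacement gradient into a symmetric strain part and a skew-symmetric rotation part, and the extraction of the axial vector, can be sketched in Python as follows. The displacement-gradient values are arbitrary illustrative numbers, and the sign convention used for the axial vector (W·a = w × a) is one of several found in the literature:

import numpy as np

# Illustrative (small) displacement gradient du_i/dx_j.
grad_u = np.array([[ 1.0e-3,  3.0e-4, -2.0e-4],
                   [-1.0e-4, -5.0e-4,  4.0e-4],
                   [ 2.0e-4,  0.0,     6.0e-4]])

eps = 0.5 * (grad_u + grad_u.T)   # infinitesimal strain tensor (symmetric part)
W   = 0.5 * (grad_u - grad_u.T)   # infinitesimal rotation tensor (skew-symmetric part)

# Axial (rotation) vector w such that W a = w x a for any vector a;
# its components are the three independent entries of W.
w = np.array([W[2, 1], W[0, 2], W[1, 0]])

print(np.allclose(grad_u, eps + W))   # the split recovers the displacement gradient
print(w)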
Relation between the strain tensor and the rotation vector
Given a continuous, single-valued displacement field and the corresponding infinitesimal strain tensor , we have (see Tensor derivative (continuum mechanics))
Since a change in the order of differentiation does not change the result, . Therefore
Also
Hence
Relation between rotation tensor and rotation vector
From an important identity regarding the curl of a tensor we know that for a continuous, single-valued displacement field ,
Since we have
Strain tensor in non-Cartesian coordinates
Strain tensor in cylindrical coordinates
In cylindrical polar coordinates (), the displacement vector can be written as
The components of the strain tensor in a cylindrical coordinate system are given by:
Strain tensor in spherical coordinates
In spherical coordinates (), the displacement vector can be written as
The components of the strain tensor in a spherical coordinate system are given by
See also
Deformation (mechanics)
Compatibility (mechanics)
Stress tensor
Strain gauge
Elasticity tensor
Stress–strain curve
Hooke's law
Poisson's ratio
Finite strain theory
Strain rate
Plane stress
Digital image correlation
References
External links
Physical quantities
Elasticity (physics)
Materials science
Solid mechanics
Mechanics | Infinitesimal strain theory | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,302 | [
"Solid mechanics",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Physical quantities",
"Elasticity (physics)",
"Deformation (mechanics)",
"Quantity",
"Materials science",
"Mechanics",
"nan",
"Mechanical engineering",
"Physical properties"
] |
184,306 | https://en.wikipedia.org/wiki/Perovskite%20%28structure%29 | A perovskite is any material of formula ABX3 with a crystal structure similar to that of the mineral perovskite, which consists of calcium titanium oxide (CaTiO3). The mineral was first discovered in the Ural mountains of Russia by Gustav Rose in 1839 and named after Russian mineralogist L. A. Perovski (1792–1856). 'A' and 'B' are two positively charged ions (i.e. cations), often of very different sizes, and X is a negatively charged ion (an anion, frequently oxide) that bonds to both cations. The 'A' atoms are generally larger than the 'B' atoms. The ideal cubic structure has the B cation in 6-fold coordination, surrounded by an octahedron of anions, and the A cation in 12-fold cuboctahedral coordination. Additional perovskite forms may exist where both/either the A and B sites have a configuration of A1x-1A2x and/or B1y-1B2y and the X may deviate from the ideal coordination configuration as ions within the A and B sites undergo changes in their oxidation states.
As one of the most abundant structural families, perovskites are found in an enormous number of compounds which have wide-ranging properties, applications and importance. Natural compounds with this structure are perovskite, loparite, and the silicate perovskite bridgmanite. Since the 2009 discovery of perovskite solar cells, which contain methylammonium lead halide perovskites, there has been considerable research interest into perovskite materials.
Structure
Perovskite structures are adopted by many compounds that have the chemical formula ABX3. The idealized form is a cubic structure (space group Pm3m, no. 221), which is rarely encountered. The orthorhombic (e.g. space group Pnma, no. 62, or Amm2, no. 38) and tetragonal (e.g. space group I4/mcm, no. 140, or P4mm, no. 99) structures are the most common non-cubic variants. Although the perovskite structure is named after CaTiO3, this mineral has a non-cubic structure. SrTiO3 and CaRbF3 are examples of cubic perovskites. Barium titanate is an example of a perovskite which can take on the rhombohedral (space group R3m, no. 160), orthorhombic, tetragonal and cubic forms depending on temperature.
In the idealized cubic unit cell of such a compound, the type 'A' atom sits at cube corner position (0, 0, 0), the type 'B' atom sits at the body-center position (1/2, 1/2, 1/2) and X atoms (typically oxygen) sit at face centered positions (1/2, 1/2, 0), (1/2, 0, 1/2) and (0, 1/2, 1/2). The diagram to the right shows edges for an equivalent unit cell with A in the cube corner position, B at the body center, and X at face-centered positions.
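The ideal cubic cell described above can be written down directly as a list of fractional coordinates. The short Python sketch below simply encodes the positions stated in the text (A at the cube corner, B at the body centre, X at the three face centres) for a generic ABX3 composition:

# Fractional coordinates of the ideal cubic ABX3 perovskite cell
# (one formula unit per cell), as described in the text.
ideal_perovskite = {
    "A": [(0.0, 0.0, 0.0)],                # cube corner
    "B": [(0.5, 0.5, 0.5)],                # body centre
    "X": [(0.5, 0.5, 0.0),                 # face centres
          (0.5, 0.0, 0.5),
          (0.0, 0.5, 0.5)],
}

for site, positions in ideal_perovskite.items():
    for p in positions:
        print(site, p)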
Four general categories of cation-pairing are possible: A+B2+X−3, or 1:2 perovskites; A2+B4+X2−3, or 2:4 perovskites; A3+B3+X2−3, or 3:3 perovskites; and A+B5+X2−3, or 1:5 perovskites.
The relative ion size requirements for stability of the cubic structure are quite stringent, so slight buckling and distortion can produce several lower-symmetry distorted versions, in which the coordination numbers of A cations, B cations or both are reduced. Tilting of the BO6 octahedra reduces the coordination of an undersized A cation from 12 to as low as 8. Conversely, off-centering of an undersized B cation within its octahedron allows it to attain a stable bonding pattern. The resulting electric dipole is responsible for the property of ferroelectricity and shown by perovskites such as BaTiO3 that distort in this fashion.
Complex perovskite structures contain two different B-site cations. This results in the possibility of ordered and disordered variants.
Layered perovskites
Perovskites may be structured in layers, with the structure separated by thin sheets of intrusive material. Different forms of intrusions, based on the chemical makeup of the intrusion, are defined as:
Aurivillius phase: the intruding layer is composed of a []2+ ion, occurring every n layers, leading to an overall chemical formula of []-. Their oxide ion-conducting properties were first discovered in the 1970s by Takahashi et al., and they have been used for this purpose ever since.
Dion−Jacobson phase: the intruding layer is composed of an alkali metal (M) every n layers, giving the overall formula as
Ruddlesden-Popper phase: the simplest of the phases, the intruding layer occurs between every one (n = 1) or multiple (n > 1) layers of the lattice. Ruddlesden−Popper phases have a similar relationship to perovskites in terms of atomic radii of elements, with A typically being large (such as La or Sr) and the B ion being much smaller, typically a transition metal (such as Mn, Co or Ni). Recently, hybrid organic-inorganic layered perovskites have been developed, where the structure is constituted of one or more layers of MX64--octahedra, where M is a +2 metal (such as Pb2+ or Sn2+) and X a halide ion (such as ), separated by layers of organic cations (such as butylammonium or phenylethylammonium cations).
Thin films
Perovskites can be deposited as epitaxial thin films on top of other perovskites, using techniques such as pulsed laser deposition and molecular-beam epitaxy. These films can be a couple of nanometres thick or as small as a single unit cell. The well-defined and unique structures at the interfaces between the film and substrate can be used for interface engineering, where new types of properties can arise. This can happen through several mechanisms, from mismatch strain between the substrate and film, change in the oxygen octahedral rotation, compositional changes, and quantum confinement. An example of this is LaAlO3 grown on SrTiO3, where the interface can exhibit conductivity, even though both LaAlO3 and SrTiO3 are non-conductive. Another example is SrTiO3 grown on LSAT ((LaAlO3)0.3 (Sr2AlTaO6)0.7) or DyScO3, which can turn the incipient ferroelectric into a ferroelectric at room temperature by means of epitaxially applied biaxial strain. The lattice mismatch of GdScO3 to SrTiO3 (+1.0%) applies tensile stress resulting in a decrease of the out-of-plane lattice constant of SrTiO3, whereas LSAT (−0.9%) epitaxially applies compressive stress, leading to an extension of the out-of-plane lattice constant of SrTiO3 (and a corresponding decrease of the in-plane lattice constant).
Octahedral tilting
Beyond the most common perovskite symmetries (cubic, tetragonal, orthorhombic), a more precise determination leads to a total of 23 different structure types that can be found. These 23 structure can be categorized into 4 different so-called tilt systems that are denoted by their respective Glazer notation.
The notation consists of a letter a/b/c, which describes the rotation around a Cartesian axis and a superscript +/—/0 to denote the rotation with respect to the adjacent layer. A "+" denotes that the rotation of two adjacent layers points in the same direction, whereas a "—" denotes that adjacent layers are rotated in opposite directions. Common examples are a0a0a0, a0a0a– and a0a0a+ which are visualized here.
Examples
Minerals
The perovskite structure is adopted at high pressure by bridgmanite, a silicate with the chemical formula , which is the most common mineral in the Earth's mantle. As pressure increases, the SiO44− tetrahedral units in the dominant silica-bearing minerals become unstable compared with SiO68− octahedral units. At the pressure and temperature conditions of the lower mantle, the second most abundant material is likely the rocksalt-structured oxide, periclase.
At the high pressure conditions of the Earth's lower mantle, the pyroxene enstatite, MgSiO3, transforms into a denser perovskite-structured polymorph; this phase may be the most common mineral in the Earth. This phase has the orthorhombically distorted perovskite structure (GdFeO3-type structure) that is stable at pressures from ~24 GPa to ~110 GPa. However, it cannot be transported from depths of several hundred km to the Earth's surface without transforming back into less dense materials. At higher pressures, MgSiO3 perovskite, commonly known as silicate perovskite, transforms to post-perovskite.
Complex perovskites
Although a large number of simple ABX3 perovskites are known, this number can be greatly expanded if the A and B sites are doubled, giving complex ABX6-type compositions. Ordered double perovskites are usually denoted as A2BO6, whereas disordered ones are denoted as A(B)O3. In ordered perovskites, three different types of ordering are possible: rock-salt, layered, and columnar. Rock-salt ordering is the most common, the disordered arrangement is much less common, and columnar and layered orderings are rarer still. The formation of rock-salt superstructures depends on the B-site cation ordering. Octahedral tilting can occur in double perovskites; however, Jahn–Teller distortions and alternative modes alter the B–O bond length.
Others
Although the most common perovskite compounds contain oxygen, there are a few perovskite compounds that form without oxygen. Fluoride perovskites such as NaMgF3 are well known. A large family of metallic perovskite compounds can be represented by RT3M (R: rare-earth or other relatively large ion, T: transition metal ion and M: light metalloids). The metalloids occupy the octahedrally coordinated "B" sites in these compounds. RPd3B, RRh3B and CeRu3C are examples. MgCNi3 is a metallic perovskite compound and has received a lot of attention because of its superconducting properties. An even more exotic type of perovskite is represented by the mixed oxide-aurides of Cs and Rb, such as Cs3AuO, which contain large alkali cations in the traditional "anion" sites, bonded to O2− and Au− anions.
Materials properties
Perovskite materials exhibit many interesting and intriguing properties from both the theoretical and the application point of view. Colossal magnetoresistance, ferroelectricity, superconductivity, charge ordering, spin dependent transport, high thermopower and the interplay of structural, magnetic and transport properties are commonly observed features in this family. These compounds are used as sensors and catalyst electrodes in certain types of fuel cells and are candidates for memory devices and spintronics applications.
Many superconducting ceramic materials (the high temperature superconductors) have perovskite-like structures, often with 3 or more metals including copper, and some oxygen positions left vacant. One prime example is yttrium barium copper oxide which can be insulating or superconducting depending on the oxygen content.
Chemical engineers are considering a cobalt-based perovskite material as a replacement for platinum in catalytic converters for diesel vehicles.
Aspirational applications
Physical properties of interest to materials science among perovskites include superconductivity, magnetoresistance, ionic conductivity, and a multitude of dielectric properties, which are of great importance in microelectronics and telecommunications. They are also of some interest for scintillators, as they have a large light yield for radiation conversion. Because of the flexibility of bond angles inherent in the perovskite structure there are many different types of distortions that can occur from the ideal structure. These include tilting of the octahedra, displacements of the cations out of the centers of their coordination polyhedra, and distortions of the octahedra driven by electronic factors (Jahn-Teller distortions). The largest commercial application of perovskites is in ceramic capacitors, in which BaTiO3 is used because of its high dielectric constant.
Photovoltaics
Synthetic perovskites are possible materials for high-efficiency photovoltaics – they have shown a conversion efficiency of up to 26.3% and can be manufactured using the same thin-film manufacturing techniques as those used for thin-film silicon solar cells. Methylammonium tin halides and methylammonium lead halides are of interest for use in dye-sensitized solar cells. Some perovskite PV cells have a theoretical peak efficiency of 31%.
Among the methylammonium halides studied so far the most common is the methylammonium lead triiodide (). It has a high charge carrier mobility and charge carrier lifetime that allow light-generated electrons and holes to move far enough to be extracted as current, instead of losing their energy as heat within the cell. Effective diffusion lengths are some 100 nm for both electrons and holes.
Methylammonium halides are deposited by low-temperature solution methods (typically spin-coating). Other low-temperature (below 100 °C) solution-processed films tend to have considerably smaller diffusion lengths. Stranks et al. described nanostructured cells using a mixed methylammonium lead halide () and demonstrated one amorphous thin-film solar cell with an 11.4% conversion efficiency, and another that reached 15.4% using vacuum evaporation. The film thickness of about 500 to 600 nm implies that the electron and hole diffusion lengths were at least of this order. They measured values of the diffusion length exceeding 1 μm for the mixed perovskite, an order of magnitude greater than the 100 nm for the pure iodide. They also showed that carrier lifetimes in the mixed perovskite are longer than in the pure iodide. Liu et al. applied scanning photocurrent microscopy to show that the electron diffusion length in mixed halide perovskite along the (110) plane is on the order of 10 μm.
For , open-circuit voltage (VOC) typically approaches 1 V, while for with low Cl content, VOC > 1.1 V has been reported. Because the band gaps (Eg) of both are 1.55 eV, VOC-to-Eg ratios are higher than usually observed for similar third-generation cells. With wider bandgap perovskites, VOC up to 1.3 V has been demonstrated.
The technique offers the potential of low cost because of the low-temperature solution methods and the absence of rare elements. Cell durability is currently insufficient for commercial use, in part because the solar cells are prone to degradation due to the volatility of the organic [CH3NH3]+I− salt. The all-inorganic perovskite cesium lead iodide (CsPbI3) circumvents this problem, but it is itself phase-unstable, and low-temperature solution methods for it have only recently been developed.
Planar heterojunction perovskite solar cells can be manufactured in simplified device architectures (without complex nanostructures) using only vapor deposition. This technique produces 15% solar-to-electrical power conversion as measured under simulated full sunlight.
Lasers
LaAlO3 doped with neodymium gave laser emission at 1080 nm. Mixed methylammonium lead halide () cells fashioned into optically pumped vertical-cavity surface-emitting lasers (VCSELs) convert visible pump light to near-IR laser light with a 70% efficiency.
Light-emitting diodes
Due to their high photoluminescence quantum efficiencies, perovskites may find use in light-emitting diodes (LEDs). Although the stability of perovskite LEDs is not yet as good as III-V or organic LEDs, there is ongoing research to solve this problem, such as incorporating organic molecules or potassium dopants in perovskite LEDs. Perovskite-based printing ink can be used to produce OLED display and quantum dot display panels.
Photoelectrolysis
Water electrolysis with 12.3% efficiency has been demonstrated using perovskite photovoltaics.
Scintillators
Cerium-doped lutetium aluminum perovskite (LuAP:Ce) single crystals were reported. The main property of these crystals is a large mass density of 8.4 g/cm3, which gives a short X- and gamma-ray absorption length. The scintillation light yield and the decay time with a Cs137 radiation source are 11,400 photons/MeV and 17 ns, respectively. These properties made LuAP:Ce scintillators commercially attractive, and they were used quite often in high-energy physics experiments. Eleven years later, a group in Japan proposed Ruddlesden-Popper solution-based hybrid organic-inorganic perovskite crystals as low-cost scintillators, although their properties were not as impressive in comparison with LuAP:Ce. Nine years after that, solution-based hybrid organic-inorganic perovskite crystals became popular again through a report of their high light yields of more than 100,000 photons/MeV at cryogenic temperatures. A recent demonstration of perovskite nanocrystal scintillators for X-ray imaging screens was reported, and it is triggering further research efforts into perovskite scintillators. Layered Ruddlesden-Popper perovskites have shown potential as fast novel scintillators with room-temperature light yields up to 40,000 photons/MeV, fast decay times below 5 ns and negligible afterglow. In addition, this class of materials has shown capability for wide-range particle detection, including alpha particles and thermal neutrons.
Examples of perovskites
Simple:
Strontium titanate
Calcium titanate
Lead titanate
Bismuth ferrite
Lanthanum ytterbium oxide
Silicate perovskite
Lanthanum manganite
Yttrium aluminum perovskite (YAP)
Lutetium aluminum perovskite (LuAP)
Solid solutions:
Lanthanum strontium manganite
LSAT (lanthanum aluminate – strontium aluminum tantalate)
Lead scandium tantalate
Lead zirconate titanate
Methylammonium lead halide
Methylammonium tin halide
Formamidinium tin halide
See also
Antiperovskite
Aurivillius phases
Diamond anvil
Goldschmidt tolerance factor
Ruddlesden-Popper phase
Spinel
References
Further reading
External links
(includes a Java applet with which the structure can be interactively rotated)
Perovskite in the Mineral Catalogue (in Russian)
Mineralogy
Solar power
Crystal structure types
Crystallography | Perovskite (structure) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,216 | [
"Applied and interdisciplinary physics",
"Crystal structure types",
"Materials science",
"Crystallography",
"Condensed matter physics",
"nan"
] |
1,062,015 | https://en.wikipedia.org/wiki/Associated%20Legendre%20polynomials | In mathematics, the associated Legendre polynomials are the canonical solutions of the general Legendre equation
or equivalently
where the indices ℓ and m (which are integers) are referred to as the degree and order of the associated Legendre polynomial respectively. This equation has nonzero solutions that are nonsingular on only if ℓ and m are integers with 0 ≤ m ≤ ℓ, or with trivially equivalent negative values. When in addition m is even, the function is a polynomial. When m is zero and ℓ integer, these functions are identical to the Legendre polynomials. In general, when ℓ and m are integers, the regular solutions are sometimes called "associated Legendre polynomials", even though they are not polynomials when m is odd. The fully general class of functions with arbitrary real or complex values of ℓ and m are Legendre functions. In that case the parameters are usually labelled with Greek letters.
The Legendre ordinary differential equation is frequently encountered in physics and other technical fields. In particular, it occurs when solving Laplace's equation (and related partial differential equations) in spherical coordinates. Associated Legendre polynomials play a vital role in the definition of spherical harmonics.
Definition for non-negative integer parameters and
These functions are denoted , where the superscript indicates the order and not a power of P. Their most straightforward definition is in terms of derivatives of ordinary Legendre polynomials (m ≥ 0)
The factor in this formula is known as the Condon–Shortley phase. Some authors omit it. That the functions described by this equation satisfy the general Legendre differential equation with the indicated values of the parameters ℓ and m follows by differentiating m times the Legendre equation for :
Moreover, since by Rodrigues' formula,
the P can be expressed in the form
This equation allows extension of the range of m to: . The definitions of , resulting from this expression by substitution of , are proportional. Indeed, equate the coefficients of equal powers on the left and right hand side of
then it follows that the proportionality constant is
so that
Alternative notations
The following alternative notations are also used in literature:
Closed form
The associated Legendre polynomials can also be written as:
with simple monomials and the generalized form of the binomial coefficient.
Orthogonality
The associated Legendre polynomials are not mutually orthogonal in general. For example, is not orthogonal to . However, some subsets are orthogonal. Assuming 0 ≤ m ≤ ℓ, they satisfy the orthogonality condition for fixed m:
where is the Kronecker delta.
Also, they satisfy the orthogonality condition for fixed :
Negative and/or negative
The differential equation is clearly invariant under a change in sign of m.
The functions for negative m were shown above to be proportional to those of positive m:
(This followed from the Rodrigues' formula definition. This definition also makes the various recurrence formulas work for positive or negative .)
The differential equation is also invariant under a change from to , and the functions for negative are defined by
Parity
From their definition, one can verify that the associated Legendre functions are either even or odd according to
The first few associated Legendre functions
The first few associated Legendre functions, including those for negative values of m, are:
Recurrence formula
These functions have a number of recurrence properties:
Helpful identities (initial values for the first recursion):
with the double factorial.
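The recurrence relations and helpful identities referred to above are not reproduced in this text. As a hedged sketch, the following Python function implements the three-term recurrences commonly used in numerical practice (seed P_m^m, step to P_{m+1}^m, then recur upward in the degree), including the Condon–Shortley phase; the function name, spot-check values and quadrature order are illustrative choices, not part of the article:

import math
import numpy as np

def assoc_legendre(l, m, x):
    """P_l^m(x) for integers 0 <= m <= l, with the Condon-Shortley phase."""
    # Seed value: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = (-1.0) ** m * math.prod(range(1, 2 * m, 2)) * (1.0 - x * x) ** (m / 2.0)
    if l == m:
        return pmm
    # Next degree: P_{m+1}^m(x) = x (2m + 1) P_m^m(x)
    pmmp1 = x * (2 * m + 1) * pmm
    if l == m + 1:
        return pmmp1
    # Upward recurrence in the degree:
    # (l - m) P_l^m = x (2l - 1) P_{l-1}^m - (l + m - 1) P_{l-2}^m
    for ll in range(m + 2, l + 1):
        pmm, pmmp1 = pmmp1, (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1

# Spot checks against explicit low-order forms, e.g. P_1^1(x) = -(1 - x^2)^(1/2)
# and P_2^0(x) = (3x^2 - 1)/2.
x = 0.3
print(assoc_legendre(1, 1, x), -math.sqrt(1 - x * x))
print(assoc_legendre(2, 0, x), 0.5 * (3 * x * x - 1))

# Numerical check of the fixed-m orthogonality on [-1, 1]:
# the integral of P_k^m P_l^m vanishes for k != l.
xs, ws = np.polynomial.legendre.leggauss(50)
integral = sum(w * assoc_legendre(3, 1, t) * assoc_legendre(5, 1, t) for t, w in zip(xs, ws))
print(integral)   # ~0 to numerical precision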
Gaunt's formula
The integral over the product of three associated Legendre polynomials (with orders matching as shown below) is a necessary ingredient when developing products of Legendre polynomials into a series linear in the Legendre polynomials. For instance, this turns out to be necessary when doing atomic calculations of the Hartree–Fock variety where matrix elements of the Coulomb operator are needed. For this we have Gaunt's formula
This formula is to be used under the following assumptions:
the degrees are non-negative integers
all three orders are non-negative integers
is the largest of the three orders
the orders sum up
the degrees obey
Other quantities appearing in the formula are defined as
The integral is zero unless
the sum of degrees is even so that is an integer
the triangular condition is satisfied
Dong and Lemus (2002) generalized the derivation of this formula to integrals over a product of an arbitrary number of associated Legendre polynomials.
Generalization via hypergeometric functions
These functions may actually be defined for general complex parameters and argument:
where is the gamma function and is the hypergeometric function
They are called the Legendre functions when defined in this more general way. They satisfy the same differential equation as before:
Since this is a second order differential equation, it has a second solution, , defined as:
and both obey the various recurrence formulas given previously.
Reparameterization in terms of angles
These functions are most useful when the argument is reparameterized in terms of angles, letting :
Using the relation , the list given above yields the first few polynomials, parameterized this way, as:
The orthogonality relations given above become in this formulation:
for fixed m, are orthogonal, parameterized by θ over , with weight :
Also, for fixed ℓ:
In terms of θ, are solutions of
More precisely, given an integer m ≥ 0, the above equation has nonsingular solutions only when for ℓ an integer ≥ m, and those solutions are proportional to .
Applications in physics: spherical harmonics
On many occasions in physics, associated Legendre polynomials in terms of angles occur where spherical symmetry is involved. The colatitude angle in spherical coordinates is the angle used above. The longitude angle, , appears in a multiplying factor. Together, they make a set of functions called spherical harmonics. These functions express the symmetry of the two-sphere under the action of the Lie group SO(3).
What makes these functions useful is that they are central to the solution of the equation
on the surface of a sphere. In spherical coordinates θ (colatitude) and φ (longitude), the Laplacian is
When the partial differential equation
is solved by the method of separation of variables, one gets a φ-dependent part or for integer m≥0, and an equation for the θ-dependent part
for which the solutions are with
and .
Therefore, the equation
has nonsingular separated solutions only when ,
and those solutions are proportional to
and
For each choice of ℓ, there are functions for the various values of m and choices of sine and cosine. They are all orthogonal in both ℓ and m when integrated over the surface of the sphere.
The solutions are usually written in terms of complex exponentials:
The functions are the spherical harmonics, and the quantity in the square root is a normalizing factor.
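The explicit expression is not reproduced in this text; as a reading aid, the standard orthonormalized form (assuming the Condon–Shortley convention used for the associated Legendre functions above) is
\[ Y_\ell^m(\theta, \varphi) = \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}}\; P_\ell^m(\cos\theta)\, e^{i m \varphi}. \]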
Recalling the relation between the associated Legendre functions of positive and negative m, it is easily shown that the spherical harmonics satisfy the identity
The spherical harmonic functions form a complete orthonormal set of functions in the sense of Fourier series. Workers in the fields of geodesy, geomagnetism and spectral analysis use a different phase and normalization factor than given here (see spherical harmonics).
When a 3-dimensional spherically symmetric partial differential equation is solved by the method of separation of variables in spherical coordinates, the part that remains after removal of the radial part is typically of the form
and hence the solutions are spherical harmonics.
Generalizations
The Legendre polynomials are closely related to hypergeometric series. In the form of spherical harmonics, they express the symmetry of the two-sphere under the action of the Lie group SO(3). There are many other Lie groups besides SO(3), and analogous generalizations of the Legendre polynomials exist to express the symmetries of semi-simple Lie groups and Riemannian symmetric spaces. Crudely speaking, one may define a Laplacian on symmetric spaces; the eigenfunctions of the Laplacian can be thought of as generalizations of the spherical harmonics to other settings.
See also
Angular momentum
Gaussian quadrature
Legendre polynomials
Spherical harmonics
Whipple's transformation of Legendre functions
Laguerre polynomials
Hermite polynomials
Notes and references
; Section 12.5. (Uses a different sign convention.)
.
; Chapter 3.
.
; Chapter 2.
.
Schach, S. R. (1973) New Identities for Legendre Associated Functions of Integral Order and Degree , Society for Industrial and Applied Mathematics Journal on Mathematical Analysis, 1976, Vol. 7, No. 1 : pp. 59–69
External links
Associated Legendre polynomials in MathWorld
Legendre polynomials in MathWorld
Legendre and Related Functions in DLMF
Atomic physics
Orthogonal polynomials | Associated Legendre polynomials | [
"Physics",
"Chemistry"
] | 1,756 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
1,063,353 | https://en.wikipedia.org/wiki/Aspartate%20transaminase | Aspartate transaminase (AST) or aspartate aminotransferase, also known as AspAT/ASAT/AAT or (serum) glutamic oxaloacetic transaminase (GOT, SGOT), is a pyridoxal phosphate (PLP)-dependent transaminase enzyme () that was first described by Arthur Karmen and colleagues in 1954. AST catalyzes the reversible transfer of an α-amino group between aspartate and glutamate and, as such, is an important enzyme in amino acid metabolism. AST is found in the liver, heart, skeletal muscle, kidneys, brain, red blood cells and gall bladder. Serum AST level, serum ALT (alanine transaminase) level, and their ratio (AST/ALT ratio) are commonly measured clinically as biomarkers for liver health. The tests are part of blood panels.
The half-life of total AST in the circulation approximates 17 hours and, on average, 87 hours for mitochondrial AST. Aminotransferase is cleared by sinusoidal cells in the liver.
Function
Aspartate transaminase catalyzes the interconversion of aspartate and α-ketoglutarate to oxaloacetate and glutamate.
L-Aspartate (Asp) + α-ketoglutarate ↔ oxaloacetate + L-glutamate (Glu)
As a prototypical transaminase, AST relies on PLP (Vitamin B6) as a cofactor to transfer the amino group from aspartate or glutamate to the corresponding ketoacid. In the process, the cofactor shuttles between PLP and the pyridoxamine phosphate (PMP) form. The amino group transfer catalyzed by this enzyme is crucial in both amino acid degradation and biosynthesis. In amino acid degradation, following the conversion of α-ketoglutarate to glutamate, glutamate subsequently undergoes oxidative deamination to form ammonium ions, which are excreted as urea. In the reverse reaction, aspartate may be synthesized from oxaloacetate, which is a key intermediate in the citric acid cycle.
Isoenzymes
Two isoenzymes are present in a wide variety of eukaryotes. In humans:
GOT1/cAST, the cytosolic isoenzyme derives mainly from red blood cells and heart.
GOT2/mAST, the mitochondrial isoenzyme is present predominantly in liver.
These isoenzymes are thought to have evolved from a common ancestral AST via gene duplication, and they share a sequence homology of approximately 45%.
AST has also been found in a number of microorganisms, including E. coli, H. mediterranei, and T. thermophilus. In E. coli, the enzyme is encoded by the aspC gene and has also been shown to exhibit the activity of an aromatic-amino-acid transaminase ().
Structure
X-ray crystallography studies have been performed to determine the structure of aspartate transaminase from various sources, including chicken mitochondria, pig heart cytosol, and E. coli. Overall, the three-dimensional polypeptide structure for all species is quite similar. AST is dimeric, consisting of two identical subunits, each with approximately 400 amino acid residues and a molecular weight of approximately 45 kD. Each subunit is composed of a large and a small domain, as well as a third domain consisting of the N-terminal residues 3-14; these few residues form a strand, which links and stabilizes the two subunits of the dimer. The large domain, which includes residues 48-325, binds the PLP cofactor via an aldimine linkage to the ε-amino group of Lys258. Other residues in this domain – Asp 222 and Tyr 225 – also interact with PLP via hydrogen bonding. The small domain consists of residues 15-47 and 326-410 and represents a flexible region that shifts the enzyme from an "open" to a "closed" conformation upon substrate binding.
The two independent active sites are positioned near the interface between the two domains. Within each active site, two arginine residues are responsible for the enzyme's specificity for dicarboxylic acid substrates: Arg386 interacts with the substrate's proximal (α-)carboxylate group, while Arg292 complexes with the distal (side-chain) carboxylate.
In terms of secondary structure, AST contains both α and β elements. Each domain has a central sheet of β-strands with α-helices packed on either side.
Mechanism
Aspartate transaminase, as with all transaminases, operates via dual substrate recognition; that is, it is able to recognize and selectively bind two amino acids (Asp and Glu) with different side-chains. In either case, the transaminase reaction consists of two similar half-reactions that constitute what is referred to as a ping-pong mechanism. In the first half-reaction, amino acid 1 (e.g., L-Asp) reacts with the enzyme-PLP complex to generate ketoacid 1 (oxaloacetate) and the modified enzyme-PMP. In the second half-reaction, ketoacid 2 (α-ketoglutarate) reacts with enzyme-PMP to produce amino acid 2 (L-Glu), regenerating the original enzyme-PLP in the process. Formation of a racemic product (D-Glu) is very rare.
The specific steps for the half-reaction of Enzyme-PLP + aspartate ⇌ Enzyme-PMP + oxaloacetate are as follows (see figure); the other half-reaction (not shown) proceeds in the reverse manner, with α-ketoglutarate as the substrate.
Internal aldimine formation: First, the ε-amino group of Lys258 forms a Schiff base linkage with the aldehyde carbon to generate an internal aldimine.
Transaldimination: The internal aldimine then becomes an external aldimine when the ε-amino group of Lys258 is displaced by the amino group of aspartate. This transaldimination reaction occurs via a nucleophilic attack by the deprotonated amino group of Asp and proceeds through a tetrahedral intermediate. At this point, the carboxylate groups of Asp are stabilized by the guanidinium groups of the enzyme's Arg386 and Arg292 residues.
Quinonoid formation: The hydrogen attached to the a-carbon of Asp is then abstracted (Lys258 is thought to be the proton acceptor) to form a quinonoid intermediate.
Ketimine formation: The quinonoid is reprotonated, but now at the aldehyde carbon, to form the ketimine intermediate.
Ketimine hydrolysis: Finally, the ketimine is hydrolyzed to form PMP and oxaloacetate.
This mechanism is thought to have multiple partially rate-determining steps. However, it has been shown that the substrate binding step (transaldimination) drives the catalytic reaction forward.
Clinical significance
AST is similar to alanine transaminase (ALT) in that both enzymes are associated with liver parenchymal cells. The difference is that ALT is found predominantly in the liver, with clinically negligible quantities found in the kidneys, heart, and skeletal muscle, while AST is found in the liver, heart (cardiac muscle), skeletal muscle, kidneys, brain, and red blood cells. As a result, ALT is a more specific indicator of liver inflammation than AST, as AST may be elevated also in diseases affecting other organs, such as myocardial infarction, acute pancreatitis, acute hemolytic anemia, severe burns, acute renal disease, musculoskeletal diseases, and trauma.
AST was defined as a biochemical marker for the diagnosis of acute myocardial infarction in 1954. However, the use of AST for such a diagnosis is now redundant and has been superseded by the cardiac troponins.
Laboratory tests should always be interpreted using the reference range from the laboratory that performed the test. Example reference ranges are shown below:
See also
Alanine transaminase (ALT/ALAT/SGPT)
Transaminases
References
Further reading
External links
AST - Lab Tests Online
AST: MedlinePlus Medical Encyclopedia
Liver function tests
EC 2.6.1
Glutamate (neurotransmitter) | Aspartate transaminase | [
"Chemistry"
] | 1,856 | [
"Chemical pathology",
"Liver function tests"
] |
1,063,406 | https://en.wikipedia.org/wiki/Serology | Serology is the scientific study of serum and other body fluids. In practice, the term usually refers to the diagnostic identification of antibodies in the serum. Such antibodies are typically formed in response to an infection (against a given microorganism), against other foreign proteins (in response, for example, to a mismatched blood transfusion), or to one's own proteins (in instances of autoimmune disease). In either case, the procedure is simple.
Serological tests
Serological tests are diagnostic methods that are used to identify antibodies and antigens in a patient's sample. Serological tests may be performed to diagnose infections and autoimmune illnesses, to check if a person has immunity to certain diseases, and in many other situations, such as determining an individual's blood type. Serological tests may also be used in forensic serology to investigate crime scene evidence. Several methods can be used to detect antibodies and antigens, including ELISA, agglutination, precipitation, complement-fixation, and fluorescent antibodies and more recently chemiluminescence.
Applications
Microbiology
In microbiology, serologic tests are used to determine if a person has antibodies against a specific pathogen, or to detect antigens associated with a pathogen in a person's sample. Serologic tests are especially useful for organisms that are difficult to culture by routine laboratory methods, like Treponema pallidum (the causative agent of syphilis), or viruses.
The presence of antibodies against a pathogen in a person's blood indicates that they have been exposed to that pathogen. Most serologic tests measure one of two types of antibodies: immunoglobulin M (IgM) and immunoglobulin G (IgG). IgM is produced in high quantities shortly after a person is exposed to the pathogen, and production declines quickly thereafter. IgG is also produced on the first exposure, but not as quickly as IgM. On subsequent exposures, the antibodies produced are primarily IgG, and they remain in circulation for a prolonged period of time.
This affects the interpretation of serology results: a positive result for IgM suggests that a person is currently or recently infected, while a positive result for IgG and negative result for IgM suggests that the person may have been infected or immunized in the past. Antibody testing for infectious diseases is often done in two phases: during the initial illness (acute phase) and after recovery (convalescent phase). The amount of antibody in each specimen (antibody titer) is compared, and a significantly higher amount of IgG in the convalescent specimen suggests infection as opposed to previous exposure. False negative results for antibody testing can occur in people who are immunosuppressed, as they produce lower amounts of antibodies, and in people who receive antimicrobial drugs early in the course of the infection.
Transfusion medicine
Blood typing is typically performed using serologic methods. The antigens on a person's red blood cells, which determine their blood type, are identified using reagents that contain antibodies, called antisera. When the antibodies bind to red blood cells that express the corresponding antigen, they cause red blood cells to clump together (agglutinate), which can be identified visually. The person's blood group antibodies can also be identified by adding plasma to cells that express the corresponding antigen and observing the agglutination reactions.
Other serologic methods used in transfusion medicine include crossmatching and the direct and indirect antiglobulin tests. Crossmatching is performed before a blood transfusion to ensure that the donor blood is compatible. It involves adding the recipient's plasma to the donor blood cells and observing for agglutination reactions. The direct antiglobulin test is performed to detect if antibodies are bound to red blood cells inside the person's body, which is abnormal and can occur in conditions like autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions. The indirect antiglobulin test is used to screen for antibodies that could cause transfusion reactions and identify certain blood group antigens.
Immunology
Serologic tests can help to diagnose autoimmune disorders by identifying abnormal antibodies directed against a person's own tissues (autoantibodies).
Serological surveys
A 2016 research paper by Metcalf et al., amongst whom were Neil Ferguson and Jeremy Farrar, stated that serological surveys are often used by epidemiologists to determine the prevalence of a disease in a population. Such surveys are sometimes performed by random, anonymous sampling from samples taken for other medical tests or to assess the prevalence of antibodies of a specific organism or protective titre of antibodies in a population. Serological surveys are usually used to quantify the proportion of people or animals in a population positive for a specific antibody or the titre or concentrations of an antibody. These surveys are potentially the most direct and informative technique available to infer the dynamics of a population's susceptibility and level of immunity. The authors proposed a World Serology Bank (or serum bank) and foresaw "associated major methodological developments in serological testing, study design, and quantitative analysis, which could drive a step change in our understanding and optimum control of infectious diseases."
In a reply entitled "Opportunities and challenges of a World Serum Bank", de Lusignan and Correa observed that the
In another reply on the World Serum Bank, the Australian researcher Karen Coates declared that:
In April 2020, Justin Trudeau formed the COVID-19 Immunity Task Force, whose mandate is to carry out a serological survey; the scheme was launched in the midst of the COVID-19 pandemic.
See also
Forensic serology
Medical laboratory
Medical technologist
Seroconversion
Serovar
Geoffrey Tovey, noted serologist
References
External links
Serology (archived) – MedlinePlus Medical Encyclopedia
Clinical pathology
Blood tests
Epidemiology
Immunologic tests | Serology | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,255 | [
"Blood tests",
"Immunologic tests",
"Epidemiology",
"Chemical pathology",
"Environmental social science"
] |
1,063,435 | https://en.wikipedia.org/wiki/Normal%20force | In mechanics, the normal force is the component of a contact force that is perpendicular to the surface that an object contacts. In this instance normal is used in the geometric sense and means perpendicular, as opposed to the meaning "ordinary" or "expected". A person standing still on a platform is acted upon by gravity, which would pull them down towards the Earth's core unless there were a countervailing force from the resistance of the platform's molecules, a force which is named the "normal force".
The normal force is one type of ground reaction force. If the person stands on a slope and does not sink into the ground or slide downhill, the total ground reaction force can be divided into two components: a normal force perpendicular to the ground and a frictional force parallel to the ground. In another common situation, if an object hits a surface with some speed, and the surface can withstand the impact, the normal force provides for a rapid deceleration, which will depend on the flexibility of the surface and the object.
Equations
In the case of an object resting upon a flat table (unlike on an incline as in Figures 1 and 2), the normal force on the object is equal but in opposite direction to the gravitational force applied on the object (or the weight of the object), that is, , where m is mass, and g is the gravitational field strength (about 9.81 m/s2 on Earth). The normal force here represents the force applied by the table against the object that prevents it from sinking through the table and requires that the table be sturdy enough to deliver this normal force without breaking. It is a common mistake to assume that the normal force and the weight form an action-reaction force pair; rather, they are equal in magnitude here because the object has no vertical acceleration. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball.
Where an object rests on an incline as in Figures 1 and 2, the normal force is perpendicular to the plane the object rests on. Still, the normal force will be as large as necessary to prevent sinking through the surface, presuming the surface is sturdy enough. The strength of the force can be calculated as:
where is the normal force, m is the mass of the object, g is the gravitational field strength, and θ is the angle of the inclined surface measured from the horizontal.
The normal force is one of the several forces which act on the object. In the simple situations so far considered, the most important other forces acting on it are friction and the force of gravity.
Using vectors
In general, the magnitude of the normal force, N, is the projection of the net surface interaction force, T, in the normal direction, n, and so the normal force vector can be found by scaling the normal direction by the net surface interaction force. The surface interaction force, in turn, is equal to the dot product of the unit normal with the Cauchy stress tensor describing the stress state of the surface. That is:
or, in indicial notation,
The parallel shear component of the contact force is known as the frictional force ().
The static coefficient of friction for an object on an inclined plane can be calculated as follows:
for an object on the point of sliding where is the angle between the slope and the horizontal.
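A short numerical illustration of the two formulas above; the mass, slope angle and value of g are arbitrary example values, not taken from the article:

import math

m = 10.0                      # mass of the object, kg (example value)
g = 9.81                      # gravitational field strength, m/s^2
theta = math.radians(25.0)    # slope angle from the horizontal (example value)

N = m * g * math.cos(theta)           # normal force on the incline
F_parallel = m * g * math.sin(theta)  # component of gravity along the slope

# At the point of sliding, friction balances the parallel component,
# so the static coefficient of friction is mu_s = tan(theta).
mu_s = math.tan(theta)

print(N, F_parallel, mu_s)
print(math.isclose(F_parallel, mu_s * N))   # friction equals mu_s * N at the threshold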
Physical origin
Normal force is directly a result of the Pauli exclusion principle and not a true force per se: it is a result of the interactions of the electrons at the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration.
However, these interactions are often modeled as a van der Waals force, a force that grows very large very quickly as the distance becomes smaller.
On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of Pauli exclusion principle, but also of the fundamental forces of nature: cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces.
Practical applications
In an elevator either stationary or moving at constant velocity, the normal force on the person's feet balances the person's weight. In an elevator that is accelerating upward, the normal force is greater than the person's ground weight and so the person's perceived weight increases (making the person feel heavier). In an elevator that is accelerating downward, the normal force is less than the person's ground weight and so a passenger's perceived weight decreases. If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale will be reading the normal force it delivers to the passenger's feet, and will be different than the person's ground weight if the elevator cab is accelerating up or down. The weighing scale measures normal force (which varies as the elevator cab accelerates), not gravitational force (which does not vary as the cab accelerates).
When we define upward to be the positive direction, constructing Newton's second law and solving for the normal force on a passenger yields the following equation:
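The equation itself is not shown in this text; assuming the sign convention just described (upward positive, with a the acceleration of the cab), Newton's second law for a passenger of mass m gives
\[ N - mg = ma \quad\Longrightarrow\quad N = m\,(g + a), \]
so N exceeds mg when the cab accelerates upward and is less than mg when it accelerates downward, consistent with the description above.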
In a gravitron amusement ride, the static friction caused by and perpendicular to the normal force acting on the passengers against the walls results in suspension of the passengers above the floor as the ride rotates. In such a scenario, the walls of the ride apply normal force to the passengers in the direction of the center, which is a result of the centripetal force applied to the passengers as the ride rotates. As a result of the normal force experienced by the passengers, the static friction between the passengers and the walls of the ride counteracts the pull of gravity on the passengers, resulting in suspension above ground of the passengers throughout the duration of the ride.
When we define the center of the ride to be the positive direction, solving for the normal force on a passenger that is suspended above ground yields the following equation:

$N = \frac{mv^2}{r}$

where N is the normal force on the passenger, m is the mass of the passenger, v is the tangential velocity of the passenger and r is the distance of the passenger from the center of the ride.
With the normal force known, we can solve for the static coefficient of friction needed to maintain a net force of zero in the vertical direction:

$\mu_s = \frac{mg}{N} = \frac{gr}{v^2}$

where $\mu_s$ is the static coefficient of friction, and g is the gravitational field strength.
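The two relations above can be combined in a short sketch; the rider mass, ride radius, and tangential speed below are assumed values chosen only to make the numbers concrete:

```python
def gravitron_normal_force(mass_kg: float, speed_m_s: float, radius_m: float) -> float:
    """Normal force from the wall on a rider: N = m*v**2 / r."""
    return mass_kg * speed_m_s ** 2 / radius_m

def required_static_friction(speed_m_s: float, radius_m: float, g: float = 9.81) -> float:
    """Minimum coefficient of static friction that keeps the rider suspended: mu_s = g*r / v**2."""
    return g * radius_m / speed_m_s ** 2

# Assumed ride: 3 m radius, 60 kg rider moving at 8 m/s.
print(gravitron_normal_force(60.0, 8.0, 3.0))  # 1280 N pressing the rider against the wall
print(required_static_friction(8.0, 3.0))      # ~0.46 needed to balance gravity
```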
See also
Force
Contact mechanics
Normal stress
References
Force
Statics | Normal force | [
"Physics",
"Mathematics"
] | 1,405 | [
"Statics",
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
1,063,456 | https://en.wikipedia.org/wiki/X-ray%20spectroscopy | X-ray spectroscopy is a general term for several spectroscopic techniques for characterization of materials by using x-ray radiation.
Characteristic X-ray spectroscopy
When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. When it returns to the low energy level, the energy it previously gained by excitation is emitted as a photon of one of the wavelengths uniquely characteristic of the element. Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen. Comparison of the specimen's spectrum with the spectra of samples of known composition produces quantitative results (after some mathematical corrections for absorption, fluorescence and atomic number).
Atoms can be excited by a high-energy beam of charged particles such as electrons (in an electron microscope for example), protons (see PIXE) or a beam of X-rays (see X-ray fluorescence, or XRF or also recently in transmission XRT). These methods enable elements from the entire periodic table to be analysed, with the exception of H, He and Li.
In electron microscopy an electron beam excites X-rays; there are two main techniques for analysis of spectra of characteristic X-ray radiation: energy-dispersive X-ray spectroscopy (EDS) and wavelength dispersive X-ray spectroscopy (WDS). In X-ray transmission (XRT), the equivalent atomic composition (Zeff) is captured based on photoelectric and Compton effects.
Energy-dispersive X-ray spectroscopy
In an energy-dispersive X-ray spectrometer, a semiconductor detector measures energy of incoming photons. To maintain detector integrity and resolution it should be cooled with liquid nitrogen or by Peltier cooling. EDS is widely employed in electron microscopes (where imaging rather than spectroscopy is a main task) and in cheaper and/or portable XRF units.
Wavelength-dispersive X-ray spectroscopy
In a wavelength-dispersive X-ray spectrometer, a single crystal diffracts the photons according to Bragg's law, and they are then collected by a detector. By moving the diffraction crystal and detector relative to each other, a wide region of the spectrum can be observed. To observe a large spectral range, three or four different single crystals may be needed. In contrast to EDS, WDS is a method of sequential spectrum acquisition. While WDS is slower than EDS and more sensitive to the positioning of the sample in the spectrometer, it has superior spectral resolution and sensitivity. WDS is widely used in microprobes (where X-ray microanalysis is the main task) and in XRF;
it is widely used in the field of X-ray diffraction to calculate various data such as interplanar spacing and wavelength of the incident X-ray using Bragg's law.
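As an illustration of how a WDS measurement relates a diffraction angle to a wavelength or an interplanar spacing, the Python sketch below applies Bragg's law, nλ = 2d sin θ; the crystal d-spacing and angles are assumed example values, not data from any particular instrument.

```python
import math

def bragg_wavelength(d_spacing_nm: float, theta_deg: float, order: int = 1) -> float:
    """Wavelength selected by a crystal analyser: n*lambda = 2*d*sin(theta)."""
    return 2.0 * d_spacing_nm * math.sin(math.radians(theta_deg)) / order

def bragg_angle(d_spacing_nm: float, wavelength_nm: float, order: int = 1) -> float:
    """Diffraction angle theta (degrees) at which a given wavelength satisfies Bragg's law."""
    return math.degrees(math.asin(order * wavelength_nm / (2.0 * d_spacing_nm)))

# Assumed analyser crystal with d = 0.2014 nm, first-order reflection observed at theta = 22.5 degrees:
print(bragg_wavelength(0.2014, 22.5))  # ~0.154 nm
print(bragg_angle(0.2014, 0.154))      # ~22.5 degrees, recovering the angle from the wavelength
```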
X-ray emission spectroscopy
The father-and-son scientific team of William Lawrence Bragg and William Henry Bragg, who were 1915 Nobel Prize Winners, were the original pioneers in developing X-ray emission spectroscopy. An example of a spectrometer developed by William Henry Bragg, which was used by both father and son to investigate the structure of crystals, can be seen at the Science Museum, London. Jointly they measured the X-ray wavelengths of many elements to high precision, using high-energy electrons as excitation source. The cathode-ray tube or an x-ray tube was the method used to pass electrons through a crystal of numerous elements. They also painstakingly produced numerous diamond-ruled glass diffraction gratings for their spectrometers. The law of diffraction of a crystal is called Bragg's law in their honor.
Intense and wavelength-tunable X-rays are now typically generated with synchrotrons. In a material, the X-rays may suffer an energy loss compared to the incoming beam. This energy loss of the re-emerging beam reflects an internal excitation of the atomic system, an X-ray analogue to the well-known Raman spectroscopy that is widely used in the optical region.
In the X-ray region there is sufficient energy to probe changes in the electronic state (transitions between orbitals; this is in contrast with the optical region, where the energy emitted or absorbed is often due to changes in the state of the rotational or vibrational degrees of freedom of the system's atoms and groups of atoms). For instance, in the ultra soft X-ray region (below about 1 keV), crystal field excitations give rise to the energy loss.
The photon-in-photon-out process may be thought of as a scattering event. When the x-ray energy corresponds to the binding energy of a core-level electron, this scattering process is resonantly enhanced by many orders of magnitude. This type of X-ray emission spectroscopy is often referred to as resonant inelastic X-ray scattering (RIXS).
Due to the wide separation of orbital energies of the core levels, it is possible to select a certain atom of interest. The small spatial extent of core level orbitals forces the RIXS process to reflect the electronic structure in close vicinity of the chosen atom. Thus, RIXS experiments give valuable information about the local electronic structure of complex systems, and theoretical calculations are relatively simple to perform.
Instrumentation
There exist several efficient designs for analyzing an X-ray emission spectrum in the ultra soft X-ray region. The figure of merit for such instruments is the spectral throughput, i.e. the product of detected intensity and spectral resolving power. Usually, it is possible to change these parameters within a certain range while keeping their product constant.
Grating spectrometers
Usually X-ray diffraction in spectrometers is achieved on crystals, but in Grating spectrometers, the X-rays emerging from a sample must pass a source-defining slit, then optical elements (mirrors and/or gratings) disperse them by diffraction according to their wavelength and, finally, a detector is placed at their focal points.
Spherical grating mounts
Henry Augustus Rowland (1848–1901) devised an instrument that allowed the use of a single optical element that combines diffraction and focusing: a spherical grating. Reflectivity of X-rays is low, regardless of the used material and therefore, grazing incidence upon the grating is necessary. X-ray beams impinging on a smooth surface at a few degrees glancing angle of incidence undergo external total reflection which is taken advantage of to enhance the instrumental efficiency substantially.
Denote by R the radius of the spherical grating. Imagine a circle with half the radius R tangent to the center of the grating surface. This small circle is called the Rowland circle. If the entrance slit is anywhere on this circle, then a beam passing the slit and striking the grating will be split into a specularly reflected beam, and beams of all diffraction orders, that come into focus at certain points on the same circle.
Plane grating mounts
Similar to optical spectrometers, a plane grating spectrometer first needs optics that turns the divergent rays emitted by the x-ray source into a parallel beam. This may be achieved by using a parabolic mirror. The parallel rays emerging from this mirror strike a plane grating (with constant groove distance) at the same angle and are diffracted according to their wavelength. A second parabolic mirror then collects the diffracted rays at a certain angle and creates an image on a detector. A spectrum within a certain wavelength range can be recorded simultaneously by using a two-dimensional position-sensitive detector such as a microchannel photomultiplier plate or an X-ray sensitive CCD chip (film plates are also possible to use).
Interferometers
Instead of using the concept of multiple beam interference that gratings produce, the two rays may simply interfere. By recording the intensity of two such rays combined co-linearly at some fixed point and changing their relative phase, one obtains an intensity spectrum as a function of path length difference. One can show that this is equivalent to a Fourier transformed spectrum as a function of frequency. The highest recordable frequency of such a spectrum is dependent on the minimum step size chosen in the scan and the frequency resolution (i.e. how well a certain wave can be defined in terms of its frequency) depends on the maximum path length difference achieved. The latter feature allows a much more compact design for achieving high resolution than for a grating spectrometer because X-ray wavelengths are small compared to attainable path length differences.
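A minimal numerical sketch of this Fourier-transform relationship is given below: a synthetic interferogram containing two spectral components is generated as a function of path-length difference and transformed back into a spectrum. All values are arbitrary illustration numbers.

```python
import numpy as np

# Path-length difference axis (arbitrary units); the step size limits the highest recoverable frequency.
step, n_steps = 0.01, 1000
x = np.arange(n_steps) * step

# Synthetic interferogram: two co-linear components interfering, at spatial frequencies 3.0 and 5.5.
interferogram = np.cos(2 * np.pi * 3.0 * x) + 0.5 * np.cos(2 * np.pi * 5.5 * x)

# The Fourier transform of intensity versus path difference gives the spectrum versus frequency.
spectrum = np.abs(np.fft.rfft(interferogram))
freqs = np.fft.rfftfreq(n_steps, d=step)

print(freqs[spectrum.argsort()[-2:]])  # [5.5 3. ] — the two strongest components recovered
```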
Early history of X-ray spectroscopy in the U.S.
Philips Gloeilampen Fabrieken, headquartered in Eindhoven in the Netherlands, got its start as a manufacturer of light bulbs, but quickly evolved until it is now one of the leading manufacturers of electrical apparatus, electronics, and related products including X-ray equipment. It also has had one of the world's largest R&D labs. In 1940, the Netherlands was overrun by Hitler’s Germany. The company was able to transfer a substantial sum of money to a company that it set up as an R&D laboratory in an estate in Irvington on the Hudson in NY. As an extension to their work on light bulbs, the Dutch company had developed a line of X-ray tubes for medical applications that were powered by transformers. These X-ray tubes could also be used in scientific X-ray instrumentations, but there was very little commercial demand for the latter. As a result, management decided to try to develop this market and they set up development groups in their research labs in both Holland and the United States.
They hired Dr. Ira Duffendack, a professor at the University of Michigan and a world expert on infrared research, to head the lab and to hire a staff. In 1951 he hired Dr. David Miller as Assistant Director of Research. Dr. Miller had done research on X-ray instrumentation at Washington University in St. Louis. Dr. Duffendack also hired Dr. Bill Parrish, a well-known researcher in X-ray diffraction, to head up the section of the lab on X-ray instrumental development. X-ray diffraction units were widely used in academic research departments to do crystal analysis. An essential component of a diffraction unit was a very accurate angle measuring device known as a goniometer. Such units were not commercially available, so each investigator had to try to make their own. Dr. Parrish decided this would be a good device to use to generate an instrumental market, so his group designed and learned how to manufacture a goniometer. This market developed quickly and, with the readily available tubes and power supplies, a complete diffraction unit was made available and was successfully marketed.
The U.S. management did not want the laboratory to be converted to a manufacturing unit so it decided to set up a commercial unit to further develop the X-ray instrumentation market. In 1953 Norelco Electronics was established in Mount Vernon, NY, dedicated to the sale and support of X-ray instrumentation. It included a sales staff, a manufacturing group, an engineering department and an applications lab. Dr. Miller was transferred from the lab to head up the engineering department. The sales staff sponsored three schools a year, one in Mount Vernon, one in Denver, and one in San Francisco. The week-long school curricula reviewed the basics of X-ray instrumentation and the specific application of Norelco products. The faculty were members of the engineering department and academic consultants. The schools were well attended by academic and industrial R&D scientists. The engineering department was also a new product development group. It added an X-ray spectrograph to the product line very quickly and contributed other related products for the next 8 years.
The applications lab was an essential sales tool. When the spectrograph was introduced as a quick and accurate analytical chemistry device, it was met with widespread skepticism. All research facilities had a chemistry department and analytical analysis was done by “wet chemistry” methods. The idea of doing this analysis by physics instrumentation was considered suspect. To overcome this bias, the salesman would ask a prospective customer for a task the customer was doing by “wet methods”. The task would be given to the applications lab and they would demonstrate how accurately and quickly it could be done using the X-ray units. This proved to be a very strong sales tool, particularly when the results were published in the Norelco Reporter, a technical journal issued monthly by the company with wide distribution to commercial and academic institutions.
An X-ray spectrograph consists of a high voltage power supply (50 kV or 100 kV), a broad band X-ray tube, usually with a tungsten anode and a beryllium window, a specimen holder, an analyzing crystal, a goniometer, and an X-ray detector device. These are arranged as shown in Fig. 1.
The continuous X-ray spectrum emitted from the tube irradiates the specimen and excites the characteristic spectral X-ray lines in the specimen. Each of the 92 elements emits a characteristic spectrum. Unlike the optical spectrum, the X-ray spectrum is quite simple. The strongest line, usually the Kα line, but sometimes the Lα line, suffices to identify the element. The existence of a particular line betrays the existence of an element, and the intensity is proportional to the amount of the particular element in the specimen. The characteristic lines are reflected from a crystal, the analyzer, under an angle that is given by the Bragg condition. The crystal samples all the diffraction angles theta by rotation, while the detector rotates over the corresponding angle 2-theta. With a sensitive detector, the X-ray photons are counted individually. By stepping the detector along the angle, and leaving it in position for a known time, the number of counts at each angular position gives the line intensity. These counts may be plotted on a curve by an appropriate display unit. The characteristic X-rays come out at specific angles, and since the angular position for every X-ray spectral line is known and recorded, it is easy to find the sample's composition.
A chart for a scan of a Molybdenum specimen is shown in Fig. 2. The tall peak on the left side is the characteristic alpha line at a two theta of 12 degrees. Second and third order lines also appear.
Since the alpha line is often the only line of interest in many industrial applications, the final device in the Norelco X-ray spectrographic instrument line was the Autrometer. This device could be programmed to automatically read at any desired two theta angle for any desired time interval.
Soon after the Autrometer was introduced, Philips decided to stop marketing X-ray instruments developed in both the U.S. and Europe and settled on offering only the Eindhoven line of instruments.
In 1961, during the development of the Autrometer, Norelco was given a sub-contract from the Jet Propulsion Lab. The Lab was working on the instrument package for the Surveyor spacecraft. The composition of the Moon’s surface was of major interest and the use of an X-ray detection instrument was viewed as a possible solution. Working with a power limit of 30 watts was very challenging, and a device was delivered but it wasn’t used. Later NASA developments did lead to an X-ray spectrographic unit that did make the desired Moon soil analysis.
The Norelco efforts faded but the use of X-ray spectroscopy in units known as XRF instruments continued to grow. With a boost from NASA, units were finally reduced to handheld size and are seeing widespread use. Units are available from Bruker, Thermo Scientific, Elvatech Ltd. and SPECTRA.
Other types of X-ray spectroscopy
X-ray absorption spectroscopy
X-ray magnetic circular dichroism
See also
Auger electron spectroscopy
X-Ray Spectrometry (journal)
New perspectives of explosive detection based on CdTe/CDZnTe spectrometric detectors
References | X-ray spectroscopy | [
"Physics",
"Chemistry"
] | 3,310 | [
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,063,470 | https://en.wikipedia.org/wiki/Hydrogen%20embrittlement | Hydrogen embrittlement (HE), also known as hydrogen-assisted cracking or hydrogen-induced cracking (HIC), is a reduction in the ductility of a metal due to absorbed hydrogen. Hydrogen atoms are small and can permeate solid metals. Once absorbed, hydrogen lowers the stress required for cracks in the metal to initiate and propagate, resulting in embrittlement. Hydrogen embrittlement occurs in steels, as well as in iron, nickel, titanium, cobalt, and their alloys. Copper, aluminium, and stainless steels are less susceptible to hydrogen embrittlement.
The essential facts about the nature of hydrogen embrittlement have been known since the 19th century.
Hydrogen embrittlement is maximised at around room temperature in steels, and most metals are relatively immune to hydrogen embrittlement at temperatures above 150 °C. Hydrogen embrittlement requires the presence of both atomic ("diffusible") hydrogen and a mechanical stress to induce crack growth, although that stress may be applied or residual. Hydrogen embrittlement increases at lower strain rates. In general, higher-strength steels are more susceptible to hydrogen embrittlement than mid-strength steels.
Metals can be exposed to hydrogen from two types of sources: gaseous hydrogen and hydrogen chemically generated at the metal surface. Gaseous hydrogen is molecular hydrogen and does not cause embrittlement, though it can cause a hot hydrogen attack (see below). It is the atomic hydrogen from a chemical attack which causes embrittlement because the atomic hydrogen dissolves quickly into the metal at room temperature. Gaseous hydrogen is found in pressure vessels and pipelines. Electrochemical sources of hydrogen include acids (as may be encountered during pickling, etching, or cleaning), corrosion (typically due to aqueous corrosion or cathodic protection), and electroplating. Hydrogen can be introduced into the metal during manufacturing by the presence of moisture during welding or while the metal is molten. The most common causes of failure in practice are poorly controlled electroplating or damp welding rods.
Hydrogen embrittlement as a term can be used to refer specifically to the embrittlement that occurs in steels and similar metals at relatively low hydrogen concentrations, or it can be used to encompass all embrittling effects that hydrogen has on metals. These broader embrittling effects include hydride formation, which occurs in titanium and vanadium but not in steels, and hydrogen-induced blistering, which only occurs at high hydrogen concentrations and does not require the presence of stress. However, hydrogen embrittlement is almost always distinguished from high temperature hydrogen attack (HTHA), which occurs in steels at temperatures above 204 °C and involves the formation of methane pockets. The mechanisms (there are many) by which hydrogen causes embrittlement in steels are not comprehensively understood and continue to be explored and studied.
Mechanisms
Hydrogen embrittlement is a complex process involving a number of distinct contributing micro-mechanisms, not all of which need to be present. The mechanisms include the formation of brittle hydrides, the creation of voids that can lead to high-pressure bubbles, enhanced decohesion at internal surfaces and localised plasticity at crack tips that assist in the propagation of cracks. There is a great variety of mechanisms that have been proposed and investigated as to the cause of brittleness once diffusible hydrogen has been dissolved into the metal. In recent years, it has become widely accepted that HE is a complex process dependent on material and environment so that no single mechanism applies exclusively.
Internal pressure: At high hydrogen concentrations, absorbed hydrogen species recombine in voids to form hydrogen molecules (H2), creating pressure from within the metal. This pressure can increase to levels where cracks form, commonly designated hydrogen-induced cracking (HIC), as well as blisters forming on the specimen surface, designated hydrogen-induced blistering. These effects can reduce ductility and tensile strength.
Hydrogen enhanced localised plasticity (HELP): Hydrogen increases the nucleation and movement of dislocations at a crack tip. HELP results in crack propagation by localised ductile failure at the crack tip with less deformation occurring in the surrounding material, which gives a brittle appearance to the fracture.
Hydrogen decreased dislocation emission: Molecular dynamics simulations reveal a ductile-to-brittle transition caused by the suppression of dislocation emission at the crack tip by dissolved hydrogen. This prevents the crack tip rounding-off, so the sharp crack then leads to brittle-cleavage failure.
Hydrogen enhanced decohesion (HEDE): Interstitial hydrogen lowers the stress required for metal atoms to fracture apart. HEDE can only occur when the local concentration of hydrogen is high, such as due to the increased hydrogen solubility in the tensile stress field at a crack tip, at stress concentrators, or in the tension field of edge dislocations.
Metal hydride formation: The formation of brittle hydrides with the parent material allows cracks to propagate in a brittle fashion. This is particularly a problem with vanadium alloys, while most other structural alloys do not easily form hydrides.
Phase transformations: Hydrogen can induce phase transformations in some materials, and the new phase may be less ductile.
Material susceptibility
Hydrogen embrittles a variety of metals including steel, aluminium (at high temperatures only), and titanium. Austempered iron is also susceptible, though austempered steel (and possibly other austempered metals) displays increased resistance to hydrogen embrittlement. NASA has reviewed which metals are susceptible to embrittlement and which are only prone to hot hydrogen attack: nickel alloys, austenitic stainless steels, aluminium and its alloys, copper (including alloys, e.g. beryllium copper). Sandia has also produced a comprehensive guide.
Steels
Steel with an ultimate tensile strength of less than 1000 MPa (~145,000 psi) or hardness of less than HRC 32 on the Rockwell hardness scale is not generally considered susceptible to hydrogen embrittlement. As an example of severe hydrogen embrittlement, the elongation at failure of 17-4PH precipitation hardened stainless steel was measured to drop from 17% to only 1.7% when smooth specimens were exposed to high-pressure hydrogen.
As the strength of steels increases, the fracture toughness decreases, so the likelihood that hydrogen embrittlement will lead to fracture increases. In high-strength steels, anything above a hardness of HRC 32 may be susceptible to early hydrogen cracking after plating processes that introduce hydrogen. They may also experience long-term failures any time from weeks to decades after being placed in service due to accumulation of hydrogen over time from cathodic protection and other sources. Numerous failures have been reported in the hardness range from HRC 32-36 and above; therefore, parts in this range should be checked during quality control to ensure they are not susceptible.
Testing the fracture toughness of hydrogen-charged, embrittled specimens is complicated by the need to keep charged specimens very cold, in liquid nitrogen, to prevent the hydrogen diffusing away.
Copper
Copper alloys which contain oxygen can be embrittled if exposed to hot hydrogen. The hydrogen diffuses through the copper and reacts with inclusions of Cu2O, forming 2 metallic Cu atoms and H2O (water), which then forms pressurized bubbles at the grain boundaries. This process can cause the grains to be forced away from each other, and is known as steam embrittlement (because steam is directly produced inside the copper crystal lattice, not because exposure of copper to external steam causes the problem).
Vanadium, nickel, and titanium
Alloys of vanadium, nickel, and titanium have a high hydrogen solubility, and can therefore absorb significant amounts of hydrogen. This can lead to hydride formation, resulting in irregular volume expansion and reduced ductility (because metallic hydrides are fragile ceramic materials). This is a particular issue when looking for non-palladium-based alloys for use in hydrogen separation membranes.
Fatigue
While most failures in practice have been through fast failure, there is experimental evidence that hydrogen also affects the fatigue properties of steels. This is entirely expected given the nature of the embrittlement mechanisms proposed for fast fracture. In general hydrogen embrittlement has a strong effect on high-stress, low-cycle fatigue and very little effect on high-cycle fatigue.
Environmental embrittlement
Hydrogen embrittlement is a volume effect: it affects the volume of the material. Environmental embrittlement is a surface effect where molecules from the atmosphere surrounding the material under test are adsorbed onto the fresh crack surface. This is most clearly seen from fatigue measurements where the measured crack growth rates can be an order of magnitude higher in hydrogen than in air. That this effect is due to adsorption, which saturates when the crack surface is completely covered, is understood from the weak dependence of the effect on the pressure of hydrogen.
Environmental embrittlement is also observed to reduce fracture toughness in fast fracture tests, but the severity is much reduced compared with the same effect in fatigue.
Hydrogen embrittlement occurs when a previously embrittled material has low fracture toughness regardless of the atmosphere in which it is tested. Environmental embrittlement occurs when the low fracture toughness is only observed in that atmosphere.
Sources of hydrogen
During manufacture, hydrogen can be dissolved into the component by processes such as phosphating, pickling, electroplating, casting, carbonizing, surface cleaning, electrochemical machining, welding, hot roll forming, and heat treatments.
During service use, hydrogen can be dissolved into the metal from wet corrosion or through misapplication of protection measures such as cathodic protection. In one case of failure during construction of the San Francisco–Oakland Bay Bridge, galvanized (i.e. zinc-plated) rods were left wet for 5 years before being tensioned. The reaction of the zinc with water introduced hydrogen into the steel.
A common case of embrittlement during manufacture is poor arc welding practice, in which hydrogen is released from moisture, such as in the coating of welding electrodes or from damp welding rods. To avoid atomic hydrogen formation in the high temperature plasma of the arc, welding rods have to be perfectly dried in an oven at the appropriate temperature and duration before use. Another way to minimize the formation of hydrogen is to use special low-hydrogen electrodes for welding high-strength steels.
Apart from arc welding, the most common problems are from chemical or electrochemical processes which, by reduction of hydrogen ions or water, generate hydrogen atoms at the surface, which rapidly dissolve in the metal. One of these chemical reactions involves hydrogen sulfide (H2S) in sulfide stress cracking (SSC), a significant problem for the oil and gas industries.
After a manufacturing process or treatment which may cause hydrogen ingress, the component should be baked to remove or immobilize the hydrogen.
Prevention
Hydrogen embrittlement can be prevented through several methods, all of which are centered on minimizing contact between the metal and hydrogen, particularly during fabrication and the electrolysis of water. Embrittling procedures such as acid pickling should be avoided, as should increased contact with elements such as sulfur and phosphate.
If the metal has not yet started to crack, hydrogen embrittlement can be reversed by removing the hydrogen source and causing the hydrogen within the metal to diffuse out through heat treatment. This de-embrittlement process, known as low hydrogen annealing or "baking", is used to overcome the weaknesses of methods such as electroplating which introduce hydrogen to the metal, but is not always entirely effective because a sufficient time and temperature must be reached. Tests such as ASTM F1624 can be used to rapidly identify the minimum baking time (by testing using careful design of experiments, a relatively low number of samples can be used to pinpoint this value). Then the same test can be used as a quality control check to evaluate if baking was sufficient on a per-batch basis.
In the case of welding, often pre-heating and post-heating the metal is applied to allow the hydrogen to diffuse out before it can cause any damage. This is specifically done with high-strength steels and low alloy steels such as the chromium/molybdenum/vanadium alloys. Due to the time needed to re-combine hydrogen atoms into the hydrogen molecules, hydrogen cracking due to welding can occur over 24 hours after the welding operation is completed.
Another way of preventing this problem is through materials selection. This will build an inherent resistance to this process and reduce the need for post-processing or constant monitoring for failure. Certain metals or alloys are highly susceptible to this issue, so choosing a material that is minimally affected while retaining the desired properties would also provide an optimal solution. Much research has been done to catalogue the compatibility of certain metals with hydrogen. Tests such as ASTM F1624 can also be used to rank alloys and coatings during materials selection to ensure (for instance) that the threshold of cracking is below the threshold for hydrogen-assisted stress corrosion cracking. Similar tests can also be used during quality control to more effectively qualify materials being produced in a rapid and comparable manner.
Surface coatings
Coatings act as a barrier between the metal substrate and the surrounding environment, hindering the ingress of hydrogen atoms. Various techniques can be used to apply coatings, such as electroplating, chemical conversion coatings, or organic coatings. The choice of coating depends on factors such as the type of metal, the operating environment, and the specific requirements of the application.
Electroplating is a commonly used method to deposit a protective layer onto the metal surface. This process involves immersing the metal substrate into an electrolyte solution containing metal ions. By applying an electric current, the metal ions are reduced and form a metallic coating on the substrate. Electroplating can provide an excellent protective layer that enhances corrosion resistance and reduces the susceptibility to hydrogen embrittlement.
Chemical conversion coatings are another effective method for surface protection. These coatings are typically formed through chemical reactions between the metal substrate and a chemical solution. The conversion coating chemically reacts with the metal surface, resulting in a thin, tightly adhering protective layer. Examples of conversion coatings include chromate, phosphate, and oxide coatings. These coatings not only provide a barrier against hydrogen diffusion but also enhance the metal's corrosion resistance.
Organic coatings, such as paints or polymer coatings, offer additional protection against hydrogen embrittlement. These coatings form a physical barrier between the metal surface and the environment. They provide excellent adhesion, flexibility, and resistance to environmental factors. Organic coatings can be applied through various methods, including spray coating, dip coating, or powder coating. They can be formulated with additives to further enhance their resistance to hydrogen ingress.
Thermally sprayed coatings offer several advantages in the context of hydrogen embrittlement prevention. The coating materials used in this process are often composed of materials with excellent resistance to hydrogen diffusion, such as ceramics or cermet alloys. These materials have a low permeability to hydrogen, creating a robust barrier against hydrogen ingress into the metal substrate.
Testing
Most analytical methods for hydrogen embrittlement involve evaluating the effects of (1) internal hydrogen from production and/or (2) external sources of hydrogen such as cathodic protection. For steels, it is important to test specimens in the lab that are at least as hard (or harder) as the final parts will be. Ideally, specimens should be made of the final material or the nearest possible representative, as fabrication can have a profound impact on resistance to hydrogen-assisted cracking.
There are numerous ASTM standards for testing for hydrogen embrittlement:
ASTM B577 is the Standard Test Methods for Detection of Cuprous Oxide (Hydrogen Embrittlement Susceptibility) in Copper. The test focuses on hydrogen embrittlement of copper alloys, including a metallographic evaluation (method A), testing in a hydrogen charged chamber followed by metallography (method B), and method C is the same as B but includes a bend test.
ASTM B839 is the Standard Test Method for Residual Embrittlement in Metallic Coated, Externally Threaded Articles, Fasteners, and Rod-Inclined Wedge Method.
ASTM F519 is the Standard Test Method for Mechanical Hydrogen Embrittlement Evaluation of Plating/Coating Processes and Service Environments. There are 7 different sample designs and the two most common tests are (1) the rapid test, the Rising step load testing (RSL) method per ASTM F1624 and (2) the sustained load test, which takes 200 hours. The sustained load test is still included in many legacy standards, but the RSL method is increasingly being adopted due to speed, repeatability, and the quantitative nature of the test. The RSL method provides an accurate ranking of the effect of hydrogen from both internal and external sources.
ASTM F1459 is the Standard Test Method for Determination of the Susceptibility of Metallic Materials to Hydrogen Gas Embrittlement (HGE) Test. The test uses a diaphragm loaded with a differential pressure.
ASTM G142 is the Standard Test Method for Determination of Susceptibility of Metals to Embrittlement in Hydrogen Containing Environments at High Pressure, High Temperature, or Both. The test uses a cylindrical tensile specimen tested into an enclosure pressurized with hydrogen or helium.
ASTM F1624 is the Standard Test Method for Measurement of Hydrogen Embrittlement Threshold in Steel by the Incremental Step Loading Technique. The test uses the incremental step loading (ISL) or Rising step load testing (RSL) method for quantitatively testing for the Hydrogen Embrittlement threshold stress for the onset of Hydrogen-Induced Cracking due to platings and coatings from Internal Hydrogen Embrittlement (IHE) and Environmental Hydrogen Embrittlement (EHE). F1624 provides a rapid, quantitative measure of the effects of hydrogen both from internal sources and external sources (which is accomplished by applying a selected voltage in an electrochemical cell). The F1624 test is performed by comparing a standard fast-fracture tensile strength to the fracture strength from a Rising step load testing practice where the load is held for hour(s) at each step. In many cases, it can be performed in 30 hours or less.
ASTM F1940 is the Standard Test Method for Process Control Verification to Prevent Hydrogen Embrittlement in Plated or Coated Fasteners. While the title now explicitly includes the word fasteners, F1940 was not originally intended for these purposes. F1940 is based on the F1624 method and is similar to F519 but with different root radius and stress concentration factors. When specimens exhibit a threshold cracking of 75% of the net fracture strength, the plating bath is considered to be 'non-embrittling'.
There are many other related standards for hydrogen embrittlement:
NACE TM0284-2003 (NACE International) Resistance to Hydrogen-Induced Cracking
ISO 11114-4:2005 (ISO) Test methods for selecting metallic materials resistant to hydrogen embrittlement.
Standard Test Method for Mechanical Hydrogen Embrittlement Evaluation of Plating/Coating Processes and Service Environments
Notable failures from hydrogen embrittlement
In 2013, six months prior to opening, the East Span of the Oakland Bay Bridge failed during testing. Catastrophic failures occurred in shear bolts in the span, after only two weeks of service, with the failure attributed to embrittlement (see details above).
In the City of London, 122 Leadenhall Street, generally known as 'the Cheesegrater', suffered from hydrogen embrittlement in steel bolts, with three bolts failing in 2014 and 2015. Most of the 3,000 bolts were replaced at a cost of £6m.
See also
Hydrogen analyzer
Hydrogen damage
Hydrogen piping
Hydrogen safety
Low hydrogen annealing
Nascent hydrogen
Oxygen-free copper
Stress corrosion cracking
White etching cracks
Zircotec
References
External links
Resources on hydrogen embrittlement, Cambridge University
Hydrogen embrittlement
Hydrogen purity plays a critical role
A Sandia National Lab technical reference manual.
Hydrogen embrittlement, NASA
Corrosion
Electrochemistry
Hydrogen
Materials degradation
Metalworking | Hydrogen embrittlement | [
"Chemistry",
"Materials_science",
"Engineering"
] | 4,233 | [
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Materials degradation"
] |
1,063,654 | https://en.wikipedia.org/wiki/Rossby%20parameter | The Rossby parameter (or simply beta) is a number used in geophysics and meteorology which arises due to the meridional variation of the Coriolis force caused by the spherical shape of the Earth. It is important in the generation of Rossby waves. The Rossby parameter is given by

$\beta = \frac{\partial f}{\partial y} = \frac{1}{a}\frac{d}{d\varphi}(2\omega\sin\varphi) = \frac{2\omega\cos\varphi}{a}$

where $f$ is the Coriolis parameter, $\varphi$ is the latitude, $\omega$ is the angular speed of the Earth's rotation, and $a$ is the mean radius of the Earth. Although both involve Coriolis effects, the Rossby parameter describes the variation of the effects with latitude (hence the latitudinal derivative), and should not be confused with the Rossby number.
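A short numerical sketch of this definition, using standard values for the rotation rate and mean radius of the Earth (the 45° latitude is just an example):

```python
import math

OMEGA = 7.2921e-5        # Earth's angular speed of rotation, rad/s
EARTH_RADIUS = 6.371e6   # mean radius of the Earth, m

def rossby_parameter(latitude_deg: float) -> float:
    """beta = df/dy = 2*Omega*cos(phi)/a, the meridional gradient of the Coriolis parameter."""
    phi = math.radians(latitude_deg)
    return 2.0 * OMEGA * math.cos(phi) / EARTH_RADIUS

print(rossby_parameter(45.0))  # ~1.6e-11 per metre per second at mid-latitudes
```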
See also
Beta plane
References
Atmospheric dynamics | Rossby parameter | [
"Chemistry"
] | 144 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
16,434,453 | https://en.wikipedia.org/wiki/Superconductor%20classification | Superconductors can be classified in accordance with several criteria that depend on physical properties, current understanding, and the expense of cooling them or their material.
By their magnetic properties
Type I superconductors: those having just one critical field (Hc) and changing abruptly from one state to the other when it is reached.
Type II superconductors: having two critical fields, Hc1 and Hc2, being a perfect superconductor below the lower critical field (Hc1), leaving the superconducting state completely for a normally conducting state above the upper critical field (Hc2), and being in a mixed state between the critical fields.
Type-1.5 superconductors: multicomponent superconductors characterized by two or more coherence lengths.
By their agreement with conventional models
Conventional superconductors: those which can be fully explained with BCS theory or related theories.
Unconventional superconductors: those which fail to be explained using such theories, such as:
Heavy fermion superconductors
This criterion is useful as BCS theory has successfully explained the properties of conventional superconductors since 1957, yet there have been no satisfactory theories to fully explain unconventional superconductors. In most cases conventional superconductors are type I, but there are exceptions such as niobium, which is both conventional and type II.
By their critical temperature
Low-temperature superconductors, or LTS: those whose critical temperature is below 77 K.
High-temperature superconductors, or HTS: those whose critical temperature is above 77 K.
Room-temperature superconductors: those whose critical temperature is above 273 K.
77 K is used as the demarcation point to emphasize whether or not superconductivity in the materials can be achieved with liquid nitrogen (whose boiling point is 77K), which is much more feasible than liquid helium (an alternative to achieve the temperatures needed to get low-temperature superconductors).
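A trivial sketch of this temperature-based classification, using the 77 K and 273 K demarcation points above; the example critical temperatures (niobium at 9.2 K, YBa2Cu3O7 at about 92 K) are given only for illustration:

```python
def classify_by_critical_temperature(tc_kelvin: float) -> str:
    """Classify a superconductor by its critical temperature Tc."""
    if tc_kelvin > 273.0:
        return "room-temperature superconductor"
    if tc_kelvin > 77.0:
        return "high-temperature superconductor (HTS)"
    return "low-temperature superconductor (LTS)"

print(classify_by_critical_temperature(9.2))   # niobium -> LTS
print(classify_by_critical_temperature(92.0))  # YBa2Cu3O7 -> HTS
```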
By material constituents and structure
Some pure elements, such as lead or mercury (but not all, as some never reach the superconducting phase).
Some allotropes of carbon, such as fullerenes, nanotubes, or diamond.
Most superconductors made of pure elements are type I (except niobium, technetium, vanadium, silicon, and the above-mentioned carbon allotropes).
Alloys, such as
Niobium-titanium (NbTi), whose superconducting properties were discovered in 1962.
Ceramics (often insulators in the normal state), which include
Cuprates i.e. copper oxides (often layered, not isotropic)
The YBCO family, which are several yttrium-barium-copper oxides, especially YBa2Cu3O7. They are arguably the most famous high-temperature superconductors.
Nickelates (RNiO2, R = rare-earth ion), where the Sr-doped infinite-layer nickelate NdNiO2 undergoes a superconducting transition at 9–15 K. In the family of Ruddlesden-Popper phase analogs, Nd6Ni5O12 (n = 5) becomes superconducting at 13 K. Note that this is not a complete list and is a topic of current research.
Iron-based superconductors, including the oxypnictides.
Magnesium diboride (MgB2), whose critical temperature is 39 K, making it the conventional superconductor with the highest known critical temperature.
non-cuprate oxides such as BKBO.
Palladates – palladium compounds.
others, such as the "metallic" compounds and which are both superconductors below .
See also
Conventional superconductor
covalent superconductors
List of superconductors
High-temperature superconductivity
Room temperature superconductor
Superconductivity
Technological applications of superconductivity
Timeline of low-temperature technology
Type-I superconductor
Type-II superconductor
Type-1.5 superconductor
Heavy fermion superconductor
Organic superconductor
Unconventional superconductor
References
Superconductivity | Superconductor classification | [
"Physics",
"Materials_science",
"Engineering"
] | 877 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
16,434,531 | https://en.wikipedia.org/wiki/Perfusion%20scanning | Perfusion is the passage of fluid through the lymphatic system or blood vessels to an organ or a tissue. The practice of perfusion scanning is the process by which this perfusion can be observed, recorded and quantified. The term perfusion scanning encompasses a wide range of medical imaging modalities.
Applications
With the ability to ascertain data on the blood flow to vital organs such as the heart and the brain, doctors are able to make quicker and more accurate choices on treatment for patients. Nuclear medicine has been leading perfusion scanning for some time, although the modality has certain pitfalls. It is often dubbed 'unclear medicine' as the scans produced may appear to the untrained eye as just fluffy and irregular patterns. More recent developments in CT and MRI have meant clearer images and solid data, such as graphs depicting blood flow, and blood volume charted over a fixed period of time.
Methods
Microspheres
CT
MRI
Nuclear medicine or NM
Microsphere perfusion
Using radioactive microspheres is an older method of measuring perfusion than the more recent imaging techniques. This process involves labeling microspheres with radioactive isotopes and injecting these into the test subject. Perfusion measurements are taken by comparing the radioactivity of selected regions within the body to radioactivity of blood samples withdrawn at the time of microsphere injection.
Later, techniques were developed to substitute radioactively labeled microspheres for fluorescent microspheres.
CT perfusion
The method by which perfusion to an organ measured by CT is still a relatively new concept, although the first dynamic imaging studies of cerebral perfusion were reported on in 1979 by E. Ralph Heinz et al. from the Duke University Medical Center, Durham, North Carolina, itself citing a reference on a presentation on "Dynamic Computed Tomography" at the XI. Symposium Neuroradiologicum in Wiesbaden, June 4–10, 1978, which has not been submitted to the conference proceedings. The original framework and principles for CT perfusion analysis were concretely laid out in 1980 by Leon Axel at University of California San Francisco. It is most commonly carried out for neuroimaging using dynamic sequential scanning of a pre-selected region of the brain during the injection of a bolus of iodinated contrast material as it travels through the vasculature. Various mathematical models can then be used to process the raw temporal data to ascertain quantitative information such as rate of cerebral blood flow (CBF) following an ischemic stroke or aneurysmal subarachnoid hemorrhage. Practical CT perfusion as performed on modern CT scanners was first described by Ken Miles, Mike Hayball and Adrian Dixon from Cambridge UK and subsequently developed by many individuals including Matthias Koenig and Ernst Klotz in Germany, and later by Max Wintermark in Switzerland and Ting-Yim Lee in Ontario, Canada.
MRI perfusion
There are different techniques of Perfusion MRI, the most common being dynamic contrast-enhanced (DCE), dynamic susceptibility contrast imaging (DSC), and arterial spin labelling (ASL).
In DSC, Gadolinium contrast agent (Gd) is injected (usually intravenously) and a time series of fast T2*-weighted images is acquired. As Gadolinium passes through the tissues, it induces a reduction of T2* in the nearby water protons; the corresponding decrease in signal intensity observed depends on the local Gd concentration, which may be considered a proxy for perfusion. The acquired time series data are then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
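A minimal sketch of one common DSC post-processing step is shown below. It assumes the usual approximation that the concentration proxy ΔR2* = −ln(S/S0)/TE tracks the local contrast concentration; the synthetic signal curve, echo time, and scaling are all assumed illustration values, not a validated pipeline.

```python
import numpy as np

TE = 0.030                             # assumed echo time, seconds
t = np.arange(0, 60, 1.0)              # acquisition times, one image per second
baseline = 100.0
# Synthetic T2*-weighted signal with a transient drop as the contrast bolus passes.
signal = baseline - 35.0 * np.exp(-0.5 * ((t - 25.0) / 4.0) ** 2)

s0 = signal[:10].mean()                # pre-bolus baseline signal
delta_r2s = -np.log(signal / s0) / TE  # concentration proxy per time point

rel_bv = float(delta_r2s.sum() * 1.0)  # relative blood volume ~ area under the curve (dt = 1 s)
ttp = float(t[delta_r2s.argmax()])     # time to peak
print(rel_bv, ttp)
```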
DCE-MRI also uses intravenous Gd contrast, but the time series is T1-weighted and gives increased signal intensity corresponding to local Gd concentration. Modelling of DCE-MRI yields parameters related to vascular permeability and extravasation transfer rate (see main article on perfusion MRI).
Arterial spin labelling (ASL) has the advantage of not relying on an injected contrast agent, instead inferring perfusion from a drop in signal observed in the imaging slice arising from inflowing spins (outside the imaging slice) having been selectively saturated. A number of ASL schemes are possible, the simplest being flow-sensitive alternating inversion recovery (FAIR), which requires two acquisitions of identical parameters with the exception of the out-of-slice saturation; the difference in the two images is theoretically only from inflowing spins, and may be considered a 'perfusion map'.
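In the simplest form described above, the perfusion-weighted map is just the voxel-wise difference of the two acquisitions. The tiny sketch below uses synthetic random images in place of real data; the image size, signal levels, and perfusion-related offset are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 64x64 acquisitions, identical apart from the effect of inflowing (saturated) spins.
image_a = 1000.0 + rng.normal(0.0, 5.0, (64, 64))
image_b = image_a - 12.0 + rng.normal(0.0, 5.0, (64, 64))  # assumed perfusion-related signal change

perfusion_map = image_a - image_b   # difference image, interpreted as a perfusion-weighted map
print(perfusion_map.mean())         # ~12, the assumed perfusion-related difference
```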
NM perfusion
Nuclear medicine uses radioactive isotopes for the diagnosis and treatment of patients. Whereas radiology provides data mostly on structure, nuclear medicine provides complementary information about function.
All nuclear medicine scans give information to the referring clinician on the function of the system they are imaging.
Specific techniques used are generally either of the following:
Single-photon emission computed tomography (SPECT), which creates 3-dimensional images of the target organ or organ system.
Scintigraphy, creating 2-dimensional images.
Uses of NM perfusion scanning include Ventilation/perfusion scans of lungs, myocardial perfusion imaging of the heart, and functional brain imaging.
Ventilation/perfusion scans
Ventilation/perfusion scans, sometimes called a VQ (V=Ventilation, Q=perfusion) scan, is a way of identifying mismatched areas of blood and air supply to the lungs. It is primarily used to detect a pulmonary embolus.
The perfusion part of the study uses a radioisotope tagged to the blood which shows where in the lungs the blood is perfusing. If the scan shows an area missing a supply, this means there is a blockage which is not allowing the blood to perfuse that part of the organ.
Myocardial perfusion imaging
Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test.
A cardiac specific radiopharmaceutical is administered. E.g. 99mTc-tetrofosmin (Myoview, GE healthcare), 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb now Lantheus Medical Imaging). Following this, the heart rate is raised to induce myocardial stress, either by exercise or pharmacologically with adenosine, dobutamine or dipyridamole (aminophylline can be used to reverse the effects of dipyridamole).
SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by comparing stress images to a further set of images obtained at rest. As the radionuclide redistributes slowly, it is not usually possible to perform both sets of images on the same day, hence a second attendance is required 1–7 days later (although, with a Tl-201 myocardial perfusion study with dipyridamole, rest images can be acquired as little as two hours post-stress). However, if stress imaging is normal, it is unnecessary to perform rest imaging, as it too will be normal – thus stress imaging is normally performed first.
MPI has been demonstrated to have an overall accuracy of about 83% (sensitivity: 85%; specificity: 72%), and is comparable (or better) than other non-invasive tests for ischemic heart disease, including stress echocardiography.
Functional brain imaging
Usually the gamma-emitting tracer used in functional brain imaging is technetium (99mTc) exametazime (99mTc-HMPAO, hexamethylpropylene amine oxime). Technetium-99m (99mTc) is a metastable nuclear isomer which emits gamma rays which can be detected by a gamma camera. When it is attached to exametazime, this allows 99mTc to be taken up by brain tissue in a manner proportional to brain blood flow, in turn allowing brain blood flow to be assessed with the nuclear gamma camera.
Because blood flow in the brain is tightly coupled to local brain metabolism and energy use, 99mTc-exametazime (as well as the similar 99mTc-EC tracer) is used to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. Meta analysis of many reported studies suggests that SPECT with this tracer is about 74% sensitive at diagnosing Alzheimer's disease, vs. 81% sensitivity for clinical exam (mental testing, etc.). More recent studies have shown accuracy of SPECT in Alzheimer diagnosis as high as 88%. In meta analysis, SPECT was superior to clinical exam and clinical criteria (91% vs. 70%) in being able to differentiate Alzheimer's disease from vascular dementias. This latter ability relates to SPECT's imaging of local metabolism of the brain, in which the patchy loss of cortical metabolism seen in multiple strokes differs clearly from the more even or "smooth" loss of non-occipital cortical brain function typical of Alzheimer's disease.
99mTc-exametazime SPECT scanning competes with fludeoxyglucose (FDG) PET scanning of the brain, which works to assess regional brain glucose metabolism, to provide very similar information about local brain damage from many processes. SPECT is more widely available, however, for the basic reason that the radioisotope generation technology is longer-lasting and far less expensive in SPECT, and the gamma scanning equipment is less expensive as well. The reason for this is that 99mTc is extracted from relatively simple technetium-99m generators which are delivered to hospitals and scanning centers weekly, to supply fresh radioisotope, whereas FDG PET relies on FDG which must be made in an expensive medical cyclotron and "hot-lab" (automated chemistry lab for radiopharmaceutical manufacture), then must be delivered directly to scanning sites, with delivery-fraction for each trip limited by its natural short 110 minute half-life.
Testicular torsion detection
Radionuclide scanning of the scrotum is the most accurate imaging technique to diagnose testicular torsion, but it is not routinely available. The agent of choice for this purpose is technetium-99m pertechnetate. Initially it provides a radionuclide angiogram, followed by a static image after the radionuclide has perfused the tissue. In the healthy patient, initial images show symmetric flow to the testes, and delayed images show uniformly symmetric activity.
See also
Functional magnetic resonance imaging
Ischemia-reperfusion injury of the appendicular musculoskeletal system
MUGA scan
Perfusion
Positron emission tomography
Stroke
Ventilation/perfusion ratio
References
Medical tests
Medical physics
Medical imaging | Perfusion scanning | [
"Physics"
] | 2,341 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
16,437,835 | https://en.wikipedia.org/wiki/Thermal%20copper%20pillar%20bump | A thermal copper pillar bump, also known as a "thermal bump", is a thermoelectric device made from thin-film thermoelectric material embedded in flip chip interconnects (in particular copper pillar solder bumps) for use in electronics and optoelectronic packaging, including: flip chip packaging of CPU and GPU integrated circuits (chips), laser diodes, and semiconductor optical amplifiers (SOA). Unlike conventional solder bumps that provide an electrical path and a mechanical connection to the package, thermal bumps act as solid-state heat pumps and add thermal management functionality locally on the surface of a chip or to another electrical component. The diameter of a thermal bump is 238 μm and 60 μm high.
Thermal bumps use the thermoelectric effect, which is the direct conversion of temperature differences to electric voltage and vice versa. Simply put, a thermoelectric device creates a voltage when there is a different temperature on each side, or when a voltage is applied to it, it creates a temperature difference. This effect can be used to generate electricity, to measure temperature, to cool objects, or to heat them.
For each bump, thermoelectric cooling (TEC) occurs when a current is passed through the bump. The thermal bump pulls heat from one side of the device and transfers it to the other as current is passed through the material. This is known as the Peltier effect. The direction of heating and cooling is determined by the direction of current flow and the sign of the majority electrical carrier in the thermoelectric material. Thermoelectric power generation (TEG) on the other hand occurs when the thermal bump is subjected to a temperature gradient (i.e., the top is hotter than the bottom). In this instance, the device generates current, converting heat into electrical power. This is termed the Seebeck effect.
The thermal bump was developed by Nextreme Thermal Solutions as a method for integrating active thermal management functionality at the chip level in the same manner that transistors, resistors and capacitors are integrated in conventional circuit designs today. Nextreme chose the copper pillar bump as an integration strategy due to its widespread acceptance by Intel, Amkor and other industry leaders as the method for connecting microprocessors and other advanced electronics devices to various surfaces during a process referred to as “flip-chip” packaging. The thermal bump can be integrated as a part of the standard flip-chip process (Figure 1) or integrated as discrete devices.
The efficiency of a thermoelectric device is measured by the heat moved (or pumped) divided by the amount of electrical power supplied to move this heat. This ratio is termed the coefficient of performance or COP and is a measured characteristic of a thermoelectric device. The COP is inversely related to the temperature difference that the device produces. As a cooling device is moved farther from the heat source, parasitic losses between the cooler and the heat source necessitate additional cooling power: the greater the distance between source and cooler, the more cooling is required. For this reason, the cooling of electronic devices is most efficient when it occurs closest to the source of heat generation.
Use of the thermal bump does not displace system level cooling, which is still needed to move heat out of the system; rather it introduces a fundamentally new methodology for achieving temperature uniformity at the chip and board level. In this manner, overall thermal management of the system becomes more efficient. In addition, while conventional cooling solutions scale with the size of the system (bigger fans for bigger systems, etc.), the thermal bump can scale at the chip level by using more thermal bumps in the overall design.
A brief history of solder and flip chip/chip scale packaging
Solder bumping technology (the process of joining a chip to a substrate without shorting using solder) was first conceived and implemented by IBM in the early 1960s. Three versions of this type of solder joining were developed. The first was to embed copper balls in the solder bumps to provide a positive stand-off. The second solution, developed by Delco Electronics (General Motors) in the late 1960s, was similar to embedding copper balls except that the design employed a rigid silver bump. The bump provided a positive stand-off and was attached to the substrate by means of solder that was screen-printed onto the substrate. The third solution was to use a screened glass dam near the electrode tips to act as a ‘‘stop-off’’ to prevent the ball solder from flowing down the electrode. By then the Ball Limiting Metallurgy (BLM) with a high-lead (Pb) solder system and a copper ball had proven to work well. Therefore, the ball was simply removed and the solder evaporation process extended to form pure solder bumps that were approximately 125μm high. This system became known as the controlled collapse chip connection (C3 or C4).
Until the mid-1990s, this type of flip-chip assembly was practiced almost exclusively by IBM and Delco. Around this time, Delco sought to commercialize its technology and formed Flip Chip Technologies with Kulicke & Soffa Industries as a partner. At the same time, MCNC (which had developed a plated version of IBM’s C4 process) received funding from DARPA to commercialize its technology. These two organizations, along with APTOS (Advanced Plating Technologies on Silicon), formed the nascent out-sourcing market.
During this same time, companies began to look at reducing or streamlining their packaging, from the earlier multi-chip-on-ceramic packages that IBM had originally developed C4 to support, to what were referred to as Chip Scale Packages (CSP). There were a number of companies developing products in this area. These products could usually be put into one of two camps: either they were scaled down versions of the multi-chip on ceramic package (of which the Tessera package would be one example); or they were the streamlined versions developed by Unitive Electronics, et al. (where the package wiring had been transferred to the chip, and after bumping, they were ready to be placed).
One of the issues with the CSP type of package (which was intended to be soldered directly to an FR4 or flex circuit) was that for high-density interconnects, the soft solder bump provided less of a stand-off as the solder bump diameter and pitch were decreased. Different solutions were employed including one developed by Focus Interconnect Technology (former APTOS engineers), which used a high aspect ratio plated copper post to provide a larger fixed standoff than was possible for a soft solder collapse joint.
Today, flip chip is a well established technology and collapsed soft solder connections are used in the vast majority of assemblies. The copper post stand-off developed for the CSP market has found a home in high-density interconnects for advanced micro-processors and is used today by IBM for its CPU packaging.
Copper pillar solder bumping
Trends in high-density interconnects have led to the use of copper pillar solder bumps (CPB) for CPU and GPU packaging. CPBs are an attractive replacement for traditional solder bumps because they provide a fixed stand-off independent of pitch. This is extremely important as most of the high-end products are underfilled and a smaller standoff may create difficulties in getting the underfill adhesive to flow under the die.
Figure 2 shows an example of a CPB fabricated by Intel and incorporated into their Presler line of microprocessors, among others. The cross section shows copper and a copper pillar (approximately 60 μm high) electrically connected through an opening (or via) in the chip passivation layer at the top of the picture. At the bottom is another copper trace on the package substrate with solder between the two copper layers.
Thin-film thermoelectric technology
Thin films are thin material layers ranging from fractions of a nanometer to several micrometers in thickness. Thin-film thermoelectric materials are grown by conventional semiconductor deposition methods and fabricated using conventional semiconductor micro-fabrication techniques.
Thin-film thermoelectrics have been demonstrated to provide high heat pumping capacity that far exceeds the capacities provided by traditional bulk pellet TE products. The benefit of thin-films versus bulk materials for thermoelectric manufacturing is expressed in Equation 1. Here the Qmax (maximum heat pumped by a module) is shown to be inversely proportional to the thickness of the film, L.
Q_max = (α² T_c² / 2ρ) · (A / L)    (Eq. 1)
where α is the Seebeck coefficient, ρ the electrical resistivity, T_c the cold-side temperature, A the active area and L the film thickness.
As such, TE coolers manufactured with thin films can easily have 10x – 20x higher Qmax values for a given active area A. This makes thin-film TECs ideally suited for applications involving high heat-flux flows. In addition to the increased heat pumping capability, the use of thin films allows for truly novel implementations of TE devices. Instead of a bulk module that is 1–3 mm in thickness, a thin-film TEC can be fabricated less than 100 μm in thickness.
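The scaling expressed in Eq. 1 can be illustrated with a short calculation. The material parameters below are assumed, generic bismuth-telluride-like values rather than measured data, and electrical and thermal contact resistances are ignored.

# Illustrative Q_max scaling with element thickness L (Eq. 1), ignoring
# contact resistances. All material values are assumptions.
seebeck = 200e-6      # Seebeck coefficient alpha, V/K (assumed)
resistivity = 1.0e-5  # electrical resistivity rho, ohm*m (assumed)
T_cold = 300.0        # cold-side temperature, K
area = 1.0e-8         # active area A, m^2 (assumed)

def q_max(thickness_m):
    # Q_max = (alpha^2 * Tc^2 / (2*rho)) * (A / L)
    return (seebeck**2 * T_cold**2 / (2.0 * resistivity)) * (area / thickness_m)

bulk = q_max(1.0e-3)   # ~1 mm bulk pellet
film = q_max(10.0e-6)  # ~10 um thin film
print(bulk, film, film / bulk)  # ratio equals L_bulk/L_film = 100 in this ideal limit

In this idealized limit the gain is simply the ratio of element thicknesses; parasitic contact resistances are what cap the practical benefit nearer the 10x–20x quoted above.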
In its simplest form, the P or N leg of a TE couple (the basic building block of all thin-film TE devices) is a layer of thin-film TE material with a solder layer above and below, providing electrical and thermal functionality.
Thermal copper pillar bump
The thermal bump is compatible with the existing flip-chip manufacturing infrastructure, extending the use of conventional solder bumped interconnects to provide active, integrated cooling of a flip-chipped component using the widely accepted copper pillar bumping process. The result is higher performance and efficiency within the existing semiconductor manufacturing paradigm. The thermal bump also enables power generating capabilities within copper pillar bumps for energy recycling applications.
Thermal bumps have been shown to achieve a temperature differential of 60 °C between the top and bottom headers; demonstrated power pumping capabilities exceeding 150 W/cm2; and when subjected to heat, have demonstrated the capability to generate up to 10 mW of power per bump.
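Combining the demonstrated heat-flux figure with the bump diameter quoted in the introduction gives a rough per-bump heat-pumping number; this is a back-of-the-envelope estimate, not a device specification.

import math

diameter_m = 238e-6          # thermal bump diameter from the text
flux_w_per_cm2 = 150.0       # demonstrated heat-pumping flux from the text

area_cm2 = math.pi * (diameter_m / 2.0) ** 2 * 1e4   # convert m^2 to cm^2
heat_per_bump_w = flux_w_per_cm2 * area_cm2
print(f"{heat_per_bump_w * 1e3:.0f} mW pumped per bump")  # roughly 65-70 mW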
Thermal copper pillar bump structure
Figure 3 shows an SEM cross-section of a TE leg. Here it is demonstrated that the thermal bump is structurally identical to a CPB with an extra layer, the TE layer, incorporated into the stack-up. The addition of the TE layer transforms a standard copper pillar bump into a thermal bump. This element, when properly configured electrically and thermally, provides active thermoelectric heat transfer from one side of the bump to the other side. The direction of heat transfer is dictated by the doping type of the thermoelectric material (either a P-type or N-type semiconductor) and the direction of electric current passing through the material. This type of thermoelectric heat transfer is known as the Peltier effect. Conversely, if heat is allowed to pass from one side of the thermoelectric material to the other, a current will be generated in the material in a phenomenon known as the Seebeck effect. The Seebeck effect is essentially the reverse of the Peltier effect. In this mode, electrical power is generated from the flow of heat in the TE element. The structure shown in Figure 3 is capable of operating in both the Peltier and Seebeck modes, though not simultaneously.
Figure 4 shows a schematic of a typical CPB and a thermal bump for comparison. These structures are similar, with both having copper pillars and solder connections. The primary difference between the two is the introduction of either a P- or N-type thermoelectric layer between two solder layers. The solders used with CPBs and thermal bumps can be any one of a number of commonly used solders including, but not limited to, Sn, SnPb eutectic, SnAg or AuSn.
Figure 5 shows a device equipped with a thermal bump. The thermal flow is shown by the arrows labeled “heat.” Metal traces, which can be several micrometres high, can be stacked or interdigitated to provide highly conductive pathways for collecting heat from the underlying circuit and funneling that heat to the thermal bump.
The metal traces shown in the figure for conducting electric current into the thermal bump may or may not be directly connected to the circuitry of the chip. In the case where there are electrical connections to the chip circuitry, on-board temperature sensors and driver circuitry can be used to control the thermal bump in a closed-loop fashion to maintain optimal performance. In addition, the heat that is pumped by the thermal bump and the additional heat created by the thermal bump in the course of pumping that heat will need to be rejected into the substrate or board. Since the performance of the thermal bump can be improved by providing a good thermal path for the rejected heat, it is beneficial to provide highly thermally conductive pathways on the backside of the thermal bump. The substrate could be a highly conductive ceramic substrate like AlN or a metal (e.g., Cu, CuW, CuMo, etc.) with a dielectric. In this case, the high thermal conductance of the substrate will act as a natural pathway for the rejected heat. The substrate might also be a multilayer substrate like a printed wiring board (PWB) designed to provide a high-density interconnect. In this case, the thermal conductivity of the PWB may be relatively poor, so adding thermal vias (e.g. metal plugs) can provide excellent pathways for the rejected heat.
Applications
Thermal bumps can be used in a number of different ways to provide chip cooling and power generation.
General cooling
Thermal bumps can be evenly distributed across the surface of a chip to provide a uniform cooling effect. In this case, the thermal bumps may be interspersed with standard bumps that are used for signal, power and ground. This allows the thermal bumps to be placed directly under the active circuitry of the chip for maximum effectiveness. The number and density of thermal bumps are based on the heat load from the chip. Each P/N couple can provide a specific heat pumping (Q) at a specific temperature differential (ΔT) at a given electric current. Temperature sensors on the chip (“on board” sensors) can provide direct measurement of the thermal bump performance and provide feedback to the driver circuit.
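A first-pass sizing of such a uniform array might look like the sketch below; the chip heat load, per-bump pumping capacity and chip area are all assumed values, since the real numbers depend on the chosen drive current and temperature differential.

import math

chip_heat_w = 12.0        # heat load to be pumped, W (assumed)
q_per_bump_w = 0.05       # heat pumped per bump at the design current and dT, W (assumed)
chip_area_mm2 = 100.0     # active chip area, mm^2 (assumed)

n_bumps = math.ceil(chip_heat_w / q_per_bump_w)
density = n_bumps / chip_area_mm2
print(n_bumps, "bumps, about", round(density, 1), "bumps per mm^2")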
Precision temperature control
Since thermal bumps can either cool or heat the chip depending on the current direction, they can be used to provide precision control of temperature for chips that must operate within specific temperature ranges irrespective of ambient conditions. For example, this is a common problem for many optoelectronic components.
Hotspot cooling
In microprocessors, graphics processors, and other high-end chips, hotspots can occur as power densities vary significantly across a chip. These hotspots can severely limit the performance of the devices. Because of the small size of the thermal bumps and the relatively high density at which they can be placed on the active surface of the chip, these structures are ideally suited for cooling hotspots. In such a case, the distribution of the thermal bumps may not need to be even. Rather, the thermal bumps would be concentrated in the area of the hotspot while areas of lower heat density would have fewer thermal bumps per unit area. In this way, cooling from the thermal bumps is applied only where needed, thereby reducing the added power necessary to drive the cooling and reducing the general thermal overhead on the system.
Power generation
In addition to chip cooling, thermal bumps can also be applied to high heat-flux interconnects to provide a constant, steady source of power for energy scavenging applications. Such a source of power, typically in the mW range, can trickle charge batteries for wireless sensor networks and other battery operated systems.
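As an order-of-magnitude sketch, the 10 mW-per-bump figure quoted above translates into a bump count for a given load; the power budget of the sensor node below is an assumption.

import math

node_power_mw = 100   # average power budget of a hypothetical sensor node, mW (assumed)
per_bump_mw = 10      # up to 10 mW generated per bump (from the text)

bumps_needed = math.ceil(node_power_mw / per_bump_mw)
print(bumps_needed, "bumps at full output")  # 10 bumps for this assumed load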
References
External links
Kulicke & Soffa
MCNC
Aptos Technology
Nextreme Thermal Solutions
Amkor Technology Inc.
White Papers, Articles and Application Notes
Electronics manufacturing
Semiconductors
Thermodynamics | Thermal copper pillar bump | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,297 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Thermodynamics",
"Electronics manufacturing",
"Solid state engineering",
"Matter",
"Dynamical systems"
] |
16,440,097 | https://en.wikipedia.org/wiki/Prydniprovsky%20Chemical%20Plant%20radioactive%20dumps | The now-defunct Prydniprovsky Chemical Plant (; Prydniprovsky khimichnyi zavod, PHZ, also PChP) in the city of Kamianske, Ukraine, processed uranium ore for the Soviet nuclear program from 1948 through 1991, preparing yellowcake.
Its processing wastes are now stored in nine open-air dumping grounds containing about 36 million tonnes of sand-like low-radioactive residue, occupying an area of 2.5 million square meters. The sites, improperly constructed from the very beginning, were abandoned by the industry long ago and remain in very poor condition. The top concern is the dumps’ proximity to both the large Dnieper River and city residential areas. According to government experts, the dams separating the grounds from soil water are already leaking, polluting the Dnieper basin. It is believed that further deterioration of the dams, even without any external accident, may cause a devastating radioactive mudslide. The Ukrainian government is now tightening control over the grounds and seeking international aid for projects aimed at securing the sites and gradually re-processing the PHZ wastes. The International Atomic Energy Agency has recently evaluated the condition of the sites and is considering dispatching a major observation and aid mission to Kamianske.
From 1946 to 1972, the company was engaged in uranium processing (production of uranium oxide concentrate) – the plant processed 65% of the uranium ores in the Soviet Union. Attempts to recycle fuel elements began in 1974, but the idea was abandoned due to the growing number of cancer cases in the city.
The isolated dump grounds (about nine altogether, at a depth of 3 m) of the former plant are now located in different parts of the city and operated by the purpose-created "Barrier" State Enterprise – a new name whose meaning is obscure and which is not yet widely known. That is why the sites, the company, and the whole problem are still commonly referred to as the "Prydniprovsky Chemical Plant (PHZ) wastes".
In 1964 the first treatment facilities appeared at the enterprise. In 2003, the Cabinet of Ministers approved an 11-year program on "bringing hazardous facilities of the Prydniprovsky Chemical Plant to an environmentally safe state and ensuring protection of the population from the harmful effects of ionizing radiation".
See also
Threat of the Dnieper reservoirs
References
Environment of Ukraine
Nuclear technology in Ukraine
Kamianske
Radioactive waste
Chemical engineering
Dnieper basin
Chemical companies of Ukraine
Nuclear technology in the Soviet Union
Chemical companies of the Soviet Union
Government-owned companies of Ukraine | Prydniprovsky Chemical Plant radioactive dumps | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 535 | [
"Nuclear physics",
"Chemical engineering",
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"nan",
"Hazardous waste",
"Radioactive waste"
] |
7,451,902 | https://en.wikipedia.org/wiki/Pipe%20network%20analysis | In fluid dynamics, pipe network analysis is the analysis of the fluid flow through a hydraulics network, containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design.
Description
To direct water to many users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes. This network creates a special class of problems in hydraulic design, with solution methods typically referred to as pipe network analysis. Water utilities generally make use of specialized software to automatically solve these problems. However, many such problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver, or a modern graphing calculator.
Deterministic network analysis
Once the friction factors of the pipes are obtained (or calculated from pipe friction laws such as the Darcy-Weisbach equation), we can consider how to calculate the flow rates and head losses on the network. Generally the head losses (potential differences) at each node are neglected, and a solution is sought for the steady-state flows on the network, taking into account the pipe specifications (lengths and diameters), pipe friction properties and known flow rates or head losses.
The steady-state flows on the network must satisfy two conditions:
At any junction, the total flow into a junction equals the total flow out of that junction (law of conservation of mass, or continuity law, or Kirchhoff's first law)
Between any two junctions, the head loss is independent of the path taken (law of conservation of energy, or Kirchhoff's second law). This is equivalent mathematically to the statement that on any closed loop in the network, the head loss around the loop must vanish.
If there are sufficient known flow rates, so that the system of equations given by (1) and (2) above is closed (number of unknowns = number of equations), then a deterministic solution can be obtained.
The classical approach for solving these networks is to use the Hardy Cross method. In this formulation, first you go through and create guess values for the flows in the network. The flows are expressed via the volumetric flow rates Q. The initial guesses for the Q values must satisfy the Kirchhoff laws (1). That is, if Q7 enters a junction and Q6 and Q4 leave the same junction, then the initial guess must satisfy Q7 = Q6 + Q4. After the initial guess is made, then, a loop is considered so that we can evaluate our second condition. Given a starting node, we work our way around the loop in a clockwise fashion, as illustrated by Loop 1. We add up the head losses according to the Darcy–Weisbach equation for each pipe if Q is in the same direction as our loop like Q1, and subtract the head loss if the flow is in the reverse direction, like Q4. In other words, we add the head losses around the loop in the direction of the loop; depending on whether the flow is with or against the loop, some pipes will have head losses and some will have head gains (negative losses).
To satisfy the Kirchhoff's second laws (2), we should end up with 0 about each loop at the steady-state solution. If the actual sum of our head loss is not equal to 0, then we will adjust all the flows in the loop by an amount given by the following formula, where a positive adjustment is in the clockwise direction.
ΔQ = − Σh / (n · Σ|h/Q|)
where h and Q are the head loss and flow in each pipe of the loop (signed positive in the clockwise direction),
n is 1.85 for Hazen-Williams and
n is 2 for Darcy–Weisbach.
The clockwise specifier (c) means only the flows that are moving clockwise in our loop, while the counter-clockwise specifier (cc) is only the flows that are moving counter-clockwise.
This adjustment doesn't solve the problem, since most networks have several loops. It is okay to use this adjustment, however, because the flow changes won't alter condition 1, and therefore, the other loops still satisfy condition 1. However, we should use the results from the first loop before we progress to other loops.
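The procedure can be sketched for a single loop in a few lines of Python; the two-pipe network, its resistance coefficients and the use of the Darcy–Weisbach exponent n = 2 are all assumptions made for illustration.

# Hardy Cross correction for one loop: two assumed pipes in parallel
# between junctions A and B, with head loss modelled as h = r * Q**n.
n = 2.0                    # Darcy-Weisbach exponent
r = [4.0, 1.0]             # assumed resistance coefficients for pipes 1 and 2
q_in = 3.0                 # total inflow at junction A (arbitrary units)

Q = [1.5, 1.5]             # initial guess satisfying continuity (Q[0] + Q[1] = q_in)

for _ in range(50):
    h1 = r[0] * Q[0] ** n          # pipe 1 traversed with the loop (clockwise): positive
    h2 = -r[1] * Q[1] ** n         # pipe 2 traversed against its flow: negative
    dQ = -(h1 + h2) / (n * (abs(h1 / Q[0]) + abs(h2 / Q[1])))
    Q[0] += dQ                     # clockwise flow increases by dQ
    Q[1] -= dQ                     # counter-clockwise flow decreases by dQ, preserving continuity
    if abs(dQ) < 1e-12:
        break

print(Q)  # converges to [1.0, 2.0]: equal head loss (4*1**2 = 1*2**2) on both paths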
An adaptation of this method is needed to account for water reservoirs attached to the network, which are joined in pairs by the use of 'pseudo-loops' in the Hardy Cross scheme. This is discussed further on the Hardy Cross method site.
The modern method is simply to create a set of conditions from the above Kirchhoff laws (junctions and head-loss criteria). Then, use a root-finding algorithm to find Q values that satisfy all the equations. The literal friction loss equations contain a Q² term, but we want to preserve any changes in direction. Create a separate equation for each loop where the head losses are added up, but instead of squaring Q, use |Q|·Q instead (with |Q| the absolute value of Q) in the formulation, so that any sign changes reflect appropriately in the resulting head-loss calculation.
Probabilistic network analysis
In many situations, especially for real water distribution networks in cities (which can extend between thousands to millions of nodes), the number of known variables (flow rates and/or head losses) required to obtain a deterministic solution will be very large. Many of these variables will not be known, or will involve considerable uncertainty in their specification. Furthermore, in many pipe networks, there may be considerable variability in the flows, which can be described by fluctuations about mean flow rates in each pipe. The above deterministic methods are unable to account for these uncertainties, whether due to lack of knowledge or flow variability.
For these reasons, a probabilistic method for pipe network analysis has recently been developed, based on the maximum entropy method of Jaynes. In this method, a continuous relative entropy function is defined over the unknown parameters. This entropy is then maximized subject to the constraints on the system, including Kirchhoff's laws, pipe friction properties and any specified mean flow rates or head losses, to give a probabilistic statement (probability density function) which describes the system. This can be used to calculate mean values (expectations) of the flow rates, head losses or any other variables of interest in the pipe network. This analysis has been extended using a reduced-parameter entropic formulation, which ensures consistency of the analysis regardless of the graphical representation of the network. A comparison of Bayesian and maximum entropy probabilistic formulations for the analysis of pipe flow networks has also been presented, showing that under certain assumptions (Gaussian priors), the two approaches lead to equivalent predictions of mean flow rates.
Other methods of stochastic optimization of water distribution systems rely on metaheuristic algorithms, such as simulated annealing and genetic algorithms.
See also
References
Further reading
N. Hwang, R. Houghtalen, "Fundamentals of Hydraulic Engineering Systems" Prentice Hall, Upper Saddle River, NJ. 1996.
L.F. Moody, "Friction factors for pipe flow," Trans. ASME, vol. 66, 1944.
C. F. Colebrook, "Turbulent flow in pipes, with particular reference to the transition region between smooth and rough pipe laws," Jour. Ist. Civil Engrs., London (Feb. 1939).
Eusuff, Muzaffar M.; Lansey, Kevin E. (2003). "Optimization of Water Distribution Network Design Using the Shuffled Frog Leaping Algorithm". Journal of Water Resources Planning and Management. 129 (3): 210-225.
Fluid dynamics
Hydraulics
Hydraulic engineering
Networks
Piping | Pipe network analysis | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,571 | [
"Hydrology",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Mechanical engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
11,135,100 | https://en.wikipedia.org/wiki/LOCOS | LOCOS, short for LOCal Oxidation of Silicon, is a microfabrication process where silicon dioxide is formed in selected areas on a silicon wafer having the Si-SiO2 interface at a lower point than the rest of the silicon surface. As of 2008 it was largely superseded by shallow trench isolation.
This technology was developed to insulate MOS transistors from each other and limit transistor cross-talk. The main goal is to create a silicon oxide insulating structure that penetrates under the surface of the wafer, so that the Si-SiO2 interface occurs at a lower point than the rest of the silicon surface. This cannot be easily achieved by etching field oxide. Thermal oxidation of selected regions surrounding transistors is used instead. The oxygen penetrates in depth of the wafer, reacts with silicon and transforms it into silicon oxide. In this way, an immersed structure is formed. For process design and analysis purposes, the oxidation of silicon surfaces can be modeled effectively using the Deal–Grove model.
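As an illustration, the Deal–Grove relation x² + Ax = B(t + τ) can be solved for the oxide thickness in a few lines; the coefficients below are assumed, illustrative values of the kind often quoted for wet oxidation near 1000 °C, not calibrated process data.

import math

def deal_grove_thickness(t_hours, A=0.226, B=0.287, tau=0.0):
    """Oxide thickness in micrometres from x**2 + A*x = B*(t + tau).
    A (um), B (um^2/h) and tau (h) are assumed illustrative values."""
    return 0.5 * (-A + math.sqrt(A * A + 4.0 * B * (t_hours + tau)))

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"{t:4.1f} h -> {deal_grove_thickness(t):.2f} um of SiO2")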
References
See also
Shallow trench isolation
microtechnology
semiconductor technology | LOCOS | [
"Materials_science",
"Engineering"
] | 222 | [
"Semiconductor technology",
"Materials science",
"Microtechnology"
] |
11,136,136 | https://en.wikipedia.org/wiki/CSPD%20%28molecule%29 | CSPD ([3-(1-chloro-3'-methoxyspiro[adamantane-4,4'-dioxetane]-3'-yl)phenyl] dihydrogen phosphate) is a chemical substance with formula C18H22ClO7P. It is a component of enhanced chemiluminescence enzyme-linked immunosorbent assay (ELISA) kits, used for the detection of minute amounts of various substances such as proteins.
Properties
The CSPD molecule contains the following functional groups: a phosphate group, a phenyl group, a spiro group, a methyl ether group, and a chlorine substituent. None of these groups carries a charge in the form shown; if any of them did carry a charge, this would change the compound's pH, 3D structure, mass and bond angles.
The toxin CSPD has been reported to affect persister cell formation via MqsR (a GCU-specific mRNA interferase in Escherichia coli and a crucial regulator of quorum sensing and biofilm formation); persister cells are cells that avoid stress and are characterized by reduced metabolism, among other factors.
References
Chemiluminescence
Adamantanes
Dioxetanes
Organic peroxides
Organochlorides
Organophosphates
Phenol esters
Spiro compounds | CSPD (molecule) | [
"Chemistry",
"Biology"
] | 288 | [
"Luminescence",
"Biotechnology stubs",
"Biochemistry stubs",
"Organic compounds",
"Chemiluminescence",
"Biochemistry",
"Organic peroxides",
"Spiro compounds"
] |
11,136,939 | https://en.wikipedia.org/wiki/Biochemical%20systems%20theory | Biochemical systems theory is a mathematical modelling framework for biochemical systems, based on ordinary differential equations (ODE), in which biochemical processes are represented using power-law expansions in the variables of the system.
This framework, which became known as Biochemical Systems Theory, has been developed since the 1960s by Michael Savageau, Eberhard Voit and others for the systems analysis of biochemical processes. According to Cornish-Bowden (2007) they "regarded this as a general theory of metabolic control, which includes both metabolic control analysis and flux-oriented theory as special cases".
Representation
The dynamics of a species is represented by a differential equation with the structure:
dX_i/dt = Σ_j μ_ij · γ_j · Π_k X_k^f_jk
where X_i represents one of the n_d variables of the model (metabolite concentrations, protein concentrations or levels of gene expression), and j runs over the n_f biochemical processes affecting the dynamics of the species. The parameters μ_ij (stoichiometric coefficients), γ_j (rate constants) and f_jk (kinetic orders) define the dynamics of the system.
The principal difference of power-law models with respect to other ODE models used in biochemical systems is that the kinetic orders can be non-integer numbers. A kinetic order can have even negative value when inhibition is modeled. In this way, power-law models have a higher flexibility to reproduce the non-linearity of biochemical systems.
Models using power-law expansions have been used during the last 35 years to model and analyze several kinds of biochemical systems, including metabolic networks, genetic networks and, more recently, cell signalling.
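A small, hypothetical two-variable model of the power-law form above can be integrated numerically as sketched below; all parameter values are invented for illustration and do not correspond to any published pathway.

from scipy.integrate import solve_ivp

# dX1/dt = a1*X2**g12 - b1*X1**h11   (production driven by X2, degradation of X1)
# dX2/dt = a2*X1**g21 - b2*X2**h22   (production driven by X1, degradation of X2)
a1, g12, b1, h11 = 2.0, -0.5, 1.0, 0.8   # assumed rate constants / kinetic orders
a2, g21, b2, h22 = 1.5, 0.4, 1.2, 1.0    # the negative kinetic order models inhibition

def rhs(t, x):
    x1, x2 = x
    return [a1 * x2 ** g12 - b1 * x1 ** h11,
            a2 * x1 ** g21 - b2 * x2 ** h22]

sol = solve_ivp(rhs, (0.0, 20.0), [0.5, 0.5])
print(sol.y[:, -1])   # approximate steady-state concentrations of X1 and X2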
See also
Dynamical systems
Ludwig von Bertalanffy
Systems theory
References
Literature
Books:
M.A. Savageau, Biochemical systems analysis: a study of function and design in molecular biology, Reading, MA, Addison–Wesley, 1976.
E.O. Voit (ed), Canonical Nonlinear Modeling. S-System Approach to Understanding Complexity, Van Nostrand Reinhold, NY, 1991.
E.O. Voit, Computational Analysis of Biochemical Systems. A Practical Guide for Biochemists and Molecular Biologists, Cambridge University Press, Cambridge, U.K., 2000.
N.V. Torres and E.O. Voit, Pathway Analysis and Optimization in Metabolic Engineering, Cambridge University Press, Cambridge, U.K., 2002.
Scientific articles:
M.A. Savageau, Biochemical systems analysis: I. Some mathematical properties of the rate law for the component enzymatic reactions in: J. Theor. Biol. 25, pp. 365–369, 1969.
M.A. Savageau, Development of fractal kinetic theory for enzyme-catalysed reactions and implications for the design of biochemical pathways in: Biosystems 47(1-2), pp. 9–36, 1998.
M.R. Atkinson et al., Design of gene circuits using power-law models, in: Cell 113, pp. 597–607, 2003.
F. Alvarez-Vasquez et al., Simulation and validation of modelled sphingolipid metabolism in Saccharomyces cerevisiae, Nature 27, pp. 433(7024), pp. 425–30, 2005.
J. Vera et al., Power-Law models of signal transduction pathways in: Cellular Signalling ), 2007.
Eberhart O. Voit, Applications of Biochemical Systems Theory, 2006.
External links
Savageau Lab at UC Davis
Voit Lab at GA Tech
Systems biology | Biochemical systems theory | [
"Biology"
] | 724 | [
"Systems biology"
] |
11,139,487 | https://en.wikipedia.org/wiki/Frictionless%20plane | The frictionless plane is a concept from the writings of Galileo Galilei. In his 1638 The Two New Sciences, Galileo presented a formula that predicted the motion of an object moving down an inclined plane. His formula was based upon his past experimentation with free-falling bodies. However, his model was not based upon experimentation with objects moving down an inclined plane, but from his conceptual modeling of the forces acting upon the object. Galileo understood the mechanics of the inclined plane as the combination of horizontal and vertical vectors; the result of gravity acting upon the object, diverted by the slope of the plane.
However, Galileo's equations do not contemplate friction, and therefore do not perfectly predict the results of an actual experiment. This is because some energy is always lost when one mass applies a non-zero normal force to another. Therefore, the observed speed, acceleration and distance traveled should be less than Galileo predicts. This energy is lost in forms like sound and heat. However, from Galileo's predictions of an object moving down an inclined plane in a frictionless environment, he created the theoretical foundation for extremely fruitful real-world experimental prediction.
Frictionless planes do not exist in the real world. However, if they did, one can be almost certain that objects on them would behave exactly as Galileo predicts. Despite their nonexistence, they have considerable value in the design of engines, motors, roadways, and even tow-truck beds, to name a few examples.
The effect of friction on an object moving down an inclined plane can be calculated as
F_f = μ_k · F_N,
where F_f is the force of friction exerted by the object and the inclined plane on each other, parallel to the surface of the plane, F_N is the normal force exerted by the object and the plane on each other, directed perpendicular to the plane, and μ_k is the coefficient of kinetic friction.
Unless the inclined plane is in a vacuum, a (usually) small amount of potential energy is also lost to air drag.
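The size of the correction can be made concrete with a short calculation: on a frictionless plane the acceleration down the slope is g·sin θ, whereas with kinetic friction it is reduced to g·(sin θ − μ_k·cos θ). The angle and friction coefficient below are arbitrary example values.

import math

g = 9.81                      # gravitational acceleration, m/s^2
theta = math.radians(30.0)    # incline angle (example value)
mu_k = 0.2                    # coefficient of kinetic friction (example value)

a_frictionless = g * math.sin(theta)
a_with_friction = g * (math.sin(theta) - mu_k * math.cos(theta))
print(round(a_frictionless, 2), "m/s^2 without friction")   # about 4.91
print(round(a_with_friction, 2), "m/s^2 with friction")     # about 3.21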
See also
Atwood machine
Spherical cow
References
Abstraction
Physics education | Frictionless plane | [
"Physics"
] | 411 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
11,140,843 | https://en.wikipedia.org/wiki/Townsend%20%28unit%29 | The townsend (symbol Td) is a physical unit of the reduced electric field (ratio E/N), where E is the electric field and N is the concentration of neutral particles.
It is named after John Sealy Townsend, who conducted early research into gas ionisation.
Definition
It is defined by the relation
1 Td = 10⁻²¹ V·m² = 10⁻¹⁷ V·cm².
For example, in a medium with the particle density of an ideal gas at 1 atm and 0 °C, given by the Loschmidt constant
n₀ ≈ 2.687 × 10²⁵ m⁻³,
an electric field of about E = 2.7 × 10⁴ V/m gives
E/N ≈ 1 × 10⁻²¹ V·m²,
which corresponds to approximately 1 Td.
Uses
This unit is important in gas discharge physics, where it serves as a scaling parameter because the mean energy of electrons (and therefore many other properties of the discharge) is typically a function of the reduced field E/N over a broad range of E and N.
The concentration N, which in an ideal gas is simply related to pressure and temperature (N = p / k_B T), controls the mean free path and collision frequency. The electric field E governs the energy gained between two successive collisions.
The reduced electric field being a scaling factor effectively means that increasing the electric field intensity E by some factor q has the same consequences as lowering the gas density N by the factor q.
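The conversion itself is a one-line calculation; the helper below assumes E in volts per metre and N in particles per cubic metre, and uses the 10⁻²¹ V·m² definition given above.

LOSCHMIDT = 2.687e25   # particles per m^3 for an ideal gas at 0 C and 1 atm

def reduced_field_td(E_volts_per_m, N_per_m3=LOSCHMIDT):
    """Reduced electric field E/N expressed in townsends (1 Td = 1e-21 V*m^2)."""
    return (E_volts_per_m / N_per_m3) / 1e-21

print(reduced_field_td(2.7e4))   # about 1 Td at atmospheric density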
See also
Electric glow discharge
Vacuum arc
References
A Bankovic´, S Dujko, R D White, J P Marler, S J Buckman, S Marjanovic´, G Malovic´, G Garc´ıa and Z Lj Petrovic, Positron transport in water vapour. 2012 New J. Phys. 14 035003.
Electrical breakdown | Townsend (unit) | [
"Physics"
] | 286 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
11,141,222 | https://en.wikipedia.org/wiki/Tetrad%20formalism | The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting an arbitrary dimension n for 4. In German, "vier" translates to "four", "viel" to "many", and "Bein" to "leg".
The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique.
The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions.
The significance of the tetradic formalism appears in the Einstein–Cartan formulation of general relativity. The tetradic formalism of the theory is more fundamental than its metric formulation, as one cannot convert between the tetradic and metric formulations of fermionic actions, despite this being possible for bosonic actions. This is effectively because Weyl spinors can be very naturally defined on a Riemannian manifold and their natural setting leads to the spin connection. Those spinors take form in the vielbein coordinate system, and not in the manifold coordinate system.
The privileged tetradic formalism also appears in the deconstruction of higher dimensional Kaluza–Klein gravity theories and massive gravity theories, in which the extra-dimension(s) is/are replaced by series of N lattice sites such that the higher dimensional metric is replaced by a set of interacting metrics that depend only on the 4D components. Vielbeins commonly appear in other general settings in physics and mathematics. Vielbeins can be understood as solder forms.
Mathematical formulation
The tetrad formulation is a special case of a more general formulation, known as the vielbein or -bein formulation, with =4. Make note of the spelling: in German, "viel" means "many", not to be confused with "vier", meaning "four".
In the vielbein formalism, an open cover of the spacetime manifold and a local basis for each of those open sets is chosen: a set of n linearly independent vector fields
e_a, for a = 1, ..., n,
that together span the n-dimensional tangent bundle at each point in the set. Dually, a vielbein (or tetrad in 4 dimensions) determines (and is determined by) a dual co-vielbein (co-tetrad) — a set of n linearly independent 1-forms e^a.
such that
e^a(e_b) = δ^a_b,
where δ^a_b is the Kronecker delta. A vielbein is usually specified by its coefficients e^a_μ with respect to a coordinate basis, despite the choice of a set of (local) coordinates being unnecessary for the specification of a tetrad. Each covector is a solder form.
From the point of view of the differential geometry of fiber bundles, the vector fields define a section of the frame bundle i.e. a parallelization of which is equivalent to an isomorphism . Since not every manifold is parallelizable, a vielbein can generally only be chosen locally (i.e. only on a coordinate chart and not all of .)
All tensors of the theory can be expressed in the vector and covector basis, by expressing them as linear combinations of members of the (co)vielbein. For example, the spacetime metric tensor can be transformed from a coordinate basis to the tetrad basis.
Popular tetrad bases in general relativity include orthonormal tetrads and null tetrads. Null tetrads are composed of four null vectors, so are used frequently in problems dealing with radiation, and are the basis of the Newman–Penrose formalism and the GHP formalism.
Relation to standard formalism
The standard formalism of differential geometry (and general relativity) consists simply of using the coordinate tetrad in the tetrad formalism. The coordinate tetrad is the canonical set of vectors associated with the coordinate chart. The coordinate tetrad is commonly denoted whereas the dual cotetrad is denoted . These tangent vectors are usually defined as directional derivative operators: given a chart which maps a subset of the manifold into coordinate space , and any scalar field , the coordinate vectors are such that:
The definition of the cotetrad uses the usual abuse of notation to define covectors (1-forms) on . The involvement of the coordinate tetrad is not usually made explicit in the standard formalism. In the tetrad formalism, instead of writing tensor equations out fully (including tetrad elements and tensor products as above) only components of the tensors are mentioned. For example, the metric is written as "". When the tetrad is unspecified this becomes a matter of specifying the type of the tensor called abstract index notation. It allows to easily specify contraction between tensors by repeating indices as in the Einstein summation convention.
Changing tetrads is a routine operation in the standard formalism, as it is involved in every coordinate transformation (i.e., changing from one coordinate tetrad basis to another). Switching between multiple coordinate charts is necessary because, except in trivial cases, it is not possible for a single coordinate chart to cover the entire manifold. Changing to and between general tetrads is much similar and equally necessary (except for parallelizable manifolds). Any tensor can locally be written in terms of this coordinate tetrad or a general (co)tetrad.
For example, the metric tensor can be expressed as:
g = g_μν dx^μ ⊗ dx^ν
(Here we use the Einstein summation convention). Likewise, the metric can be expressed with respect to an arbitrary (co)tetrad as
g = g_ab e^a ⊗ e^b.
Here, we use the choice of alphabet (Latin and Greek) for the index variables to distinguish the applicable basis.
We can translate from a general co-tetrad to the coordinate co-tetrad by expanding the covector e^a = e^a_μ dx^μ. We then get
g = g_ab e^a_μ e^b_ν dx^μ ⊗ dx^ν,
from which it follows that g_μν = g_ab e^a_μ e^b_ν. Likewise expanding dx^μ = e_a^μ e^a with respect to the general tetrad, we get
g = g_μν e_a^μ e_b^ν e^a ⊗ e^b,
which shows that g_ab = g_μν e_a^μ e_b^ν.
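These contractions can be checked numerically in a toy case: for the Euclidean plane in polar coordinates, a standard orthonormal co-frame is e¹ = dr, e² = r dθ, so the vielbein coefficient matrix is diag(1, r) and contracting it with the flat frame metric should reproduce g = diag(1, r²). The snippet below is only an illustration of the index bookkeeping, with an arbitrarily chosen radius.

import numpy as np

r = 2.0                        # an arbitrary radius at which to evaluate the frame
e = np.diag([1.0, r])          # e^a_mu for the co-frame (dr, r dtheta)
g_flat = np.eye(2)             # frame-index metric g_ab (orthonormal frame)

g_coord = e.T @ g_flat @ e     # g_mu_nu = e^a_mu e^b_nu g_ab
print(g_coord)                 # [[1, 0], [0, 4]] = diag(1, r**2)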
Manipulation of indices
The manipulation with tetrad coefficients shows that abstract index formulas can, in principle, be obtained from tensor formulas with respect to a coordinate tetrad by "replacing Greek by Latin indices". However, care must be taken that a coordinate tetrad formula defines a genuine tensor when differentiation is involved. Since the coordinate vector fields have vanishing Lie bracket (i.e. commute: [∂_μ, ∂_ν] = 0), naive substitutions of formulas that correctly compute tensor coefficients with respect to a coordinate tetrad may not correctly define a tensor with respect to a general tetrad because the Lie bracket is non-vanishing: [e_a, e_b] ≠ 0 in general. Thus, it is sometimes said that tetrad coordinates provide a non-holonomic basis.
For example, the Riemann curvature tensor is defined for general vector fields u, v, w by
R(u, v)w = ∇_u ∇_v w − ∇_v ∇_u w − ∇_[u,v] w.
In a coordinate tetrad this gives tensor coefficients
The naive "Greek to Latin" substitution of the latter expression
is incorrect because for fixed c and d, is, in general, a first order differential operator rather than a zeroth order operator which defines a tensor coefficient. Substituting a general tetrad basis in the abstract formula we find the proper definition of the curvature in abstract index notation, however:
where . Note that the expression is indeed a zeroth order operator, hence (the (c d)-component of) a tensor. Since it agrees with the coordinate expression for the curvature when specialised to a coordinate tetrad it is clear, even without using the abstract definition of the curvature, that it defines the same tensor as the coordinate basis expression.
Example: Lie groups
Given a vector (or covector) in the tangent (or cotangent) manifold, the exponential map describes the corresponding geodesic of that tangent vector. Writing , the parallel transport of a differential corresponds to
The above can be readily verified simply by taking to be a matrix.
For the special case of a Lie algebra, the can be taken to be an element of the algebra, the exponential is the exponential map of a Lie group, and group elements correspond to the geodesics of the tangent vector. Choosing a basis for the Lie algebra and writing for some functions the commutators can be explicitly written out. One readily computes that
for the structure constants of the Lie algebra. The series can be written more compactly as
with the infinite series
Here, is a matrix whose matrix elements are . The matrix is then the vielbein; it expresses the differential in terms of the "flat coordinates" (orthonormal, at that) .
Given some map from some manifold to some Lie group , the metric tensor on the manifold becomes the pullback of the metric tensor on the Lie group :
The metric tensor on the Lie group is the Cartan metric, aka the Killing form. Note that, as a matrix, the second W is the transpose. For a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric. The above generalizes to the case of symmetric spaces. These vielbeins are used to perform calculations in sigma models, of which the supergravity theories are a special case.
See also
Frame bundle
Orthonormal frame bundle
Principal bundle
Spin bundle
Connection (mathematics)
G-structure
Spin manifold
Spin structure
Dirac equation in curved spacetime
Notes
Citations
References
External links
General Relativity with Tetrads
Differential geometry
Theory of relativity
Mathematical notation | Tetrad formalism | [
"Physics",
"Mathematics"
] | 2,204 | [
"nan",
"Theory of relativity"
] |
11,144,232 | https://en.wikipedia.org/wiki/Gulf%20Publishing%20Company | Gulf Publishing Company is an international publishing and events business dedicated to the hydrocarbon energy sector. In mid-2018 it rebranded as Gulf Energy Information. Founded in 1916 by Ray Lofton Dudley, Gulf Energy Information produces and distributes publications in print and web formats, online news, webcasts and databases; hosts conferences and events designed for the energy industry. The company was a subsidiary of Euromoney Institutional Investor from 2001 until a 2016 management buyout by CEO John Royall and Texas investors. The business and strategy publication Petroleum Economist also transferred to the company in May 2016. In mid-2017 the company acquired 109-year old Oildom Publishing.
The company's flagship magazines, World Oil, Hydrocarbon Processing, Pipeline & Gas Journal, and the Petroleum Economist are published monthly. Gulf is headquartered in Houston, Texas, with sales staff and columnists around the world, due to expansion efforts by William G. Dudley, Sr. The Petroleum Economist publishing and map cartography staff are based in London, UK. Gulf Energy Info's Data Services staff support on-line Energy Web Atlas energy data visualization, and Construction Boxscore downstream project database, from Houston, London and Mumbai.
Since 1916 World Oil has covered the upstream oil and gas industry for conventional, shale, offshore, exploration and production technology in oil and gas.
Since 1922, Hydrocarbon Processing has provided job-relevant information to technical staff, operations, maintenance and management in petroleum refining, gas processing facilities, petrochemical and engineer/constructor companies throughout the world. Bi-monthly supplement Gas Processing & LNG was added in 2012.
Since 1934, the Petroleum Economist has written about oil, its politics and economics - explained some of the industry's biggest disruptions: such as the 1973 oil crisis, the Gulf Wars, the rise of China, the Arab uprisings, and the more recent supply-side shocks from North America's unconventional energy sector.
Since 1859, Pipeline & Gas Journal has been the essential resource for technology and trends in the midstream industry; written and edited to be of service to those involved in moving, marketing and managing hydrocarbons from wellheads to ultimate consumers.
The company formerly published trade books, but spun off the division as TaylorWilson (now part of Taylor Trade) in 2000; sold its professional book list to Elsevier in 2013.
References
External links
Magazine publishing companies of the United States
Publishing companies established in 1916
Companies based in Houston
Petroleum industry | Gulf Publishing Company | [
"Chemistry"
] | 497 | [
"Chemical process engineering",
"Petroleum",
"Petroleum industry"
] |
11,145,013 | https://en.wikipedia.org/wiki/Toxic%20cough%20syrup | Since the 1990s, several mass poisonings from toxic cough syrup have occurred in developing countries. In these cases, an ingredient in cough syrup, glycerine (glycerol), was replaced with diethylene glycol, a cheaper alternative to glycerine for industrial applications. Diethylene glycol is nephrotoxic and can result in multiple organ dysfunction syndrome (MODS), especially in children.
History
There have been poisonings in Bangladesh, Indonesia, Marshall Islands, Pakistan, Panama, The Gambia, India (twice), Uzbekistan, and Cameroon between 1992 and 2022, due to contaminated cough syrup and other medications that incorporated inexpensive diethylene glycol instead of glycerine.
Bangladesh
Discovering and tracing a toxic syrup to its source has been difficult for health care providers and governmental agencies due to difficult communication between the governments of developed countries and developing countries. For example, Michael L. Bennish, an American pediatrician who works in developing countries, had been volunteering in Bangladesh as a physician and had noticed a number of deaths that seemed to coincide with the distribution of the government-issued cough syrup. The government rebuffed his attempts at investigating the medication. In response, Bennish smuggled bottles of the syrup in his suitcase when returning to the United States, allowing pharmaceutical laboratories in Massachusetts to identify the poisonous diethylene glycol, which can appear very similar to the less dangerous glycerine. Bennish went on to author a 1995 article in the British Medical Journal about his experience, writing that, given the amount of medication prescribed, death tolls "must [already] be in the tens of thousands".
Indonesia
In 2022, deaths of nearly 100 children in Indonesia, were reported to be linked to cough syrup and liquid medication. The syrup contained "unacceptable amounts" of diethylene glycol and ethylene glycol, linked to acute kidney injuries (AKI). In October, health officials reported around 200 cases of AKI in children, most of who were aged under five. Indonesia temporarily banned the sale and prescription of all syrup and liquid medicines as it was not clear if these medicines were imported or locally produced.
In November 2023, Afi Farma's chief executive and three other officials, whose cough syrup was linked to the deaths, were sentenced to two-year prison terms and fined 1bn Indonesian rupiah ($63,029) each.
Marshall Islands and Micronesia
In April 2023, World Health Organization (WHO) reported that, Guaifenesin TG syrup manufactured by QP Pharmachem Ltd in Punjab, India, had been found to contain "unacceptable amounts of diethylene glycol and ethylene glycol" in tested samples. Sudhir Pathak, managing director of QP Pharmachem, claimed that the batch of 18,346 bottles had been exported to Cambodia after obtaining all necessary regulatory approvals and that he was unaware of how the product had ended up in the Marshall Islands and Micronesia.
Pakistan
In December 2012, toxic cough syrup led to a death toll of between 16 and 30 in Gujranwala, while in November of that year, at least 19 individuals in Lahore suffered the same fate. Following an inquiry, Tyno cough syrup, produced and distributed by Reko Pharma in Lahore, was identified as the cause of the fatalities in Lahore. Many of the victims from the two incidents were drug addicts seeking intoxication. The syrup was later found to contain too much dextromethorphan, a cough suppressant.
Panama
In May 2007, 365 deaths were reported in Panama. The diethylene glycol originated from a Chinese manufacturer, which exported it as industrial "TD glycerin" under a shelf life of one year. The letters "TD" were shorthand for "substitute" in Chinese. When Panama-based Medicom received the product from a Spanish trader, it changed the name to "glycerine" and the expiration date to four years before selling it to the government of Panama. Neither the trading companies involved nor the government lab in Panama that processed the ingredient tested the substance for verification. Chinese authorities said they would no longer allow the name "TD glycerin" to be used. One of China's officials overseeing food and drug safety was sentenced to death in late May on charges related to the scandal. The Panama government detained several officials as well as employees of Medicom and set up a $6-million fund for the victims.
The Gambia
In October 2022, the WHO announced a link between four paediatric cough syrups from one Indian pharmaceutical company and the deaths of 66 children in The Gambia from kidney failure. The products (Promethazine Oral Solution, Kofexmalin Baby Cough Syrup, Makoff Baby Cough Syrup, and Magrip N Cold Syrup) are believed to be contaminated with diethylene glycol and/or ethylene glycol. The products involved were manufactured by Maiden Pharmaceuticals of India in December 2021.
This has led to Maiden Pharmaceuticals' products being banned in The Gambia; a probe by the CDSCO and volunteers from health agencies in The Gambia going door to door in an urgent recall.
In December 2022, a parliamentary committee in The Gambia recommended prosecution of the Indian company, Maiden Pharmaceuticals. It also recommended banning all products by the firm in the country.
Indian authorities started conducting an inquiry into an April 2023 allegation that a pharmaceutical regulator in Haryana state, who holds a senior position in the state health department, accepted a bribe and switched samples of contaminated cough syrup before the state government laboratory tested them. The cough syrup in question was produced by Maiden Pharmaceuticals, and it has been implicated in child deaths in Gambia. Tests conducted by two independent laboratories on behalf of the WHO confirmed the presence of lethal toxins—ethylene glycol and diethylene glycol in the syrup. Indian authorities, however, did not find any toxins, but did identify labeling issues with Maiden Pharmaceuticals' cough syrup. Naresh Kumar Goyal, the founder of Maiden Pharmaceuticals, has previously denied any wrongdoing in the production of the syrup.
Uzbekistan
In December 2022, Uzbekistan's health ministry said that 18 children died from renal problems and acute respiratory disease after drinking cough syrup manufactured by Indian drug maker Marion Biotech. The statement did not specify over what time period the deaths occurred. As a result, Marion Biotech, was suspended from Pharmexcil, an Indian government-linked trade group. As a result, state security police in Uzbekistan arrested four people.
Sources told Reuters that Marion purchased industrial-grade propylene glycol as an ingredient from Maya Chemtech India, which is not licensed to sell pharmaceutical-grade materials. Maya is not facing charges but the investigation is ongoing. Marion did not test the ingredient it purchased.
The Indian government has mandated that after June 2023, cough syrup manufacturers must have their products tested before exporting them. These companies are required to obtain a certificate of analysis from a government-approved laboratory. A list of approved laboratories, both at the central and state government level, was provided where the samples can be tested.
Cameroon
The Naturcold brand of cough syrup from India was associated with the tragic deaths of multiple children in Cameroon. WHO testing on June 27, 2023, revealed alarming levels of diethylene glycol in Naturcold, reaching as high as 28.6% – over 200 times the acceptable limit, which should not exceed 0.1%. This highly toxic solvent, normally used in air-conditioners and fridges, can lead to severe symptoms, including acute kidney injury and even death if ingested.
The packaging of the deadly medicine falsely claimed that it was produced by a British company called Fraken International (England), but no such company exists in the UK. The actual manufacturer was Riemann Private Ltd, an Indian company based in Indore, and it appeared to be exported to global markets, including Cameroon, by another Indian company, Wellona Pharma, based in Surat, Gujarat. The UK’s Medicines and Healthcare products Regulatory Agency keeps an eye on counterfeit claims of UK origin made by foreign pharmaceutical companies, as such claims are used to add credibility to otherwise adulterated, unlicensed, or substandard medicines.
Riemann Pvt Ltd is under investigation and faces potential disciplinary action from the Indian drug regulator, the Central Drugs Standard Control Organisation. Despite the ongoing investigation, the company continues its operations and drug manufacturing activities.
Worldwide
The World Health Organization (WHO) is addressing the global threat of toxic cough syrups that have caused the deaths of more than 300 children across multiple countries in 2022 and 2023. The WHO is working with six additional countries, bringing the total to 15 countries, to track these dangerous medicines. The WHO team lead said that tainted syrups are an ongoing risk. He cautioned that the presence of contaminated medicines could persist for several years, as warehouses may still contain barrels of adulterated propylene glycol. The manufacturers that exported the syrup to other countries in the current spate of incidents are four Indian manufacturers (Maiden Pharmaceuticals, Marion Biotech, QP Pharmachem, and Synercar), one Chinese manufacturer (Fraken Group) and one Pakistani manufacturer (Pharmix Laboratories). Safety alerts have been issued by government agencies in the affected countries, as well as by countries conducting tests on their behalf and the WHO, while investigations into the matter continue. The WHO has urged countries to enhance surveillance and offer support to countries lacking testing resources.
See also
List of medicine contamination incidents
References
2007 in Panama
2007 health disasters
Medical scandals
Health in Panama
Antitussives
Drug safety
Health disasters in North America
Adulteration
Mass poisoning
2022 in Indonesia
2022 in Uzbekistan
2022 health disasters
2022 in the Gambia | Toxic cough syrup | [
"Chemistry"
] | 2,033 | [
"Adulteration",
"Drug safety"
] |
11,145,154 | https://en.wikipedia.org/wiki/Photodisintegration | Photodisintegration (also called phototransmutation, or a photonuclear reaction) is a nuclear process in which an atomic nucleus absorbs a high-energy gamma ray, enters an excited state, and immediately decays by emitting a subatomic particle. The incoming gamma ray effectively knocks one or more neutrons, protons, or an alpha particle out of the nucleus. The reactions are called (γ,n), (γ,p), and (γ,α), respectively.
Photodisintegration is endothermic (energy absorbing) for atomic nuclei lighter than iron and sometimes exothermic (energy releasing) for atomic nuclei heavier than iron. Photodisintegration is responsible for the nucleosynthesis of at least some heavy, proton-rich elements via the p-process in supernovae of type Ib, Ic, or II.
This causes the iron to further fuse into the heavier elements.
Photodisintegration of deuterium
A photon carrying 2.22 MeV or more energy can photodisintegrate an atom of deuterium:
²H + γ → ¹H + n
James Chadwick and Maurice Goldhaber used this reaction to measure the proton-neutron mass difference. This experiment proved that a neutron is not a bound state of a proton and an electron, as had been proposed by Ernest Rutherford.
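A short worked mass balance illustrates the idea (neglecting the small recoil energy of the products, an approximation introduced here only for illustration): the threshold photon energy equals the deuteron binding energy, E_γ(threshold) ≈ (m_p + m_n − m_d)c² = B_d ≈ 2.22 MeV, so m_n ≈ m_d − m_p + E_γ(threshold)/c². Measuring the threshold therefore fixes the neutron mass, and hence the proton-neutron mass difference, once the deuteron and proton masses are known.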
Photodisintegration of beryllium
A photon carrying 1.67 MeV or more energy can photodisintegrate an atom of beryllium-9 (100% of natural beryllium, its only stable isotope):
⁹Be + γ → 2 ⁴He + n
Antimony-124 is assembled with beryllium to make laboratory neutron sources and startup neutron sources. Antimony-124 (half-life 60.20 days) emits β− and 1.690 MeV gamma rays (also 0.602 MeV and 9 fainter emissions from 0.645 to 2.090 MeV), yielding stable tellurium-124. Gamma rays from antimony-124 split beryllium-9 into two alpha particles and a neutron with an average kinetic energy of 24 keV (a so-called intermediate neutron in terms of energy):
⁹Be + γ → 2 ⁴He + n
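As a rough energy-balance sketch (using a ⁹Be(γ,n) threshold of about 1.666 MeV, a slightly more precise figure than the rounded 1.67 MeV above; this value is an assumption, not taken from the text): the surplus energy from a 1.690 MeV antimony-124 gamma ray is roughly E_γ − E_th ≈ 1.690 MeV − 1.666 MeV ≈ 0.024 MeV = 24 keV, which is shared among the breakup products and is consistent with the quoted average neutron kinetic energy of about 24 keV.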
Other isotopes have higher thresholds for photoneutron production, as high as 18.72 MeV for carbon-12.
Hypernovae
In explosions of very large stars (250 or more solar masses), photodisintegration is a major factor in the supernova event. As the star reaches the end of its life, it reaches temperatures and pressures where photodisintegration's energy-absorbing effects temporarily reduce pressure and temperature within the star's core. This causes the core to start to collapse as energy is taken away by photodisintegration, and the collapsing core leads to the formation of a black hole. A portion of mass escapes in the form of relativistic jets, which could have "sprayed" the first metals into the universe.
Photodisintegration in lightning
Terrestrial lightning produces high-speed electrons that create bursts of gamma rays as bremsstrahlung. The energy of these rays is sometimes sufficient to start photonuclear reactions that emit neutrons. One such reaction, ¹⁴N(γ,n)¹³N, is the only natural process other than those induced by cosmic rays in which ¹³N is produced on Earth. The unstable isotopes remaining from the reaction may subsequently emit positrons by β+ decay.
Photofission
Photofission is a similar but distinct process, in which a nucleus, after absorbing a gamma ray, undergoes nuclear fission (splits into two fragments of nearly equal mass).
See also
Pair-instability supernova
Silicon-burning process
References
Nuclear physics
Nucleosynthesis
Neutron sources | Photodisintegration | [
"Physics",
"Chemistry"
] | 879 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
9,662,955 | https://en.wikipedia.org/wiki/Convection%20%28heat%20transfer%29 | Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases.
Note that this definition of convection applies only in heat transfer and thermodynamic contexts. It should not be confused with the fluid-dynamic phenomenon of convection, which is typically referred to as natural convection in thermodynamic contexts in order to distinguish the two.
Overview
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when the fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate.
The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to specific molecular motion (diffusion), energy is transferred by bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.
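One common way to write this superposition for an incompressible fluid with constant properties and no internal heat sources is the convection–diffusion (energy) equation, sketched here for illustration (the symbols are assumptions of this sketch, not notation taken from the text above): ∂T/∂t + u·∇T = α∇²T, where T is temperature, u is the fluid velocity field, and α is the thermal diffusivity. The u·∇T term is the advective (bulk-motion) contribution, while α∇²T represents conduction (diffusion).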
Types
Two types of convective heat transfer may be distinguished:
Free or natural convection: when fluid motion is caused by buoyancy forces that result from density variations due to temperature variations in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to become less dense. As a consequence, the heated fluid rises and is replaced by cooler, denser fluid, which sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.
Forced convection: when a fluid is forced to flow over the surface by an internal source such as a fan, a pump, or stirring, creating an artificially induced convection current.
In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection).
Convection can also be classified as internal or external flow. Internal flow occurs when a fluid is enclosed by a solid boundary, such as when flowing through a pipe. External flow occurs when a fluid extends indefinitely without encountering a solid surface. Both types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.
Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though most of the available information deals with smooth surfaces. Wavy, irregular surfaces are commonly encountered in heat transfer devices, including solar collectors, regenerative heat exchangers, and underground energy storage systems, and they play a significant role in the heat transfer processes in these applications. Because the undulations add complexity, such surfaces must be handled with appropriate mathematical simplification techniques, and they affect the flow and heat transfer characteristics, behaving differently from straight, smooth surfaces.
For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.
Newton's law of cooling
Convection-cooling is sometimes loosely assumed to be described by Newton's law of cooling.
Newton's law states that the rate of heat loss of a body is proportional to the difference in temperature between the body and its surroundings while under the effect of a breeze. The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment.
In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference.
Convective heat transfer
The basic relationship for heat transfer by convection is:

Q̇ = h A (T − Tf)

where Q̇ is the heat transferred per unit time, A is the area of the object, h is the heat transfer coefficient, T is the object's surface temperature, and Tf is the fluid temperature.
The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of h have been measured and tabulated for commonly encountered fluids and flow situations.
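As a minimal illustrative sketch of that relationship in code (the coefficient, area, and temperatures below are assumed example values; a realistic h would be taken from tabulated data for the actual fluid and flow situation):

def convective_heat_rate(h, area, t_surface, t_fluid):
    """Convective heat transfer rate Q = h * A * (T - Tf), in watts."""
    # h in W/(m^2 K), area in m^2, temperatures in K or degrees C
    # (only the temperature difference matters).
    return h * area * (t_surface - t_fluid)

# Assumed example: a 0.5 m^2 plate at 80 C in air at 20 C, with an assumed
# forced-convection coefficient of 25 W/(m^2 K).
q = convective_heat_rate(h=25.0, area=0.5, t_surface=80.0, t_fluid=20.0)
print(f"Convective heat loss: {q:.0f} W")  # 25 * 0.5 * 60 = 750 W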
See also
Conjugate convective heat transfer
Convection
Forced convection
Natural convection
Mixed convection
Heat transfer coefficient
Heat transfer enhancement
Heisler chart
Thermal conductivity
Convection–diffusion equation
References
Thermodynamics
Heat transfer | Convection (heat transfer) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,326 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"Dynamical systems"
] |
9,667,364 | https://en.wikipedia.org/wiki/Energy%20drift | In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in simulations the energy might fluctuate on a short time scale and increase or decrease on a very long time scale due to numerical integration artifacts that arise with the use of a finite time step Δt. This is somewhat similar to the flying ice cube problem, whereby numerical errors in handling equipartition of energy can change vibrational energy into translational energy.
More specifically, the energy tends to increase exponentially; its increase can be understood intuitively because each step introduces a small perturbation δv to the true velocity vtrue, which (if uncorrelated with v, which will be true for simple integration methods) results in a second-order increase in the energy:

E = Σ ½m(v + δv)² = Σ ½mv² + Σ m v · δv + Σ ½m δv² ≈ E_true + Σ ½m δv²

(The cross term in v · δv is zero because of no correlation.)
Energy drift - usually damping - is substantial for numerical integration schemes that are not symplectic, such as the Runge-Kutta family. Symplectic integrators usually used in molecular dynamics, such as the Verlet integrator family, exhibit increases in energy over very long time scales, though the error remains roughly constant. These integrators do not in fact reproduce the actual Hamiltonian mechanics of the system; instead, they reproduce a closely related "shadow" Hamiltonian whose value they conserve many orders of magnitude more closely. The accuracy of the energy conservation for the true Hamiltonian is dependent on the time step. The energy computed from the modified Hamiltonian of a symplectic integrator is from the true Hamiltonian.
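As an illustrative sketch (not taken from the article), the difference between a non-symplectic and a symplectic scheme can be seen on a one-dimensional harmonic oscillator; the unit mass, spring constant, step size, and step count below are arbitrary choices for this demonstration, and the drift is reported as a percentage change in total energy:

def total_energy(x, v, m=1.0, k=1.0):
    """Total energy of a 1-D harmonic oscillator: kinetic + potential."""
    return 0.5 * m * v * v + 0.5 * k * x * x

def euler_step(x, v, dt, m=1.0, k=1.0):
    """One explicit (forward) Euler step: not symplectic, energy grows."""
    a = -k * x / m
    return x + dt * v, v + dt * a

def velocity_verlet_step(x, v, dt, m=1.0, k=1.0):
    """One velocity Verlet step: symplectic, energy error stays bounded."""
    a = -k * x / m
    x_new = x + dt * v + 0.5 * dt * dt * a
    a_new = -k * x_new / m
    v_new = v + 0.5 * dt * (a + a_new)
    return x_new, v_new

dt, n_steps = 0.05, 20000
for name, step in (("Euler", euler_step), ("Verlet", velocity_verlet_step)):
    x, v = 1.0, 0.0
    e0 = total_energy(x, v)
    for _ in range(n_steps):
        x, v = step(x, v, dt)
    drift = 100.0 * (total_energy(x, v) - e0) / e0
    print(f"{name:6s}: relative energy change after {n_steps} steps = {drift:+.3g}%")

Running this shows the explicit Euler trajectory gaining energy without bound, while the Verlet trajectory's energy error remains small and roughly constant.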
Energy drift is similar to parametric resonance in that a finite, discrete timestepping scheme will result in nonphysical, limited sampling of motions with frequencies close to the frequency of velocity updates. Thus the restriction on the maximum step size that will be stable for a given system is proportional to the period of the fastest fundamental modes of the system's motion. For a motion with a natural frequency ω, artificial resonances are introduced when the frequency of velocity updates, is related to ω as
where n and m are integers describing the resonance order. For Verlet integration, resonances up to the fourth order frequently lead to numerical instability, leading to a restriction on the timestep size of
where ω is the frequency of the fastest motion in the system and p is its period. The fastest motions in most biomolecular systems involve the motions of hydrogen atoms; it is thus common to use constraint algorithms to restrict hydrogen motion and thus increase the maximum stable time step that can be used in the simulation. However, because the time scales of heavy-atom motions are not widely divergent from those of hydrogen motions, in practice this allows only about a twofold increase in time step. Common practice in all-atom biomolecular simulation is to use a time step of 1 femtosecond (fs) for unconstrained simulations and 2 fs for constrained simulations, although larger time steps may be possible for certain systems or choices of parameters.
Energy drift can also result from imperfections in evaluating the energy function, usually due to simulation parameters that sacrifice accuracy for computational speed. For example, cutoff schemes for evaluating the electrostatic forces introduce systematic errors in the energy with each time step as particles move back and forth across the cutoff radius if sufficient smoothing is not used. Particle mesh Ewald summation is one solution for this effect, but introduces artifacts of its own. Errors in the system being simulated can also induce energy drifts characterized as "explosive" that are not artifacts, but are reflective of the instability of the initial conditions; this may occur when the system has not been subjected to sufficient structural minimization before beginning production dynamics. In practice, energy drift may be measured as a percent increase over time, or as a time needed to add a given amount of energy to the system.
The practical effects of energy drift depend on the simulation conditions, the thermodynamic ensemble being simulated, and the intended use of the simulation under study; for example, energy drift has much more severe consequences for simulations of the microcanonical ensemble than the canonical ensemble where the temperature is held constant. However, it has been shown that long microcanonical ensemble simulations can be performed with insignificant energy drift, including those of flexible molecules which incorporate constraints and Ewald summations. Energy drift is often used as a measure of the quality of the simulation, and has been proposed as one quality metric to be routinely reported in a mass repository of molecular dynamics trajectory data analogous to the Protein Data Bank.
References
Further reading
Sanz-Serna JM, Calvo MP. (1994). Numerical Hamiltonian Problems. Chapman & Hall, London, England.
Molecular dynamics
Numerical differential equations
Numerical artifacts | Energy drift | [
"Physics",
"Chemistry"
] | 1,003 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
9,668,094 | https://en.wikipedia.org/wiki/United%20Nations%20Scientific%20Committee%20on%20the%20Effects%20of%20Atomic%20Radiation | The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) was set up by resolution of the United Nations General Assembly in 1955. Twenty-one states are designated to provide scientists to serve as members of the committee which holds formal meetings (sessions) annually and submits a report to the General Assembly. The organisation has no power to set radiation standards nor to make recommendations in regard to nuclear testing. It was established solely to "define precisely the present exposure of the population of the world to ionizing radiation". A small secretariat, located in Vienna and functionally linked to the United Nations Environment Programme (UNEP), organizes the annual sessions and manages the preparation of documents for the committee's scrutiny.
Function
UNSCEAR issues major public reports on Sources and Effects of Ionizing Radiation from time to time; there were 28 major publications between 1958 and 2017, all available from the UNSCEAR website. These works are highly regarded as sources of authoritative information and are used throughout the world as a scientific basis for the evaluation of radiation risk. The publications review studies from a range of sources, including reports from UN member states and other international organisations on data from survivors of the atomic bombings of Hiroshima and Nagasaki and of the Chernobyl disaster, as well as on accidental, occupational, and medical exposure to ionizing radiation.
Administration
Originally, in 1955, India and the Soviet Union wanted to add several neutral and communist states, such as mainland China. Eventually, a compromise was reached with the US, and Argentina, Belgium, Egypt and Mexico were permitted to join. The organisation was charged with collecting all available data on the effects of "ionising radiation upon man and his environment", in the words of James J. Wadsworth, the American representative to the General Assembly.
The committee was originally based in the Secretariat Building in New York City but moved to the United Nations Office at Vienna in 1974.
The Secretaries of the Committee have been:
Dr. Ray K. Appleyard (UK) (1956–1961)
Dr. Francesco Sella (Italy) (1961–1974)
Dr. Dan Jacobo Beninson (Argentina) (1974–1979)
Dr. Giovanni Silini (Italy) (1980–1988)
Dr. Burton Bennett (1988 acting; 1991–2000)
Dr. Norman Gentner (2001–2004; 2005 acting)
Dr. Malcolm Crick (2005–2018)
Dr. Ferid Shannoun (2018–2019 acting)
Ms. Borislava Batandjieva-Metcalf (Bulgaria) (2019–)
Contents of UNSCEAR 2008 report
UNSCEAR has published 20 major reports. The latest is the 2010 Summary Report (14 pages), while the last full report was the 2008 Report Vol. I and Vol. II with scientific annexes (A to E).
"UNSCEAR 2008 REPORT Vol.I" main report and 2 scientific annexes
Report to the General Assembly (without scientific annexes; 24 pages)
Includes short overviews of the materials and conclusions contained in the scientific annexes
Scientific Annex
Annex A: "Medical radiation exposures" (202 pages)
Annex B: "Exposures of the public and workers from various sources of radiation" (245 pages)
Tables (downloadable) "Public.xls" (A1 to A14), "Worker.xls" (A15 to A31)
"UNSCEAR 2008 REPORT Vol.II" 3 scientific annexes
Annex C: "Radiation exposures in accidents" (49 pages)
Annex D:"Health effects due to radiation from the Chernobyl accident" (179 pages)
Annex E: "Effects of ionizing radiation on non-human biota" (97 pages)
Contents of UNSCEAR 2020/2021 report
In 2022, UNSCEAR published its latest full report, the UNSCEAR 2020/2021 Report, in Vol. I, Vol. II, Vol. III and Vol. IV with scientific annexes (A to D).
See also
European Committee on Radiation Risk
International Commission on Radiological Protection
Radiation protection
References
External links
UNSCEAR Website
UNSCEAR Publications
Radiation health effects
Nuclear organizations
United Nations General Assembly subsidiary organs
Radiation protection organizations
United Nations organizations based in Vienna
1955 establishments in New York City | United Nations Scientific Committee on the Effects of Atomic Radiation | [
"Chemistry",
"Materials_science",
"Engineering"
] | 860 | [
"Radiation health effects",
"Nuclear organizations",
"Radiation protection organizations",
"Radiation effects",
"Energy organizations",
"Radioactivity"
] |
13,693,851 | https://en.wikipedia.org/wiki/Specified%20minimum%20yield%20strength | Specified Minimum Yield Strength (SMYS) means the specified minimum yield strength for steel pipe manufactured in accordance with a listed specification1. This is a common term used in the oil and gas industry for steel pipe used under the jurisdiction of the United States Department of Transportation. It is an indication of the minimum stress a pipe may experience that will cause plastic (permanent) deformation.
The SMYS is required to determine the maximum allowable operating pressure (MAOP) of a pipeline using Barlow's formula: P = (2 × S × T)/(OD × SF), where P is pressure, OD is the pipe's outside diameter, S is the SMYS, T is the wall thickness, and SF is a safety factor.
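A minimal sketch of that calculation in code (the pipe dimensions, SMYS grade, and safety factor below are assumed example values, not figures from any standard cited here):

def maop_barlow(smys_psi, wall_in, od_in, safety_factor):
    """Maximum allowable operating pressure from Barlow's formula:
    P = (2 * S * T) / (OD * SF), with S and P in psi and T, OD in inches.
    safety_factor is greater than 1, so larger values give a lower MAOP.
    """
    return (2.0 * smys_psi * wall_in) / (od_in * safety_factor)

# Assumed example: 12.75 in OD pipe, 0.375 in wall, an SMYS of 52,000 psi
# (typical of an X52 grade), and an assumed safety factor of 1.39.
p = maop_barlow(smys_psi=52_000, wall_in=0.375, od_in=12.75, safety_factor=1.39)
print(f"MAOP = {p:.0f} psi")  # roughly 2,200 psi for these assumed inputs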
See also
History of the petroleum industry in the United States
References
ASME B31G-2012, "Manual for Determining the Remaining Strength of Corroded Pipelines", p. 2
Mechanical standards
Petroleum in the United States
Plasticity (physics) | Specified minimum yield strength | [
"Materials_science",
"Engineering"
] | 206 | [
"Deformation (mechanics)",
"Mechanical standards",
"Mechanical engineering",
"Plasticity (physics)"
] |
13,696,274 | https://en.wikipedia.org/wiki/Goldeneye%20Gas%20Platform | Goldeneye Gas Platform was an unmanned and now demolished offshore gas production platform in North Sea block 14/29, in the South Halibut basin area of the outer Moray Firth, 105 km northeast of St Fergus Gas Plant in Scotland.
Field
The field was discovered in October 1996 with well 14/29a-3. The field extends into blocks 14/28b, 20/30b and 20/4b. The reservoir is a Lower Cretaceous Captain sandstone with high rates of production, up to per day at standard conditions per well, and lies at a depth of .
Infrastructure
The jacket was a four-legged piled structure weighing 3,500 tonnes, anchored by eight piles weighing 2,500 tonnes. The jacket supported the 1,000-tonne topsides, which were designed by SLP Engineering Ltd. Production was from five wells, with well fluids separated by a single vertical separator vessel. Separated liquids were re-injected into the export gas stream without further treatment. Facilities were provided for the installation of a future produced-water coalescer and flash drum. Gas and liquid were piped ashore under well pressure to St Fergus, via a 20 inch pipeline and without the use of compressors, where the stream was processed. It operated in of water.
Production
Production started in 2004 and ceased in 2011. The field was considered as a potential site for carbon dioxide storage. The topsides and jacket were removed in September 2021 and taken to Vats, Norway, for dismantling and recycling.
Like most North Sea fields operated by Shell, it is named after a bird - in this case Bucephala clangula, a small duck found in Scotland and elsewhere.
See also
Operation Goldeneye - World War II operation involving Ian Fleming
References
North Sea energy
Natural gas platforms
Buildings and structures in Aberdeenshire
Oil and gas industry in Scotland
2004 establishments in Scotland | Goldeneye Gas Platform | [
"Engineering"
] | 367 | [
"Structural engineering",
"Natural gas platforms"
] |