Large intestine
The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in tetrapods. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon (progressing from the ascending colon to the transverse, the descending and finally the sigmoid colon) is the longest portion of the large intestine, and the terms "large intestine" and "colon" are often used interchangeably, but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal.
In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about 1.5 metres (5 feet) long, which is about one-fifth of the whole length of the human gastrointestinal tract.
Structure
The colon of the large intestine is the last part of the digestive system. It has a segmented appearance due to a series of saccules called haustra. It extracts water and salt from solid wastes before they are eliminated from the body and is the site in which the fermentation of unabsorbed material by the gut microbiota occurs. Unlike the small intestine, the colon does not play a major role in absorption of foods and nutrients. About 1.5 litres of water arrives in the colon each day.
The colon is the longest part of the large intestine and its average length in the adult human is 65 inches or 166 cm (range of 80 to 313 cm) for males, and 61 inches or 155 cm (range of 80 to 214 cm) for females.
Sections
In mammals, the large intestine consists of the cecum (including the appendix), colon (the longest part), rectum, and anal canal.
The four sections of the colon are: the ascending colon, transverse colon, descending colon, and sigmoid colon. These sections turn at the colic flexures.
The parts of the colon are either intraperitoneal or behind it in the retroperitoneum. Retroperitoneal organs, in general, do not have a complete covering of peritoneum, so they are fixed in location. Intraperitoneal organs are completely surrounded by peritoneum and are therefore mobile. Of the colon, the ascending colon, descending colon and rectum are retroperitoneal, while the cecum, appendix, transverse colon and sigmoid colon are intraperitoneal. This is important as it affects which organs can be easily accessed during surgery, such as a laparotomy.
In terms of diameter, the cecum is the widest, averaging slightly less than 9 cm in healthy individuals, and the transverse colon averages less than 6 cm in diameter. The descending and sigmoid colon are slightly narrower still. Diameters larger than certain thresholds for each colonic section can be diagnostic for megacolon.
Cecum and appendix
The cecum is the first section of the large intestine and is involved in digestion, while the appendix which develops embryologically from it, is not involved in digestion and is considered to be part of the gut-associated lymphoid tissue. The function of the appendix is uncertain, but some sources believe that it has a role in housing a sample of the gut microbiota, and is able to help to repopulate the colon with microbiota if depleted during the course of an immune reaction. The appendix has also been shown to have a high concentration of lymphatic cells.
Ascending colon
The ascending colon is the first of four main sections of the large intestine. It is connected to the small intestine by a section of bowel called the cecum. The ascending colon runs upwards through the abdominal cavity toward the transverse colon for approximately eight inches (20 cm).
One of the main functions of the colon is to remove the water and other key nutrients from waste material and recycle it. As the waste material exits the small intestine through the ileocecal valve, it will move into the cecum and then to the ascending colon where this process of extraction starts. The waste material is pumped upwards toward the transverse colon by peristalsis. The ascending colon is sometimes attached to the appendix via Gerlach's valve. In ruminants, the ascending colon is known as the spiral colon.
Taking into account all ages and sexes, colon cancer occurs here most often (41%).
Transverse colon
The transverse colon is the part of the colon from the hepatic flexure (also known as the right colic flexure, the turn of the colon by the liver) to the splenic flexure (also known as the left colic flexure, the turn of the colon by the spleen). The transverse colon hangs off the stomach, attached to it by a large fold of peritoneum called the greater omentum. On the posterior side, the transverse colon is connected to the posterior abdominal wall by a mesentery known as the transverse mesocolon.
The transverse colon is encased in peritoneum, and is therefore mobile (unlike the parts of the colon immediately before and after it).
The proximal two-thirds of the transverse colon is perfused by the middle colic artery, a branch of the superior mesenteric artery (SMA), while the latter third is supplied by branches of the inferior mesenteric artery (IMA). The "watershed" area between these two blood supplies, which represents the embryologic division between the midgut and hindgut, is an area sensitive to ischemia.
Descending colon
The descending colon is the part of the colon from the splenic flexure to the beginning of the sigmoid colon. One function of the descending colon in the digestive system is to store feces that will be emptied into the rectum. It is retroperitoneal in two-thirds of humans. In the other third, it has a (usually short) mesentery. The arterial supply comes via the left colic artery. The descending colon is also called the distal gut, as it is further along the gastrointestinal tract than the proximal gut. Gut flora are very dense in this region.
Sigmoid colon
The sigmoid colon is the part of the large intestine after the descending colon and before the rectum. The name sigmoid means S-shaped (see sigmoid; cf. sigmoid sinus). The walls of the sigmoid colon are muscular and contract to increase the pressure inside the colon, causing the stool to move into the rectum.
The sigmoid colon is supplied with blood from several branches (usually between 2 and 6) of the sigmoid arteries, a branch of the IMA. The IMA terminates as the superior rectal artery.
Sigmoidoscopy is a common diagnostic technique used to examine the sigmoid colon.
Rectum
The rectum is the last section of the large intestine. It holds the formed feces awaiting elimination via defecation.
It is about 12 cm long.
Appearance
The cecum – the first part of the large intestine
Taeniae coli – three bands of smooth muscle
Haustra – bulges caused by contraction of taeniae coli
Epiploic appendages – small fat accumulations on the viscera
The taeniae coli run the length of the large intestine. Because the taeniae coli are shorter than the large bowel itself, the colon becomes sacculated, forming the haustra of the colon, which are shelf-like intraluminal projections.
Blood supply
Arterial supply to the colon comes from branches of the superior mesenteric artery (SMA) and inferior mesenteric artery (IMA). Flow between these two systems communicates via the marginal artery of the colon that runs parallel to the colon for its entire length. Historically, a structure variously identified as the arc of Riolan or meandering mesenteric artery (of Moskowitz) was thought to connect the proximal SMA to the proximal IMA. This variably present structure would be important if either vessel were occluded. However, at least one review of the literature questions the existence of this vessel, with some experts calling for the abolition of these terms from future medical literature.
Venous drainage usually mirrors colonic arterial supply, with the inferior mesenteric vein draining into the splenic vein, and the superior mesenteric vein joining the splenic vein to form the hepatic portal vein that then enters the liver. Middle rectal veins are an exception, delivering blood to inferior vena cava and bypassing the liver.
Lymphatic drainage
Lymphatic drainage from the ascending colon and proximal two-thirds of the transverse colon is to the ileocolic lymph nodes and the superior mesenteric lymph nodes, which drain into the cisterna chyli. The lymph from the distal one-third of the transverse colon, the descending colon, the sigmoid colon, and the upper rectum drains into the inferior mesenteric and colic lymph nodes. The lower rectum to the anal canal above the pectinate line drains to the internal iliac nodes. The anal canal below the pectinate line drains into the superficial inguinal nodes. The pectinate line only roughly marks this transition.
Nerve supply
Sympathetic supply: superior & inferior mesenteric ganglia;
parasympathetic supply: vagus & sacral plexus (S2-S4)
Development
The endoderm, mesoderm and ectoderm are germ layers that develop in a process called gastrulation. Gastrulation occurs early in human development. The gastrointestinal tract is derived from these layers.
Variation
One variation on the normal anatomy of the colon occurs when extra loops form, resulting in a colon that is up to five metres longer than normal. This condition, referred to as redundant colon, typically has no direct major health consequences, though rarely volvulus occurs, resulting in obstruction and requiring immediate medical attention. A significant indirect health consequence is that use of a standard adult colonoscope is difficult and in some cases impossible when a redundant colon is present, though specialized variants on the instrument (including the pediatric variant) are useful in overcoming this problem.
Microanatomy
Colonic crypts
The wall of the large intestine is lined with simple columnar epithelium with invaginations. The invaginations are called the intestinal glands or colonic crypts.
The colon crypts are shaped like microscopic thick-walled test tubes with a central hole down the length of the tube (the crypt lumen). Four tissue sections are shown here, two cut across the long axes of the crypts and two cut parallel to the long axes. In these images the cells have been stained by immunohistochemistry to show a brown-orange color if the cells produce a mitochondrial protein called cytochrome c oxidase subunit I (CCOI). The nuclei of the cells (located at the outer edges of the cells lining the walls of the crypts) are stained blue-gray with haematoxylin. As seen in panels C and D, crypts are about 75 to about 110 cells long. Baker et al. found that the average crypt circumference is 23 cells. Thus, by the images shown here, there are an average of about 1,725 to 2,530 cells per colonic crypt. Nooteboom et al., measuring the number of cells in a small number of crypts, reported a range of 1,500 to 4,900 cells per colonic crypt. Cells are produced at the crypt base and migrate upward along the crypt axis before being shed into the colonic lumen days later. There are 5 to 6 stem cells at the bases of the crypts.
As estimated from the image in panel A, there are about 100 colonic crypts per square millimeter of the colonic epithelium. Since the average length of the human colon is 160.5 cm and the average inner circumference of the colon is 6.2 cm, the inner epithelial surface of the human colon averages about 995 cm2, comprising about 9,950,000 (close to 10 million) crypts.
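The arithmetic above is easy to check. The short Python sketch below reproduces the stated figures from the quoted measurements; the variable names and the rounded constants are illustrative, not data beyond what the text gives.

```python
# Check the colonic-crypt arithmetic quoted above (illustrative round values).
CRYPTS_PER_MM2 = 100           # crypt density estimated from panel A
COLON_LENGTH_CM = 160.5        # average human colon length
INNER_CIRCUMFERENCE_CM = 6.2   # average inner circumference

CELLS_AROUND_CRYPT = 23        # average crypt circumference in cells (Baker et al.)
CRYPT_LENGTH_CELLS = (75, 110) # cells along the crypt axis

# Inner epithelial surface area, treated as an unrolled cylinder.
area_cm2 = COLON_LENGTH_CM * INNER_CIRCUMFERENCE_CM   # ~995 cm^2
total_crypts = area_cm2 * 100 * CRYPTS_PER_MM2        # 1 cm^2 = 100 mm^2

# Cells per crypt: cells around the circumference times cells along the axis.
cells_per_crypt = [CELLS_AROUND_CRYPT * n for n in CRYPT_LENGTH_CELLS]

print(f"surface area ~{area_cm2:.0f} cm^2")      # ~995 cm^2
print(f"total crypts ~{total_crypts:,.0f}")      # ~9.95 million
print(f"cells per crypt: {cells_per_crypt}")     # [1725, 2530]
```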
In the four tissue sections shown here, many of the intestinal glands have cells with a mitochondrial DNA mutation in the CCOI gene and appear mostly white, with their main color being the blue-gray staining of the nuclei. As seen in panel B, a portion of the stem cells of three crypts appear to have a mutation in CCOI, so that 40% to 50% of the cells arising from those stem cells form a white segment in the cross cut area.
Overall, the percentage of crypts deficient for CCOI is less than 1% before age 40, but then increases linearly with age, reaching, on average, 18% in women and 23% in men by 80–84 years of age.
Crypts of the colon can reproduce by fission, as seen in panel C, where a crypt is fissioning to form two crypts, and in panel B where at least one crypt appears to be fissioning. Most crypts deficient in CCOI are in clusters of crypts (clones of crypts) with two or more CCOI-deficient crypts adjacent to each other (see panel D).
Mucosa
About 150 of the many thousands of protein-coding genes expressed in the large intestine are specific to the mucous membrane in different regions; these include CEACAM7.
Function
The large intestine absorbs water and any remaining absorbable nutrients from the food before sending the indigestible matter to the rectum. The colon absorbs vitamins that are created by the colonic bacteria, such as thiamine, riboflavin, and vitamin K (especially important as the daily ingestion of vitamin K is not normally enough to maintain adequate blood coagulation). It also compacts feces, and stores fecal matter in the rectum until it can be discharged via the anus in defecation.
The large intestine also secretes K+ and Cl-. Chloride secretion increases in cystic fibrosis.
Recycling of various nutrients takes place in the colon. Examples include fermentation of carbohydrates, short chain fatty acids, and urea cycling.
The appendix contains a small amount of mucosa-associated lymphoid tissue which gives the appendix an undetermined role in immunity. However, the appendix is known to be important in fetal life as it contains endocrine cells that release biogenic amines and peptide hormones important for homeostasis during early growth and development.
By the time the chyme has reached this tube, most nutrients and 90% of the water have been absorbed by the body. Indeed, as demonstrated by the commonality of ileostomy procedures, it is possible for many people to live without large portions of their large intestine, or even without it completely. At this point only some electrolytes like sodium, magnesium, and chloride are left, as well as indigestible parts of ingested food (e.g., a large part of ingested amylose, starch that has so far escaped digestion, and dietary fiber, which is largely indigestible carbohydrate in either soluble or insoluble form). As the chyme moves through the large intestine, most of the remaining water is removed, while the chyme is mixed with mucus and bacteria (known as gut flora), and becomes feces. The ascending colon receives fecal material as a liquid. The muscles of the colon then move the watery waste material forward and slowly absorb all the excess water, causing the stools to gradually solidify as they move along into the descending colon.
The bacteria break down some of the fiber for their own nourishment and create acetate, propionate, and butyrate as waste products, which in turn are used by the cell lining of the colon for nourishment. No protein is made available. In humans, perhaps 10% of the undigested carbohydrate thus becomes available, though this may vary with diet; in other animals, including other apes and primates, who have proportionally larger colons, more is made available, thus permitting a higher portion of plant material in the diet. The large intestine produces no digestive enzymes — chemical digestion is completed in the small intestine before the chyme reaches the large intestine. The pH in the colon varies between 5.5 and 7 (slightly acidic to neutral).
Standing gradient osmosis
Water absorption at the colon typically proceeds against a transmucosal osmotic pressure gradient. The standing gradient osmosis is the reabsorption of water against the osmotic gradient in the intestines. Cells occupying the intestinal lining pump sodium ions into the intercellular space, raising the osmolarity of the intercellular fluid. This hypertonic fluid creates an osmotic pressure that drives water into the lateral intercellular spaces by osmosis via tight junctions and adjacent cells, which then in turn moves across the basement membrane and into the capillaries, while more sodium ions are pumped again into the intercellular fluid. Although water travels down an osmotic gradient in each individual step, overall, water usually travels against the osmotic gradient due to the pumping of sodium ions into the intercellular fluid. This allows the large intestine to absorb water despite the blood in capillaries being hypotonic compared to the fluid within the intestinal lumen.
Gut flora
The large intestine houses over 700 species of bacteria that perform a variety of functions, as well as fungi, protozoa, and archaea. Species diversity varies by geography and diet. The microbes in a human distal gut often number in the vicinity of 100 trillion, and can weigh around 200 grams (0.44 pounds). This mass of mostly symbiotic microbes has recently been called the latest human organ to be "discovered" or in other words, the "forgotten organ".
The large intestine absorbs some of the products formed by the bacteria inhabiting this region. Undigested polysaccharides (fiber) are metabolized to short-chain fatty acids by bacteria in the large intestine and absorbed by passive diffusion. The bicarbonate that the large intestine secretes helps to neutralize the increased acidity resulting from the formation of these fatty acids.
These bacteria also produce large amounts of vitamins, especially vitamin K and biotin (a B vitamin), for absorption into the blood. Although this source of vitamins, in general, provides only a small part of the daily requirement, it makes a significant contribution when dietary vitamin intake is low. An individual who depends on absorption of vitamins formed by bacteria in the large intestine may become vitamin-deficient if treated with antibiotics that inhibit the vitamin producing species of bacteria as well as the intended disease-causing bacteria.
Other bacterial products include gas (flatus), which is a mixture of nitrogen and carbon dioxide, with small amounts of the gases hydrogen, methane, and hydrogen sulfide. Bacterial fermentation of undigested polysaccharides produces these. Some of the fecal odor is due to indoles, metabolized from the amino acid tryptophan. The normal flora is also essential in the development of certain tissues, including the cecum and lymphatics.
They are also involved in the production of cross-reactive antibodies. These are antibodies produced by the immune system against the normal flora, that are also effective against related pathogens, thereby preventing infection or invasion.
The two most prevalent phyla of the colon are Bacillota and Bacteroidota. The ratio between the two seems to vary widely as reported by the Human Microbiome Project. Bacteroides are implicated in the initiation of colitis and colon cancer. Bifidobacteria are also abundant, and are often described as 'friendly bacteria'.
A mucus layer protects the large intestine from attacks from colonic commensal bacteria.
Clinical significance
Disease
Common diseases and disorders of the colon include colorectal cancer and colonic polyps, inflammatory bowel disease (ulcerative colitis and Crohn's disease), diverticular disease, irritable bowel syndrome, and constipation.
Colonoscopy
Colonoscopy is the endoscopic examination of the large intestine and the distal part of the small bowel with a CCD camera or a fiber optic camera on a flexible tube passed through the anus. It can provide a visual diagnosis (e.g. ulceration, polyps) and grants the opportunity for biopsy or removal of suspected colorectal cancer lesions. Colonoscopy can remove polyps as small as one millimetre or less. Once polyps are removed, they can be studied with the aid of a microscope to determine if they are precancerous or not. It takes 15 years or fewer for a polyp to turn cancerous.
Colonoscopy is similar to sigmoidoscopy—the difference being related to which parts of the colon each can examine. A colonoscopy allows an examination of the entire colon (1200–1500 mm in length). A sigmoidoscopy allows an examination of the distal portion (about 600 mm) of the colon, which may be sufficient because benefits to cancer survival of colonoscopy have been limited to the detection of lesions in the distal portion of the colon.
A sigmoidoscopy is often used as a screening procedure for a full colonoscopy, often done in conjunction with a stool-based test such as a fecal occult blood test (FOBT), fecal immunochemical test (FIT), or multi-target stool DNA test (Cologuard) or blood-based test, SEPT9 DNA methylation test (Epi proColon). About 5% of these screened patients are referred to colonoscopy.
Virtual colonoscopy, which uses 2D and 3D imagery reconstructed from computed tomography (CT) scans or from nuclear magnetic resonance (MR) scans, is also possible, as a totally non-invasive medical test, although it is not standard and still under investigation regarding its diagnostic abilities. Furthermore, virtual colonoscopy does not allow for therapeutic maneuvers such as polyp/tumour removal or biopsy nor visualization of lesions smaller than 5 millimeters. If a growth or polyp is detected using CT colonography, a standard colonoscopy would still need to be performed. Additionally, surgeons have lately been using the term pouchoscopy to refer to a colonoscopy of the ileo-anal pouch.
Other animals
The large intestine is truly distinct only in tetrapods, in which it is almost always separated from the small intestine by an ileocaecal valve. In most vertebrates, however, it is a relatively short structure running directly to the anus, although noticeably wider than the small intestine. Although the caecum is present in most amniotes, only in mammals does the remainder of the large intestine develop into a true colon.
In some small mammals, the colon is straight, as it is in other tetrapods, but, in the majority of mammalian species, it is divided into ascending and descending portions; a distinct transverse colon is typically present only in primates. However, the taeniae coli and accompanying haustra are not found in either carnivorans or ruminants. The rectum of mammals (other than monotremes) is derived from the cloaca of other vertebrates, and is, therefore, not truly homologous with the "rectum" found in these species.
In some fish, there is no true large intestine, but simply a short rectum connecting the end of the digestive part of the gut to the cloaca. In sharks, this includes a rectal gland that secretes salt to help the animal maintain osmotic balance with the seawater. The gland somewhat resembles a caecum in structure but is not a homologous structure.
See also
Colectomy
Colonic ulcer
Large intestine (Chinese medicine)
Nitrogen cycle
The nitrogen cycle is the biogeochemical cycle by which nitrogen is converted into multiple chemical forms as it circulates among atmospheric, terrestrial, and marine ecosystems. The conversion of nitrogen can be carried out through both biological and physical processes. Important processes in the nitrogen cycle include fixation, ammonification, nitrification, and denitrification. The majority of Earth's atmosphere (78%) is atmospheric nitrogen, making it the largest source of nitrogen. However, atmospheric nitrogen has limited availability for biological use, leading to a scarcity of usable nitrogen in many types of ecosystems.
The nitrogen cycle is of particular interest to ecologists because nitrogen availability can affect the rate of key ecosystem processes, including primary production and decomposition. Human activities such as fossil fuel combustion, use of artificial nitrogen fertilizers, and release of nitrogen in wastewater have dramatically altered the global nitrogen cycle. Human modification of the global nitrogen cycle can negatively affect the natural environment system and also human health.
Processes
Nitrogen is present in the environment in a wide variety of chemical forms including organic nitrogen, ammonium (NH4+), nitrite (NO2−), nitrate (NO3−), nitrous oxide (N2O), nitric oxide (NO) and inorganic nitrogen gas (N2). Organic nitrogen may be in the form of a living organism, humus or in the intermediate products of organic matter decomposition. The processes of the nitrogen cycle transform nitrogen from one form to another. Many of those processes are carried out by microbes, either in their effort to harvest energy or to accumulate nitrogen in a form needed for their growth. For example, the nitrogenous wastes in animal urine are broken down by nitrifying bacteria in the soil to be used by plants.
Nitrogen fixation
The conversion of nitrogen gas (N2) into nitrates and nitrites through atmospheric, industrial and biological processes is called nitrogen fixation. Atmospheric nitrogen must be processed, or "fixed", into a usable form to be taken up by plants. Between 5 and 10 billion kg per year are fixed by lightning strikes, but most fixation is done by free-living or symbiotic bacteria known as diazotrophs. These bacteria have the nitrogenase enzyme that combines gaseous nitrogen with hydrogen to produce ammonia, which is converted by the bacteria into other organic compounds. Most biological nitrogen fixation occurs by the activity of molybdenum (Mo)-nitrogenase, found in a wide variety of bacteria and some Archaea. Mo-nitrogenase is a complex two-component enzyme that has multiple metal-containing prosthetic groups. An example of free-living bacteria is Azotobacter. Symbiotic nitrogen-fixing bacteria such as Rhizobium usually live in the root nodules of legumes (such as peas, alfalfa, and locust trees). Here they form a mutualistic relationship with the plant, producing ammonia in exchange for carbohydrates. Because of this relationship, legumes will often increase the nitrogen content of nitrogen-poor soils. A few non-legumes can also form such symbioses. Today, about 30% of the total fixed nitrogen is produced industrially using the Haber–Bosch process, which uses high temperatures and pressures to convert nitrogen gas and a hydrogen source (natural gas or petroleum) into ammonia.
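The two fixation routes described above can be summarized stoichiometrically. The equations below are the conventional textbook forms of the nitrogenase-catalyzed reaction and the Haber–Bosch synthesis; they are added for clarity and are not taken from the source text.

```latex
% Biological fixation by Mo-nitrogenase (overall textbook stoichiometry):
\mathrm{N_2 + 8\,H^+ + 8\,e^- + 16\,ATP \longrightarrow 2\,NH_3 + H_2 + 16\,ADP + 16\,P_i}

% Industrial fixation (Haber--Bosch; high temperature and pressure, Fe catalyst):
\mathrm{N_2 + 3\,H_2 \rightleftharpoons 2\,NH_3}
```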
Assimilation
Plants can absorb nitrate or ammonium from the soil by their root hairs. If nitrate is absorbed, it is first reduced to nitrite ions and then ammonium ions for incorporation into amino acids, nucleic acids, and chlorophyll. In plants that have a symbiotic relationship with rhizobia, some nitrogen is assimilated in the form of ammonium ions directly from the nodules. It is now known that there is a more complex cycling of amino acids between Rhizobia bacteroids and plants. The plant provides amino acids to the bacteroids so ammonia assimilation is not required and the bacteroids pass amino acids (with the newly fixed nitrogen) back to the plant, thus forming an interdependent relationship. While many animals, fungi, and other heterotrophic organisms obtain nitrogen by ingestion of amino acids, nucleotides, and other small organic molecules, other heterotrophs (including many bacteria) are able to utilize inorganic compounds, such as ammonium as sole N sources. Utilization of various N sources is carefully regulated in all organisms.
Ammonification
When a plant or animal dies or an animal expels waste, the initial form of nitrogen is organic. Bacteria or fungi convert the organic nitrogen within the remains back into ammonium (NH4+), a process called ammonification or mineralization. Enzymes involved are:
GS: Gln synthetase (cytosolic & plastidic)
GOGAT: Glu 2-oxoglutarate aminotransferase (Ferredoxin & NADH-dependent)
GDH: Glu Dehydrogenase:
Minor role in ammonium assimilation.
Important in amino acid catabolism.
Nitrification
The conversion of ammonium to nitrate is performed primarily by soil-living bacteria and other nitrifying bacteria. In the primary stage of nitrification, the oxidation of ammonium (NH4+) is performed by bacteria such as the Nitrosomonas species, which convert ammonia to nitrites (NO2−). Other bacterial species, such as Nitrobacter, are responsible for the oxidation of the nitrites (NO2−) into nitrates (NO3−). It is important for the ammonia (NH3) to be converted to nitrates or nitrites because ammonia gas is toxic to plants.
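The two stages can be written with their standard balanced stoichiometry; these equations are conventional textbook chemistry, added here for clarity rather than taken from the source.

```latex
% Stage 1 (e.g. Nitrosomonas): ammonium oxidized to nitrite
\mathrm{NH_4^+ + \tfrac{3}{2}\,O_2 \longrightarrow NO_2^- + 2\,H^+ + H_2O}

% Stage 2 (e.g. Nitrobacter): nitrite oxidized to nitrate
\mathrm{NO_2^- + \tfrac{1}{2}\,O_2 \longrightarrow NO_3^-}

% Overall: \mathrm{NH_4^+ + 2\,O_2 \longrightarrow NO_3^- + 2\,H^+ + H_2O}
```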
Due to their very high solubility and because soils are largely unable to retain anions, nitrates can enter groundwater. Elevated nitrate in groundwater is a concern for drinking water use because nitrate can interfere with blood-oxygen levels in infants and cause methemoglobinemia or blue-baby syndrome. Where groundwater recharges stream flow, nitrate-enriched groundwater can contribute to eutrophication, a process that leads to high algal population and growth, especially blue-green algal populations. While not directly toxic to fish life, like ammonia, nitrate can have indirect effects on fish if it contributes to this eutrophication. Nitrogen has contributed to severe eutrophication problems in some water bodies. Since 2006, the application of nitrogen fertilizer has been increasingly controlled in Britain and the United States. This is occurring along the same lines as control of phosphorus fertilizer, restriction of which is normally considered essential to the recovery of eutrophied waterbodies.
Denitrification
Denitrification is the reduction of nitrates back into nitrogen gas (N2), completing the nitrogen cycle. This process is performed by bacterial species such as Pseudomonas and Paracoccus, under anaerobic conditions. They use the nitrate as an electron acceptor in the place of oxygen during respiration. These facultatively (meaning optionally) anaerobic bacteria can also live in aerobic conditions. Denitrification happens in anaerobic conditions, e.g. waterlogged soils. The denitrifying bacteria use nitrates in the soil to carry out respiration and consequently produce nitrogen gas, which is inert and unavailable to plants. Denitrification occurs in free-living microorganisms as well as obligate symbionts of anaerobic ciliates.
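The reduction proceeds through a well-known chain of intermediates; the pathway below is the standard textbook sequence, added here for clarity.

```latex
% Stepwise reduction of nitrate to dinitrogen gas during denitrification:
\mathrm{NO_3^- \longrightarrow NO_2^- \longrightarrow NO \longrightarrow N_2O \longrightarrow N_2}
```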
Dissimilatory nitrate reduction to ammonium
Dissimilatory nitrate reduction to ammonium (DNRA), or nitrate/nitrite ammonification, is an anaerobic respiration process. Microbes which undertake DNRA oxidise organic matter and use nitrate as an electron acceptor, reducing it to nitrite, then ammonium (NH4+). Both denitrifying and nitrate ammonification bacteria will be competing for nitrate in the environment, although DNRA acts to conserve bioavailable nitrogen as soluble ammonium rather than producing dinitrogen gas.
Anaerobic ammonia oxidation
The ANaerobic AMMonia OXidation process is also known as the ANAMMOX process, an abbreviation coined by joining the first syllables of each of these three words. This biological process is a redox comproportionation reaction, in which ammonia (the reducing agent giving electrons) and nitrite (the oxidizing agent accepting electrons) transfer three electrons and are converted into one molecule of diatomic nitrogen gas (N2) and two water molecules. This process makes up a major proportion of nitrogen conversion in the oceans. The stoichiometrically balanced formula for the ANAMMOX chemical reaction can be written as follows, in terms of the ammonium ion (NH4+), whose conjugate base is the ammonia molecule (NH3):
NH4+ + NO2− → N2 + 2 H2O (ΔG° = −357 kJ⋅mol−1).
This is an exergonic process (here also an exothermic reaction), releasing energy, as indicated by the negative value of ΔG°, the difference in Gibbs free energy between the products of the reaction and the reagents.
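The three-electron transfer can be made explicit with half-reactions. The decomposition below is standard redox bookkeeping rather than text from the source; nitrogen moves from oxidation states −III (ammonium) and +III (nitrite) to 0 (dinitrogen).

```latex
% Oxidation half-reaction (ammonium as electron donor, N: -III -> 0):
\mathrm{NH_4^+ \longrightarrow \tfrac{1}{2}\,N_2 + 4\,H^+ + 3\,e^-}

% Reduction half-reaction (nitrite as electron acceptor, N: +III -> 0):
\mathrm{NO_2^- + 4\,H^+ + 3\,e^- \longrightarrow \tfrac{1}{2}\,N_2 + 2\,H_2O}

% Sum: \mathrm{NH_4^+ + NO_2^- \longrightarrow N_2 + 2\,H_2O}
```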
Other processes
Though nitrogen fixation is the primary source of plant-available nitrogen in most ecosystems, in areas with nitrogen-rich bedrock, the breakdown of this rock also serves as a nitrogen source. Nitrate reduction is also part of the iron cycle: under anoxic conditions, Fe(II) can donate an electron to NO3− and is oxidized to Fe(III), while NO3− is reduced to NO2−, N2O, N2, or NH4+, depending on the conditions and microbial species involved. The fecal plumes of cetaceans also act as a junction in the marine nitrogen cycle, concentrating nitrogen in the epipelagic zones of ocean environments before its dispersion through various marine layers, ultimately enhancing oceanic primary productivity.
Marine nitrogen cycle
The nitrogen cycle is an important process in the ocean as well. While the overall cycle is similar, there are different players and modes of transfer for nitrogen in the ocean. Nitrogen enters the water through precipitation, runoff, or as N2 from the atmosphere. Nitrogen cannot be utilized by phytoplankton as N2, so it must undergo nitrogen fixation, which is performed predominantly by cyanobacteria. Without supplies of fixed nitrogen entering the marine cycle, the fixed nitrogen would be used up in about 2000 years. Phytoplankton need nitrogen in biologically available forms for the initial synthesis of organic matter. Ammonia and urea are released into the water by excretion from plankton. Nitrogen sources are removed from the euphotic zone by the downward movement of the organic matter. This can occur from sinking of phytoplankton, vertical mixing, or sinking of waste of vertical migrators. The sinking results in ammonia being introduced at lower depths below the euphotic zone. Bacteria are able to convert ammonia to nitrite and nitrate, but they are inhibited by light, so this must occur below the euphotic zone. Ammonification or mineralization is performed by bacteria to convert organic nitrogen to ammonia. Nitrification can then occur to convert the ammonium to nitrite and nitrate. Nitrate can be returned to the euphotic zone by vertical mixing and upwelling where it can be taken up by phytoplankton to continue the cycle. N2 can be returned to the atmosphere through denitrification.
Ammonium is thought to be the preferred source of fixed nitrogen for phytoplankton because its assimilation does not involve a redox reaction and therefore requires little energy. Nitrate requires a redox reaction for assimilation but is more abundant so most phytoplankton have adapted to have the enzymes necessary to undertake this reduction (nitrate reductase). There are a few notable and well-known exceptions that include most Prochlorococcus and some Synechococcus that can only take up nitrogen as ammonium.
The nutrients in the ocean are not uniformly distributed. Areas of upwelling provide supplies of nitrogen from below the euphotic zone. Coastal zones provide nitrogen from runoff, and upwelling occurs readily along the coast. However, the rate at which nitrogen can be taken up by phytoplankton is decreased in oligotrophic waters year-round and in temperate waters in the summer, resulting in lower primary production. The distribution of the different forms of nitrogen varies throughout the oceans as well.
Nitrate is depleted in near-surface water except in upwelling regions. Coastal upwelling regions usually have high nitrate and chlorophyll levels as a result of the increased production. However, there are regions of high surface nitrate but low chlorophyll that are referred to as HNLC (high nitrogen, low chlorophyll) regions. The best explanation for HNLC regions relates to iron scarcity in the ocean, which may play an important part in ocean dynamics and nutrient cycles. The input of iron varies by region and is delivered to the ocean by dust (from dust storms) and leached out of rocks. Iron is under consideration as the true limiting element to ecosystem productivity in the ocean.
Ammonium and nitrite show a maximum concentration at 50–80 m (lower end of the euphotic zone) with decreasing concentration below that depth. This distribution can be accounted for by the fact that nitrite and ammonium are intermediate species. They are both rapidly produced and consumed through the water column. The amount of ammonium in the ocean is about 3 orders of magnitude less than nitrate. Between ammonium, nitrite, and nitrate, nitrite has the fastest turnover rate. It can be produced during nitrate assimilation, nitrification, and denitrification; however, it is immediately consumed again.
New vs. regenerated nitrogen
Nitrogen entering the euphotic zone is referred to as new nitrogen because it is newly arrived from outside the productive layer. The new nitrogen can come from below the euphotic zone or from outside sources. Outside sources are upwelling from deep water and nitrogen fixation. If the organic matter is eaten, respired, delivered to the water as ammonia, and re-incorporated into organic matter by phytoplankton it is considered recycled/regenerated production.
New production is an important component of the marine environment. One reason is that only continual input of new nitrogen can determine the total capacity of the ocean to produce a sustainable fish harvest. Harvesting fish from regenerated nitrogen areas will lead to a decrease in nitrogen and therefore a decrease in primary production. This will have a negative effect on the system. However, if fish are harvested from areas of new nitrogen the nitrogen will be replenished.
Future acidification
When additional carbon dioxide (CO2) is absorbed by the ocean, it reacts with water to form carbonic acid (H2CO3), which breaks down into bicarbonate (HCO3−) and hydrogen (H+) ions; this reduces bioavailable carbonate and decreases ocean pH. This is likely to enhance nitrogen fixation by diazotrophs, which utilize H+ ions to convert nitrogen into bioavailable forms such as ammonia (NH3) and ammonium ions (NH4+). However, as pH decreases and more ammonia is converted to ammonium ions, there is less oxidation of ammonia to nitrite (NO2−), resulting in an overall decrease in nitrification and denitrification. This in turn would lead to a further build-up of fixed nitrogen in the ocean, with the potential consequence of eutrophication.
Human influences on the nitrogen cycle
As a result of extensive cultivation of legumes (particularly soy, alfalfa, and clover), growing use of the Haber–Bosch process in the production of chemical fertilizers, and pollution emitted by vehicles and industrial plants, human beings have more than doubled the annual transfer of nitrogen into biologically available forms. In addition, humans have significantly contributed to the transfer of nitrogen trace gases from Earth to the atmosphere and from the land to aquatic systems. Human alterations to the global nitrogen cycle are most intense in developed countries and in Asia, where vehicle emissions and industrial agriculture are highest.
Generation of Nr, reactive nitrogen, has increased more than 10-fold in the past century due to global industrialisation. This form of nitrogen follows a cascade through the biosphere via a variety of mechanisms, and is accumulating as the rate of its generation is greater than the rate of denitrification.
Nitrous oxide (N2O) has risen in the atmosphere as a result of agricultural fertilization, biomass burning, cattle and feedlots, and industrial sources. N2O has deleterious effects in the stratosphere, where it breaks down and acts as a catalyst in the destruction of atmospheric ozone. Nitrous oxide is also a greenhouse gas and is currently the third largest contributor to global warming, after carbon dioxide and methane. While not as abundant in the atmosphere as carbon dioxide, it is, for an equivalent mass, nearly 300 times more potent in its ability to warm the planet.
Ammonia (NH3) in the atmosphere has tripled as the result of human activities. It is a reactant in the atmosphere, where it acts as an aerosol, decreasing air quality and clinging to water droplets, eventually resulting in nitric acid (HNO3) that produces acid rain. Atmospheric ammonia and nitric acid also damage respiratory systems.
The very high temperature of lightning naturally produces small amounts of NOx, NH3, and HNO3, but high-temperature combustion has contributed to a 6- or 7-fold increase in the flux of NOx to the atmosphere. Its production is a function of combustion temperature: the higher the temperature, the more NOx is produced. Fossil fuel combustion is a primary contributor, but so are biofuels and even the burning of hydrogen. However, the rate at which hydrogen is directly injected into the combustion chambers of internal combustion engines can be controlled to prevent the higher combustion temperatures that produce NOx.
Ammonia and nitrous oxides actively alter atmospheric chemistry. They are precursors of tropospheric (lower atmosphere) ozone production, which contributes to smog and acid rain, damages plants and increases nitrogen inputs to ecosystems. Ecosystem processes can increase with nitrogen fertilization, but anthropogenic input can also result in nitrogen saturation, which weakens productivity and can damage the health of plants, animals, fish, and humans.
Decreases in biodiversity can also result if higher nitrogen availability increases nitrogen-demanding grasses, causing a degradation of nitrogen-poor, species-diverse heathlands.
Consequence of human modification of the nitrogen cycle
Impacts on natural systems
Increasing levels of nitrogen deposition are shown to have several adverse effects on both terrestrial and aquatic ecosystems. Nitrogen gases and aerosols can be directly toxic to certain plant species, affecting the aboveground physiology and growth of plants near large point sources of nitrogen pollution. Changes to plant species may also occur as nitrogen compound accumulation increases availability in a given ecosystem, eventually changing the species composition, plant diversity, and nitrogen cycling. Ammonia and ammonium – two reduced forms of nitrogen – can be detrimental over time due to increased toxicity toward sensitive species of plants, particularly those that are accustomed to using nitrate as their source of nitrogen, causing poor development of their roots and shoots. Increased nitrogen deposition also leads to soil acidification, which increases base cation leaching in the soil and amounts of aluminum and other potentially toxic metals, along with decreasing the amount of nitrification occurring and increasing plant-derived litter. Due to the ongoing changes caused by high nitrogen deposition, an environment's susceptibility to ecological stress and disturbance – such as pests and pathogens – may increase, thus making it less resilient to situations that otherwise would have little impact on its long-term vitality.
Additional risks posed by increased availability of inorganic nitrogen in aquatic ecosystems include water acidification; eutrophication of fresh and saltwater systems; and toxicity issues for animals, including humans. Eutrophication often leads to lower dissolved oxygen levels in the water column, including hypoxic and anoxic conditions, which can cause death of aquatic fauna. Relatively sessile benthos, or bottom-dwelling creatures, are particularly vulnerable because of their lack of mobility, though large fish kills are not uncommon. Oceanic dead zones near the mouth of the Mississippi in the Gulf of Mexico are a well-known example of algal bloom-induced hypoxia. The New York Adirondack Lakes, Catskills, Hudson Highlands, Rensselaer Plateau and parts of Long Island display the impact of nitric acid rain deposition, resulting in the killing of fish and many other aquatic species.
Ammonia () is highly toxic to fish, and the level of ammonia discharged from wastewater treatment facilities must be closely monitored. Nitrification via aeration before discharge is often desirable to prevent fish deaths. Land application can be an attractive alternative to aeration.
Impacts on human health: nitrate accumulation in drinking water
Leakage of Nr (reactive nitrogen) from human activities can cause nitrate accumulation in the natural water environment, which can create harmful impacts on human health. Excessive use of N-fertilizer in agriculture has been a significant source of nitrate pollution in groundwater and surface water. Due to its high solubility and low retention by soil, nitrate can easily escape from the subsoil layer to the groundwater, causing nitrate pollution. Some other non-point sources for nitrate pollution in groundwater originate from livestock feeding, animal and human contamination, and municipal and industrial waste. Since groundwater often serves as the primary domestic water supply, nitrate pollution can be extended from groundwater to surface and drinking water during potable water production, especially for small community water supplies, where poorly regulated and unsanitary waters are used.
The WHO standard for drinking water is 50 mg L−1 for short-term exposure, and 3 mg L−1 for chronic effects. Once it enters the human body, nitrate can react with organic compounds through nitrosation reactions in the stomach to form nitrosamines and nitrosamides, which are involved in some types of cancers (e.g., oral cancer and gastric cancer).
Impacts on human health: air quality
Human activities have also dramatically altered the global nitrogen cycle by producing nitrogenous gases associated with global atmospheric nitrogen pollution. There are multiple sources of atmospheric reactive nitrogen (Nr) fluxes. Agricultural sources of reactive nitrogen can produce atmospheric emission of ammonia (NH3), nitrogen oxides (NOx) and nitrous oxide (N2O). Combustion processes in energy production, transportation, and industry can also form new reactive nitrogen via the emission of NOx, an unintentional waste product. When those reactive nitrogens are released into the lower atmosphere, they can induce the formation of smog, particulate matter, and aerosols, all of which are major contributors to adverse effects on human health from air pollution. In the atmosphere, NO2 can be oxidized to nitric acid (HNO3), and it can further react with NH3 to form ammonium nitrate (NH4NO3), which facilitates the formation of particulate nitrate. Moreover, NH3 can react with other acid gases (sulfuric and hydrochloric acids) to form ammonium-containing particles, which are the precursors for the secondary organic aerosol particles in photochemical smog.
Thermal conductivity and resistivity
The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by k, λ, or κ and is measured in W·m−1·K−1.
Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials such as mineral wool or Styrofoam. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity.
The defining equation for thermal conductivity is q = −k∇T, where q is the heat flux, k is the thermal conductivity, and ∇T is the temperature gradient. This is known as Fourier's law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in materials which are anisotropic.
Definition
Simple definition
Consider a solid material placed between two environments of different temperatures. Let T1 be the temperature at x = 0 and T2 be the temperature at x = L, and suppose T2 > T1. An example of this scenario is a building on a cold winter day: the solid material in this case is the building wall, separating the cold outdoor environment from the warm indoor environment.
According to the second law of thermodynamics, heat will flow from the hot environment to the cold one as the temperature difference is equalized by diffusion. This is quantified in terms of a heat flux q, which gives the rate, per unit area, at which heat flows in a given direction (in this case the minus x-direction). In many materials, q is observed to be directly proportional to the temperature difference and inversely proportional to the separation distance L:
q = −k (T2 − T1) / L.
The constant of proportionality k is the thermal conductivity; it is a physical property of the material. In the present scenario, since T2 > T1, heat flows in the minus x-direction and q is negative, which in turn means that k > 0. In general, k is always defined to be positive. The same definition of k can also be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation, are eliminated or accounted for.
The preceding derivation assumes that k does not change significantly as the temperature is varied from T1 to T2. Cases in which the temperature variation of k is non-negligible must be addressed using the more general definition of k discussed below.
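As a concrete check of the relation above, the short Python sketch below evaluates the heat flux through a wall; the material values are illustrative assumptions, not data from this article.

```python
def heat_flux(k: float, t_hot: float, t_cold: float, thickness: float) -> float:
    """Steady-state heat flux magnitude (W/m^2) through a slab: q = k * dT / L.

    The sign convention (heat flows from hot to cold) is handled by the
    argument ordering rather than an explicit minus sign.
    """
    return k * (t_hot - t_cold) / thickness

# Illustrative wall on a winter day: 20 cm of brick (k ~ 0.6 W/(m*K), assumed),
# 20 degC inside, 0 degC outside.
q = heat_flux(k=0.6, t_hot=20.0, t_cold=0.0, thickness=0.20)
print(f"heat flux ~ {q:.0f} W/m^2")   # ~60 W/m^2; total loss = q * wall area
```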
General definition
Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses.
Energy flow due to thermal conduction is classified as heat and is quantified by the vector q(r, t), which gives the heat flux at position r and time t. According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that q(r, t) is proportional to the gradient of the temperature field T(r, t), i.e.
q(r, t) = −k ∇T(r, t),
where the constant of proportionality, k, is the thermal conductivity. This is called Fourier's law of heat conduction. Despite its name, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities q(r, t) and T(r, t). As such, its usefulness depends on the ability to determine k for a given material under given conditions. The constant k itself usually depends on T(r, t) and thereby implicitly on space and time. An explicit space and time dependence could also occur if the material is inhomogeneous or changing with time.
In some solids, thermal conduction is anisotropic, i.e. the heat flux is not always parallel to the temperature gradient. To account for such behavior, a tensorial form of Fourier's law must be used:
q(r, t) = −κ ⋅ ∇T(r, t),
where κ is a symmetric, second-rank tensor called the thermal conductivity tensor.
An implicit assumption in the above description is the presence of local thermodynamic equilibrium, which allows one to define a temperature field T(r, t). This assumption could be violated in systems that are unable to attain local equilibrium, as might happen in the presence of strong nonequilibrium driving or long-ranged interactions.
Other quantities
In engineering practice, it is common to work in terms of quantities which are derivative to thermal conductivity and implicitly take into account design-specific features such as component dimensions.
For instance, thermal conductance is defined as the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L, the conductance is kA/L, measured in W⋅K−1. The relationship between thermal conductivity and conductance is analogous to the relationship between electrical conductivity and electrical conductance.
Thermal resistance is the inverse of thermal conductance. It is a convenient measure to use in multicomponent design since thermal resistances are additive when occurring in series.
There is also a measure known as the heat transfer coefficient: the quantity of heat that passes per unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. In ASTM C168-15, this area-independent quantity is referred to as the "thermal conductance". The reciprocal of the heat transfer coefficient is thermal insulance. In summary, for a plate of thermal conductivity k, area A and thickness L (a numerical sketch follows the list),
thermal conductance = kA/L, measured in W⋅K−1.
thermal resistance = L/(kA), measured in K⋅W−1.
heat transfer coefficient = k/L, measured in W⋅K−1⋅m−2.
thermal insulance = L/k, measured in K⋅m2⋅W−1.
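The sketch below computes all four quantities for a single plate, making the reciprocal relationships explicit; the plate dimensions and conductivity are assumed for illustration.

```python
# Derived quantities for a plate of conductivity k (W/(m*K)),
# area A (m^2) and thickness L (m). Values are illustrative.
k, A, L = 50.0, 2.0, 0.01   # e.g. a thin steel-like plate (assumed k)

conductance = k * A / L     # W/K
resistance = L / (k * A)    # K/W  (= 1 / conductance; adds in series)
htc = k / L                 # W/(K*m^2), heat transfer coefficient
insulance = L / k           # K*m^2/W (= 1 / htc)

print(conductance, resistance, htc, insulance)
# 10000.0 0.0001 5000.0 0.0002
```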
The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow.
An additional term, thermal transmittance, quantifies the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is also used.
Finally, thermal diffusivity α combines thermal conductivity with density and specific heat:
α = k / (ρ cp).
As such, it quantifies the thermal inertia of a material, i.e. the relative difficulty in heating a material to a given temperature using heat sources applied at the boundary.
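For a numerical feel, here is a minimal sketch using handbook-style values for copper; the inputs are assumptions for illustration, not quoted from this article.

```python
# Thermal diffusivity alpha = k / (rho * c_p).
k_copper = 401.0   # W/(m*K), assumed room-temperature conductivity
rho = 8960.0       # kg/m^3, density
c_p = 385.0        # J/(kg*K), specific heat

alpha = k_copper / (rho * c_p)
print(f"alpha ~ {alpha:.2e} m^2/s")   # ~1.16e-04 m^2/s
```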
Units
In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). Some papers report in watts per centimeter-kelvin [W/(cm⋅K)].
However, physicists use other convenient units as well, e.g., in cgs units, where esu/(cm⋅s⋅K) is used.
The Lorentz number, defined as L = κ/(σT), is a quantity independent of the carrier density and the scattering mechanism. Its value for a gas of non-interacting electrons (typical carriers in good metallic conductors) is 2.72×10−13 esu/K2, or equivalently, 2.44×10−8 W⋅Ω/K2.
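Rearranged as κ = LσT, the Lorentz number gives a quick estimate of a metal's thermal conductivity from its electrical conductivity. The sketch below does this for copper; the conductivity and temperature are assumed values, and the Wiedemann–Franz estimate is only approximate for real metals.

```python
# Estimate thermal conductivity via the Wiedemann-Franz law: kappa = L * sigma * T.
L_NUMBER = 2.44e-8      # W*Ohm/K^2, Lorentz number from the text
sigma_copper = 5.96e7   # S/m, assumed room-temperature electrical conductivity
T = 300.0               # K

kappa = L_NUMBER * sigma_copper * T
print(f"kappa ~ {kappa:.0f} W/(m*K)")  # ~436; measured copper is ~400 W/(m*K)
```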
In imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F).
The dimension of thermal conductivity is M1L1T−3Θ−1, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ).
Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of measures such as the R-value (resistance) and the U-value (transmittance or conductance). Although related to the thermal conductivity of a material used in an insulation product or assembly, R- and U-values are measured per unit area, and depend on the specified thickness of the product or assembly.
Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry.
Measurement
There are several ways to measure thermal conductivity; each is suitable for a limited range of materials. Broadly speaking, there are two categories of measurement techniques: steady-state and transient. Steady-state techniques infer the thermal conductivity from measurements on the state of a material once a steady-state temperature profile has been reached, whereas transient techniques operate on the instantaneous state of a system during the approach to steady state. Lacking an explicit time component, steady-state techniques do not require complicated signal analysis (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed, and the time required to reach steady state precludes rapid measurement.
In comparison with solid materials, the thermal properties of fluids are more difficult to study experimentally. This is because in addition to thermal conduction, convective and radiative energy transport are usually present unless measures are taken to limit these processes. The formation of an insulating boundary layer can also result in an apparent reduction in the thermal conductivity.
Experimental values
The thermal conductivities of common substances span at least four orders of magnitude. Gases generally have low thermal conductivity, and pure metals have high thermal conductivity. For example, under standard conditions the thermal conductivity of copper is over 10,000 times that of air.
Of all materials, allotropes of carbon, such as graphite and diamond, are usually credited with having the highest thermal conductivities at room temperature. The thermal conductivity of natural diamond at room temperature is several times higher than that of a highly conductive metal such as copper (although the precise value varies depending on the diamond type).
Thermal conductivities of selected substances are tabulated below; an expanded list can be found in the list of thermal conductivities. These values are illustrative estimates only, as they do not account for measurement uncertainties or variability in material definitions.
Influencing factors
Temperature
The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply. In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K.
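As an added numerical check (handbook values, approximate), the Wiedemann–Franz product LσT reproduces copper's room-temperature conductivity to within about 10%.

```python
# Wiedemann-Franz estimate for copper: kappa ~ L * sigma * T
L = 2.44e-8     # Lorenz number, W*Ohm/K^2
sigma = 5.96e7  # electrical conductivity of copper, S/m (handbook value)
T = 300.0       # K

print(L * sigma * T)  # ~436 W/(m*K); the measured value is ~401 W/(m*K)
```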
On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high-quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects.
Chemical phase
When a material undergoes a phase change (e.g. from solid to liquid), the thermal conductivity may change abruptly. For instance, when ice melts to form liquid water at 0 °C, the thermal conductivity changes from 2.18 W/(m⋅K) to 0.56 W/(m⋅K).
Even more dramatically, the thermal conductivity of a fluid diverges in the vicinity of the vapor-liquid critical point.
Thermal anisotropy
Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the c axis and 32 W/(m⋅K) along the a axis.
Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures.
When anisotropy is present, the direction of heat flow may differ from the direction of the thermal gradient.
Electrical conductivity
In metals, thermal conductivity is approximately correlated with electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms.
Magnetic field
The influence of magnetic fields on thermal conductivity is known as the thermal Hall effect or Righi–Leduc effect.
Gaseous phases
In the absence of convection, air and other gases are good insulators. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which obstruct heat conduction pathways. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by trapping air in pores, pockets, or voids.
Low density gases, such as hydrogen and helium typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double paned windows) to improve their insulation characteristics.
The thermal conductivity through bulk materials in porous or granular form is governed by the type of gas in the gaseous phase, and its pressure. At low pressures, the thermal conductivity of a gaseous phase is reduced, with this behaviour governed by the Knudsen number, defined as Kn = l/d, where l is the mean free path of gas molecules and d is the typical gap size of the space filled by the gas. In a granular material d corresponds to the characteristic size of the gaseous phase in the pores or intergranular spaces.
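A small added sketch of this criterion (the mean free path value is the commonly quoted figure for air at ambient conditions):

```python
def knudsen_number(mean_free_path_m, gap_size_m):
    """Kn = l / d; Kn of order 1 or larger signals suppressed gas-phase conduction."""
    return mean_free_path_m / gap_size_m

l_air = 68e-9  # mean free path of air at ~1 atm and room temperature (~68 nm)
print(knudsen_number(l_air, 1e-3))   # ~7e-05: continuum regime, bulk conductivity
print(knudsen_number(l_air, 50e-9))  # ~1.4: pore size ~ mean free path (e.g. aerogels)
```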
Isotopic purity
The thermal conductivity of a crystal can depend strongly on isotopic purity, assuming other lattice defects are negligible. A notable example is diamond: at a temperature of around 100 K the thermal conductivity increases from 10,000 W·m⁻¹·K⁻¹ for natural type IIa diamond (98.9% ¹²C), to 41,000 W·m⁻¹·K⁻¹ for 99.9% enriched synthetic diamond. A value of 200,000 W·m⁻¹·K⁻¹ is predicted for 99.999% ¹²C at 80 K, assuming an otherwise pure crystal. The thermal conductivity of 99% isotopically enriched cubic boron nitride is ~1400 W·m⁻¹·K⁻¹, which is 90% higher than that of natural boron nitride.
Molecular origins
The molecular mechanisms of thermal conduction vary among different materials, and in general depend on details of the microscopic structure and molecular interactions. As such, thermal conductivity is difficult to predict from first-principles. Any expressions for thermal conductivity which are exact and general, e.g. the Green-Kubo relations, are difficult to apply in practice, typically consisting of averages over multiparticle correlation functions. A notable exception is a monatomic dilute gas, for which a well-developed theory exists expressing thermal conductivity accurately and explicitly in terms of molecular parameters.
In a gas, thermal conduction is mediated by discrete molecular collisions. In a simplified picture of a solid, thermal conduction occurs by two mechanisms: 1) the migration of free electrons and 2) lattice vibrations (phonons). The first mechanism dominates in pure metals and the second in non-metallic solids. In liquids, by contrast, the precise microscopic mechanisms of thermal conduction are poorly understood.
Gases
In a simplified model of a dilute monatomic gas, molecules are modeled as rigid spheres which are in constant motion, colliding elastically with each other and with the walls of their container. Consider such a gas at temperature T and with density ρ, specific heat c_v and molecular mass m. Under these assumptions, an elementary calculation yields for the thermal conductivity

k = βρλc_v √(2k_BT/(πm)),

where β is a numerical constant of order 1, k_B is the Boltzmann constant, and λ is the mean free path, which measures the average distance a molecule travels between collisions. Since λ is inversely proportional to density, this equation predicts that thermal conductivity is independent of density for fixed temperature. The explanation is that increasing density increases the number of molecules which carry energy but decreases the average distance a molecule can travel before transferring its energy to a different molecule: these two effects cancel out. For most gases, this prediction agrees well with experiments at pressures up to about 10 atmospheres. At higher densities, the simplifying assumption that energy is only transported by the translational motion of particles no longer holds, and the theory must be modified to account for the transfer of energy across a finite distance at the moment of collision between particles, as well as the locally non-uniform density in a high density gas. This modification has been carried out, yielding Revised Enskog Theory, which predicts a density dependence of the thermal conductivity in dense gases.
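The added sketch below evaluates this expression for argon at ambient conditions; β = 1 and a hard-sphere diameter of 3.4 Å are assumptions of the illustration, so only order-of-magnitude agreement with the measured value should be expected.

```python
import math

# Elementary estimate: k = beta * rho * lam * c_v * sqrt(2*k_B*T/(pi*m))
k_B = 1.380649e-23        # J/K
T, P = 300.0, 101325.0    # K, Pa
m = 39.948 * 1.66054e-27  # mass of an argon atom, kg
d = 3.4e-10               # assumed hard-sphere diameter, m (rough)

n = P / (k_B * T)                                # number density, 1/m^3
rho = n * m                                      # mass density, kg/m^3
lam = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)  # mean free path, m
c_v = 1.5 * k_B / m                              # monatomic c_v per kg

k = 1.0 * rho * lam * c_v * math.sqrt(2 * k_B * T / (math.pi * m))  # beta = 1
print(f"k ~ {k:.4f} W/(m*K)")  # ~0.008; the measured argon value is ~0.018
```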
Typically, experiments show a more rapid increase with temperature than k ∝ √T (here, λ is independent of T). This failure of the elementary theory can be traced to the oversimplified "hard sphere" model, which both ignores the "softness" of real molecules, and the attractive forces present between real molecules, such as dispersion forces.
To incorporate more complex interparticle interactions, a systematic approach is necessary. One such approach is provided by Chapman–Enskog theory, which derives explicit expressions for thermal conductivity starting from the Boltzmann equation. The Boltzmann equation, in turn, provides a statistical description of a dilute gas for generic interparticle interactions. For a monatomic gas, expressions for k derived in this way take the form

k = (25/32) · (√(πmk_BT)/(πσ²Ω(T))) · c_v,

where σ is an effective particle diameter and Ω(T) is a function of temperature whose explicit form depends on the interparticle interaction law. For rigid elastic spheres, Ω(T) is independent of T and very close to 1. More complex interaction laws introduce a weak temperature dependence. The precise nature of the dependence is not always easy to discern, however, as Ω(T) is defined as a multi-dimensional integral which may not be expressible in terms of elementary functions, but must be evaluated numerically. However, for particles interacting through a Mie potential (a generalisation of the Lennard-Jones potential) highly accurate correlations for Ω(T) in terms of reduced units have been developed.
An alternate, equivalent way to present the result is in terms of the gas viscosity μ, which can also be calculated in the Chapman–Enskog approach:

k = f μ c_v,

where f is a numerical factor which in general depends on the molecular model. For smooth spherically symmetric molecules, however, f is very close to 2.5, not deviating by more than 1% for a variety of interparticle force laws. Since k, μ, and c_v are each well-defined physical quantities which can be measured independent of each other, this expression provides a convenient test of the theory. For monatomic gases, such as the noble gases, the agreement with experiment is fairly good.
For gases whose molecules are not spherically symmetric, the expression k = f μ c_v still holds. In contrast with spherically symmetric molecules, however, f varies significantly depending on the particular form of the interparticle interactions: this is a result of the energy exchanges between the internal and translational degrees of freedom of the molecules. An explicit treatment of this effect is difficult in the Chapman–Enskog approach. Alternately, the approximate expression f = (1/4)(9γ − 5) was suggested by Eucken, where γ is the heat capacity ratio of the gas.
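The sketch below (added) runs this consistency test for argon with approximate handbook values, and also evaluates Eucken's expression for a diatomic gas; the specific numbers are assumptions of the illustration.

```python
# Consistency test k = f * mu * c_v for argon, using measured values near 300 K.
R = 8.314462       # J/(mol*K)
M = 0.039948       # kg/mol, argon
c_v = 1.5 * R / M  # monatomic specific heat per kg: ~312 J/(kg*K)

mu = 22.7e-6         # viscosity of argon, Pa*s (handbook, ~300 K)
k_measured = 0.0177  # thermal conductivity of argon, W/(m*K)

print(k_measured / (mu * c_v))  # f ~ 2.5, as the theory predicts

# Eucken's approximation for a diatomic gas (gamma = 7/5):
gamma = 1.4
print((9 * gamma - 5) / 4)  # f ~ 1.9
```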
The entirety of this section assumes the mean free path is small compared with macroscopic (system) dimensions. In extremely dilute gases this assumption fails, and thermal conduction is described instead by an apparent thermal conductivity which decreases with density. Ultimately, as the density goes to 0 the system approaches a vacuum, and thermal conduction ceases entirely.
Liquids
The exact mechanisms of thermal conduction are poorly understood in liquids: there is no molecular picture which is both simple and accurate. An example of a simple but very rough theory is that of Bridgman, in which a liquid is ascribed a local molecular structure similar to that of a solid, i.e. with molecules located approximately on a lattice. Elementary calculations then lead to the expression

k = 3(N_A/V)^(2/3) k_B v_s,

where N_A is the Avogadro constant, V is the volume of a mole of liquid, and v_s is the speed of sound in the liquid. This is commonly called Bridgman's equation.
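A quick added evaluation of Bridgman's equation for water (the molar volume and speed of sound are approximate room-temperature values):

```python
# Bridgman's equation: k = 3 * (N_A / V)**(2/3) * k_B * v_s
N_A = 6.02214e23    # 1/mol
k_B = 1.380649e-23  # J/K
V = 18.0e-6         # molar volume of liquid water, m^3/mol (approx.)
v_s = 1480.0        # speed of sound in water, m/s (approx., 20 degC)

k = 3 * (N_A / V) ** (2.0 / 3.0) * k_B * v_s
print(f"k ~ {k:.2f} W/(m*K)")  # ~0.63; the measured value for water is ~0.6
```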
Metals
For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity c, which, in this case, is proportional to T. So

k = k0T

with k0 a constant. For pure metals, k0 is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so the mean free path l and, consequently, the thermal conductivity k are small. Therefore, alloys, such as stainless steel, can be used for thermal insulation.
Lattice waves, phonons, in dielectric solids
Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (i.e., phonons). This transport mechanism is theorized to be limited by the elastic scattering of acoustic phonons at lattice defects. This has been confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were found to be limited by "internal boundary scattering" to length scales of 10⁻² cm to 10⁻³ cm.
The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length l is defined as:

l = Vg t,
where t is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, Vlong is much greater than Vtrans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons.
Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering.
Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity λL is small.
Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3pq when p is the number of primitive cells with q atoms/unit cell. From these only 3p are associated with the acoustic modes, the remaining 3p(q − 1) are accommodated through the optical branches. This implies that structures with larger p and q contain a greater number of optical modes and a reduced λL.
From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases λL. This was done by assuming that the relaxation time τ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity in high temperatures accordingly.
Describing anharmonic effects is complicated because an exact treatment as in the harmonic case is not possible, and phonons are no longer exact eigensolutions to the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as the phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity.
Only when the phonon number ⟨n⟩ deviates from the equilibrium value ⟨n⟩0 can a thermal current arise, as stated in the following expression

Qx = (1/V) Σq,j ħω(⟨n⟩ − ⟨n⟩0)vx,

where v is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ⟨n⟩ in a particular region. The number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation

d⟨n⟩/dt = (∂⟨n⟩/∂t)diff + (∂⟨n⟩/∂t)decay
states this. When steady-state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number also stays constant. Time variation due to phonon decay is described with a relaxation time (τ) approximation

(∂⟨n⟩/∂t)decay = −(⟨n⟩ − ⟨n⟩0)/τ,

which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady-state conditions and local thermal equilibrium are assumed, we get the following equation

(∂⟨n⟩/∂t)diff = −vx(∂⟨n⟩0/∂T)(∂T/∂x).
Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity λL can be determined. The temperature dependence for λL originates from the variety of processes, whose significance for λL depends on the temperature range of interest. Mean free path is one factor that determines the temperature dependence for λL, as stated in the following equation

λL = (1/3)CvΛ,

where Λ is the phonon mean free path and C denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that ⟨vx²⟩ = (1/3)v² for cubic or isotropic systems and Λ = vτ.
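As an added order-of-magnitude sketch of this kinetic formula (the silicon figures below are rough literature values and only indicative):

```python
# Phonon-gas estimate for silicon at room temperature: lambda_L = C*v*Lambda/3
C = 1.66e6    # volumetric heat capacity, J/(m^3*K) (approx.)
v = 6400.0    # average phonon group velocity, m/s (approx.)
mfp = 4.0e-8  # effective phonon mean free path, m (approx., ~40 nm)

print(C * v * mfp / 3.0)  # ~142 W/(m*K); measured silicon is ~150 W/(m*K)
```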
At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path and therefore, the thermal resistivity is determined only from processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or the scattering from the surface of the crystal in the case of a high-quality single crystal. Therefore, thermal conductance depends on the external dimensions of the crystal and the quality of the surface. Thus, the temperature dependence of λL is determined by the specific heat and is therefore proportional to T³.
Phonon quasimomentum is defined as ℏq and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < T < Θ), the conservation of energy ℏω1 = ℏω2 + ℏω3 and quasimomentum q1 = q2 + q3 + G, where q1 is the wave vector of the incident phonon and q2, q3 are the wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G, complicating the energy transport process. These processes can also reverse the direction of energy transport.
Therefore, these processes are also known as Umklapp (U) processes and can only occur when phonons with sufficiently large q-vectors are excited, because unless the sum of q2 and q3 points outside of the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon to have energy E is given by the Boltzmann distribution P ∝ e^(−E/kT). For a U-process to occur, the decaying phonon must have a wave vector q1 that is roughly half of the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved.
Therefore, these phonons have to possess energy of ~kΘ/2, which is a significant fraction of the Debye energy that is needed to generate new phonons. The probability for this is proportional to e^(−Θ/bT), with b a numerical constant of order unity. Temperature dependence of the mean free path then has an exponential form e^(Θ/bT). The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport, resulting in a finite λL, as it means that momentum is not conserved. Only momentum non-conserving processes can cause thermal resistance.
At high temperatures (T > Θ), the mean free path and therefore λL has a temperature dependence T⁻¹, to which one arrives from the exponential form e^(Θ/bT) of the mean free path by expanding the exponential for small Θ/bT. This dependency is known as Eucken's law and originates from the temperature dependency of the probability for the U-process to occur.
Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation in which phonon scattering is a limiting factor. Another approach is to use analytic models or molecular dynamics or Monte Carlo based methods to describe thermal conductivity in solids.
Short-wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid- and long-wavelength phonons are less affected. Mid- and long-wavelength phonons carry a significant fraction of the heat, so to further reduce lattice thermal conductivity one has to introduce structures to scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of an impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures.
Prediction
Because thermal conductivity depends continuously on quantities like temperature and material composition, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available under the physical conditions of interest. This capability is important in thermophysical simulations, where quantities like temperature and pressure vary continuously with space and time, and may encompass extreme conditions inaccessible to direct measurement.
In fluids
For the simplest fluids, such as monatomic gases and their mixtures at low to moderate densities, ab initio quantum mechanical computations can accurately predict thermal conductivity in terms of fundamental atomic properties—that is, without reference to existing measurements of thermal conductivity or other transport properties. This method uses Chapman-Enskog theory or Revised Enskog Theory to evaluate the thermal conductivity, taking fundamental intermolecular potentials as input, which are computed ab initio from a quantum mechanical description.
For most fluids, such high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing thermal conductivity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that material. Reference correlations have been published for many pure materials; examples are carbon dioxide, ammonia, and benzene. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases.
Thermophysical modeling software often relies on reference correlations for predicting thermal conductivity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source).
Thermal conductivity can also be computed using the Green-Kubo relations, which express transport coefficients in terms of the statistics of molecular trajectories. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules.
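As an illustrative sketch only (added here): a real Green-Kubo calculation would take the heat-flux time series from a molecular dynamics run, which is replaced below by a synthetic autocorrelation so that only the final integration step is shown; all numbers are invented.

```python
import numpy as np

# Green-Kubo (one common convention, with J the instantaneous heat flux):
#   kappa = V / (3 * k_B * T**2) * integral_0^inf <J(0) . J(t)> dt
# Here the autocorrelation <J(0).J(t)> is faked as an exponential decay
# purely to demonstrate the numerical integration.
k_B = 1.380649e-23   # J/K
T, V = 300.0, 1e-26  # hypothetical temperature (K) and box volume (m^3)
dt = 1e-15           # sampling interval, s
t = np.arange(0.0, 5e-12, dt)
acf = 1e-18 * np.exp(-t / 2e-13)  # synthetic autocorrelation (arbitrary scale)

kappa = V / (3 * k_B * T**2) * np.sum(acf) * dt  # rectangle-rule integral
print(f"kappa = {kappa:.3g} W/(m*K)  (synthetic demo value)")
```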
History
Jan Ingenhousz and the thermal conductivity of different metals
In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities:
See also
Copper in heat exchangers
Heat pump
Heat transfer
Heat transfer mechanisms
Insulated pipe
Interfacial thermal resistance
Laser flash analysis
List of thermal conductivities
Phase-change material
R-value (insulation)
Specific heat capacity
Thermal bridge
Thermal conductance quantum
Thermal contact conductance
Thermal diffusivity
Thermal effusivity
Thermal entrance length
Thermal interface material
Thermal diode
Thermal resistance
Thermistor
Thermocouple
Thermodynamics
Thermal conductivity measurement
Refractory metals
References
Notes
Citations
Sources
Further reading
Undergraduate-level texts (physics)
Halliday, David; Resnick, Robert; & Walker, Jearl (1997). Fundamentals of Physics (5th ed.). John Wiley and Sons, New York. An elementary treatment.
Graduate-level texts
Reid, C. R.; Prausnitz, J. M.; & Poling, B. E. (1987). Properties of Gases and Liquids (4th ed.). McGraw-Hill.
Srivastava, G. P. (1990). The Physics of Phonons. Adam Hilger, IOP Publishing Ltd, Bristol.
External links
Thermopedia THERMAL CONDUCTIVITY
Contribution of Interionic Forces to the Thermal Conductivity of Dilute Electrolyte Solutions The Journal of Chemical Physics 41, 3924 (1964)
The importance of Soil Thermal Conductivity for power companies
Thermal Conductivity of Gas Mixtures in Chemical Equilibrium. II The Journal of Chemical Physics 32, 1005 (1960)
Heat conduction
Heat transfer
Physical quantities
Thermodynamic properties | Thermal conductivity and resistivity | [
"Physics",
"Chemistry",
"Mathematics"
] | 7,090 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Heat conduction",
"Physical properties"
] |
59,442 | https://en.wikipedia.org/wiki/Baryte | Baryte, barite or barytes is a mineral consisting of barium sulfate (BaSO4). Baryte is generally white or colorless, and is the main source of the element barium. The baryte group consists of baryte, celestine (strontium sulfate), anglesite (lead sulfate), and anhydrite (calcium sulfate). Baryte and celestine form a solid solution (Ba,Sr)SO4.
Names and history
The radiating form, sometimes referred to as Bologna Stone, attained some notoriety among alchemists for specimens found in the 17th century near Bologna by Vincenzo Casciarolo. These became phosphorescent upon being calcined.
Carl Scheele determined that baryte contained a new element in 1774, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England.
The American Petroleum Institute specification API 13/ISO 13500, which governs baryte for drilling purposes, does not refer to any specific mineral, but rather a material that meets that specification. In practice, however, this is usually the mineral baryte.
The term "primary barytes" refers to the first marketable product, which includes crude baryte (run of mine) and the products of simple beneficiation methods, such as washing, jigging, heavy media separation, tabling, and flotation. Most crude baryte requires some upgrading to minimum purity or density. Baryte that is used as an aggregate in a "heavy" cement is crushed and screened to a uniform size. Most baryte is ground to a small, uniform size before it is used as a filler or extender, an addition to industrial products, in the production of barium chemicals or as a weighting agent in petroleum well drilling mud.
Name
The name baryte is derived from the Ancient Greek βαρύς (barús), 'heavy'. The American spelling is barite. The International Mineralogical Association initially adopted "barite" as the official spelling, but recommended adopting the older "baryte" spelling later. This move was controversial and was notably ignored by American mineralogists.
Other names have been used for baryte, including barytine, barytite, barytes, heavy spar, tiff, and blanc fixe.
Mineral associations and locations
Baryte occurs in many depositional environments, and is deposited through many processes including biogenic, hydrothermal, and evaporation, among others. Baryte commonly occurs in lead-zinc veins in limestones, in hot spring deposits, and with hematite ore. It is often associated with the minerals anglesite and celestine. It has also been identified in meteorites.
Baryte has been found at locations in Australia, Brazil, Nigeria, Canada, Chile, China, India, Pakistan, Germany, Greece, Guatemala, Iran, Ireland (where it was mined on Benbulben), Liberia, Mexico, Morocco, Peru, Romania (Baia Sprie), Turkey, South Africa (Barberton Mountain Land), Thailand, United Kingdom (Cornwall, Cumbria, Dartmoor/Devon, Derbyshire, Durham, Shropshire, Perthshire, Argyllshire, and Surrey) and in the US from Cheshire, Connecticut, De Kalb, New York, and Fort Wallace, New Mexico. It is mined in Arkansas, Connecticut, Virginia, North Carolina, Georgia, Tennessee, Kentucky, Nevada, and Missouri.
The global production of baryte in 2019 was estimated to be around 9.5 million metric tons, down from 9.8 million metric tons in 2012. The major barytes producers (in thousand tonnes, data for 2017) are as follows: China (3,600), India (1,600), Morocco (1,000), Mexico (400), United States (330), Iran (280), Turkey (250), Russia (210), Kazakhstan (160), Thailand (130) and Laos (120).
The main users of barytes in 2017 were (in million tonnes) US (2.35), China (1.60), Middle East (1.55), the European Union and Norway (0.60), Russia and CIS (0.5), South America (0.35), Africa (0.25), and Canada (0.20). 70% of barytes was destined for oil and gas well drilling muds; 15% for barium chemicals; 14% for filler applications in the automotive, construction, and paint industries; and 1% for other applications.
Natural baryte formed under hydrothermal conditions may be associated with quartz or silica. In hydrothermal vents, the baryte-silica mineralisation can also be accompanied by precious metals.
Information about the mineral resource base of baryte ores is presented in some scientific articles.
Uses
In oil and gas drilling
Worldwide, 69–77% of baryte is used as a weighting agent for drilling fluids in oil and gas exploration to suppress high formation pressures and prevent blowouts. As a well is drilled, the bit passes through various formations, each with different characteristics. The deeper the hole, the more baryte is needed as a percentage of the total mud mix. An additional benefit of baryte is that it is non-magnetic and thus does not interfere with magnetic measurements taken in the borehole, either during logging-while-drilling or in separate drill hole logging. Baryte used for drilling petroleum wells can be black, blue, brown or gray depending on the ore body. The baryte is finely ground so that at least 97% of the material, by weight, can pass through a 200-mesh (75 μm) screen, and no more than 30%, by weight, can be less than 6 μm diameter. The ground baryte also must be dense enough so that its specific gravity is 4.2 or greater, soft enough to not damage the bearings of a tricone drill bit, chemically inert, and containing no more than 250 milligrams per kilogram of soluble alkaline salts. In August 2010, the American Petroleum Institute published specifications to modify the 4.2 drilling grade standards for baryte to include 4.1 SG materials.
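The quoted limits can be expressed as a simple screening function; the sketch below is an added illustration only, not the API 13/ISO 13500 test procedure.

```python
def meets_drilling_spec(sg, pct_passing_200_mesh, pct_below_6um, soluble_alkaline_mg_kg):
    """Check a ground baryte sample against the limits quoted above.

    Limits (per the text): specific gravity >= 4.2 (a 4.1 grade also
    exists), >= 97% by weight passing a 200-mesh (75 um) screen, <= 30%
    by weight below 6 um, and <= 250 mg/kg soluble alkaline salts.
    """
    return (sg >= 4.2
            and pct_passing_200_mesh >= 97.0
            and pct_below_6um <= 30.0
            and soluble_alkaline_mg_kg <= 250.0)

print(meets_drilling_spec(4.25, 98.5, 22.0, 180.0))  # True
print(meets_drilling_spec(4.05, 98.5, 22.0, 180.0))  # False: density too low
```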
In oxygen and sulfur isotopic analysis
In the deep ocean, away from continental sources of sediment, pelagic baryte precipitates and forms a significant amount of the sediments. Since baryte contains oxygen, systematics in the δ¹⁸O of these sediments have been used to help constrain paleotemperatures for oceanic crust.
The variations in sulfur isotopes (³⁴S/³²S) are being examined in evaporite minerals containing sulfur (e.g. baryte) and carbonate associated sulfates (CAS) to determine past seawater sulfur concentrations which can help identify specific depositional periods such as anoxic or oxic conditions. The use of sulfur isotope reconstruction is often paired with oxygen when a molecule contains both elements.
Geochronological dating
Dating the baryte in hydrothermal vents has been one of the major methods to determine their ages. Common methods to date hydrothermal baryte include radiometric dating and electron spin resonance dating.
Other uses
Baryte is used in added-value applications which include filler in paint and plastics, sound reduction in engine compartments, coat of automobile finishes for smoothness and corrosion resistance, friction products for automobiles and trucks, radiation shielding concrete, glass ceramics, and medical applications (for example, a barium meal before a contrast CT scan). Baryte is supplied in a variety of forms and the price depends on the amount of processing; filler applications commanding higher prices following intense physical processing by grinding and micronising, and there are further premiums for whiteness and brightness and color. It is also used to produce other barium chemicals, notably barium carbonate which is used for the manufacture of LED glass for television and computer screens (historically in cathode-ray tubes); and for dielectrics.
Historically, baryte was used for the production of barium hydroxide for sugar refining, and as a white pigment for textiles, paper, and paint.
Although baryte contains the toxic alkaline earth metal barium, it is not detrimental to human health, animals, plants, or the environment, because barium sulfate is extremely insoluble in water.
It is also sometimes used as a gemstone.
See also
Hokutolite
Rose rock
References
Further reading
Barium minerals
Sulfate minerals
Evaporite
Gemstones
Industrial minerals
Luminescent minerals
Orthorhombic minerals
Baryte group
Minerals in space group 62 | Baryte | [
"Physics",
"Chemistry"
] | 1,794 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
59,444 | https://en.wikipedia.org/wiki/Energy%20level | A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, N, ...).
Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
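The 2n² rule is easy to verify numerically; a one-line added check:

```python
# Maximum electron capacity of the nth shell: 2 * n^2
print([2 * n**2 for n in range(1, 5)])  # [2, 8, 18, 32]
```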
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it.
Explanation
Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.
Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy.
History
The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.
Atoms
Intrinsic energy levels
In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom with any smaller value of n, the electron's energy is lower and is considered negative.
Orbital state energy level: atom/ion with nucleus + one electron
Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:
E_n = −hcR∞Z²/n²

(typically between 1 eV and 10³ eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n.
This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc/λ, assuming that the principal quantum number n above equals n1 in the Rydberg formula and n2 = ∞ (the principal quantum number of the energy level the electron descends from when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data:

1/λ = R∞Z²(1/n1² − 1/n2²).
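The added sketch below evaluates these levels numerically, using the standard value hcR∞ ≈ 13.606 eV:

```python
# Hydrogen-like level energies: E_n = -hc*R_inf * Z^2 / n^2
RYDBERG_EV = 13.605693  # hc*R_inf in electronvolts

def energy_level_eV(n, Z=1):
    return -RYDBERG_EV * Z**2 / n**2

print(energy_level_eV(1))  # -13.6 eV: hydrogen ground state
print(energy_level_eV(2))  # -3.4 eV
print(energy_level_eV(2) - energy_level_eV(1))  # 10.2 eV: Lyman-alpha photon
```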
An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.
Electron–electron interactions in atoms
If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.
For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where is substituted with an effective nuclear charge symbolized as that depends strongly on the principal quantum number.
In such cases, the orbital types (determined by the azimuthal quantum number ) as well as their levels within the molecule affect and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first and consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.
Fine structure splitting
Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV.
Hyperfine structure
This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a typical change in the energy levels by a typical order of magnitude of 10−4 eV.
Energy levels due to external fields
Zeeman effect
There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by

U = −μL · B,

with

μL = −(e/2m)L = −μB(L/ħ).

Additionally taking into account the magnetic moment arising from the electron spin: due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin,

μS = −gs μB (S/ħ),

with gs the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ,

μ = μL + μS.

The interaction energy therefore becomes

UB = −μ · B = μB B (ml + gs ms).
Stark effect
Molecules
Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.
In polyatomic molecules, different vibrational and rotational energy levels are also involved.
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that:

E = E_electronic + E_vibrational + E_rotational + E_nuclear + E_translational,

where E_electronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.
The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance.
Energy level diagrams
There are various types of energy level diagrams for bonds between atoms in a molecule.
Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams.
Energy level transitions
Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels.
Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away so as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.
If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (ν) and thus is proportional to its frequency, or inversely to its wavelength (λ):

E = hν = hc/λ,

since c, the speed of light, equals λν.
Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum.
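As an added sketch, the relation E = hν = hc/λ converts transition energies to wavelengths; the constants are standard values (rounded), and the two example energies are the hydrogen Lyman-α and Balmer-α transitions.

```python
# E = h*nu = h*c/lambda
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def wavelength_nm(energy_eV):
    return h * c / (energy_eV * eV) * 1e9

print(wavelength_nm(10.2))  # ~122 nm: hydrogen Lyman-alpha (ultraviolet)
print(wavelength_nm(1.89))  # ~656 nm: Balmer H-alpha (visible red)
```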
An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n.
A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics.
Higher temperature causes fluid atoms and molecules to move faster increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow.
An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.
Crystalline materials
Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
See also
Perturbation theory (quantum mechanics)
Atomic clock
Computational chemistry
References
Chemical properties
Atomic physics
Molecular physics
Quantum chemistry
Theoretical chemistry
Computational chemistry
Spectroscopy
pl:Powłoka elektronowa | Energy level | [
"Physics",
"Chemistry"
] | 3,181 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Quantum chemistry",
"Instrumental analysis",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
"Atomic physics",
" molecular",
"nan",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
59,497 | https://en.wikipedia.org/wiki/Solubility | In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property depends on many other variables, such as the physical form of the two substances and the manner and intensity of mixing.
The concept and measure of solubility are extremely important in many sciences besides chemistry, such as geology, biology, physics, and oceanography, as well as in engineering, medicine, agriculture, and even in non-technical activities like painting, cleaning, cooking, and brewing. Most chemical reactions of scientific, industrial, or practical interest only happen after the reagents have been dissolved in a suitable solvent. Water is by far the most common such solvent.
The term "soluble" is sometimes used for materials that can form colloidal suspensions of very fine solid particles in a liquid. The quantitative solubility of such substances is generally not well-defined, however.
Quantification of solubility
The solubility of a specific solute in a specific solvent is generally expressed as the concentration of a saturated solution of the two. Any of the several ways of expressing concentration of solutions can be used, such as the mass, volume, or amount in moles of the solute for a specific mass, volume, or mole amount of the solvent or of the solution.
Per quantity of solvent
In particular, chemical handbooks often express the solubility as grams of solute per 100 millilitres of solvent (g/(100 mL), often written as g/100 ml), or as grams of solute per decilitre of solvent (g/dL); or, less commonly, as grams of solute per litre of solvent (g/L). The quantity of solvent can instead be expressed in mass, as grams of solute per 100 grams of solvent (g/(100 g), often written as g/100 g), or as grams of solute per kilogram of solvent (g/kg). The number may be expressed as a percentage in this case, and the abbreviation "w/w" may be used to indicate "weight per weight". (The values in g/L and g/kg are similar for water, but that may not be the case for other solvents.)
Alternatively, the solubility of a solute can be expressed in moles instead of mass. For example, if the quantity of solvent is given in kilograms, the value is the molality of the solution (mol/kg).
Per quantity of solution
The solubility of a substance in a liquid may also be expressed as the quantity of solute per quantity of solution, rather than of solvent. For example, following the common practice in titration, it may be expressed as moles of solute per litre of solution (mol/L), the molarity of the latter.
In more specialized contexts the solubility may be given by the mole fraction (moles of solute per total moles of solute plus solvent) or by the mass fraction at equilibrium (mass of solute per mass of solute plus solvent). Both are dimensionless numbers between 0 and 1 which may be expressed as percentages (%).
Liquid and gaseous solutes
For solutions of liquids or gases in liquids, the quantities of both substances may be given volume rather than mass or mole amount; such as litre of solute per litre of solvent, or litre of solute per litre of solution. The value may be given as a percentage, and the abbreviation "v/v" for "volume per volume" may be used to indicate this choice.
Conversion of solubility values
Conversion between these various ways of measuring solubility may not be trivial, since it may require knowing the density of the solution — which is often not measured, and cannot be predicted. While the total mass is conserved by dissolution, the final volume may be different from both the volume of the solvent and the sum of the two volumes.
Moreover, many solids (such as acids and salts) will dissociate in non-trivial ways when dissolved; conversely, the solvent may form coordination complexes with the molecules or ions of the solute. In those cases, the sum of the moles of molecules of solute and solvent is not really the total moles of independent particles in the solution. To sidestep that problem, the solubility per mole of solution is usually computed and quoted as if the solute does not dissociate or form complexes—that is, by pretending that the mole amount of solution is the sum of the mole amounts of the two substances.
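The added sketch below carries out one such conversion for table salt; the solubility and solution-density figures are approximate handbook values, and the assumed density is what makes the molarity computable at all.

```python
# Convert solubility from g solute per 100 g solvent to molality and molarity.
solubility_g_per_100g = 36.0  # NaCl in water at ~25 degC (approx.)
M_solute = 58.44              # g/mol, NaCl
rho_solution = 1.20           # g/mL, saturated NaCl solution (approx.)

molality = solubility_g_per_100g * 10.0 / M_solute  # mol per kg of solvent
print(f"{molality:.2f} mol/kg")                     # ~6.16 mol/kg

# Molarity needs the solution density: moles per litre of *solution*.
mass_solution = 100.0 + solubility_g_per_100g       # g
volume_L = mass_solution / rho_solution / 1000.0    # L
molarity = (solubility_g_per_100g / M_solute) / volume_L
print(f"{molarity:.2f} mol/L")                      # ~5.4 mol/L
```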
Qualifiers used to describe extent of solubility
The extent of solubility ranges widely, from infinitely soluble (without limit, i.e. miscible) such as ethanol in water, to essentially insoluble, such as titanium dioxide in water. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, U.S. Pharmacopoeia gives the following terms, according to the mass msv of solvent required to dissolve one unit of mass msu of solute: (The solubilities of the examples are approximate, for water at 20–25 °C.)
The thresholds to describe something as insoluble, or similar terms, may depend on the application. For example, one source states that substances are described as "insoluble" when their solubility is less than 0.1 g per 100 mL of solvent.
Molecular view
Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase joining (e.g. precipitation of solids). A stable state of the solubility equilibrium occurs when the rates of dissolution and re-joining are equal, meaning the relative amounts of dissolved and non-dissolved material remain constant. If the solvent is removed, all of the substance that had dissolved is recovered.
The term solubility is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. Most ionic solids dissociate when dissolved in polar solvents. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis.
When a solute dissolves, it may form several species in the solution. For example, an aqueous solution of cobalt(II) chloride can afford [Co(H2O)6]2+ and [CoCl4]2−, which interconvert.
Factors affecting solubility
Solubility is defined for specific phases. For example, the solubility of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula.
The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility.
Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium.
For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, solubility of gold in high-temperature water is observed to be almost an order of magnitude higher (i.e. about ten times higher) when the redox potential is controlled using a highly oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately oxidizing Ni-NiO buffer.
Solubility (metastable, at concentrations approaching saturation) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific surface area or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time).
Temperature
The solubility of a given solute in a given solvent is a function of temperature. Depending on the change in enthalpy (ΔH) of the dissolution reaction, i.e., on the endothermic (ΔH > 0) or exothermic (ΔH < 0) character of the dissolution reaction, the solubility of a given compound may increase or decrease with temperature. The van 't Hoff equation relates the change of the solubility equilibrium constant (Ksp) to temperature change and to the reaction enthalpy change. For most solids and liquids, solubility increases with temperature because the dissolution reaction is endothermic (ΔH > 0). In liquid water at high temperatures (e.g., approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent and in a change of hydration energy affecting the ΔG of the dissolution reaction.
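For reference, the van 't Hoff relation mentioned above can be written in differential form as:

\frac{d \ln K_{sp}}{dT} = \frac{\Delta H}{R T^{2}}

so an endothermic dissolution (ΔH > 0) corresponds to a Ksp, and hence a solubility, that increases with temperature.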
Gaseous solutes exhibit more complex behavior with temperature. As the temperature is raised, gases usually become less soluble in water (exothermic dissolution reaction related to their hydration) (to a minimum, which is below 120 °C for most permanent gases), but more soluble in organic solvents (endothermic dissolution reaction related to their solvation).
The chart shows solubility curves for some typical solid inorganic salts in liquid water (temperature is in degrees Celsius, i.e. kelvins minus 273.15). Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature (ΔH > 0). Some solutes (e.g. sodium chloride in water) exhibit solubility that is fairly independent of temperature (ΔH ≈ 0). A few, such as calcium sulfate (gypsum) and cerium(III) sulfate, become less soluble in water as temperature increases (ΔH < 0). This is also the case for calcium hydroxide (portlandite), whose solubility at 70 °C is about half of its value at 25 °C. The dissolution of calcium hydroxide in water is also an exothermic process (ΔH < 0). As dictated by the van 't Hoff equation and Le Chatelier's principle, low temperatures favor dissolution of Ca(OH)2. Portlandite solubility increases at low temperature. This temperature dependence is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal (mirabilite) loses water of crystallization at 32 °C to form a more soluble anhydrous phase (thenardite) with a smaller change in Gibbs free energy (ΔG) in the dissolution reaction.
The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins.
Pressure
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:

\left(\frac{\partial \ln N_i}{\partial P}\right)_T = -\frac{V_{i,\mathrm{aq}} - V_{i,\mathrm{cr}}}{RT}

where the index i iterates the components, N_i is the mole fraction of the i-th component in the solution, P is the pressure, the index T refers to constant temperature, V_{i,aq} is the partial molar volume of the i-th component in the solution, V_{i,cr} is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant.
The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.
Solubility of gases
Henry's law is used to quantify the solubility of gases in solvents. The solubility of a gas in a solvent is directly proportional to the partial pressure of that gas above the solvent. This relationship is similar to Raoult's law and can be written as:

p = k_{\mathrm{H}}\, c

where k_H is a temperature-dependent constant (for example, 769.2 L·atm/mol for dioxygen (O2) in water at 298 K), p is the partial pressure (in atm), and c is the concentration of the dissolved gas in the liquid (in mol/L).
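As a short worked example using the k_H value quoted above (the partial pressure of O2 in air, about 0.21 atm, is an assumed round figure):

```python
# Henry's law: p = kH * c, hence c = p / kH.
K_H = 769.2    # L·atm/mol, O2 in water at 298 K (value quoted above)
p_O2 = 0.21    # atm, approximate partial pressure of O2 in air (assumption)

c = p_O2 / K_H             # mol/L of dissolved O2
print(f"{c:.2e} mol/L")    # ≈ 2.7e-04 mol/L, roughly 8.7 mg/L
```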
The solubility of gases is sometimes also quantified using the Bunsen solubility coefficient.
In the presence of small bubbles, the solubility of the gas does not depend on the bubble radius in any other way than through the effect of the radius on pressure (i.e. the solubility of gas in the liquid in contact with small bubbles is increased due to pressure increase by Δp = 2γ/r; see Young–Laplace equation).
Henry's law is valid for gases that do not undergo change of chemical speciation on dissolution. Sieverts' law shows a case when this assumption does not hold.
The solubility of carbon dioxide in seawater is also affected by temperature, by the pH of the solution, and by the carbonate buffer. The decrease of solubility of carbon dioxide in seawater when temperature increases is also an important feedback factor (positive feedback) exacerbating past and future climate changes, as observed in ice cores from the Vostok site in Antarctica. At the geological time scale, because of the Milankovitch cycles, the astronomical parameters of the Earth's orbit and its rotation axis progressively change and modify the solar irradiance at the Earth's surface, and temperature starts to increase. When a deglaciation period is initiated, the progressive warming of the oceans releases CO2 into the atmosphere because of its lower solubility in warmer sea water. In turn, higher levels of CO2 in the atmosphere increase the greenhouse effect, and carbon dioxide acts as an amplifier of the general warming.
Polarity
A popular aphorism used for predicting solubility is "like dissolves like" also expressed in the Latin language as "Similia similibus solventur". This statement indicates that a solute will dissolve best in a solvent that has a similar chemical structure to itself, based on favorable entropy of mixing. This view is simplistic, but it is a useful rule of thumb. The overall solvation capacity of a solvent depends primarily on its polarity. For example, a very polar (hydrophilic) solute such as urea is very soluble in highly polar water, less soluble in fairly polar methanol, and practically insoluble in non-polar solvents such as benzene. In contrast, a non-polar or lipophilic solute such as naphthalene is insoluble in water, fairly soluble in methanol, and highly soluble in non-polar benzene.
In even simpler terms, a simple ionic compound (with positive and negative ions) such as sodium chloride (common salt) is easily soluble in a highly polar solvent (with some separation of positive (δ+) and negative (δ−) charges in the covalent molecule) such as water; this is why the sea is salty, having accumulated dissolved salts since early geological ages.
Solubility is favored by the entropy of mixing (ΔS) and depends on the enthalpy of dissolution (ΔH) and the hydrophobic effect. The free energy of dissolution (Gibbs energy) depends on temperature and is given by the relationship ΔG = ΔH – TΔS. A smaller (more negative) ΔG means greater solubility.
Chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. This applies in vast areas of chemistry from drug synthesis to spent nuclear fuel reprocessing.
Rate of dissolution
Dissolution is not an instantaneous process. The rate of solubilization (in kg/s) is related to the solubility product and the surface area of the material. The speed at which a solid dissolves may depend on its crystallinity or lack thereof in the case of amorphous solids, on the surface area (crystallite size), and on the presence of polymorphism. Many practical systems illustrate this effect, for example in designing methods for controlled drug delivery. In some cases, solubility equilibria can take a long time to establish (hours, days, months, or many years, depending on the nature of the solute and other factors).
The rate of dissolution can often be expressed by the Noyes–Whitney equation or the Nernst and Brunner equation of the form:

\frac{dm}{dt} = A\,\frac{D}{d}\,(C_s - C_b)

where:
m = mass of dissolved material
t = time
A = surface area of the interface between the dissolving substance and the solvent
D = diffusion coefficient
d = thickness of the boundary layer of the solvent at the surface of the dissolving substance
C_s = mass concentration of the substance on the surface
C_b = mass concentration of the substance in the bulk of the solvent
For dissolution limited by diffusion (or mass transfer if mixing is present), C_s is equal to the solubility of the substance. When the dissolution rate of a pure substance is normalized to the surface area of the solid (which usually changes with time during the dissolution process), then it is expressed in kg/(m²·s) and referred to as the "intrinsic dissolution rate". The intrinsic dissolution rate is defined by the United States Pharmacopeia.
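A minimal numeric sketch of the equation above; every parameter value is invented purely for illustration, though D is of a typical order of magnitude for small molecules in water.

```python
# dm/dt = A * (D / d) * (Cs - Cb)   (Nernst-Brunner / Noyes-Whitney form)
A  = 1e-4     # m^2, surface area of the dissolving solid (illustrative)
D  = 1e-9     # m^2/s, diffusion coefficient (typical for small molecules in water)
d  = 30e-6    # m, boundary-layer thickness (illustrative)
Cs = 10.0     # kg/m^3, surface concentration (= solubility when diffusion-limited)
Cb = 0.0      # kg/m^3, bulk concentration (sink conditions)

rate = A * (D / d) * (Cs - Cb)    # kg/s
print(rate)                       # ≈ 3.3e-08 kg/s
```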
Dissolution rates vary by orders of magnitude between different systems. Typically, very low dissolution rates parallel low solubilities, and substances with high solubilities exhibit high dissolution rates, as suggested by the Noyes-Whitney equation.
Theories of solubility
Solubility product
Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. Since it is a product of ion concentrations in equilibrium, it is also known as the solubility product. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of solubility constant. While the solubility constant is not as simple as solubility, the value of this constant is generally independent of the presence of other species in the solvent.
Other theories
The Flory–Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion.
The octanol-water partition coefficient, usually expressed as its logarithm (Log P), is a measure of differential solubility of a compound in a hydrophobic solvent (1-octanol) and a hydrophilic solvent (water). The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity).
The energy change associated with dissolving is usually given per mole of solute as the enthalpy of solution.
Applications
Solubility is of fundamental importance in a large number of scientific disciplines and practical applications, ranging from ore processing and nuclear reprocessing to the use of medicines, and the transport of pollutants.
Solubility is often said to be one of the "characteristic properties of a substance", which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid".
Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation.
Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid–liquid extraction, is an important technique in synthetic chemistry. Recycling is used to ensure maximum extraction.
Differential solubility
In flowing systems, differences in solubility often determine the dissolution-precipitation driven transport of species. This happens when different parts of the system experience different conditions. Even slightly different conditions can result in significant effects, given sufficient time.
For example, relatively low solubility compounds are found to be soluble in more extreme environments, resulting in geochemical and geological effects of the activity of hydrothermal fluids in the Earth's crust. These are often the source of high quality economic mineral deposits and precious or semi-precious gems. In the same way, compounds with low solubility will dissolve over extended time (geological time), resulting in significant effects such as extensive cave systems or Karstic land surfaces.
Solubility of ionic compounds in water
Some ionic compounds (salts) dissolve in water, which arises because of the attraction between positive and negative charges (see: solvation). For example, the salt's positive ions (e.g. Ag+) attract the partially negative oxygen atom in H2O. Likewise, the salt's negative ions (e.g. Cl−) attract the partially positive hydrogens in H2O. Note: the oxygen atom is partially negative because it is more electronegative than hydrogen, and vice versa (see: chemical polarity).
However, there is a limit to how much salt can be dissolved in a given volume of water. This concentration is the solubility and is related to the solubility product, Ksp. This equilibrium constant depends on the type of salt (AgCl vs. NaCl, for example), temperature, and the common ion effect.
One can calculate the amount of AgCl that will dissolve in 1 liter of pure water as follows:
Ksp = [Ag+] × [Cl−] / M2 (definition of solubility product; M = mol/L)
Ksp = 1.8 × 10−10 (from a table of solubility products)
[Ag+] = [Cl−], in the absence of other silver or chloride salts, so
[Ag+]2 = 1.8 × 10−10 M2
[Ag+] = 1.34 × 10−5 mol/L
The result: 1 liter of water can dissolve 1.34 × 10−5 moles of AgCl at room temperature. Compared with other salts, AgCl is poorly soluble in water. For instance, table salt (NaCl) has a much higher Ksp = 36 and is, therefore, more soluble. Solubility rules for various classes of ionic compounds are commonly tabulated.
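The calculation above, together with the common-ion effect mentioned earlier, can be reproduced in a few lines; the Ksp value is the one quoted above, and the 0.10 mol/L NaCl background is an arbitrary illustrative choice.

```python
import math

K_sp = 1.8e-10    # solubility product of AgCl (value quoted above)

# In pure water, [Ag+] = [Cl-] = s, so s = sqrt(Ksp):
s_pure = math.sqrt(K_sp)
print(f"AgCl in pure water:  {s_pure:.2e} mol/L")    # ≈ 1.34e-05

# Common-ion effect: in 0.10 mol/L NaCl, [Cl-] ≈ 0.10 mol/L, so s = Ksp / 0.10:
s_common = K_sp / 0.10
print(f"AgCl in 0.10 M NaCl: {s_common:.2e} mol/L")  # ≈ 1.8e-09, far less soluble
```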
Solubility of organic compounds
The principle outlined above under polarity, that like dissolves like, is the usual guide to solubility with organic systems. For example, petroleum jelly will dissolve in gasoline because both petroleum jelly and gasoline are non-polar hydrocarbons. It will not, on the other hand, dissolve in ethyl alcohol or water, since the polarity of these solvents is too high. Sugar will not dissolve in gasoline, since sugar is too polar in comparison with gasoline. A mixture of gasoline and sugar can therefore be separated by filtration or extraction with water.
Solid solution
This term is often used in the field of metallurgy to refer to the extent that an alloying element will dissolve into the base metal without forming a separate phase. The solvus or solubility line (or curve) is the line (or lines) on a phase diagram that give the limits of solute addition. That is, the lines show the maximum amount of a component that can be added to another component and still be in solid solution. In the solid's crystalline structure, the 'solute' element can either take the place of the matrix within the lattice (a substitutional position; for example, chromium in iron) or take a place in a space between the lattice points (an interstitial position; for example, carbon in iron).
In microelectronic fabrication, solid solubility refers to the maximum concentration of impurities one can place into the substrate.
In solid compounds (as opposed to elements), the solubility of a solute element can also depend on the phases separating out in equilibrium. For example, the amount of Sn soluble in the ZnSb phase can depend significantly on whether the phases separating out in equilibrium are (Zn4Sb3 + Sn(L)) or (ZnSnSb2 + Sn(L)). Besides these, the ZnSb compound with Sn as a solute can separate out into other combinations of phases after the solubility limit is reached, depending on the initial chemical composition during synthesis. Each combination produces a different solubility of Sn in ZnSb. Hence, solubility studies in compounds that are concluded upon the first instance of observing secondary phases separating out might underestimate solubility. While the maximum number of phases separating out at once in equilibrium can be determined by the Gibbs phase rule, for chemical compounds there is no limit on the number of such phase-separating combinations. Hence, establishing the "maximum solubility" in solid compounds experimentally can be difficult, requiring equilibration of many samples. If the dominant crystallographic defect (mostly interstitial or substitutional point defects) involved in the solid solution can be chemically intuited beforehand, then using some simple thermodynamic guidelines can considerably reduce the number of samples required to establish maximum solubility.
Incongruent dissolution
Many substances dissolve congruently (i.e. the composition of the solid and the dissolved solute stoichiometrically match). However, some substances may dissolve incongruently, whereby the composition of the solute in solution does not match that of the solid. This solubilization is accompanied by alteration of the "primary solid" and possibly formation of a secondary solid phase. However, in general, some primary solid also remains and a complex solubility equilibrium is established. For example, dissolution of albite may result in formation of gibbsite:

NaAlSi3O8(s) + H+ + 7 H2O → Na+ + Al(OH)3(s) + 3 H4SiO4(aq).
In this case, the solubility of albite is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks.
In principle, both congruent and incongruent dissolution can lead to the formation of secondary solid phases in equilibrium. So, in the field of materials science, the solubility for both cases is described more generally on chemical composition phase diagrams.
Solubility prediction
Solubility is a property of interest in many aspects of science, including but not limited to: environmental predictions, biochemistry, pharmacy, drug design, agrochemical design, and protein–ligand binding. Aqueous solubility is of fundamental interest owing to the vital biological and transportation functions played by water. In addition to this clear scientific interest in water solubility and solvent effects, accurate predictions of solubility are important industrially. The ability to accurately predict a molecule's solubility represents potentially large financial savings in many chemical product development processes, such as pharmaceuticals. In the pharmaceutical industry, solubility predictions form part of the early-stage lead optimisation process of drug candidates. Solubility remains a concern all the way to formulation. A number of methods have been applied to such predictions, including quantitative structure–activity relationships (QSAR), quantitative structure–property relationships (QSPR) and data mining. These models provide efficient predictions of solubility and represent the current standard. The drawback of such models is that they can lack physical insight. A method founded in physical theory, capable of achieving similar levels of accuracy at a sensible cost, would be a powerful tool scientifically and industrially.
Methods founded in physical theory tend to use thermodynamic cycles, a concept from classical thermodynamics. The two common thermodynamic cycles used involve either the calculation of the free energy of sublimation (solid to gas without going through a liquid state) and the free energy of solvating a gaseous molecule (gas to solution), or the free energy of fusion (solid to a molten phase) and the free energy of mixing (molten to solution). These two processes are represented in the following diagrams.
These cycles have been used for attempts at first-principles predictions (solving using the fundamental physical equations) using physically motivated solvent models, to create parametric equations and QSPR models, and combinations of the two. The use of these cycles enables the calculation of the solvation free energy indirectly via either a gas (in the sublimation cycle) or a melt (fusion cycle). This is helpful as calculating the free energy of solvation directly is extremely difficult. The free energy of solvation can be converted to a solubility value using various formulae, the most general case being shown below, where the numerator is the free energy of solvation, R is the gas constant and T is the temperature in kelvins:

\log S = \frac{-\Delta G_{\mathrm{solvation}}}{2.303\,RT}
Well-known fitted equations for solubility prediction are the general solubility equations. These equations stem from the work of Yalkowsky et al. The original formula was later followed by a revised formula which takes a different assumption of complete miscibility in octanol.
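For reference, the commonly quoted original form of the general solubility equation (the Yalkowsky–Valvani form; the revised variant differs in its treatment of miscibility in octanol and is not reconstructed here) is:

\log_{10} S = 0.5 - 0.01\,(\mathrm{MP} - 25) - \log_{10} P

where S is the aqueous solubility in mol/L, MP is the melting point in °C (taken as 25 for liquids), and log P is the octanol–water partition coefficient.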
These equations are founded on the principles of the fusion cycle.
See also
Notes
References
External links
Chemical properties
Physical properties
Solutions
Underwater diving physics | Solubility | [
"Physics",
"Chemistry"
] | 6,979 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Underwater diving physics",
"Homogeneous chemical mixtures",
"nan",
"Solutions",
"Physical properties"
] |
59,524 | https://en.wikipedia.org/wiki/Next-Generation%20Secure%20Computing%20Base | The Next-Generation Secure Computing Base (NGSCB; codenamed Palladium and also known as Trusted Windows) is a software architecture designed by Microsoft which claimed to provide users of the Windows operating system with better privacy, security, and system integrity. NGSCB was the result of years of research and development within Microsoft to create a secure computing solution that equaled the security of closed platforms such as set-top boxes while simultaneously preserving the backward compatibility, flexibility, and openness of the Windows operating system. Microsoft's primary stated objective with NGSCB was to "protect software from software."
Part of the Trustworthy Computing initiative when unveiled in 2002, NGSCB was to be integrated with Windows Vista, then known as "Longhorn." NGSCB relied on hardware designed by the Trusted Computing Group to produce a parallel operation environment hosted by a new hypervisor (referred to as a sort of kernel in documentation) called the "Nexus" that existed alongside Windows and provided new applications with features such as hardware-based process isolation, data encryption based on integrity measurements, authentication of a local or remote machine or software configuration, and encrypted paths for user authentication and graphics output. NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining to the use of information.
NGSCB was subject to much controversy during its development, with critics contending that it would impose restrictions on users, enforce vendor lock-in, and undermine fair use rights and open-source software. It was first demonstrated by Microsoft at WinHEC 2003 before undergoing a revision in 2004 that would enable earlier applications to benefit from its functionality. Reports indicated in 2005 that Microsoft would change its plans with NGSCB so that it could ship Windows Vista by its self-imposed deadline year, 2006; instead, Microsoft would ship only part of the architecture, BitLocker, which can optionally use the Trusted Platform Module to validate the integrity of boot and system files prior to operating system startup. Development of NGSCB spanned approximately a decade before its cancellation, the lengthiest development period of a major feature intended for Windows Vista.
NGSCB differed from technologies Microsoft billed as "pillars of Windows Vista"—Windows Presentation Foundation, Windows Communication Foundation, and WinFS—during its development in that it was not built with the .NET Framework and did not focus on managed code software development. NGSCB has yet to fully materialize; however, aspects of it are available in features such as BitLocker of Windows Vista, Measured Boot and UEFI of Windows 8, Certificate Attestation of Windows 8.1, Device Guard of Windows 10, and Device Encryption in Windows 11 Home editions, with TPM 2.0 mandatory for installation.
History
Early development
Development of NGSCB began in 1997 after Peter Biddle conceived of new ways to protect content on personal computers. Biddle enlisted assistance from members of the Microsoft Research division, and other core contributors eventually included Blair Dillaway, Brian LaMacchia, Bryan Willman, Butler Lampson, John DeTreville, John Manferdelli, Marcus Peinado, and Paul England. Adam Barr, a former Microsoft employee who worked to secure the remote boot feature during development of Windows 2000, was approached by Biddle and colleagues during his tenure with an initiative tentatively known as "Trusted Windows," which aimed to protect DVD content from being copied. To this end, Lampson proposed the use of a hypervisor to execute a limited operating system dedicated to DVD playback alongside Windows 2000. Patents for a DRM operating system were later filed in 1999 by England, DeTreville and Lampson; Lampson noted that these patents were for NGSCB. Biddle and colleagues realized by 1999 that NGSCB was more applicable to privacy and security than content protection, and the project was formally given the green light by Microsoft in October 2001.
During WinHEC 1999, Biddle discussed intent to create a "trusted" architecture for Windows to leverage new hardware to promote confidence and security while preserving backward compatibility with previous software. On October 11, 1999, the Trusted Computing Platform Alliance, a consortium of various technology companies including Compaq, Hewlett-Packard, IBM, Intel, and Microsoft was formed in an effort to promote personal computing confidence and security. The TCPA released detailed specifications for a trusted computing platform with focus on features such as code validation and encryption based on integrity measurements, hardware-based key storage, and machine authentication; these features required a new hardware component designed by the TCPA called the "Trusted Platform Module" (referred to as a "Security Support Component", "Security CoProcessor", or "Security Support Processor" in early NGSCB documentation).
At WinHEC 2000, Microsoft released a technical presentation on the topics of protection of privacy, security, and intellectual property titled "Privacy, Security, and Content in Windows Platforms", which focused on turning Windows into a "platform of trust" for computer security, user content, and user privacy. Notable in the presentation is the contention that "there is no difference between privacy protection, computer security, and content protection"—"assurances of trust must be universally true". Microsoft reiterated these claims at WinHEC 2001. NGSCB intended to protect all forms of content, unlike traditional rights management schemes, which focus only on the protection of audio tracks or movies instead of the users they have the potential to protect; this made it, in Biddle's words, "egalitarian".
As "Palladium"
Microsoft held its first design review for the NGSCB in April 2002, with approximately 37 companies under a non-disclosure agreement. NGSCB was publicly unveiled under its codename "Palladium" in a June 2002 article by Steven Levy for Newsweek that focused on its design, feature set, and origin. Levy briefly described potential features: access control, authentication, authorization, DRM, encryption, as well as protection from junk mail and malware, with example policies being email accessible only to an intended recipient and Microsoft Word documents readable for only a week after their creation; Microsoft later released a guide clarifying these assertions as hyperbolic; namely, that NGSCB would not intrinsically enforce content protection, or protect against junk mail or malware. Instead, it would provide a platform on which developers could build new solutions that did not exist, by isolating applications and storing secrets for them. Microsoft was not sure whether to "expose the feature in the Control Panel or present it as a separate utility," but NGSCB would be an opt-in solution—disabled by default.
Microsoft PressPass later interviewed John Manferdelli, who restated and expanded on many of the key points discussed in the article by Newsweek. Manferdelli described it as an evolutionary platform for Windows in July, articulating how "'Palladium' will not require DRM, and DRM will not require 'Palladium'." Microsoft sought a group program manager in August to assist in leading the development of several Microsoft technologies including NGSCB. Paul Otellini announced Intel's support for NGSCB with a set of chipset, platform, and processor technologies codenamed "LaGrande" at Intel Developer Forum 2002, which would provide an NGSCB hardware foundation and preserve backward compatibility with previous software.
As NGSCB
NGSCB was known as "Palladium" until January 24, 2003 when Microsoft announced it had been renamed as "Next-Generation Secure Computing Base." Project manager Mario Juarez stated this name was chosen to avoid legal action from an unnamed company which had acquired the rights to the "Palladium" name, as well as to reflect Microsoft's commitment to NGSCB in the upcoming decade. Juarez acknowledged the previous name was controversial, but denied it was changed by Microsoft to dodge criticism.
The Trusted Computing Platform Alliance was superseded by the Trusted Computing Group in April 2003. A principal goal of the new consortium was to produce a Trusted Platform Module (TPM) specification compatible with NGSCB; the previous specification, TPM 1.1 did not meet its requirements. TPM 1.2 was designed for compliance with NGSCB and introduced many features for such platforms. The first TPM 1.2 specification, Revision 62 was released in 2003.
Biddle emphasized in June 2003 that hardware vendors and software developers were vital to NGSCB. Microsoft publicly demonstrated NGSCB for the first time at WinHEC 2003, where it protected data in memory from an attacker; prevented access to—and alerted the user of—an application that had been changed; and prevented a remote administration tool from capturing an instant messaging conversation. Despite Microsoft's desire to demonstrate NGSCB on hardware, software emulation was required, as few hardware components were available. Biddle reiterated that NGSCB was a set of evolutionary enhancements to Windows, basing this assessment on preserved backward compatibility and employed concepts in use before its development, but said the capabilities and scenarios it would enable would be revolutionary. Microsoft also revealed its multi-year roadmap for NGSCB, with the next major development milestone scheduled for the Professional Developers Conference, indicating that subsequent versions would ship concurrently with pre-release builds of Windows Vista; however, news reports suggested that NGSCB would not be integrated with Windows Vista at its release, but would instead be made available as separate software for the operating system.
Microsoft also announced details related to adoption and deployment of NGSCB at WinHEC 2003, stating that it would create a new value proposition for customers without significantly increasing the cost of computers; NGSCB adoption during the year of its introductory release was not anticipated and immediate support for servers was not expected. On the last day of the conference, Biddle said NGSCB needed to provide users with a way to differentiate between secured and unsecured windows—that a secure window should be "noticeably different" to help protect users from spoofing attacks; Nvidia was the earliest to announce this feature. WinHEC 2003 represented an important development milestone for NGSCB. Microsoft dedicated several hours to presentations and released many technical whitepapers, and companies including Atmel, Comodo Group, Fujitsu, and SafeNet produced preliminary hardware for the demonstration. Microsoft also demonstrated NGSCB at several U.S. campuses in California and in New York in June 2003.
NGSCB was among the topics discussed during Microsoft's PDC 2003 with a pre-beta software development kit, known as the Developer Preview, being distributed to attendees. The Developer Preview was the first time that Microsoft made NGSCB code available to the developer community and was offered by the company as an educational opportunity for NGSCB software development. With this release, Microsoft stated that it was primarily focused on supporting business and enterprise applications and scenarios with the first version of the NGSCB scheduled to ship with Windows Vista, adding that it intended to address consumers with a subsequent version of the technology, but did not provide an estimated time of delivery for this version. At the conference, Jim Allchin said that Microsoft was continuing to work with hardware vendors so that they would be able to support the technology, and Bill Gates expected a new generation of central processing units (CPUs) to offer full support. Following PDC 2003, NGSCB was demonstrated again on prototype hardware during the annual RSA Security conference in November.
Microsoft announced at WinHEC 2004 that it would revise NGSCB in response to feedback from customers and independent software vendors who did not desire to rewrite their existing programs in order to benefit from its functionality; the revision would also provide more direct support for Windows, with protected environments for the operating system, its components, and applications, instead of it being an environment to itself and new applications. The NGSCB secure input feature would also undergo a significant revision based on cost assessments, hardware requirements, and usability issues of the previous implementation. There were subsequent reports that Microsoft would cease developing NGSCB; Microsoft denied these reports and reaffirmed its commitment to delivery. Additional reports published later that year suggested that Microsoft would make additional changes based on feedback from the industry.
Microsoft's absence of continual updates on NGSCB progress in 2005 caused industry insiders to speculate that NGSCB had been cancelled. At the Microsoft Management Summit event, Steve Ballmer said that the company would build on the security foundation it had started with the NGSCB to create a new set of virtualization technologies for Windows, which were later Hyper-V. Reports during WinHEC 2005 indicated Microsoft scaled back its plans for NGSCB, so that it could ship Windows Vista—which had already been beset by numerous delays and even a "development reset"—within a reasonable timeframe; instead of isolating components, NGSCB would offer "Secure Startup" ("BitLocker Drive Encryption") to encrypt disk volumes and validate both pre-boot firmware and operating system components. Microsoft intended to deliver other aspects of NGSCB later. Jim Allchin stated NGSCB would "marry hardware and software to gain better security", which was instrumental in the development of BitLocker.
Architecture and technical details
A complete Microsoft-based Trusted Computing-enabled system will consist not only of software components developed by Microsoft but also of hardware components developed by the Trusted Computing Group. The majority of features introduced by NGSCB are heavily reliant on specialized hardware and so will not operate on PCs predating 2004.
In current Trusted Computing specifications, there are two hardware components: the Trusted Platform Module (TPM), which will provide secure storage of cryptographic keys and a secure cryptographic co-processor, and a curtained memory feature in the CPU. In NGSCB, there are two software components, the Nexus, a security kernel that is part of the Operating System that provides a secure environment (Nexus mode) for trusted code to run in, and Nexus Computing Agents (NCAs), trusted modules which run in Nexus mode within NGSCB-enabled applications.
Secure storage and attestation
At the time of manufacture, a cryptographic key is generated and stored within the TPM. This key is never transmitted to any other component, and the TPM is designed in such a way that it is extremely difficult to retrieve the stored key by reverse engineering or any other method, even to the owner. Applications can pass data encrypted with this key to be decrypted by the TPM, but the TPM will only do so under certain strict conditions. Specifically, decrypted data will only ever be passed to authenticated, trusted applications, and will only ever be stored in curtained memory, making it inaccessible to other applications and the Operating System. Although the TPM can only store a single cryptographic key securely, secure storage of arbitrary data is by extension possible by encrypting the data such that it may only be decrypted using the securely stored key.
The TPM is also able to produce a cryptographic signature based on its hidden key. This signature may be verified by the user or by any third party, and so can therefore be used to provide remote attestation that the computer is in a secure state.
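Conceptually, sealed storage binds data to an integrity measurement: the key needed to decrypt the data is released only if the platform's current measurement matches the one recorded at seal time. The toy model below illustrates this idea only; it is not Microsoft's or the TCG's actual interface, every name in it is hypothetical, and the XOR "cipher" stands in for real authenticated encryption.

```python
import hashlib, os

class ToyTPM:
    """Toy model of TPM-style sealed storage; not the real TPM interface."""
    def __init__(self):
        self._root_key = os.urandom(32)   # storage root key: never leaves the "chip"
        self.pcr = b"\x00" * 32           # platform configuration register

    def extend(self, measurement: bytes):
        # PCR extend: pcr = H(pcr || H(measurement)); records boot-time measurements
        self.pcr = hashlib.sha256(
            self.pcr + hashlib.sha256(measurement).digest()).digest()

    def _keystream(self, length: int) -> bytes:
        key = hashlib.sha256(self._root_key + self.pcr).digest()
        return (key * (length // 32 + 1))[:length]

    def seal(self, data: bytes):
        # Bind the data to the *current* PCR value (toy XOR cipher, illustration only)
        blob = bytes(a ^ b for a, b in zip(data, self._keystream(len(data))))
        return blob, self.pcr

    def unseal(self, blob: bytes, sealed_pcr: bytes) -> bytes:
        if self.pcr != sealed_pcr:
            raise PermissionError("software configuration changed; refusing to unseal")
        return bytes(a ^ b for a, b in zip(blob, self._keystream(len(blob))))

tpm = ToyTPM()
tpm.extend(b"trusted-os-image")
blob, pcr = tpm.seal(b"secret")
print(tpm.unseal(blob, pcr))        # works: configuration unchanged
tpm.extend(b"rogue-kernel-module")  # a new measurement changes the PCR...
# tpm.unseal(blob, pcr)             # ...so unsealing would now raise PermissionError
```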
Curtained memory
NGSCB also relies on a curtained memory feature provided by the CPU. Data within curtained memory can only be accessed by the application to which it belongs, and not by any other application or the Operating System. The attestation features of the TPM can be used to confirm to a trusted application that it is genuinely running in curtained memory; it is therefore very difficult for anyone, including the owner, to trick a trusted application into running outside of curtained memory. This in turn makes reverse engineering of a trusted application extremely difficult.
Applications
NGSCB-enabled applications are to be split into two distinct parts, the NCA, a trusted module with access to a limited Application Programming Interface (API), and an untrusted portion, which has access to the full Windows API. Any code which deals with NGSCB functions must be located within the NCA.
The reason for this split is that the Windows API has developed over many years and is as a result extremely complex and difficult to audit for security bugs. To maximize security, trusted code is required to use a smaller, carefully audited API. Where security is not paramount, the full API is available.
Uses and scenarios
NGSCB enables new categories of applications and scenarios. Examples of uses cited by Microsoft include decentralized access control policies; digital rights management services for consumers, content providers, and enterprises; protected instant messaging conversations and online transactions; and more secure forms of machine health compliance, network authentication, and remote access. NGSCB-secured virtual private network access was one of the earliest scenarios envisaged by Microsoft. NGSCB can also strengthen software update mechanisms such as those belonging to antivirus software or Windows Update.
An early NGSCB privacy scenario conceived of by Microsoft is the "wine purchase scenario," where a user can safely conduct a transaction with an online merchant without divulging personally identifiable information during the transaction. With the release of the NGSCB Developer Preview during PDC 2003, Microsoft emphasized the following enterprise applications and scenarios: document signing, secured data viewing, secured instant messaging, and secured plug-ins for emailing.
WinHEC 2004 scenarios
During WinHEC 2004, Microsoft revealed two features based on its revision of NGSCB, Cornerstone and Code Integrity Rooting:
Cornerstone would protect a user's login and authentication information by securely transmitting it to NGSCB-protected Windows components for validation, finalizing the user authentication process by releasing access to the SYSKEY if validation was successful. It was intended to protect data on laptops that had been lost or stolen to prevent hackers or thieves from accessing it even if they had performed a software-based attack or booted into an alternative operating system.
Code Integrity Rooting would validate boot and system files prior to the startup of Microsoft Windows. If validation of these components failed, the SYSKEY would not be released.
BitLocker is the combination of these features; "Cornerstone" was the codename of BitLocker, and BitLocker validates pre-boot firmware and operating system components before boot, which protects SYSKEY from unauthorized access; an unsuccessful validation prohibits access to a protected system.
Reception
Reaction to NGSCB after its unveiling by Newsweek was largely negative. While its security features were praised, critics contended that NGSCB could be used to impose restrictions on users; lock-out competing software vendors; and undermine fair use rights and open source software such as Linux. Microsoft's characterization of NGSCB as a security technology was subject to criticism as its origin focused on DRM. NGSCB's announcement occurred only a few years after Microsoft was accused of anti-competitive practices during the United States v. Microsoft Corporation antitrust case, a detail which called the company's intentions for the technology into question—NGSCB was regarded as an effort by the company to maintain its dominance in the personal computing industry. The notion of a "Trusted Windows" architecture—one that implied Windows itself was untrustworthy—would also be a source of contention within the company itself.
After NGSCB's unveiling, Microsoft drew frequent comparisons to Big Brother, an oppressive dictator of a totalitarian state in George Orwell's dystopian novel Nineteen Eighty-Four. The Electronic Privacy Information Center legislative counsel, Chris Hoofnagle, described Microsoft's characterization of the NGSCB as "Orwellian." Big Brother Awards bestowed Microsoft with an award because of NGSCB. Bill Gates addressed these comments at a homeland security conference by stating that NGSCB "can make our country more secure and prevent the nightmare vision of George Orwell at the same time." Steven Levy—the author who unveiled the existence of the NGSCB—claimed in a 2004 front-page article for Newsweek that NGSCB could eventually lead to an "information infrastructure that encourages censorship, surveillance, and suppression of the creative impulse where anonymity is outlawed and every penny spent is accounted for." However, Microsoft outlined a scenario enabled by NGSCB that allows a user to conduct a transaction without divulging personally identifiable information.
Ross Anderson of Cambridge University was among the most vocal critics of NGSCB and of Trusted Computing. Anderson alleged that the technologies were designed to satisfy federal agency requirements; enable content providers and other third-parties to remotely monitor or delete data in users' machines; use certificate revocation lists to ensure that only content deemed "legitimate" could be copied; and use unique identifiers to revoke or validate files; he compared this to the attempts by the Soviet Union to "register and control all typewriters and fax machines." Anderson also claimed that the TPM could control the execution of applications on a user's machine and, because of this, bestowed to it a derisive "Fritz Chip" name in reference to United States Senator Ernest "Fritz" Hollings, who had recently proposed DRM legislation such as the Consumer Broadband and Digital Television Promotion Act for consumer electronic devices. Anderson's report was referenced extensively in the news media and appeared in publications such as BBC News, The New York Times, and The Register. David Safford of IBM Research stated that Anderson presented several technical errors within his report, namely that the proposed capabilities did not exist within any specification and that many were beyond the scope of trusted platform design. Anderson later alleged that BitLocker was designed to facilitate DRM and to lock out competing software on an encrypted system, and, in spite of his allegation that NGSCB was designed for federal agencies, advocated for Microsoft to add a backdoor to BitLocker. Similar sentiments were expressed by Richard Stallman, founder of the GNU Project and Free Software Foundation, who alleged that Trusted Computing technologies were designed to enforce DRM and to prevent users from running unlicensed software. In 2015, Stallman stated that "the TPM has proved a total failure" for DRM and that "there are reasons to think that it will not be feasible to use them for DRM."
After the release of Anderson's report, Microsoft stated in an NGSCB FAQ that "enhancements to Windows under the NGSCB architecture have no mechanism for filtering content, nor do they provide a mechanism for proactively searching the Internet for 'illegal' content [...] Microsoft is firmly opposed to putting 'policing functions' into nexus-aware PCs and does not intend to do so" and that the idea was in direct opposition with the design goals set forth for NGSCB, which was "built on the premise that no policy will be imposed that is not approved by the user." Concerns about the NGSCB TPM were also raised in that it would use what are essentially unique machine identifiers, which drew comparisons to the Intel Pentium III processor serial number, a unique hardware identification number of the 1990s viewed as a risk to end-user privacy. NGSCB, however, mandates that disclosure or use of the keys provided by the TPM be based solely on user discretion; in contrast, Intel's Pentium III included a unique serial number that could potentially be revealed to any application. NGSCB, also unlike Intel's Pentium III, would provide optional features to allow users to indirectly identify themselves to external requestors.
In response to concerns that NGSCB would take control away from users for the sake of content providers, Bill Gates stated that the latter should "provide their content in easily accessible forms or else it ends up encouraging piracy." Bryan Willman, Marcus Peinado, Paul England, and Peter Biddle—four NGSCB engineers—realized early during the development of NGSCB that DRM would ultimately fail in its efforts to prevent piracy. In 2002, the group released a paper titled "The Darknet and the Future of Content Distribution" that outlined how content protection mechanisms are demonstrably futile. The paper's premise circulated within Microsoft during the late 1990s and was a source of controversy within Microsoft; Biddle stated that the company almost terminated his employment as a result of the paper's release. A 2003 report published by Harvard University researchers suggested that NGSCB and similar technologies could facilitate the secure distribution of copyrighted content across peer-to-peer networks.
Not all assessments were negative. Paul Thurrott praised NGSCB, stating that it was "Microsoft's Trustworthy Computing initiative made real" and that it would "form the basis of next-generation computer systems." Scott Bekker of Redmond Magazine stated that NGSCB was misunderstood because of its controversy and that it appeared to be a "promising, user-controlled defense against privacy intrusions and security violations." In February 2004, In-Stat/MDR, publisher of the Microprocessor Report, bestowed NGSCB with its Best Technology award. Malcolm Crompton, Australian Privacy Commissioner, stated that "NGSCB has great privacy enhancing potential [...] Microsoft has recognised there is a privacy issue [...] we should all work with them, give them the benefit of the doubt and urge them to do the right thing." When Microsoft announced at WinHEC 2004 that it would be revising NGSCB so that previous applications would not have to be rewritten, Martin Reynolds of Gartner praised the company for this decision as it would create a "more sophisticated" version of NGSCB that would simplify development. David Wilson, writing for South China Morning Post, defended NGSCB by saying that "attacking the latest Microsoft monster is an international blood sport" and that "even if Microsoft had a new technology capable of ending Third World hunger and First World obesity, digital seers would still lambaste it because they view Bill Gates as a grey incarnation of Satan." Microsoft noted that negative reaction to NGSCB gradually waned after events such as the USENIX Annual Technical Conference in 2003, and several Fortune 500 companies also expressed interest in it.
When reports announced in 2005 that Microsoft would scale back its plans and incorporate only BitLocker with Windows Vista, concerns pertaining to digital rights management, erosion of user rights, and vendor lock-in remained. In 2008, Biddle stated that negative perception was the most significant contributing factor responsible for the cessation of NGSCB's development.
Vulnerability
In a 2003 article, Dan Boneh and David Brumley indicated that projects like NGSCB may be vulnerable to timing attacks.
See also
Microsoft Pluton
Secure Boot
Trusted Execution Technology
Trusted Computing
Trusted Platform Module
Intel Management Engine
References
External links
Microsoft's NGSCB home page (Archived on 2006-07-05)
Trusted Computing Group home page
System Integrity Team blog — team blog for NGSCB technologies (Archived on 2008-10-21)
Security WMI Providers Reference on MSDN, including BitLocker Drive Encryption and Trusted Platform Module (both components of NGSCB)
TPM Base Services on MSDN
Development Considerations for Nexus Computing Agents
Cryptographic software
Discontinued Windows components
Disk encryption
Microsoft criticisms and controversies
Microsoft initiatives
Microsoft Windows security technology
Trusted computing
Windows Vista | Next-Generation Secure Computing Base | [
"Mathematics",
"Engineering"
] | 5,595 | [
"Cybersecurity engineering",
"Cryptographic software",
"Trusted computing",
"Mathematical software"
] |
59,611 | https://en.wikipedia.org/wiki/Ionization | Ionization (or ionisation specifically in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected.
Uses
Everyday examples of gas ionization occur within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application.
Production of ions
Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization.
Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vector of all collision fragments (the scattered projectile, the recoiling target-ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years.
Adiabatic ionization
Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state.
The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, caused for example by ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur; and so on. This is effectively a chain reaction of electron generation, and it depends on the free electrons gaining sufficient energy between collisions to sustain the avalanche.
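A minimal numerical sketch of this avalanche growth follows; it assumes the textbook relation n(d) = n0·exp(αd) for the electron number after a gap distance d, and the first Townsend coefficient α and seed count used here are illustrative assumptions, not measured values.

```python
import math

# Townsend avalanche: n(d) = n0 * exp(alpha * d), with alpha the first
# Townsend (impact-ionization) coefficient. Values below are illustrative.
alpha = 1.0e3   # assumed ionizations per metre per electron
n0 = 10         # assumed seed electrons from the original ionization event

for d_mm in (1, 2, 5, 10):
    d = d_mm / 1000.0                 # gap distance in metres
    n = n0 * math.exp(alpha * d)      # electrons arriving at the anode
    print(f"gap = {d_mm:2d} mm -> ~{n:.3g} electrons")
```

The exponential dependence is why a modest increase in gap or field strength produces a dramatic increase in collected charge.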
Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used.
Ionization energy of atoms
The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maxima in the ionization energy plot, moving from left to right in a row, are indicative of s, p, d, and f sub-shells.
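The shell structure can be read off directly from tabulated first ionization energies. The sketch below uses rounded literature values in eV; the factor-of-two threshold used to flag an "abrupt decrease" is an arbitrary illustrative choice.

```python
# Approximate first ionization energies (eV), elements 1-11 (rounded).
first_ie_ev = {
    "H": 13.6, "He": 24.6,                       # period 1
    "Li": 5.4, "Be": 9.3, "B": 8.3, "C": 11.3,   # period 2
    "N": 14.5, "O": 13.6, "F": 17.4, "Ne": 21.6,
    "Na": 5.1,                                   # period 3 begins
}

prev = None
for elem, ie in first_ie_ev.items():
    note = ""
    if prev is not None and ie < 0.5 * prev:
        note = "  <-- abrupt drop after a rare gas: new shell (alkali metal)"
    print(f"{elem:2s}: {ie:5.1f} eV{note}")
    prev = ie
```

Running it flags the He-to-Li and Ne-to-Na drops, exactly the emergence of a new shell described above.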
Semi-classical description of ionization
Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization since the process involves the passage of electron through a classically forbidden potential barrier.
Quantum mechanical description of ionization
The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to ionization to singly or multiply charged ions. The ionization rate, i.e. the ionization probability per unit time, can be calculated using quantum mechanics. (Classical methods are also available, such as the Classical Trajectory Monte Carlo (CTMC) method, but they are not universally accepted and are often criticized by the community.) Two classes of quantum mechanical methods exist: perturbative methods, and non-perturbative methods such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. Numerous basis choices are available, e.g. B-splines, generalized Sturmians or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice.
In general, the analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide accurate enough results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and analytic solution for the ionization rate is possible.
Tunnel ionization
Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to go through the potential barrier instead of going all the way over it because of the wave nature of the electron. The probability of an electron's tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance to do so. In practice, tunnel ionization is observable when the atom or molecule is interacting with near-infrared strong laser pulses. This process can be understood as one in which a bound electron is ionized through the absorption of more than one photon from the laser field. This picture is generally known as multiphoton ionization (MPI).
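The exponential drop with barrier width can be made quantitative with the WKB-style estimate T ≈ exp(−2κw), with κ = √(2m(V0 − E))/ħ. In the sketch below the barrier height, electron energy and widths are illustrative assumptions.

```python
import math

hbar = 1.054571817e-34   # J*s, reduced Planck constant
m_e = 9.1093837015e-31   # kg, electron mass
eV = 1.602176634e-19     # J per electronvolt

V0 = 10.0 * eV           # assumed barrier height
E = 5.0 * eV             # assumed electron energy below the barrier
kappa = math.sqrt(2.0 * m_e * (V0 - E)) / hbar   # decay constant, 1/m

for w_nm in (0.1, 0.2, 0.3, 0.5):
    w = w_nm * 1e-9                    # barrier width in metres
    T = math.exp(-2.0 * kappa * w)     # transmission probability estimate
    print(f"width = {w_nm} nm -> T ~ {T:.2e}")
```

Widening the barrier from 0.1 nm to 0.2 nm lowers the tunneling probability by roughly an order of magnitude, which is the sensitivity described above.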
Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As can be seen in the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser in regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger internuclear distances. Their model (which we call the PPT model) was derived for a short-range potential and includes the effect of the long-range Coulomb interaction through a first-order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion-versus-intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter.
The rate of MPI of an atom with ionization potential $I_p$ in a linearly polarized laser of frequency $\omega$ is governed, in the PPT model, by the Keldysh parameter

$$\gamma = \frac{\omega\sqrt{2 I_p}}{E}$$

(in atomic units), where $E$ is the peak electric field of the laser; the remaining coefficients of the PPT rate formula depend on $\gamma$ and on the quantum numbers of the initial state.
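A minimal numerical sketch of the Keldysh parameter follows, using standard atomic-unit conversions; the laser intensities below are illustrative choices, and argon's ionization potential of 15.76 eV is a standard literature value.

```python
import math

def keldysh_gamma(intensity_w_cm2, wavelength_nm, ip_ev):
    """gamma = omega * sqrt(2*Ip) / E, all in atomic units."""
    e_au = math.sqrt(intensity_w_cm2 / 3.51e16)    # peak field (a.u.)
    omega_au = (1239.84 / wavelength_nm) / 27.211  # photon energy (hartree)
    return omega_au * math.sqrt(2.0 * ip_ev / 27.211) / e_au

ip_ar = 15.76   # eV, ionization potential of argon
for intensity in (1e13, 1e14, 1e15):
    g = keldysh_gamma(intensity, 800.0, ip_ar)
    regime = "multiphoton-like (gamma > 1)" if g > 1 else "tunneling (gamma < 1)"
    print(f"I = {intensity:.0e} W/cm^2 -> gamma = {g:.2f}  ({regime})")
```

At the Ti:Sapphire wavelength of 800 nm, gamma passes through ~1 near 10^14 W/cm^2, the intermediate regime mentioned above.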
Quasi-static tunnel ionization
Quasi-static tunneling (QST) is ionization whose rate is satisfactorily predicted by the ADK model, i.e. the limit of the PPT model as the Keldysh parameter $\gamma$ approaches zero. The QST rate is dominated by the field-dependent tunneling exponential $\exp\!\left(-\tfrac{2(2 I_p)^{3/2}}{3E}\right)$ (in atomic units). Compared with the PPT expression, the absence of the summation over $n$, where each term represents a different above-threshold ionization (ATI) peak, is remarkable.
Strong field approximation for the ionization rate
The calculations of PPT are done in the E-gauge, meaning that the laser field is taken as electromagnetic waves. The ionization rate can also be calculated in the A-gauge, which emphasizes the particle nature of light (absorbing multiple photons during ionization). This approach was adopted in Krainov's model, based on the earlier works of Faisal and Reiss. The resulting rate is given by
where:
with being the ponderomotive energy,
is the minimum number of photons necessary to ionize the atom,
is the double Bessel function,
with the angle between the momentum of the electron, p, and the electric field of the laser, F,
FT is the three-dimensional Fourier transform, and
incorporates the Coulomb correction in the SFA model.
Population trapping
In calculating the rate of MPI of atoms, only transitions to the continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited states. However, in a real situation of interaction with pulsed lasers, during the evolution of the laser intensity, the different Stark shifts of the ground and excited states create the possibility that some excited state goes into multiphoton resonance with the ground state. Within the dressed-atom picture, the ground state dressed by photons and the resonant state undergo an avoided crossing at the resonance intensity. The minimum distance at the avoided crossing is proportional to the generalized Rabi frequency coupling the two states. According to Story et al., the probability of remaining in the ground state is given by an exponential expression involving the time-dependent energy difference between the two dressed states. In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states go into resonance at the peak of the pulse, then the excited state is populated. After being populated, since the ionization potential of the excited state is small, it is expected that the electron will be instantly ionized.
In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. Subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. We shall refer to this phenomenon as "population trapping".
Theoretical calculations show that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. Consider a state such as 6f of Xe, which consists of 7 quasi-degenerate levels within the laser bandwidth. These levels, along with the continuum, constitute a lambda system. The mechanism of lambda-type trapping is schematically presented in the figure. At the rising part of the pulse (a), the excited state (with its two degenerate levels 1 and 2) is not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse increases, the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b), the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two states are coupled through the continuum and the population is trapped in a coherent superposition of the two states. Under the subsequent action of the same pulse, due to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely, and a fraction of the population will be trapped in a coherent superposition of the quasi-degenerate levels. According to this explanation, states with higher angular momentum – with more sublevels – would have a higher probability of trapping the population. In general, the strength of the trapping is determined by the strength of the two-photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and minimizing the masking effects of focal-region expansion with increasing intensity, Talebpour et al. observed structures on the ion-yield curves of singly charged Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. A more unambiguous demonstration of population trapping has been reported by T. Morishita and C. D. Lin.
Non-sequential multiple ionization
The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. The pioneering work began with the observation of a "knee" structure on the Xe2+ ion signal versus intensity curve by L’Huillier et al. From the experimental point of view, NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that apart from the sequential channel there is another channel which is the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, in a systematic study of the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of the following relation between the rate of NSI to any charge state and the rate of tunnel ionization (predicted by the ADK formula) to the previous charge states:
where the rate of quasi-static tunneling to the i-th charge state enters with constant coefficients that depend on the wavelength of the laser (but not on the pulse duration).
Two models have been proposed to explain non-sequential ionization: the shake-off model and the electron re-scattering model. The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X-rays and electron projectiles, where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism where one electron is ionized by the laser field and the departure of this electron is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). We should mention that, until now, there has been no quantitative calculation based on the SO model, and the model is still qualitative.
The electron rescattering model was independently developed by Kuchiev, Schafer et al., Corkum, and Becker and Faisal. The principal features of the model can be understood easily from Corkum's version. Corkum's model describes NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field, where it is accelerated away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time is the electron released with the appropriate phase; the other half of the time it never returns to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential ($U_p$) of the laser. Corkum's model places a cut-off limit on the minimum intensity ($U_p$ is proportional to intensity) at which ionization due to re-scattering can occur.
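The 3.17 Up cutoff follows from purely classical mechanics. The sketch below implements the classical "simple man" version of Corkum's model in atomic units: an electron born at rest at a given phase of the field is propagated analytically, and its kinetic energy on first return to the core is maximized over birth phases. The field amplitude and frequency are arbitrary choices, since they cancel in the ratio to Up.

```python
import math

E0, omega = 0.05, 0.057                 # arbitrary field and frequency (a.u.)
period = 2 * math.pi / omega
up = E0**2 / (4 * omega**2)             # ponderomotive energy

def return_energy_over_up(phi_deg):
    """Kinetic energy (in units of Up) at the first return to x = 0."""
    t0 = math.radians(phi_deg) / omega
    dt = period / 4000
    x_prev, t = 0.0, t0
    for _ in range(int(1.5 * period / dt)):
        t += dt
        # exact trajectory for birth at rest at t0 in E(t) = E0*cos(omega*t)
        x = (E0 / omega**2) * (math.cos(omega * t) - math.cos(omega * t0)) \
            + (E0 / omega) * math.sin(omega * t0) * (t - t0)
        if x_prev * x < 0.0:            # first revisit of the parent ion
            v = -(E0 / omega) * (math.sin(omega * t) - math.sin(omega * t0))
            return 0.5 * v * v / up
        x_prev = x
    return 0.0                          # this birth phase never returns

best = max((return_energy_over_up(d), d) for d in range(180))  # half cycle
print(f"max return energy ~ {best[0]:.2f} Up at birth phase ~ {best[1]} deg")
```

The scan reproduces the ~3.17 Up maximum, for electrons born roughly 17-18 degrees after the field crest.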
The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. The basic idea of the model is illustrated by Feynman diagrams in figure a. First both electrons are in the ground state of an atom. The lines marked a and b describe the corresponding atomic states. Then the electron a is ionized. The beginning of the ionization process is shown by the intersection with a sloped dashed line, where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization.
Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron. This resulted in an underestimation of the double ionization rate by a huge factor. Obviously, in the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculated results fit the experimental results of Walker et al. very well. Becker and Faisal have been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process.
Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules
The ionization of inner-valence electrons is responsible for the fragmentation of polyatomic molecules in strong laser fields. According to a qualitative model, the dissociation of the molecules occurs through a three-step mechanism:
MPI of electrons from the inner orbitals of the molecule which results in a molecular ion in ro-vibrational levels of an excited electronic state;
Rapid radiationless transition to the high-lying ro-vibrational levels of a lower electronic state; and
Subsequent dissociation of the ion to different fragments through various fragmentation channels.
The short-pulse induced molecular fragmentation may be used as an ion source for high-performance mass spectrometry. The selectivity provided by a short-pulse based source is superior to that expected when using conventional electron ionization based sources, in particular when the identification of optical isomers is required.
Kramers–Henneberger frame
The Kramers–Henneberger (KH) frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), we may describe the electron with the Hamiltonian (in atomic units):

$$H = \tfrac{1}{2}\left[\mathbf{p} + \mathbf{A}(t)\right]^{2} + V(\mathbf{r}).$$
In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field:

$$\boldsymbol{\alpha}(t) = \int^{t} \mathbf{A}(t')\,\mathrm{d}t',$$

where $\boldsymbol{\alpha}(t) = \alpha_0 \sin(\omega t)\,\hat{\mathbf{e}}$ for a monochromatic plane wave, with $\alpha_0 = E_0/\omega^{2}$.
By applying a transformation to the laboratory frame equal to the quiver motion one moves to the 'oscillating' or 'Kramers–Henneberger' frame, in which the classical electron is at rest. By a phase-factor transformation for convenience one obtains the 'space-translated' Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian and contains the original potential centered on the oscillating point $\boldsymbol{\alpha}(t)$:

$$H_{\mathrm{KH}} = \frac{\mathbf{p}^{2}}{2} + V\big(\mathbf{r} + \boldsymbol{\alpha}(t)\big).$$
The utility of the KH frame lies in the fact that in this frame the laser-atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are $\omega$ and $\alpha_0$ (sometimes called the "excursion amplitude", obtained from $\alpha_0 = E_0/\omega^{2}$).
From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high-frequency Floquet theory, to lowest order in $1/\omega$ the system reduces to the so-called 'structure equation', which has the form of a typical energy-eigenvalue Schrödinger equation containing the 'dressed potential' $V_{0}(\alpha_0, \mathbf{r})$ (the cycle-average of the oscillating potential). The interpretation of the presence of $V_{0}$ is as follows: in the oscillating frame, the nucleus has an oscillatory motion of trajectory $-\boldsymbol{\alpha}(t)$ and $V_{0}$ can be seen as the potential of the smeared-out nuclear charge along its trajectory.
The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory.
The KH frame has also been applied successfully to other problems, e.g. high-harmonic generation from a metal surface in a powerful laser field.
Dissociation – distinction
A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chloride ions. Although it may seem to be a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic). However, no transfer or displacement of electrons occurs.
See also
Above threshold ionization
Double ionization
Chemical ionization
Electron ionization
Ionization chamber – Instrument for detecting gaseous ionization, used in ionizing radiation measurements
Ion source
Photoionization
Thermal ionization
Townsend avalanche – The chain reaction of ionization occurring in a gas with an applied electric field
Poole–Frenkel effect
Table
References
External links
Ions
Molecular physics
Atomic physics
Physical chemistry
Quantum chemistry
Mass spectrometry | Ionization | [
"Physics",
"Chemistry"
] | 4,774 | [
"Ionization",
"Physical phenomena",
"Mass",
"Phases of matter",
"Quantum mechanics",
"Theoretical chemistry",
"Statistical mechanics",
"Physical chemistry",
"Ions",
"Phase transitions",
"Instrumental analysis",
"Mass spectrometry",
"Atomic, molecular, and optical physics",
"Molecular physics...
59,616 | https://en.wikipedia.org/wiki/Fermentation%20theory | In biochemistry, fermentation theory refers to the historical study of models of natural fermentation processes, especially alcoholic and lactic acid fermentation. Notable contributors to the theory include Justus Von Liebig and Louis Pasteur, the latter of whom developed a purely microbial basis for the fermentation process based on his experiments. Pasteur's work on fermentation later led to his development of the germ theory of disease, which put the concept of spontaneous generation to rest. Although the fermentation process had been used extensively throughout history prior to the origin of Pasteur's prevailing theories, the underlying biological and chemical processes were not fully understood. In the contemporary, fermentation is used in the production of various alcoholic beverages, foodstuffs, and medications.
Overview of fermentation
Fermentation is the anaerobic metabolic process that converts sugar into acids, gases, or alcohols in oxygen-starved environments. Yeast and many other microbes commonly use fermentation to extract the energy necessary for survival in the absence of oxygen. Even the human body carries out fermentation processes from time to time, such as during long-distance running; lactic acid will build up in muscles over the course of long-term exertion. Within the human body, lactic acid is the by-product of ATP-producing fermentation, which produces energy so the body can continue to exercise in situations where oxygen intake cannot be processed fast enough. Although fermentation yields less ATP than aerobic respiration, it can occur at a much higher rate. Fermentation has been used by humans consciously since around 5000 BCE, evidenced by jars recovered in the Zagros Mountains area of Iran containing remnants of microbes similar to those present in the wine-making process.
History
Prior to Pasteur's research on fermentation, there existed some preliminary competing notions of it. One scientist who had a substantial degree of influence on the theory of fermentation was Justus von Liebig. Liebig believed that fermentation was largely a process of decomposition as a consequence of the exposure of yeast to air and water. This theory was corroborated by Liebig's observation that other decomposing matter, such as rotten plant and animal parts, interacted with sugar in a similar manner as yeast. That is, the decomposition of albuminous matter (i.e. water-soluble proteins) caused sugar to transform to alcohol. Liebig held this view until his death in 1873. A different theory was supported by Charles Cagniard de la Tour and cell theorist Theodor Schwann, who claimed that alcoholic fermentation depended on the biological processes carried out by brewer's yeast.
Louis Pasteur's interest in fermentation began when he noticed some remarkable properties of amyl alcohol—a by-product of lactic acid and alcohol fermentation—during his biochemical studies. In particular, Pasteur noted its ability to “rotate the plane of polarized light”, and its “unsymmetric arrangement of atoms." These behaviors were characteristic of organic compounds Pasteur had previously examined, but also presented a hurdle to his own research about a "law of hemihedral correlation". Pasteur had previously been attempting to derive connections between substances' chemical structures and external shape, and the optically active amyl alcohol did not follow his expectations according to the proposed 'law'. Pasteur sought a reason for why there happened to be this exception, and why such a chemical compound was generated during the fermentation process in the first place. In a series of lectures later in 1860, Pasteur attempted to link optical activity and molecular asymmetry to organic origins of substances, asserting that no chemical processes were capable of converting symmetric substances (inorganic) into asymmetric ones (organic). Hence, the amyl alcohol observation provided some of the first motivations for a biological explanation of fermentation.
In 1856, Pasteur was able to observe the microbes responsible for alcoholic fermentation under a microscope, as a professor of science in the University of Lille. According to a legend originating in the 1900 biography of Pasteur, one of his chemistry students—an owner of a beetroot alcohol factory in Lille—sought aid from him after an unsuccessful year of brewing. Pasteur performed experiments at the factory in observation of the fermentation process, noticing that yeast globules became elongated after lactic acid was formed, but round and full when alcohol was fermenting correctly.
In a different observation, Pasteur inspected particles originating on grapevines under the microscope and revealed the presence of living cells. Leaving these cells immersed in grape juice resulted in active alcoholic fermentation. This observation provided evidence for ending the distinction between ‘artificial’ fermentation in wine and ‘true’ fermentation in yeast products. The previous incorrect distinction had stemmed in part from the fact that yeast had to be added to beer wort in order to provoke desired alcoholic fermentation, while the fermenting catalysts for wine occurred naturally on grapevines; the fermentation of wine had been viewed as 'artificial' since it did not require additional catalyst, but the natural catalyst had been present on the grapevine itself. These observations provided Pasteur with a working hypothesis for future experiments.
One of the chemical processes that Pasteur studied was the fermentation of sugar into lactic acid, as occurs in the souring of milk. In an 1857 experiment, Pasteur was able to isolate microorganisms present in lactic acid ferment after the chemical process had taken place. Pasteur then cultivated the microorganisms in a culture with his laboratory. He was then able to accelerate the lactic acid fermentation process in fresh milk by administering the cultivated sample to it. This was an important step in proving his hypothesis that lactic acid fermentation was catalyzed by microorganisms.
Pasteur also experimented with the mechanisms of brewer's yeast in the absence of organic nitrogen. By adding pure brewer's yeast to a solution of cane sugar, ammonium salt, and yeast ash, Pasteur was able to observe the alcoholic fermentation process with all of its usual byproducts: glycerin, succinic acid, and small amounts of cellulose and fatty matters. However, if any of the ingredients were removed from the solution, no fermentation would occur. To Pasteur, this was proof that yeast required the nitrogen, minerals, and carbon from the medium for its metabolic processes, releasing carbonic acid and ethyl alcohol as byproducts. This also disproved Liebig's theory, since there was no albuminous matter present in the medium; the decomposition of the yeast was not the driving force for the observed fermentation.
Pasteur on spontaneous generation
Before the 1860s and 1870s—when Pasteur published his work on this theory—it was believed that microorganisms and even some small animals such as frogs would spontaneously generate. Spontaneous generation was historically explained in a variety of ways. Aristotle, an ancient Greek philosopher, theorized that creatures appeared out of certain concoctions of earthly elements, such as clay or mud mixing with water and sunlight. Later on, Felix Pouchet argued for the existence of 'plastic forces' within plant and animal debris capable of spontaneously generating eggs, and new organisms were born from these eggs. On top of this, a common piece of evidence that seemed to corroborate the theory was the appearance of maggots on raw meat after it was left exposed to open air.
In the 1860s and 1870s, Pasteur's interest in spontaneous generation led him to criticize Pouchet's theories and conduct experiments of his own. In his first experiment, he took boiled sugared yeast-water and sealed it in an airtight contraption. Feeding hot, sterile air into the mixture left it unaltered, while introducing atmospheric dust resulted in microbes and mold appearing within the mixture. This result was also strengthened by the fact that Pasteur used asbestos, a form of totally inorganic matter, to carry the atmospheric dust. In a second experiment, Pasteur used the same flasks and sugar-yeast mixture, but left it idle in 'swan-neck' flasks instead of introducing any extraneous matter. Some flasks were kept open to the common air as the control group, and these exhibited mold and microbial growths within a day or two. When the swan-neck flasks failed to show these same microbial growths, Pasteur concluded that the structure of the necks blocked the passage of atmospheric dust into the solution. From the two experiments, Pasteur concluded that the atmospheric dust carried germs responsible for the 'spontaneous generation' in his broths. Thus, Pasteur's work provided proof that the emergent growth of bacteria in nutrient broths is caused by biogenesis rather than some form of spontaneous generation.
Applications
Today, the process of fermentation is used for a multitude of everyday applications, including medication, beverages and food. Companies like Genencor International use the production of enzymes involved in fermentation to build revenues of over $400 million a year. Many medications, such as antibiotics, are produced by fermentation. An example is the important drug cortisone, which can be prepared by the fermentation of a plant steroid known as diosgenin.
The enzymes used in the reaction are provided by the mold Rhizopus nigricans. Alcohol of all types and brands is likewise produced by way of fermentation and distillation; moonshine is a classic example of how this is carried out. Finally, foods such as yogurt are made by fermentation processes as well. Yogurt is a fermented milk product that contains the characteristic bacterial cultures Lactobacillus bulgaricus and Streptococcus thermophilus.
See also
Cellular respiration
Distillation
Fermentation in food processing
Louis Pasteur
Spontaneous generation
Zymotic diseases (for the Greek language term zumoun for "ferment")
References
Obsolete medical theories
Microbiology
History of science
Biology theories
Metabolism
Louis Pasteur | Fermentation theory | [
"Chemistry",
"Technology",
"Biology"
] | 2,109 | [
"History of science",
"Microbiology",
"Biology theories",
"Cellular processes",
"Metabolism",
"Microscopy",
"Biochemistry",
"History of science and technology"
] |
59,656 | https://en.wikipedia.org/wiki/Rayleigh%20number | In fluid mechanics, the Rayleigh number (, after Lord Rayleigh) for a fluid is a dimensionless number associated with buoyancy-driven flow, also known as free (or natural) convection. It characterises the fluid's flow regime: a value in a certain lower range denotes laminar flow; a value in a higher range, turbulent flow. Below a certain critical value, there is no fluid motion and heat transfer is by conduction rather than convection. For most engineering purposes, the Rayleigh number is large, somewhere around $10^6$ to $10^8$.
The Rayleigh number is defined as the product of the Grashof number (), which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number (), which describes the relationship between momentum diffusivity and thermal diffusivity: . Hence it may also be viewed as the ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities: . It is closely related to the Nusselt number ().
Derivation
The Rayleigh number describes the behaviour of fluids (such as water or air) when the mass density of the fluid is non-uniform. The mass density differences are usually caused by temperature differences. Typically a fluid expands and becomes less dense as it is heated. Gravity causes denser parts of the fluid to sink, which is called convection. Lord Rayleigh studied the case of Rayleigh-Bénard convection. When the Rayleigh number, Ra, is below a critical value for a fluid, there is no flow and heat transfer is purely by conduction; when it exceeds that value, heat is transferred by natural convection.
When the mass density difference is caused by a temperature difference, Ra is, by definition, the ratio of the time scale for diffusive thermal transport to the time scale for convective thermal transport at speed $u$:

$$\mathrm{Ra} = \frac{\text{time scale for thermal transport via diffusion}}{\text{time scale for thermal transport via flow at speed } u}$$

This means the Rayleigh number is a type of Péclet number. For a volume of fluid of size $l$ in all three dimensions and mass density difference $\Delta\rho$, the force due to gravity is of the order $\Delta\rho\, l^{3} g$, where $g$ is the acceleration due to gravity. From the Stokes equation, when the volume of fluid is sinking, viscous drag is of the order $\eta l u$, where $\eta$ is the dynamic viscosity of the fluid. When these two forces are equated, the speed $u \sim \Delta\rho\, l^{2} g/\eta$. Thus the time scale for transport via flow is $l/u \sim \eta/(\Delta\rho\, l g)$. The time scale for thermal diffusion across a distance $l$ is $l^{2}/\alpha$, where $\alpha$ is the thermal diffusivity. Thus the Rayleigh number Ra is

$$\mathrm{Ra} = \frac{l^{2}/\alpha}{\eta/(\Delta\rho\, l g)} = \frac{\Delta\rho\, l^{3} g}{\eta\alpha} = \frac{\rho\beta\Delta T\, l^{3} g}{\eta\alpha} = \frac{g\beta\Delta T\, l^{3}}{\nu\alpha},$$

where we approximated the density difference $\Delta\rho = \rho\beta\Delta T$ for a fluid of average mass density $\rho$, thermal expansion coefficient $\beta$ and a temperature difference $\Delta T$ across distance $l$.
The Rayleigh number can be written as the product of the Grashof number and the Prandtl number:

$$\mathrm{Ra} = \mathrm{Gr}\cdot\mathrm{Pr}.$$
Classical definition
For free convection near a vertical wall, the Rayleigh number is defined as:

$$\mathrm{Ra}_{x} = \frac{g\beta}{\nu\alpha}\left(T_{s} - T_{\infty}\right)x^{3} = \mathrm{Gr}_{x}\,\mathrm{Pr},$$
where:
x is the characteristic length
Rax is the Rayleigh number for characteristic length x
g is acceleration due to gravity
β is the thermal expansion coefficient (equal to 1/T for ideal gases, where T is the absolute temperature)
is the kinematic viscosity
α is the thermal diffusivity
Ts is the surface temperature
T∞ is the quiescent temperature (fluid temperature far from the surface of the object)
Grx is the Grashof number for characteristic length x
Pr is the Prandtl number
In the above, the fluid properties Pr, ν, α and β are evaluated at the film temperature, which is defined as:

$$T_{f} = \frac{T_{s} + T_{\infty}}{2}.$$
For a uniform wall heating flux, the modified Rayleigh number is defined as:

$$\mathrm{Ra}^{*}_{x} = \frac{g\beta q''_{o}}{\nu\alpha k}\,x^{4},$$
where:
q″o is the uniform surface heat flux
k is the thermal conductivity.
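As a worked example of the definition above, the sketch below evaluates Ra_x for air near a heated 0.1 m vertical wall. The air properties are rounded, representative values near 300 K, not authoritative property data.

```python
g = 9.81           # m/s^2
beta = 1.0 / 300   # 1/K, ideal-gas approximation beta = 1/T
nu = 1.57e-5       # m^2/s, kinematic viscosity of air near 300 K (approx.)
alpha = 2.2e-5     # m^2/s, thermal diffusivity of air near 300 K (approx.)

T_s, T_inf = 320.0, 300.0   # surface and quiescent temperatures, K
x = 0.1                     # characteristic length, m

Ra_x = g * beta * (T_s - T_inf) * x**3 / (nu * alpha)
print(f"Ra_x = {Ra_x:.3g}")   # ~2e6, a typical laminar free-convection value
```

Strictly, the properties should be evaluated at the film temperature (310 K here); the rounded values above are close enough for an order-of-magnitude estimate.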
Other applications
Solidifying alloys
The Rayleigh number can also be used as a criterion to predict convectional instabilities, such as A-segregates, in the mushy zone of a solidifying alloy. The mushy zone Rayleigh number is defined as:
where:
K is the mean permeability (of the initial portion of the mush)
L is the characteristic length scale
α is the thermal diffusivity
ν is the kinematic viscosity
R is the solidification or isotherm speed.
A-segregates are predicted to form when the Rayleigh number exceeds a certain critical value. This critical value is independent of the composition of the alloy, and this is the main advantage of the Rayleigh number criterion over other criteria for the prediction of convectional instabilities, such as the Suzuki criterion.
Torabi Rad et al. showed that for steel alloys the critical Rayleigh number is 17. Pickering et al. explored Torabi Rad's criterion, and further verified its effectiveness. Critical Rayleigh numbers for lead–tin and nickel-based super-alloys were also developed.
Porous media
The Rayleigh number above is for convection in a bulk fluid such as air or water, but convection can also occur when the fluid is inside and fills a porous medium, such as porous rock saturated with water. Then the Rayleigh number, sometimes called the Rayleigh–Darcy number, is different. In a bulk fluid, i.e. not in a porous medium, the Stokes equation gives the falling speed of a domain of liquid of size $l$ as $u \sim \Delta\rho\, l^{2} g/\eta$. In a porous medium, this expression is replaced by that from Darcy's law, $u \sim \Delta\rho\, K g/\eta$, with $K$ the permeability of the porous medium. The Rayleigh or Rayleigh–Darcy number is then

$$\mathrm{Ra} = \frac{\rho\beta\Delta T\, g K l}{\eta\alpha}.$$
This also applies to A-segregates, in the mushy zone of a solidifying alloy.
Geophysical applications
In geophysics, the Rayleigh number is of fundamental importance: it indicates the presence and strength of convection within a fluid body such as the Earth's mantle. The mantle is a solid that behaves as a fluid over geological time scales. The Rayleigh number for the Earth's mantle due to internal heating alone, RaH, is given by:

$$\mathrm{Ra}_{H} = \frac{g\rho^{2}\beta H D^{5}}{k\eta\alpha},$$
where:
H is the rate of radiogenic heat production per unit mass
η is the dynamic viscosity
k is the thermal conductivity
D is the depth of the mantle.
A Rayleigh number for bottom heating of the mantle from the core, RaT, can also be defined as:

$$\mathrm{Ra}_{T} = \frac{g\rho^{2}\beta\,\Delta T_{sa}\, C_{P}\, D^{3}}{k\eta},$$
where:
ΔTsa is the superadiabatic temperature difference (the superadiabatic temperature difference is the actual temperature difference minus the temperature difference in a fluid whose entropy gradient is zero, but has the same profile of the other variables appearing in the equation of state) between the reference mantle temperature and the core–mantle boundary
CP is the specific heat capacity at constant pressure.
High values for the Earth's mantle indicate that convection within the Earth is vigorous and time-varying, and that convection is responsible for almost all the heat transported from the deep interior to the surface.
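An order-of-magnitude evaluation of RaT illustrates this; every parameter value below is a rough geophysical assumption, and published estimates vary considerably.

```python
rho = 4.0e3      # kg/m^3, assumed mean mantle density
g = 10.0         # m/s^2
beta = 2.0e-5    # 1/K, assumed thermal expansion coefficient
dT_sa = 3.0e3    # K, assumed superadiabatic temperature difference
D = 2.9e6        # m, depth of the mantle
alpha = 1.0e-6   # m^2/s, assumed thermal diffusivity
eta = 1.0e21     # Pa*s, assumed dynamic viscosity

# Ra_T = rho*g*beta*dT*D^3 / (alpha*eta), equivalent to the form above
# with alpha = k/(rho*C_P) and nu = eta/rho
Ra_T = rho * g * beta * dT_sa * D**3 / (alpha * eta)
print(f"Ra_T ~ {Ra_T:.1e}")   # ~6e7 with these numbers
```

A Rayleigh number of order 10^7 to 10^8 sits several orders of magnitude above the critical value for the onset of convection (of order 10^3), consistent with vigorous mantle convection.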
See also
Grashof number
Prandtl number
Reynolds number
Péclet number
Nusselt number
Rayleigh–Bénard convection
Notes
References
External links
Rayleigh number calculator
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics | Rayleigh number | [
"Physics",
"Chemistry",
"Engineering"
] | 1,446 | [
"Transport phenomena",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
59,863 | https://en.wikipedia.org/wiki/Correspondence%20principle | In physics, a correspondence principle is any one of several premises or assertions about the relationship between classical and quantum mechanics.
The physicist Niels Bohr coined the term in 1920 during the early development of quantum theory; he used it to explain how quantized classical orbitals connect to quantum radiation.
Modern sources often use the term for the idea that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers: for large orbits and for large energies, quantum calculations must agree with classical calculations. A "generalized" correspondence principle refers to the requirement for a broad set of connections between any old and new theory.
History
Max Planck was the first to introduce the idea of quanta of energy, while studying black-body radiation in 1900. In 1906, he was also the first to write that quantum theory should replicate classical mechanics at some limit, particularly if the Planck constant h were taken to be infinitesimal. With this idea, he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelength).
Niels Bohr used a similar idea, while developing his model of the atom. In 1913, he provided the first postulates of what is now known as old quantum theory. Using these postulates he obtained that for the hydrogen atom, the energy spectrum approaches the classical continuum for large n (a quantum number that encodes the energy of the orbit). Bohr coined the term "correspondence principle" during a lecture in 1920.
Arnold Sommerfeld refined Bohr's theory, leading to the Bohr–Sommerfeld quantization condition. Sommerfeld referred to the correspondence principle as Bohr's magic wand ("Zauberstab") in 1921.
Bohr's correspondence principle
The seeds of Bohr's correspondence principle appeared from two sources. First, Sommerfeld and Max Born developed a "quantization procedure" based on the action-angle variables of classical Hamiltonian mechanics. This gave a mathematical foundation for stationary states of the Bohr–Sommerfeld model of the atom. The second seed was Albert Einstein's quantum derivation of Planck's law in 1916. Einstein developed the statistical mechanics for Bohr-model atoms interacting with electromagnetic radiation, leading to absorption and two kinds of emission, spontaneous and stimulated emission. But for Bohr the important result was the use of classical analogies and the Bohr atomic model to fix inconsistencies in Planck's derivation of the blackbody radiation formula.
Bohr used the word "correspondence" in italics in lectures and writing before calling it a correspondence principle. He viewed this as a correspondence between quantum motion and radiation, not between classical and quantum theories. He writes in 1920 that there exists "a far-reaching correspondence between the various types of possible transitions between the stationary states on the one hand and the various harmonic components of the motion on the other hand."
Bohr's first article containing the definition of the correspondence principle was in 1923, in a summary paper entitled (in the English translation) "On the application of quantum theory to atomic structure". In his chapter II, "The process of radiation", he defines his correspondence principle as a condition connecting harmonic components of the electron moment to the possible occurrence of a radiative transition. In modern terms, this condition is a selection rule, saying that a given quantum jump is possible if and only if a particular type of motion exists in the corresponding classical model.
Following his definition of the correspondence principle, Bohr describes two applications. First he shows that the frequency of emitted radiation is related to an integral which can be well approximated by a sum when the quantum numbers inside the integral are large compared with their differences. Similarly he shows a relationship for the intensities of spectral lines and thus the rates at which quantum jumps occur.
These asymptotic relationships are expressed by Bohr as consequences of his general correspondence principle. However, historically each of these applications has been called "the correspondence principle".
The PhD dissertation of Hans Kramers, working in Bohr's group in Copenhagen, applied Bohr's correspondence principle to account for all of the known facts of the spectroscopic Stark effect, including some spectral components not known at the time of Kramers' work.
Sommerfeld had been skeptical of the correspondence principle, as it did not seem to be a consequence of a fundamental theory; Kramers' work convinced him that the principle had heuristic utility nevertheless. Other physicists picked up the concept, including work by John Van Vleck and by Kramers and Heisenberg on dispersion theory. The principle became a cornerstone of the semi-classical Bohr–Sommerfeld atomic theory.
Bohr's 1922 Nobel prize was partly awarded for his work with the correspondence principle.
Despite the successes, the physical theories based on the principle faced increasing challenges in the early 1920s. Theoretical calculations by Van Vleck and by Kramers of the ionization potential of helium disagreed significantly with experimental values. Bohr, Kramers, and John C. Slater responded with a new theoretical approach now called the BKS theory, based on the correspondence principle but disavowing conservation of energy. Einstein and Wolfgang Pauli criticized the new approach, and the Bothe–Geiger coincidence experiment showed that energy was conserved in quantum collisions.
With the existing theories in conflict with observations, two new quantum mechanics concepts arose. First, Heisenberg's 1925 Umdeutung paper on matrix mechanics was inspired by the correspondence principle, although he did not cite Bohr. Further development in collaboration with Pascual Jordan and Max Born resulted in a mathematical model without connection to the principle. Second, Schrödinger's wave mechanics in the following year similarly did not use the principle. Both pictures were later shown to be equivalent and accurate enough to replace old quantum theory. These approaches have no atomic orbits: the correspondence is more of an analogy than a principle.
Dirac's correspondence
Paul Dirac developed significant portions of the new quantum theory in the second half of the 1920s. While he did not apply Bohr's correspondence principle, he developed a different, more formal classical–quantum correspondence. Dirac connected the structures of classical mechanics known as Poisson brackets to analogous structures of quantum mechanics known as commutators:

$$\{A, B\} \;\longleftrightarrow\; \frac{1}{i\hbar}\left[\hat{A}, \hat{B}\right].$$
By this correspondence, now called canonical quantization, Dirac showed how the mathematical form of classical mechanics could be recast as a basis for the new mathematics of quantum mechanics.
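The flavour of this correspondence can be checked numerically. Building position and momentum matrices from harmonic-oscillator ladder operators (with ħ = m = ω = 1) reproduces [x, p] = iħ on every basis state except the last, an unavoidable artifact of truncating infinite matrices (a commutator of finite matrices must be traceless). The basis size N below is arbitrary.

```python
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T) / np.sqrt(2)                   # position operator
p = 1j * (a.T - a) / np.sqrt(2)              # momentum operator

comm = x @ p - p @ x                         # should be i*hbar*identity
print(np.round(np.diag(comm).imag, 6))
# -> [1, 1, 1, 1, 1, 1, 1, -7]: i*hbar everywhere but the truncated edge
```

The stray −(N−1) entry at the edge is the price of representing Dirac's infinite-dimensional correspondence with finite matrices.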
Dirac developed these connections by studying the work of Heisenberg and Kramers on dispersion, work that was directly built on Bohr's correspondence principle; the Dirac approach provides a mathematically sound path towards Bohr's goal of a connection between classical and quantum mechanics. While Dirac did not call this correspondence a "principle", physics textbooks refer to his connections as a "correspondence principle".
The classical limit of wave mechanics
The outstanding success of classical mechanics in the description of natural phenomena up to the 20th century means that quantum mechanics must do as well in similar circumstances.
One way to quantitatively define this concept is to require quantum mechanical theories to produce classical mechanics results as the quantum of action goes to zero, $\hbar \to 0$. This transition can be accomplished in two different ways.
First, the particle can be approximated by a wave packet, and the indefinite spread of the packet with time can be ignored. In 1927, Paul Ehrenfest proved his namesake theorem, which shows that Newton's laws of motion hold on average in quantum mechanics: the quantum statistical expectation values of position and momentum obey Newton's laws,

$$m\frac{\mathrm{d}}{\mathrm{d}t}\langle x\rangle = \langle p\rangle, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\langle p\rangle = -\left\langle \frac{\partial V}{\partial x}\right\rangle.$$
Second, the individual particle view can be replaced with a statistical mixture of classical particles with a density matching the quantum probability density. This approach led to the concept of semiclassical physics, beginning with the development of WKB approximation used in descriptions of quantum tunneling for example.
Modern view
While Bohr viewed "correspondence" as a principle aiding his description of quantum phenomena, fundamental differences between the mathematical structures of quantum and classical mechanics prevent correspondence in many cases. Rather than a principle, "there may be in some situations an approximate correspondence between classical and quantum concepts," as physicist Asher Peres put it. Since quantum mechanics operates in a discrete space and classical mechanics in a continuous one, any correspondence will be necessarily fuzzy and elusive.
Introductory quantum mechanics textbooks suggest that quantum mechanics goes over to classical theory in the limit of high quantum numbers or in a limit where the Planck constant in the quantum formula is reduced to zero, $\hbar \to 0$. However, such correspondence is not always possible. For example, classical systems can exhibit chaotic orbits which diverge, but quantum states evolve unitarily and maintain a fixed overlap.
Generalized correspondence principle
The term "generalized correspondence principle" has been used in the study of the history of science to mean the reduction of a new scientific theory to an earlier scientific theory in appropriate circumstances. This requires that the new theory explain all the phenomena under circumstances for which the preceding theory was known to be valid; it also means that new theory will retain large parts of the older theory. The generalized principle applies correspondence across aspects of a complete theory, not just a single formula as in the classical limit correspondence. For example, Albert Einstein in his 1905 work on relativity noted that classical mechanics relied on Galilean relativity while electromagnetism did not, and yet both work well. He produced a new theory that combined them in a away that reduced to these separate theories in approximations.
Ironically the singular failure of this "generalized correspondence principle" concept of scientific theories is the replacement of classical mechanics with quantum mechanics.
See also
Quantum decoherence
Classical limit
Classical probability density
Leggett–Garg inequality
References
Quantum mechanics
Theory of relativity
Philosophy of physics
Principles
Metatheory | Correspondence principle | [
"Physics"
] | 1,970 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Theoretical physics",
"Quantum mechanics",
"Theory of relativity"
] |
59,874 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%20equation | The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system. Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". The Klein-Gordon equation is a wave equation which is the relativistic version of the Schrödinger equation. The Schrödinger equation is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing.
Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. The second-derivative PDE of the Klein-Gordon equation led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square-root of the Klein-Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein-Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles.
Definition
Preliminaries
Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:

$$i\hbar\frac{\partial}{\partial t}\Psi(x,t) = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}\Psi(x,t) + V(x,t)\,\Psi(x,t).$$
Here, $\Psi(x,t)$ is a wave function, a function that assigns a complex number to each point $x$ at each time $t$. The parameter $m$ is the mass of the particle, and $V(x,t)$ is the potential that represents the environment in which the particle exists. The constant $i$ is the imaginary unit, and $\hbar$ is the reduced Planck constant, which has units of action (energy multiplied by time).
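As a concrete illustration (one numerical method among several), the split-step Fourier sketch below propagates this equation for a Gaussian packet in a harmonic potential with ħ = m = 1; the grid, time step and potential are illustrative choices.

```python
import numpy as np

hbar, m = 1.0, 1.0
N, L = 512, 40.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # momentum grid
V = 0.5 * x**2                               # harmonic potential

psi = np.exp(-(x - 2.0) ** 2).astype(complex)        # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)        # normalize

dt, steps = 0.005, 2000
half_V = np.exp(-0.5j * V * dt / hbar)               # half potential step
kin = np.exp(-0.5j * hbar * k**2 * dt / m)           # full kinetic step
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi

norm = np.sum(np.abs(psi) ** 2) * dx
print(f"norm after t = {dt * steps:.0f}: {norm:.6f}")  # stays ~1
```

Because each sub-step multiplies the wave function by a pure phase, the total probability is conserved, anticipating the unitarity property discussed below.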
Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector $|\psi\rangle$ belonging to a separable complex Hilbert space $\mathcal{H}$. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys $\langle\psi|\psi\rangle = 1$. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions $L^{2}(\mathbb{R}^{3})$, while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space $\mathbb{C}^{2}$ with the usual inner product.
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda$ is non-degenerate and the probability is given by $|\langle\lambda|\Psi\rangle|^{2}$, where $|\lambda\rangle$ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle\Psi|P_{\lambda}|\Psi\rangle$, where $P_{\lambda}$ is the projector onto its associated eigenspace.
A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states. Thus, a position-space wave function as used above can be written as the inner product of a time-dependent state vector with unphysical but convenient "position eigenstates" $|x\rangle$:

$$\Psi(x,t) = \langle x|\Psi(t)\rangle.$$
Time-dependent equation
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:

$$i\hbar\frac{\mathrm{d}}{\mathrm{d}t}|\Psi(t)\rangle = \hat{H}|\Psi(t)\rangle,$$
where $t$ is time, $|\Psi(t)\rangle$ is the state vector of the quantum system ($\Psi$ being the Greek letter psi), and $\hat{H}$ is an observable, the Hamiltonian operator.
The term "Schrödinger equation" can refer to both the general equation, or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space $\Psi(x,t)$ as above, we have

$$\rho(x,t) = |\Psi(x,t)|^{2}.$$
Time-independent equation
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
$\hat{H} |\Psi\rangle = E |\Psi\rangle,$
where is the energy of the system. This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) .
Properties
Linearity
The Schrödinger equation is a linear differential equation, meaning that if two state vectors $|\psi_1\rangle$ and $|\psi_2\rangle$ are solutions, then so is any linear combination
$|\psi\rangle = a |\psi_1\rangle + b |\psi_2\rangle$
of the two state vectors, where $a$ and $b$ are any complex numbers. Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector can be written as the linear combination
$|\Psi(t)\rangle = \sum_n c_n e^{-iE_n t/\hbar} |\psi_n\rangle,$
where $c_n$ are complex numbers and the vectors $|\psi_n\rangle$ are solutions of the time-independent equation $\hat{H} |\psi_n\rangle = E_n |\psi_n\rangle$.
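One can check directly that this expansion solves the time-dependent equation (a short verification added here; differentiate term by term and use the eigenvalue relation):

```latex
i\hbar \frac{d}{dt} \sum_n c_n e^{-iE_n t/\hbar} |\psi_n\rangle
= \sum_n c_n E_n e^{-iE_n t/\hbar} |\psi_n\rangle
= \hat{H} \left( \sum_n c_n e^{-iE_n t/\hbar} |\psi_n\rangle \right),
```

where the last step uses $\hat{H} |\psi_n\rangle = E_n |\psi_n\rangle$ and the linearity of $\hat{H}$.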
Unitarity
Holding the Hamiltonian constant, the Schrödinger equation has the solution
$|\Psi(t)\rangle = e^{-i\hat{H}t/\hbar} |\Psi(0)\rangle.$
The operator is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is , then the state at a later time will be given by
for some unitary operator . Conversely, suppose that is a continuous family of unitary operators parameterized by . Without loss of generality, the parameterization can be chosen so that is the identity operator and that for any . Then depends upon the parameter in such a way that
for some self-adjoint operator , called the generator of the family . A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units).
To see that the generator is Hermitian, note that with $\hat{U}(\delta t) \approx \hat{U}(0) - i\hat{G}\,\delta t$, we have
$\hat{U}(\delta t)^\dagger \hat{U}(\delta t) \approx 1 + i\,\delta t\,(\hat{G}^\dagger - \hat{G}),$
so $\hat{U}(\delta t)$ is unitary to first order only if the generator $\hat{G}$ is Hermitian.
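A quick numerical check of this unitarity property (a sketch, not from the original text; it assumes $\hbar = 1$ and uses a random 4×4 Hermitian matrix as a stand-in Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2            # Hermitian "Hamiltonian"

U = expm(-1j * H * 0.7)             # time-evolution operator for t = 0.7, hbar = 1
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
```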
Changes of basis
The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle. The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:
$\hat{H} = \frac{1}{2m}\hat{\mathbf{p}} \cdot \hat{\mathbf{p}} + V(\hat{\mathbf{x}}).$
Writing $\mathbf{x}$ for a three-dimensional position vector and $\mathbf{p}$ for a three-dimensional momentum vector, the position-space Schrödinger equation is
$i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{x}, t) = -\frac{\hbar^2}{2m} \nabla^2 \Psi(\mathbf{x}, t) + V(\mathbf{x})\, \Psi(\mathbf{x}, t).$
The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:
The functions and are derived from by
where and do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.
When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables $x$ and $p$ are promoted to self-adjoint operators $\hat{x}$ and $\hat{p}$ that satisfy the canonical commutation relation
$[\hat{x}, \hat{p}] = i\hbar.$
This implies that
$\langle x | \hat{p} | \Psi \rangle = -i\hbar \frac{d}{dx} \Psi(x),$
so the action of the momentum operator in the position-space representation is $-i\hbar \frac{\partial}{\partial x}$. Thus, $\hat{p}^2$ becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian $\nabla^2$.
The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples $\tilde{\Psi}(\mathbf{k})$ with $\tilde{\Psi}(\mathbf{k} + \mathbf{K})$ for only discrete reciprocal lattice vectors $\mathbf{K}$. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.
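The Fourier-conjugate relationship can be demonstrated numerically. The sketch below (illustrative, with $\hbar = 1$ and a periodic grid assumed) applies the momentum operator $-i\hbar\, d/dx$ in two ways, as multiplication by the wavenumber in Fourier space and as a finite-difference derivative on the position grid, and confirms that they agree:

```python
import numpy as np

N, L = 256, 20.0
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # wavenumber grid

psi = np.exp(-((x - L / 2) ** 2))                 # smooth test function
p_psi_fourier = np.fft.ifft(k * np.fft.fft(psi))  # multiply by hbar*k in Fourier space
p_psi_grid = -1j * np.gradient(psi, x)            # -i d/dx on the position grid

print(np.max(np.abs(p_psi_fourier - p_psi_grid)))  # small finite-difference error
```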
Probability current
The Schrödinger equation is consistent with local probability conservation. It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. In contrast, for the Klein–Gordon equation, although a redefined inner product of a wavefunction can be time independent, the total volume integral of the modulus squared of the wavefunction need not be time independent.
The continuity equation for probability in nonrelativistic quantum mechanics is stated as:
$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,$
where $\rho = \Psi^* \Psi$ is the probability density and
$\mathbf{j} = \frac{\hbar}{2mi} \left( \Psi^* \nabla \Psi - \Psi \nabla \Psi^* \right)$
is the probability current or probability flux (flow per unit area).
If the wavefunction is represented as $\psi(\mathbf{r}, t) = \sqrt{\rho(\mathbf{r}, t)}\, e^{iS(\mathbf{r}, t)/\hbar}$, where $S$ is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as:
$\mathbf{j} = \frac{\rho\, \nabla S}{m}.$
Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the term $\frac{\nabla S}{m}$ appears to play the role of velocity, it does not represent velocity at a point, since a simultaneous measurement of position and velocity violates the uncertainty principle.
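A numerical illustration of this phase–flux relationship (a sketch with $\hbar = m = 1$ assumed; the packet parameters are arbitrary): for a Gaussian packet whose phase is $S = \hbar k_0 x$, the current reduces to the density times the "velocity" $k_0$:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
sigma, k0 = 1.0, 2.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

dpsi = np.gradient(psi, x)
j = np.imag(np.conj(psi) * dpsi)                   # probability current, hbar = m = 1
print(np.max(np.abs(j - k0 * np.abs(psi) ** 2)))   # small discretization error
```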
Separation of variables
If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads:
The operator on the left side depends only on time; the one on the right side depends only on space.
Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts
where $\psi(\mathbf{r})$ is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and $\tau(t)$ is a function of time only. Substituting this expression for $\Psi$ into the time-dependent left hand side shows that $\tau(t)$ is a phase factor:
$\tau(t) = e^{-iEt/\hbar}.$
A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.
The spatial part of the full wave function solves:
where the energy appears in the phase factor.
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated,
or radial and angular coordinates might be separated:
Examples
Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} = E\psi.$
With the differential operator defined by
$\hat{p}_x = -i\hbar \frac{d}{dx},$
the previous equation becomes
$\frac{1}{2m} \hat{p}_x^2\, \psi = E\psi,$
which is evocative of the classic kinetic energy analogue
$\frac{1}{2m} p_x^2 = E,$
with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
$\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$
or, from Euler's formula,
$\psi(x) = C \sin(kx) + D \cos(kx).$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$,
$\psi(0) = 0 = C \sin(0) + D \cos(0) = D,$
and $D = 0$. At $x = L$,
$\psi(L) = 0 = C \sin(kL),$
in which $C$ cannot be zero as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$,
$k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots$
This constraint on $k$ implies a constraint on the energy levels, yielding
$E_n = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$
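Numerically, the energy ladder just derived is easy to evaluate. The following sketch (illustrative; the 1 nm box width is an assumed input, not from the text) computes the first few levels for an electron:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg, electron mass
L = 1e-9                 # m, assumed box width (1 nm)
eV = 1.602176634e-19     # J per electronvolt

for n in (1, 2, 3):
    E_n = n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2)
    print(n, E_n / eV, "eV")   # roughly 0.38, 1.5, 3.4 eV
```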
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
Harmonic oscillator
The Schrödinger equation for this situation is
$E\psi = -\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} + \frac{1}{2} m\omega^2 x^2 \psi,$
where $x$ is the displacement and $\omega$ the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
The solutions in position space are
where , and the functions are the Hermite polynomials of order . The solution set may be generated by
The eigenvalues are
$E_n = \left(n + \frac{1}{2}\right) \hbar\omega.$
The case $n = 0$ is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.
The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.
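This discreteness can be checked directly. The sketch below (with $\hbar = m = \omega = 1$ assumed, and grid parameters chosen for illustration) diagonalizes a finite-difference Hamiltonian for the oscillator and recovers the evenly spaced ladder $E_n = (n + 1/2)\hbar\omega$:

```python
import numpy as np

N, xmax = 1000, 10.0
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2 discretized with Dirichlet boundaries
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:4])   # approximately [0.5, 1.5, 2.5, 3.5]
```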
Hydrogen atom
The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is
where is the electron charge, is the position of the electron relative to the nucleus, is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein is the permittivity of free space and
$\mu = \frac{m_e m_p}{m_e + m_p}$
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass $m_p$ and the electron of mass $m_e$. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus,
where $R_{n\ell}$ are radial functions and $Y_{\ell}^{m}(\theta, \varphi)$ are spherical harmonics of degree $\ell$ and order $m$. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximate methods. The family of solutions is:
where
is the Bohr radius,
are the generalized Laguerre polynomials of degree ,
are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values
$n = 1, 2, 3, \ldots, \qquad \ell = 0, 1, 2, \ldots, n - 1, \qquad m = -\ell, \ldots, \ell.$
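As a numerical companion to these solutions (a sketch using CODATA constants; the loop over $n$ is illustrative), the energies depend only on the principal quantum number and reproduce the familiar $-13.6\ \text{eV}/n^2$ ladder:

```python
import numpy as np

e = 1.602176634e-19         # C, elementary charge
eps0 = 8.8541878128e-12     # F/m, vacuum permittivity
hbar = 1.054571817e-34      # J s
m_e, m_p = 9.1093837015e-31, 1.67262192369e-27
mu = m_e * m_p / (m_e + m_p)    # two-body reduced mass

for n in (1, 2, 3):
    E_n = -mu * e**4 / (2 * hbar**2 * (4 * np.pi * eps0) ** 2 * n**2)
    print(n, E_n / e, "eV")     # approximately -13.6, -3.40, -1.51 eV
```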
Approximate solutions
It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.
Semiclassical limit
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential $V$, the Ehrenfest theorem says
$m \frac{d}{dt} \langle x \rangle = \langle p \rangle, \qquad \frac{d}{dt} \langle p \rangle = -\left\langle V'(x) \right\rangle.$
Although the first of these equations is consistent with the classical behavior, the second is not: If the pair $(\langle x \rangle, \langle p \rangle)$ were to satisfy Newton's second law, the right-hand side of the second equation would have to be
$-V'\!\left(\langle x \rangle\right),$
which is typically not the same as $-\langle V'(x) \rangle$. For a general $V'$, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, $V'$ is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.
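A concrete case of the mismatch (an illustrative example added here, not taken from the surrounding text): for a quartic potential $V(x) = \lambda x^4$, the two sides differ whenever the state has spread:

```latex
\langle V'(X) \rangle = 4\lambda \langle X^3 \rangle
\;\neq\; 4\lambda \langle X \rangle^3 = V'(\langle X \rangle).
```

Equality holds only approximately when the wave function is sharply peaked, so that $\langle X^3 \rangle \approx \langle X \rangle^3$, which is exactly the localized regime discussed next.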
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point , then and will be almost the same, since both will be approximately equal to . In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
The Schrödinger equation in its general form
is closely related to the Hamilton–Jacobi equation (HJE)
where is the classical action and is the Hamiltonian function (not operator). Here the generalized coordinates for (used in the context of the HJE) can be set to the position in Cartesian coordinates as .
Substituting $\Psi = \sqrt{\rho(\mathbf{x}, t)}\, e^{iS(\mathbf{x}, t)/\hbar}$, where $\rho$ is the probability density, into the Schrödinger equation and then taking the limit $\hbar \to 0$ in the resulting equation yields the Hamilton–Jacobi equation.
Density matrices
Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written
The density-matrix analogue of the Schrödinger equation for wave functions is
$i\hbar \frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}],$
where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield
$\hat{\rho}(t) = e^{-i\hat{H}t/\hbar}\, \hat{\rho}(0)\, e^{i\hat{H}t/\hbar}.$
More generally, if the unitary operator $\hat{U}(t)$ describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by
$\hat{\rho}(t) = \hat{U}(t)\, \hat{\rho}(0)\, \hat{U}(t)^\dagger.$
Unitary evolution of a density matrix conserves its von Neumann entropy.
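The conservation statements in this section can be verified numerically. The sketch below (assuming $\hbar = 1$, with a random Hermitian matrix standing in for the Hamiltonian and an arbitrary diagonal mixed state) evolves a density matrix and checks that its trace and von Neumann entropy are unchanged:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                          # Hermitian "Hamiltonian"

rho = np.diag([0.5, 0.3, 0.2]).astype(complex)    # a mixed state
U = expm(-1j * H * 2.0)
rho_t = U @ rho @ U.conj().T                      # unitary evolution

def von_neumann_entropy(r):
    w = np.linalg.eigvalsh(r)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

print(np.trace(rho_t).real)                                   # 1.0
print(von_neumann_entropy(rho), von_neumann_entropy(rho_t))   # equal
```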
Relativistic quantum physics and quantum field theory
The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one thing, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.
Klein–Gordon and Dirac equations
Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation
instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,
was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices . Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read
This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass and electric charge in an electromagnetic field (described by the electromagnetic potentials and ) is:
in which the $\boldsymbol{\alpha}$ and $\beta$ are the Dirac matrices related to the spin of the particle. The Dirac equation is true for all spin-1/2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two to the antiparticle.
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).
In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin , are complex-valued spinor fields.
Fock space
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function $\Psi(x, t)$. This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.
History
Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum of a photon is inversely proportional to its wavelength $\lambda$, or proportional to its wave number $k$:
$p = \frac{h}{\lambda} = \hbar k,$
where is the Planck constant and is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.
These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum according to
$L = n\frac{h}{2\pi} = n\hbar.$
According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
$n\lambda = 2\pi r.$
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius .
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen.
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.
The equation he found is
By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
$\left(E + \frac{e^2}{r}\right)^2 \psi(x) = -\nabla^2 \psi(x) + m^2 \psi(x).$
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and chose to leave the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl), Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave $\Psi(\mathbf{x}, t)$, moving in a potential well $V$ created by the proton. This computation accurately reproduced the energy levels of the Bohr model.
The Schrödinger equation details the behavior of but says nothing of its nature. Schrödinger tried to interpret the real part of as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted as the probability amplitude, whose modulus squared is equal to probability density. Later, Schrödinger himself explained this interpretation as follows:
Interpretation
The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.
See also
Eckhaus equation
Fokker–Planck equation
Interpretations of quantum mechanics
List of things named after Erwin Schrödinger
Logarithmic Schrödinger equation
Nonlinear Schrödinger equation
Pauli equation
Quantum channel
Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
Schrödinger picture
Wigner quasiprobability distribution
Notes
References
External links
Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware
The Modern Revolution in Physics – an online textbook.
Quantum Physics I at MIT OpenCourseWare
Partial differential equations
Wave mechanics
Functions of space and time | Schrödinger equation | [
"Physics"
] | 7,091 | [
"Physical phenomena",
"Equations of physics",
"Functions of space and time",
"Eponymous equations of physics",
"Classical mechanics",
"Quantum mechanics",
"Waves",
"Wave mechanics",
"Schrödinger equation",
"Spacetime"
] |
59,881 | https://en.wikipedia.org/wiki/Ideal%20gas%20law | The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in an empirical form:
$pV = nRT,$
where , and are the pressure, volume and temperature respectively; is the amount of substance; and is the ideal gas constant.
It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.
Equation
The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin.
Common forms
The most frequently introduced forms are:
$pV = nRT = N k_\text{B} T,$
where:
is the absolute pressure of the gas,
is the volume of the gas,
is the amount of substance of gas (also known as number of moles),
is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
is the Boltzmann constant,
is the Avogadro constant,
is the absolute temperature of the gas,
is the number of particles (usually atoms or molecules) of the gas.
In SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K).
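As a quick sanity check of these units (a sketch; the 22.414 L molar volume at 0 °C is the classical textbook value), one mole at standard conditions should yield about one atmosphere:

```python
R = 8.314                           # J/(mol K)
n, T, V = 1.0, 273.15, 22.414e-3    # mol, K, m^3

p = n * R * T / V
print(p, "Pa")                      # about 101300 Pa, i.e. roughly 1 atm
```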
Molar form
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount, n (in moles), is equal to total mass of the gas (m) (in kilograms) divided by the molar mass, M (in kilograms per mole):
$n = \frac{m}{M}.$
By replacing n with m/M and subsequently introducing density ρ = m/V, we get:
$pV = \frac{m}{M} RT \quad\Rightarrow\quad p = \rho \frac{R}{M} T.$
Defining the specific gas constant Rspecific as the ratio R/M,
$p = \rho R_\text{specific} T.$
This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as
$p v = R_\text{specific} T.$
It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as or to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being used.
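A worked example of the specific-gas-constant form (a sketch; the molar mass of dry air, about 0.028965 kg/mol, and the sea-level conditions are illustrative assumed inputs):

```python
R = 8.314                 # J/(mol K), universal gas constant
M_air = 0.028965          # kg/mol, approximate molar mass of dry air
R_specific = R / M_air    # about 287 J/(kg K)

p, T = 101325.0, 288.15   # Pa and K, standard sea-level conditions
rho = p / (R_specific * T)
print(R_specific, rho)    # about 287 J/(kg K) and 1.23 kg/m^3
```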
Statistical mechanics
In statistical mechanics, the following molecular equation is derived from first principles:
$p = n k_\text{B} T,$
where $p$ is the absolute pressure of the gas, $n$ is the number density of the molecules (given by the ratio $n = N/V$, in contrast to the previous formulation in which $n$ is the number of moles), $T$ is the absolute temperature, and $k_\text{B}$ is the Boltzmann constant relating temperature and energy, given by:
$k_\text{B} = \frac{R}{N_\text{A}},$
where $N_\text{A}$ is the Avogadro constant.
From this we notice that for a gas of mass , with an average particle mass of times the atomic mass constant, , (i.e., the mass is Da) the number of molecules will be given by
and since , we find that the ideal gas law can be rewritten as
In SI units, is measured in pascals, in cubic metres, in kelvins, and in SI units.
Combined gas law
Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law except that the number of moles is unspecified, and the ratio of $pV$ to $T$ is simply taken as a constant:
$\frac{pV}{T} = k,$
where $p$ is the pressure of the gas, $V$ is the volume of the gas, $T$ is the absolute temperature of the gas, and $k$ is a constant. When comparing the same substance under two different sets of conditions, the law can be written as
$\frac{p_1 V_1}{T_1} = \frac{p_2 V_2}{T_2}.$
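For instance (an illustrative sketch with made-up numbers), solving the two-state form for the final pressure of a gas that is compressed to half its volume while being heated:

```python
p1, V1, T1 = 100e3, 2.0e-3, 300.0   # Pa, m^3, K (state 1)
V2, T2 = 1.0e-3, 360.0              # m^3, K (state 2)

p2 = p1 * V1 / T1 * T2 / V2         # from p1 V1 / T1 = p2 V2 / T2
print(p2, "Pa")                     # 240000.0 Pa
```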
Energy associated with a gas
According to the assumptions of the kinetic theory of ideal gases, one can consider that there are no intermolecular attractions between the molecules, or atoms, of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is the kinetic energy of the molecules, or atoms, of the gas:
$E = \frac{3}{2} nRT.$
This corresponds to the kinetic energy of n moles of a monatomic gas having 3 degrees of freedom: x, y, z. The table here below gives this relationship for different amounts of a monatomic gas.
Applications to thermodynamic processes
The table below essentially simplifies the ideal gas equation for a particular process, making the equation easier to solve using numerical methods.
A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, S, or H) is constant throughout the process.
For a given thermodynamic process, in order to specify the extent of a particular process, one of the properties ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation).
In the final three columns, the properties (p, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.
a. In an isentropic process, system entropy (S) is constant. Under these conditions, $p_1 V_1^\gamma = p_2 V_2^\gamma$, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas (see the worked example after these notes). The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2), (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He), and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and the temperature.
b. In an isenthalpic process, system enthalpy (H) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gasses, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar.
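As a worked example of the isentropic case in note (a) (a sketch; the halving of the volume and the starting conditions are illustrative), adiabatically compressing air with γ = 1.4:

```python
gamma = 1.4                          # heat capacity ratio for air
p1, V1, T1 = 101325.0, 1.0, 288.15   # Pa, m^3, K

V2 = 0.5 * V1                        # compress to half the volume
p2 = p1 * (V1 / V2) ** gamma         # isentropic relation p V^gamma = const
T2 = T1 * (p2 * V2) / (p1 * V1)      # ideal gas law: p V / T = const
print(p2, T2)                        # about 267 kPa and 380 K
```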
Deviations from ideal behavior of real gases
The equation of state given here (PV = nRT) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces.
Derivations
Empirical
The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant.
All the possible gas laws that could have been discovered with this kind of setup are:
Boyle's law ()
Charles's law ()
Avogadro's law ()
Gay-Lussac's law ()
where P stands for pressure, V for volume, N for number of particles in the gas and T for temperature; the proportionality constants are constants in this context because each equation requires that only the parameters explicitly noted in it change, while every other state variable is held fixed.
To derive the ideal gas law one does not need to know all six formulas: a suitable set of three allows the rest to be derived, and four are enough to obtain the ideal gas law directly, as in the derivation below.
Since each formula only holds when only the state variables involved in said formula change while the others (which are a property of the gas but are not explicitly noted in said formula) remain constant, we cannot simply use algebra and directly combine them all. This is why: Boyle did his experiments while keeping N and T constant and this must be taken into account (in this same way, every experiment kept some parameter as constant and this must be taken into account for the derivation).
Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time (as it was done in the experiments). The derivation using 4 formulas can look like this:
at first the gas has parameters
Say, starting to change only pressure and volume, according to Boyle's law (), then:
After this process, the gas has parameters
Using then equation () to change the number of particles in the gas and the temperature,
After this process, the gas has parameters
Using then equation () to change the pressure and the number of particles,
After this process, the gas has parameters
Using then Charles's law (equation 2) to change the volume and temperature of the gas,
After this process, the gas has parameters
Using simple algebra on equations (), (), () and () yields the result:
or where stands for the Boltzmann constant.
Another equivalent result, using the fact that , where n is the number of moles in the gas and R is the universal gas constant, is:
which is known as the ideal gas law.
If three of the six equations are known, it may be possible to derive the remaining three using the same method. However, because each formula has two variables, this is possible only for certain groups of three. For example, if you were to have equations (), () and () you would not be able to get any more because combining any two of them will only give you the third. However, if you had equations (), () and () you would be able to get all six equations because combining () and () will yield (), then () and () will yield (), then () and () will yield (), as well as would the combination of () and () as is explained in the following visual relation:
where the numbers represent the gas laws numbered above.
If you were to use the same method used above on 2 of the 3 laws on the vertices of one triangle that has a "O" inside it, you would get the third.
For example:
Change only pressure and volume first:
then only volume and temperature:
then as we can choose any value for , if we set , equation () becomes:
combining equations () and () yields , which is equation (), of which we had no prior knowledge until this derivation.
Theoretical
Kinetic theory
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved.
First we show that the fundamental assumptions of the kinetic theory of gases imply that
$pV = \frac{1}{3} N m \overline{v^2}.$
Consider a container in the Cartesian coordinate system. For simplicity, we assume that a third of the molecules moves parallel to the -axis, a third moves parallel to the -axis and a third moves parallel to the -axis. If all molecules move with the same velocity , denote the corresponding pressure by . We choose an area on a wall of the container, perpendicular to the -axis. When time elapses, all molecules in the volume moving in the positive direction of the -axis will hit the area. There are molecules in a part of volume of the container, but only one sixth (i.e. a half of a third) of them moves in the positive direction of the -axis. Therefore, the number of molecules that will hit the area when the time elapses is .
When a molecule bounces off the wall of the container, it changes its momentum to . Hence the magnitude of change of the momentum of one molecule is . The magnitude of the change of momentum of all molecules that bounce off the area when time elapses is then . From and we get
We considered a situation where all molecules move with the same velocity . Now we consider a situation where they can move with different velocities, so we apply an "averaging transformation" to the above equation, effectively replacing by a new pressure and by the arithmetic mean of all squares of all velocities of the molecules, i.e. by Therefore
which gives the desired formula.
Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range to is , where
and denotes the Boltzmann constant. The root-mean-square speed can be calculated by
Using the integration formula
it follows that
from which we get the ideal gas law:
Statistical mechanics
Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then (two times) the time-averaged kinetic energy of the particle is:
where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields
By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence
where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is
the divergence theorem implies that
where dV is an infinitesimal volume within the container and V is the total volume of the container.
Putting these equalities together yields
which immediately implies the ideal gas law for N particles:
where n = N/NA is the number of moles of gas and R = NAkB is the gas constant.
Other dimensions
For a d-dimensional system, the ideal gas pressure is:
$p = \frac{N k_\text{B} T}{L^d},$
where $L^d$ is the volume of the d-dimensional domain in which the gas exists. The dimensions of the pressure change with dimensionality.
See also
Gas laws
References
Further reading
External links
Configuration integral (statistical mechanics) where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. this wiki site is down; see this article in the web archive on 2012 April 28.
Gas equations in detail
Gas laws
Ideal gas
Equations of state
1834 introductions | Ideal gas law | [
"Physics",
"Chemistry"
] | 3,343 | [
"Thermodynamic systems",
"Equations of physics",
"Physical systems",
"Gas laws",
"Statistical mechanics",
"Equations of state",
"Ideal gas"
] |
60,162 | https://en.wikipedia.org/wiki/Tidal%20locking | Tidal locking between a pair of co-orbiting astronomical bodies occurs when one of the objects reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit. In the case where a tidally locked body possesses synchronous rotation, the object takes just as long to rotate around its own axis as it does to revolve around its partner. For example, the same side of the Moon always faces Earth, although there is some variability because the Moon's orbit is not perfectly circular. Usually, only the satellite is tidally locked to the larger body. However, if both the difference in mass between the two bodies and the distance between them are relatively small, each may be tidally locked to the other; this is the case for Pluto and Charon, and for Eris and Dysnomia. Alternative names for the tidal locking process are gravitational locking, captured rotation, and spin–orbit locking.
The effect arises between two bodies when their gravitational interaction slows a body's rotation until it becomes tidally locked. Over many millions of years, the interaction forces changes to their orbits and rotation rates as a result of energy exchange and heat dissipation. When one of the bodies reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit, it is said to be tidally locked. The object tends to stay in this state because leaving it would require adding energy back into the system. The object's orbit may migrate over time so as to undo the tidal lock, for example, if a giant planet perturbs the object.
There is ambiguity in the use of the terms 'tidally locked' and 'tidal locking', in that some scientific sources use it to refer exclusively to 1:1 synchronous rotation (e.g. the Moon), while others include non-synchronous orbital resonances in which there is no further transfer of angular momentum over the course of one orbit (e.g. Mercury). In Mercury's case, the planet completes three rotations for every two revolutions around the Sun, a 3:2 spin–orbit resonance. In the special case where an orbit is nearly circular and the body's rotation axis is not significantly tilted, such as the Moon, tidal locking results in the same hemisphere of the revolving object constantly facing its partner.
Regardless of which definition of tidal locking is used, the hemisphere that is visible changes slightly due to variations in the locked body's orbital velocity and the inclination of its rotation axis over time.
Mechanism
Consider a pair of co-orbiting objects, A and B. The change in rotation rate necessary to tidally lock body B to the larger body A is caused by the torque applied by A's gravity on bulges it has induced on B by tidal forces.
The gravitational force from object A upon B will vary with distance, being greatest at the nearest surface to A and least at the most distant. This creates a gravitational gradient across object B that will distort its equilibrium shape slightly. The body of object B will become elongated along the axis oriented toward A, and conversely, slightly reduced in dimension in directions orthogonal to this axis. The elongated distortions are known as tidal bulges. (For the solid Earth, these bulges can reach displacements of up to around .) When B is not yet tidally locked, the bulges travel over its surface due to orbital motions, with one of the two "high" tidal bulges traveling close to the point where body A is overhead. For large astronomical bodies that are nearly spherical due to self-gravitation, the tidal distortion produces a slightly prolate spheroid, i.e. an axially symmetric ellipsoid that is elongated along its major axis. Smaller bodies also experience distortion, but this distortion is less regular.
The material of B exerts resistance to this periodic reshaping caused by the tidal force. In effect, some time is required to reshape B to the gravitational equilibrium shape, by which time the forming bulges have already been carried some distance away from the A–B axis by B's rotation. Seen from a vantage point in space, the points of maximum bulge extension are displaced from the axis oriented toward A. If B's rotation period is shorter than its orbital period, the bulges are carried forward of the axis oriented toward A in the direction of rotation, whereas if B's rotation period is longer, the bulges instead lag behind.
Because the bulges are now displaced from the A–B axis, A's gravitational pull on the mass in them exerts a torque on B. The torque on the A-facing bulge acts to bring B's rotation in line with its orbital period, whereas the "back" bulge, which faces away from A, acts in the opposite sense. However, the bulge on the A-facing side is closer to A than the back bulge by a distance of approximately B's diameter, and so experiences a slightly stronger gravitational force and torque. The net resulting torque from both bulges, then, is always in the direction that acts to synchronize B's rotation with its orbital period, leading eventually to tidal locking.
Orbital changes
The angular momentum of the whole A–B system is conserved in this process, so that when B slows down and loses rotational angular momentum, its orbital angular momentum is boosted by a similar amount (there are also some smaller effects on A's rotation). This results in a raising of B's orbit about A in tandem with its rotational slowdown. For the other case where B starts off rotating too slowly, tidal locking both speeds up its rotation, and lowers its orbit.
Locking of the larger body
The tidal locking effect is also experienced by the larger body A, but at a slower rate because B's gravitational effect is weaker due to B's smaller mass. For example, Earth's rotation is gradually being slowed by the Moon, by an amount that becomes noticeable over geological time as revealed in the fossil record. Current estimations are that this (together with the tidal influence of the Sun) has helped lengthen the Earth day from about 6 hours to the current 24 hours (over about 4.5 billion years). Currently, atomic clocks show that Earth's day lengthens, on average, by about 2.3 milliseconds per century. Given enough time, this would create a mutual tidal locking between Earth and the Moon. The length of Earth's day would increase and the length of a lunar month would also increase. Earth's sidereal day would eventually have the same length as the Moon's orbital period, about 47 times the length of the Earth day at present. However, Earth is not expected to become tidally locked to the Moon before the Sun becomes a red giant and engulfs Earth and the Moon.
For bodies of similar size the effect may be of comparable size for both, and both may become tidally locked to each other on a much shorter timescale. An example is the dwarf planet Pluto and its satellite Charon. They have already reached a state where Charon is visible from only one hemisphere of Pluto and vice versa.
Eccentric orbits
For orbits that do not have an eccentricity close to zero, the rotation rate tends to become locked with the orbital speed when the body is at periapsis, which is the point of strongest tidal interaction between the two objects. If the orbiting object has a companion, this third body can cause the rotation rate of the parent object to vary in an oscillatory manner. This interaction can also drive an increase in orbital eccentricity of the orbiting object around the primary – an effect known as eccentricity pumping.
In some cases where the orbit is eccentric and the tidal effect is relatively weak, the smaller body may end up in a so-called spin–orbit resonance, rather than being tidally locked. Here, the ratio of the rotation period of a body to its own orbital period is some simple fraction different from 1:1. A well known case is the rotation of Mercury, which is locked to its own orbit around the Sun in a 3:2 resonance. This results in the rotation speed roughly matching the orbital speed around perihelion.
Many exoplanets (especially the close-in ones) are expected to be in spin–orbit resonances higher than 1:1. A Mercury-like terrestrial planet can, for example, become captured in a 3:2, 2:1, or 5:2 spin–orbit resonance, with the probability of each being dependent on the orbital eccentricity.
Occurrence
Moons
All twenty known moons in the Solar System that are large enough to be round are tidally locked with their primaries, because they orbit very closely and tidal force increases rapidly (as a cubic function) with decreasing distance. On the other hand, most of the irregular outer satellites of the giant planets (e.g. Phoebe), which orbit much farther away than the large well-known moons, are not tidally locked.
Pluto and Charon are an extreme example of a tidal lock. Charon is a relatively large moon in comparison to its primary and also has a very close orbit. This results in Pluto and Charon being mutually tidally locked. Pluto's other moons are not tidally locked; Styx, Nix, Kerberos, and Hydra all rotate chaotically due to the influence of Charon. Similarly, Eris and Dysnomia are mutually tidally locked. Orcus and Vanth might also be mutually tidally locked, but the data is not conclusive.
The tidal locking situation for asteroid moons is largely unknown, but closely orbiting binaries are expected to be tidally locked, as well as contact binaries.
Earth's Moon
Earth's Moon's rotation and orbital periods are tidally locked with each other, so no matter when the Moon is observed from Earth, the same hemisphere of the Moon is always seen. Most of the far side of the Moon was not seen until 1959, when photographs of most of the far side were transmitted from the Soviet spacecraft Luna 3.
When Earth is observed from the Moon, Earth does not appear to move across the sky. It remains in the same place while showing nearly all its surface as it rotates on its axis.
Despite the Moon's rotational and orbital periods being exactly locked, about 59 percent of the Moon's total surface may be seen with repeated observations from Earth, due to the phenomena of libration and parallax. Librations are primarily caused by the Moon's varying orbital speed due to the eccentricity of its orbit: this allows up to about 6° more along its perimeter to be seen from Earth. Parallax is a geometric effect: at the surface of Earth observers are offset from the line through the centers of Earth and Moon; this accounts for about a 1° difference in the Moon's surface which can be seen around the sides of the Moon when comparing observations made during moonrise and moonset.
Planets
It was thought for some time that Mercury was in synchronous rotation with the Sun. This was because whenever Mercury was best placed for observation, the same side faced inward. Radar observations in 1965 demonstrated instead that Mercury has a 3:2 spin–orbit resonance, rotating three times for every two revolutions around the Sun, which results in the same positioning at those observation points. Modeling has demonstrated that Mercury was captured into the 3:2 spin–orbit state very early in its history, probably within 10–20 million years after its formation.
The 583.92-day interval between successive close approaches of Venus to Earth is equal to 5.001444 Venusian solar days, making approximately the same face visible from Earth at each close approach. Whether this relationship arose by chance or is the result of some kind of tidal locking with Earth is unknown.
The exoplanet Proxima Centauri b discovered in 2016 which orbits around Proxima Centauri, is almost certainly tidally locked, expressing either synchronized rotation or a 3:2 spin–orbit resonance like that of Mercury.
One form of hypothetical tidally locked exoplanets are eyeball planets, which in turn are divided into "hot" and "cold" eyeball planets.
Stars
Close binary stars throughout the universe are expected to be tidally locked with each other, and extrasolar planets that have been found to orbit their primaries extremely closely are also thought to be tidally locked to them. An unusual example, confirmed by MOST, may be Tau Boötis, a star that is probably tidally locked by its planet Tau Boötis b. If so, the tidal locking is almost certainly mutual.
Timescale
An estimate of the time for a body to become tidally locked can be obtained using the following formula:
$t_{\text{lock}} \approx \frac{\omega a^6 I Q}{3 G m_p^2 k_2 R^5},$
where
is the initial spin rate expressed in radians per second,
is the semi-major axis of the motion of the satellite around the planet (given by the average of the periapsis and apoapsis distances),
is the moment of inertia of the satellite, $I \approx 0.4\, m_s R^2$, where $m_s$ is the mass of the satellite and $R$ is the mean radius of the satellite,
is the dissipation function of the satellite,
is the gravitational constant,
is the mass of the planet (i.e., the object being orbited), and
is the tidal Love number of the satellite.
$Q$ and $k_2$ are generally very poorly known except for the Moon, which has $k_2/Q = 0.0011$. For a really rough estimate it is common to take $Q \approx 100$ (perhaps conservatively, giving overestimated locking times), and
$$k_2 \approx \frac{1.5}{1 + \dfrac{19\mu}{2\rho g R}},$$
where
$\rho$ is the density of the satellite,
$g$ is the surface gravity of the satellite,
$\mu$ is the rigidity of the satellite. This can be roughly taken as $3 \times 10^{10}$ N/m² for rocky objects and $4 \times 10^9$ N/m² for icy ones.
Even knowing the size and density of the satellite leaves many parameters that must be estimated (especially ω, Q, and μ), so that any calculated locking times obtained are expected to be inaccurate, even to factors of ten. Further, during the tidal locking phase the semi-major axis may have been significantly different from that observed nowadays due to subsequent tidal acceleration, and the locking time is extremely sensitive to this value.
Because the uncertainty is so high, the above formulas can be simplified to give a somewhat less cumbersome one. By assuming that the satellite is spherical, $k_2 \ll 1$, $Q = 100$, and it is sensible to guess one revolution every 12 hours in the initial non-locked state (most asteroids have rotational periods between about 2 hours and about 2 days),
$$t_{\text{lock}} \approx 6\,\frac{a^6 R \mu}{m_s m_p^2} \times 10^{10}\ \text{years},$$
with masses in kilograms, distances in meters, and $\mu$ in newtons per meter squared; $\mu$ can be roughly taken as $3 \times 10^{10}$ N/m² for rocky objects and $4 \times 10^9$ N/m² for icy ones.
There is an extremely strong dependence on semi-major axis $a$.
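As an illustration, a short script applies the simplified estimate to the Earth–Moon system. The physical constants are standard values; the formula itself is only an order-of-magnitude estimate, so the output should be read accordingly:

```python
# Sketch of the simplified locking-time estimate above:
#   t_lock ~ 6 * a^6 * R * mu / (m_s * m_p^2) * 1e10 years
# with masses in kg, distances in m, rigidity mu in N/m^2.

def lock_time_years(a, R, m_s, m_p, mu=3e10):
    """Order-of-magnitude tidal locking time, in years."""
    return 6.0 * a**6 * R * mu / (m_s * m_p**2) * 1e10

# Standard values for the Earth-Moon system
a   = 3.844e8   # semi-major axis, m
R   = 1.737e6   # mean radius of the Moon, m
m_s = 7.35e22   # mass of the Moon, kg
m_p = 5.97e24   # mass of the Earth, kg

print(f"{lock_time_years(a, R, m_s, m_p):.1e} years")  # ~4e6 years

# The a**6 factor dominates: halving the orbital distance shortens
# the locking time by a factor of 2**6 = 64.
```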
For the locking of a primary body to its satellite as in the case of Pluto, the satellite and primary body parameters can be swapped.
One conclusion is that, other things being equal (such as $Q$ and $\mu$), a large moon will lock faster than a smaller moon at the same orbital distance from the planet because $m_s$ grows as the cube of the satellite radius $R$. A possible example of this is in the Saturn system, where Hyperion is not tidally locked, whereas the larger Iapetus, which orbits at a greater distance, is. However, this is not clear cut because Hyperion also experiences strong driving from the nearby Titan, which forces its rotation to be chaotic.
The above formulae for the timescale of locking may be off by orders of magnitude, because they ignore the frequency dependence of $k_2/Q$. More importantly, they may be inapplicable to viscous binaries (double stars, or double asteroids that are rubble), because the spin–orbit dynamics of such bodies is defined mainly by their viscosity, not rigidity.
List of known tidally locked bodies
Solar System
All the bodies below are tidally locked, and all but Mercury are moreover in synchronous rotation; Mercury is tidally locked but, owing to its 3:2 spin–orbit resonance, not in synchronous rotation.
Extra-solar
The most successful detection methods of exoplanets (transits and radial velocities) suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone, which makes it difficult to estimate the true incidence of this phenomenon. Tau Boötis is known to be locked to the close-orbiting giant planet Tau Boötis b.
Bodies likely to be locked
Solar System
Based on comparison between the likely time needed to lock a body to its primary, and the time it has been in its present orbit (comparable with the age of the Solar System for most planetary moons), a number of moons are thought to be locked. However, their rotations are not known, or not known well enough. These are:
Probably locked to Saturn
Daphnis
Aegaeon
Methone
Anthe
Pallene
Helene
Polydeuces
Probably locked to Uranus
Cordelia
Ophelia
Bianca
Cressida
Desdemona
Juliet
Portia
Rosalind
Cupid
Belinda
Perdita
Puck
Mab
Probably locked to Neptune
Naiad
Thalassa
Despina
Galatea
Larissa
Probably mutually tidally locked
Orcus and Vanth
Extrasolar
Gliese 581c, Gliese 581g, Gliese 581b, and Gliese 581e may be tidally locked to their parent star Gliese 581. Gliese 581d is almost certainly captured either into the 2:1 or the 3:2 spin–orbit resonance with the same star.
All planets in the TRAPPIST-1 system are likely to be tidally locked.
See also
Pseudo-synchronous rotation – a near synchronization of revolution and rotation at periastron
References
Celestial mechanics
Orbits
Locking | Tidal locking | [
"Physics"
] | 3,641 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics"
] |
14,719,430 | https://en.wikipedia.org/wiki/Iobenguane | Iobenguane, or MIBG, is an aralkylguanidine analog of the adrenergic neurotransmitter norepinephrine (noradrenaline), typically used as a radiopharmaceutical. It acts as a blocking agent for adrenergic neurons. When radiolabeled, it can be used in nuclear medicinal diagnostic and therapy techniques as well as in neuroendocrine chemotherapy treatments.
It localizes to adrenergic tissue and thus can be used to identify the location of tumors such as pheochromocytomas and neuroblastomas. With iodine-131 it can also be used to treat tumor cells that take up and metabolize norepinephrine.
Usage and mechanism
MIBG is absorbed by and accumulated in granules of adrenal medullary chromaffin cells, as well as in pre-synaptic adrenergic neuron granules. The process in which this occurs is closely related to the mechanism employed by norepinephrine and its transporter in vivo. The norepinephrine transporter (NET) functions to provide norepinephrine uptake at the synaptic terminals and adrenal chromaffin cells. MIBG binds to NET, and this binding is the basis of its roles in imaging and therapy.
Metabolites and excretion
Less than 10% of the administered MIBG gets metabolized into m-iodohippuric acid (MIHA), and the mechanism for how this metabolite is produced is unknown.
Diagnostic imaging
MIBG concentrates in endocrine tumors, most commonly neuroblastoma, paraganglioma, and pheochromocytoma. It also accumulates in norepinephrine transporters in adrenergic nerves in the heart, lungs, adrenal medulla, salivary glands, liver, and spleen, as well as in tumors that originate in the neural crest. When labelled with iodine-123 it serves as a whole-body, non-invasive scintigraphic screening for germ-line, somatic, benign, and malignant neoplasms originating in the adrenal glands. It can detect both intra- and extra-adrenal disease. The imaging is highly sensitive and specific.
Iobenguane concentrates in presynaptic terminals of the heart and other autonomically innervated organs. This enables the possible non-invasive use as an in vivo probe to study these systems.
Alternatives to imaging with 123I-MIBG, for certain indications and under clinical and research use, include the positron-emitting isotope iodine-124, and other radiopharmaceuticals such as 68Ga-DOTA and 18F-FDOPA for positron emission tomography (PET). 123I-MIBG imaging on a gamma camera can offer significantly higher cost-effectiveness and availability compared to PET imaging, and is particularly effective where 131I-MIBG therapy is subsequently planned, due to their directly comparable uptake.
Side effects
Side effects post imaging are rare but can include tachycardia, pallor, vomiting, and abdominal pain.
Radionuclide therapy
MIBG can be radiolabelled with the beta emitting radionuclide 131I for the treatment of certain pheochromocytomas, paragangliomas, carcinoid tumors, neuroblastomas, and medullary thyroid cancer.
Thyroid precautions
Thyroid blockade with (nonradioactive) potassium iodide is indicated for nuclear medicine scintigraphy with iobenguane/mIBG. This competitively inhibits radioiodine uptake, preventing excessive radioiodine levels in the thyroid and minimizing risk of thyroid ablation (in treatment with 131I). The minimal risk of thyroid cancer is also reduced as a result.
The dosing regime for the FDA-approved commercial 123I-MIBG product Adreview is potassium iodide or Lugol's solution containing 100 mg iodide, weight adjusted for children and given an hour before injection. EANM guidelines, endorsed by the SNMMI, suggest a variety of regimes in clinical use, for both children and adults.
Product labeling for diagnostic 131I iobenguane recommends giving potassium iodide one day before injection and continuing 5 to 7 days following. 131I iobenguane used for therapeutic purposes requires a different pre-medication duration, beginning 24–48 hours before iobenguane injection and continuing 10–15 days after injection.
Clinical trials
Iobenguane I 131 for cancers
Iobenguane I 131, marketed under the trade name Azedra, has had a clinical trial as a treatment for malignant, recurrent or unresectable pheochromocytoma and paraganglioma, and the FDA approved it on July 30, 2018. The drug is developed by Progenics Pharmaceuticals.
References
External links
Adrenergic receptor antagonists
Diagnostic endocrinology
Guanidines
3-Iodophenyl compounds
Radiopharmaceuticals | Iobenguane | [
"Chemistry"
] | 1,079 | [
"Medicinal radiochemistry",
"Guanidines",
"Functional groups",
"Radiopharmaceuticals",
"Chemicals in medicine"
] |
3,106,440 | https://en.wikipedia.org/wiki/Rankine%20vortex | The Rankine vortex is a simple mathematical model of a vortex in a viscous fluid. It is named after its discoverer, William John Macquorn Rankine.
The vortices observed in nature are usually modelled with an irrotational (potential or free) vortex. However, in a potential vortex, the velocity becomes infinite at the vortex center. In reality, very close to the origin, the motion resembles a solid body rotation. The Rankine vortex model assumes a solid-body rotation inside a cylinder of radius $a$ and a potential vortex outside the cylinder. The radius $a$ is referred to as the vortex-core radius. The velocity components of the Rankine vortex, expressed in terms of the cylindrical-coordinate system $(r, \theta, z)$, are given by
$$u_r = 0, \qquad u_z = 0, \qquad u_\theta(r) = \begin{cases} \dfrac{\Gamma r}{2\pi a^2}, & r \le a, \\[6pt] \dfrac{\Gamma}{2\pi r}, & r > a, \end{cases}$$
where $\Gamma$ is the circulation strength of the Rankine vortex. Since solid-body rotation is characterized by an azimuthal velocity $u_\theta = \Omega r$, where $\Omega$ is the constant angular velocity, one can also use the parameter $\Omega = \Gamma/(2\pi a^2)$ to characterize the vortex.
The vorticity field associated with the Rankine vortex is
$$\omega_z(r) = \begin{cases} 2\Omega = \dfrac{\Gamma}{\pi a^2}, & r \le a, \\[4pt] 0, & r > a. \end{cases}$$
At all points inside the core of the Rankine vortex, the vorticity is uniform at twice the angular velocity of the core; whereas vorticity is zero at all points outside the core because the flow there is irrotational.
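A minimal numeric sketch of the piecewise velocity profile (the parameter values here are arbitrary illustration choices, not fitted to any physical vortex):

```python
import numpy as np

def rankine_velocity(r, gamma=1.0, a=1.0):
    """Azimuthal velocity u_theta of a Rankine vortex.

    Solid-body rotation (u ~ r) inside the core radius a,
    potential-vortex decay (u ~ 1/r) outside.
    """
    r = np.asarray(r, dtype=float)
    inside = gamma * r / (2.0 * np.pi * a**2)
    # Guard against division by zero at r = 0 (that branch is unused there)
    outside = gamma / (2.0 * np.pi * np.where(r > 0, r, np.inf))
    return np.where(r <= a, inside, outside)

r = np.linspace(0.0, 5.0, 11)
print(rankine_velocity(r))  # peaks at r = a, then falls off as 1/r
```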
In reality, vortex cores are not always circular; and vorticity is not exactly uniform throughout the vortex core.
See also
Kaufmann (Scully) vortex – an alternative mathematical simplification for a vortex, with a smoother transition.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity.
Burgers vortex
References
External links
Streamlines vs. Trajectories in a Translating Rankine Vortex: an example of a Rankine vortex imposed on a constant velocity field, with animation.
Equations of fluid dynamics
Vortices | Rankine vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 369 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Fluid dynamics",
"Dynamical systems"
] |
3,106,825 | https://en.wikipedia.org/wiki/Three-center%20four-electron%20bond | The 3-center 4-electron (3c–4e) bond is a model used to explain bonding in certain hypervalent molecules such as tetratomic and hexatomic interhalogen compounds, sulfur tetrafluoride, the xenon fluorides, and the bifluoride ion. It is also known as the Pimentel–Rundle three-center model after the work published by George C. Pimentel in 1951, which built on concepts developed earlier by Robert E. Rundle for electron-deficient bonding. An extended version of this model is used to describe the whole class of hypervalent molecules such as phosphorus pentafluoride and sulfur hexafluoride as well as multi-center π-bonding such as ozone and sulfur trioxide.
There are also molecules such as diborane (B2H6) and dialane (Al2H6) which have three-center two-electron (3c–2e) bonds.
History
While the term "hypervalent" was not introduced in the chemical literature until 1969, Irving Langmuir and G. N. Lewis debated the nature of bonding in hypervalent molecules as early as 1921. While Lewis supported the viewpoint of expanded octet, invoking s-p-d hybridized orbitals and maintaining 2c–2e bonds between neighboring atoms, Langmuir instead opted for maintaining the octet rule, invoking an ionic basis for bonding in hypervalent compounds (see Hypervalent molecule, valence bond theory diagrams for PF5 and SF6).
In a 1951 seminal paper, Pimentel rationalized the bonding in hypervalent trihalide ions ($\mathrm{X_3^-}$, X = F, Br, Cl, I) via a molecular orbital (MO) description, building on the concept of the "half-bond" introduced by Rundle in 1947. In this model, two of the four electrons occupy an all in-phase bonding MO, while the other two occupy a non-bonding MO, leading to an overall bond order of 0.5 between adjacent atoms (see Molecular orbital description).
More recent theoretical studies on hypervalent molecules support the Langmuir view, confirming that the octet rule serves as a good first approximation to describing bonding in the s- and p-block elements.
Examples of molecules exhibiting three-center four-electron bonding
σ 3c–4e
Triiodide
Xenon difluoride
Krypton difluoride
Radon difluoride
Argon fluorohydride
Bifluoride
SN2 reaction transition state and activated complex
Symmetric hydrogen bond
π 3c–4e
Carboxylates
Amides
Ozone
Azide
Allyl anion
Structure and bonding
Molecular orbital description
The σ molecular orbitals (MOs) of triiodide can be constructed by considering the in-phase and out-of-phase combinations of the central atom's p orbital (collinear with the bond axis) with the p orbitals of the peripheral atoms. This exercise generates the diagram at right (Figure 1). Three molecular orbitals result from the combination of the three relevant atomic orbitals, with the four electrons occupying the two MOs lowest in energy – a bonding MO delocalized across all three centers, and a non-bonding MO localized on the peripheral centers. Using this model, one sidesteps the need to invoke hypervalent bonding considerations at the central atom, since the bonding orbital effectively consists of two 2-center-1-electron bonds (which together do not violate the octet rule), and the other two electrons occupy the non-bonding orbital.
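The three-orbital picture can be reproduced with a toy Hückel-type calculation. The sketch below diagonalizes the 3×3 interaction matrix for a linear chain of three p orbitals; the parameter values α and β are arbitrary placeholders, not fitted to iodine:

```python
import numpy as np

# Toy Hueckel matrix for three collinear p orbitals:
# alpha on the diagonal (orbital energy), beta for
# nearest-neighbor coupling. Values are illustrative only.
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta,  0.0],
              [beta,  alpha, beta],
              [0.0,   beta,  alpha]])

energies, orbitals = np.linalg.eigh(H)
print(energies)  # alpha + sqrt(2)*beta, alpha, alpha - sqrt(2)*beta

# The middle (non-bonding) MO has zero amplitude on the central
# atom -- its two electrons sit on the peripheral atoms, as in
# the Pimentel-Rundle model.
print(orbitals[:, 1])
```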
Valence bond (natural bond orbital) description
In the natural bond orbital viewpoint of 3c–4e bonding, the triiodide anion is constructed from the combination of the diiodine (I2) σ molecular orbitals and an iodide (I−) lone pair. The I− lone pair acts as a 2-electron donor, while the I2 σ* antibonding orbital acts as a 2-electron acceptor. Combining the donor and acceptor in in-phase and out-of-phase combinations results in the diagram depicted at right (Figure 2). Combining the donor lone pair with the acceptor σ* antibonding orbital results in an overall lowering in energy of the highest-occupied orbital (ψ2). While the diagram depicted in Figure 2 shows the right-hand atom as the donor, an equivalent diagram can be constructed using the left-hand atom as the donor. This bonding scheme is succinctly summarized by the following two resonance structures: I—I···I− ↔ I−···I—I (where "—" represents a single bond and "···" represents a "dummy bond" with formal bond order 0 whose purpose is only to indicate connectivity), which when averaged reproduces the I—I bond order of 0.5 obtained both from natural bond orbital analysis and from molecular orbital theory.
More recent theoretical investigations suggest the existence of a novel type of donor-acceptor interaction that may dominate in triatomic species with so-called "inverted electronegativity"; that is, a situation in which the central atom is more electronegative than the peripheral atoms. Molecules of theoretical curiosity such as neon difluoride (NeF2) and beryllium dilithide (BeLi2) represent examples of inverted electronegativity. As a result of unusual bonding situation, the donor lone pair ends up with significant electron density on the central atom, while the acceptor is the "out-of-phase" combination of the p orbitals on the peripheral atoms. This bonding scheme is depicted in Figure 3 for the theoretical noble gas dihalide NeF2.
SN2 transition state modeling
The valence bond description and accompanying resonance structures A—B···C− ↔ A−···B—C suggest that molecules exhibiting 3c–4e bonding can serve as models for studying the transition states of bimolecular nucleophilic substitution reactions.
See also
Hypervalent molecule
Three-center two-electron bond
References
Chemical bonding | Three-center four-electron bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,269 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
3,107,902 | https://en.wikipedia.org/wiki/Author%20citation%20%28zoology%29 | In zoological nomenclature, author citation is the process in which a person is credited with the creation of the scientific name of a previously unnamed taxon. When citing the author of the scientific name, one must fulfill the formal requirements listed under the International Code of Zoological Nomenclature ("the Code"). According to Article 51.1 of the Code, "The name of the author does not form part of the name of a taxon and its citation is optional, although customary and often advisable." However, recommendation 51A suggests, "The original author and date of a name should be cited at least once in each work dealing with the taxon denoted by that name. This is especially important and has a unique character between homonyms and in identifying species-group names which are not in their native combinations." For the sake of information retrieval, the author citation and year appended to the scientific name, e.g. genus-species-author-year, genus-author-year, family-author-year, etc., is often considered a "de-facto" unique identifier, although this usage may often be imperfect.
Rank
The Code recognizes three groups of names, according to rank:
family-group names at the ranks of superfamily, family, subfamily, tribe, subtribe (any rank below superfamily and above genus).
genus-group names at the ranks of genus and subgenus.
species-group names at the ranks of species and subspecies.
Within each group, the same authorship applies regardless of the taxon level to which the name (with, in the case of a family-group name, the appropriate ending) is applied. For example, the taxa that the red admiral butterfly can be assigned to are as follows:
Family: Nymphalidae Swainson, 1827
Subfamily: Nymphalinae Swainson, 1827
Tribe: Nymphalini Swainson, 1827
Genus: Vanessa Fabricius, 1807
Subgenus: Vanessa (Vanessa) Fabricius, 1807
Species: Vanessa atalanta (Linnaeus, 1758)
Subspecies: Vanessa atalanta atalanta (Linnaeus, 1758)
The parentheses around the author citation indicate that this was not the original taxonomic placement. In this case, Linnaeus published the name as Papilio atalanta Linnaeus, 1758.
Identity of the author(s)
In the first attempt to provide international rules for zoological nomenclature in 1895, the author was defined as the author of the scientific description, and not as the person who provided the name (published or unpublished), which had previously been the usual practice in the nomenclature of various animal groups. As a result, some disciplines such as malacology required a change in authorship regarding their taxonomic names, as they had been attributed to persons who had never published a scientific work.
This new rule was not sufficiently precise, so in the following decades, taxonomic practice continued to diverge among disciplines and authors. The ambiguity led a member of the ICZN Commission in 1974 to provide a clearer interpretation in the second edition of the Code (effective since 1961). Here, a suggestion was made that the author is defined as the individual who "publishes the name and the qualifying conditions...or equally clearcut attribution of name and description".
The current view among some taxonomists restricts authorship for a taxonomic name to the person who wrote the textual scientific content of the original description. The author of an image is not recognized as a co-author of a name, even if the image was the only basis provided for making the name available.
If a true author of a written text is not directly recognizable in the original publication, they are not the author of a name (but the author of the work is). The text could actually be written by a different person. Some authors have copied text passages from unpublished sources without acknowledging them. In Art. 50.1.1 all these persons are excluded from the authorship of a name if they were not explicitly mentioned in the work itself as being the responsible persons for making a name available.
Most taxonomists also accept Art. 50.1.1 that the author of a cited previously published source, from which text passages were copied, is not acknowledged as the author of a name.
In some cases, the author of the description can differ from the author of the work. This must be explicitly indicated in the original publication, either by a general statement ("all zoological descriptions in this work were written by Smith") or by an individual statement ("the following three descriptions were provided by Jiménez," "this name shall be attributed to me and Wang because she contributed to the description").
In the 1800s it was the usual style to eventually set an abbreviation of another author immediately below the text of the description to indicate authorship. This is commonly accepted today; if the description is attributed to a different person, then that person is the author.
When the name of a different author was only set behind the new name in the headline (and not repeated below the description to indicate that description had been written by that person), this was a convention to indicate authorship only for the new name and not for the description. These authorships for names are not covered by Art. 50.1 and are not accepted. Only authorship for the description is accepted.
Prior to 1900–1920, there were several different conventions concerning authorship, which is why early zoological literature frequently gives different authors for zoological names than are accepted today. Art. 50.1 has been an accepted model since the mid-1900s. It eliminated the need to research who the true author was, and all readers could verify and determine the name of the author in the original work itself.
Examples to illustrate practical use
In citing the name of an author, the surname is given in full, not abbreviated. The date (true year) of publication in which the name was established is added. If desired, a comma is placed between the author and date (the comma is not prescribed under the Code and contains no additional information; however, it is included in examples therein and also in the ICZN Official Lists and Indexes).
Balaena mysticetus Linnaeus, 1758
the bowhead whale was described and named by Linnaeus in his Systema Naturae of 1758
Anser albifrons (Scopoli, 1769)
the white-fronted goose was first described (by Giovanni Antonio Scopoli), as Branta albifrons Scopoli, 1769. It is currently placed in the genus Anser, so author and year are set in parentheses. The taxonomist who first placed the species in Anser is not recorded (and much less cited), the two different genus-species combinations are not regarded as synonyms.
An author can have established a name dedicated to oneself. This is rare and against unwritten conventions, but is not restricted under the Code.
Xeropicta krynickii (Krynicki, 1833)
a terrestrial gastropod from Ukraine was first described as Helix krynickii Krynicki, 1833, who originally attributed the name to another person, Andrzejowski. But the description was written by Krynicki, and Andrzejowski had not published this name before.
Spelling of the name of the author
In a strict application of the Code, the taxon name author string components "genus," "species," and "year" can only have one combination of characters. The major problem in zoology for consistent spellings of names is the author. The Code gives neither a guide nor a detailed recommendation.
Unlike in botany, it is not recommended to abbreviate the name of the author in zoology. If a name was established by more than three authors, it is allowed to give only the first author, followed by the term "et al." (and others).
There are no approved standards for the spellings of authors in zoology, and unlike in botany, no one has ever proposed such standards for zoological authors.
It is generally accepted that the name of the author shall be given in the nominative singular case if originally given in a different case and that the name of the author should be spelled in Latin script. There are no commonly accepted conventions on how to transcribe the names of authors if given in non-Latin script.
It is also widely accepted that the names of authors must be spelled with diacritic marks, ligatures, spaces, and punctuation marks. The first letter is normally spelled in upper-case, however, initial capitalization and usage of accessory terms can be inconsistent (e.g. de Wilde/De Wilde, d'Orbigny/D'Orbigny, Saedeleer/De Saedeleer, etc.). Co-authors are separated by commas; the last co-author should be separated by "&". In Chinese and Korean names only the surname is generally cited.
Examples:
Pipadentalium Yoo, 1988 (Scaphopoda)
Sinentomon Yin, 1965 (Protura)
Belbolla huanghaiensis Huang & Zhang, 2005 (Nematoda)
Apart from these, there are no commonly accepted conventions. The author can either be spelled following a self-made standard (Linnaeus 1758, Linnaeus 1766), or as given in the original source which implies that names of persons are not always spelled consistently (Linnæus 1758, Linné 1766), or we are dealing with composed data sets without any consistent standard.
Inferred and anonymous authorships
In some publications, the author responsible for new names and nomenclatural acts is not stated directly in the original source, but can sometimes be inferred from reliable external evidence. Recommendation 51D of the Code states: "...if the authorship is known or inferred from external evidence, the name of the author, if cited, should be enclosed in square brackets to show the original anonymity".
Initials
If the same surname is common to more than one author, initials are sometimes given (for example "A. Agassiz" vs. "L. Agassiz", etc.), but there are no standards concerning this procedure, and not all animal groups/databases use this convention. Although initials are often regarded as useful to disambiguate different persons with the same surname, this does not work in all situations (for example "W. Smith", "C. Pfeiffer", "G. B. Sowerby" and other names occur more than once), and in the examples given in the Code and also the ICZN Official Lists and Indexes, initials are not used.
Implications for information retrieval
For a computer, O. F. Müller, O. Müller, and Müller are different strings; even the differences between O. F. Müller, O.F. Müller, and OF Müller can be problematic. Fauna Europaea is a typical example of a database where combined initials O.F. and O. F. are read as entirely different strings, so those who try to search for all taxonomic names described by Otto Friedrich Müller have to know (1) that the submitted data by the various data providers contained several versions (O. F. Müller, O.F. Müller, Müller, and O. Müller), and (2) that in many databases, the search function will not find O.F. Müller if you search for O. F. Müller or Müller, not to mention alternative orthographies of this name such as Mueller or Muller.
Thus, the usage of (e.g.) genus-species-author-year, genus-author-year, family-author-year, etc. as "de facto" unique identifiers for biodiversity informatics purposes can present problems on account of variation in cited author surnames, presence/absence/variations in cited initials, and minor variants in the style of presentation, as well as variant cited authors (responsible person/s) and sometimes, cited dates for what may be in fact the same nomenclatural act in the same work. In addition, in a small number of cases, the same author may have created the same name more than once in the same year for different taxa, which can then only be distinguished by reference to the title, page, and sometimes line of the work in which each name appears.
In Australia, a program was created (TAXAMATCH) that provides a tool to indicate in a preliminary manner whether two variants of a taxon name should be accepted as identical or not according to the similarity of the cited author strings. The authority matching function of TAXAMATCH assigns a moderate-to-high similarity to author strings with minor orthographic and/or date differences, such as "Medvedev & Chernov, 1969" vs. "Medvedev & Cernov, 1969", or "Schaufuss, 1877" vs. "L. W. Schaufuss, 1877", or even "Oshmarin, 1952" vs. "Oschmarin in Skrjabin & Evranova, 1952", and a low similarity to author citations which are very different (for example "Hyalesthes Amyot, 1847" vs. "Hyalesthes Signoret, 1865") and are more likely to represent different publication instances, and therefore possibly also different taxa. The program also understands standardized abbreviations as used in botany and sometimes in zoology as well; for example, "Rchb." for Reichenbach, but may still fail for non-standard abbreviations (such as "H. & A. Ad." for H. & A. Adams, where the normal citation would in fact be "Adams & Adams"). Non-standard abbreviations must then be picked up by subsequent manual inspection after the use of an algorithmic approach to pre-sort the names to be matched into groups of either more or less similar names and cited authorities. However, author names that are spelled very similarly but in fact represent different persons, and who independently authored identical taxon names, will not be adequately separated by this program; examples include "O. F. Müller 1776" vs. "P. L. S. Müller 1776", "G. B. Sowerby I 1850" vs. "G. B. Sowerby III 1875" and "L. Pfeiffer 1856" vs. "K. L. Pfeiffer 1956", so additional manual inspection is also required, especially for known problem cases such as those given above.
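A minimal sketch of the kind of normalization-plus-similarity matching such tools perform follows. This is an illustration only, not the actual TAXAMATCH algorithm; the similarity threshold is an arbitrary placeholder:

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(author: str) -> str:
    """Strip diacritics, single-letter initials, and case."""
    # Decompose accented characters and drop the combining marks
    text = unicodedata.normalize("NFKD", author)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Drop single-letter initials like "O." or "F."
    words = [w for w in text.replace(".", ". ").split()
             if len(w.rstrip(".")) > 1]
    return " ".join(words).lower()

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Crude author-string match on the normalized forms."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(similar("O. F. Müller", "Müller"))                               # True
print(similar("Medvedev & Chernov, 1969", "Medvedev & Cernov, 1969"))  # True
print(similar("Amyot, 1847", "Signoret, 1865"))                        # False
```

As the text notes, such string similarity cannot distinguish different persons with near-identical citations (e.g. "O. F. Müller 1776" vs. "P. L. S. Müller 1776"), so manual inspection remains necessary.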
A further cause of errors that would not be detected by such a program include authors with multi-part surnames which are sometimes inconsistently applied in the literature, and works where the accepted attribution has changed over time. For example, genera published in the anonymously authored work "Museum Boltenianum sive catalogus cimeliorum..." published in 1798 were for a long time ascribed to Bolten, but are now considered to have been authored by Röding according to a ruling by the ICZN in 1956. Analogous problems are encountered attempting to cross-link medical records by patient name; for relevant discussion see record linkage.
Author of a nomen nudum
A new name mentioned without description or indication or figure is a nomen nudum. A nomen nudum has no authorship nor date and is not an available name. If it is desired or necessary to cite the author of such an unavailable name, the nomenclatural status of the name should be made evident.
Sensu names
A "sensu" name (sensu = "in the sense of", should not be written in italics) is a previously established name that was used by an author in an incorrect sense (for example for a species that was misidentified). Technically this is only a subsequent use of a name, not a new name, and it has no own authorship. Taxonomists often created unwritten rules for authorships of sensu names to record the first and original source for a misidentification of an animal, but this is not in accordance with the Code.
Example:
For a West Alpine snail Pupa ferrari Porro, 1838, Hartmann (1841) used the genus Sphyradium Charpentier, 1837, which Charpentier had established for some similar species. Westerlund argued in 1887 that this species should be placed in another genus, and proposed the name Coryna for Pupa ferrari and some other species. Pilsbry argued in 1922, Westerlund had established Coryna as a new replacement name for Sphyradium, sensu Hartmann, 1841 (therefore "sensu" should not be written in italics, the term Sphyradium sensu Hartmann, 1841 would be misunderstood as a species name). But since a sensu name is not an available name with its own author and year, Pilsbry's argument is not consistent with the ICZN Code's rules.
See also
Author citation (botany)
Glossary of scientific naming
List of authors of names published under the ICZN
Wikispecies: Taxon authorities
References
External links
Zoological nomenclature | Author citation (zoology) | [
"Biology"
] | 3,508 | [
"Zoological nomenclature",
"Biological nomenclature"
] |
3,108,062 | https://en.wikipedia.org/wiki/Bioelectromagnetics | Bioelectromagnetics, also known as bioelectromagnetism, is the study of the interaction between electromagnetic fields and biological entities. Areas of study include electromagnetic fields produced by living cells, tissues or organisms, the effects of man-made sources of electromagnetic fields like mobile phones, and the application of electromagnetic radiation toward therapies for the treatment of various conditions.
Biological phenomena
Bioelectromagnetism is studied primarily through the techniques of electrophysiology. In the late eighteenth century, the Italian physician and physicist Luigi Galvani first recorded the phenomenon while dissecting a frog at a table where he had been conducting experiments with static electricity. Galvani coined the term animal electricity to describe the phenomenon, while contemporaries labeled it galvanism. Galvani and contemporaries regarded muscle activation as resulting from an electrical fluid or substance in the nerves. Short-lived electrical events called action potentials occur in several types of animal cells, which are called excitable cells (a category that includes neurons, muscle cells, and endocrine cells), as well as in some plant cells. These action potentials are used to facilitate inter-cellular communication and activate intracellular processes. The physiological phenomena of action potentials are possible because voltage-gated ion channels allow the resting potential caused by the electrochemical gradient on either side of a cell membrane to resolve.
Several animals are suspected to have the ability to sense electromagnetic fields; for example, several aquatic animals have structures potentially capable of sensing changes in voltage caused by a changing magnetic field, while migratory birds are thought to use magnetoreception in navigation.
Bioeffects of electromagnetic radiation
Most of the molecules in the human body interact weakly with electromagnetic fields in the radio frequency or extremely low frequency bands. One such interaction is absorption of energy from the fields, which can cause tissue to heat up; more intense fields will produce greater heating. This can lead to biological effects ranging from muscle relaxation (as produced by a diathermy device) to burns. Many nations and regulatory bodies like the International Commission on Non-Ionizing Radiation Protection have established safety guidelines to limit EMF exposure to a non-thermal level. This can be defined as either heating only to the point where the excess heat can be dissipated, or as a fixed increase in temperature not detectable with current instruments, such as 0.1 °C. However, biological effects have been shown to be present for these non-thermal exposures. Various mechanisms have been proposed to explain these, and there may be several mechanisms underlying the differing phenomena observed.
Many behavioral effects at different intensities have been reported from exposure to magnetic fields, particularly with pulsed magnetic fields. The specific pulseform used appears to be an important factor for the behavioural effect seen; for example, a pulsed magnetic field originally designed for spectroscopic MRI, referred to as Low Field Magnetic Stimulation, was found to temporarily improve patient-reported mood in bipolar patients, while another MRI pulse had no effect. A whole-body exposure to a pulsed magnetic field was found to alter standing balance and pain perception in other studies.
A strong changing magnetic field can induce electrical currents in conductive tissue such as the brain. Since the magnetic field penetrates tissue, it can be generated outside of the head to induce currents within, causing transcranial magnetic stimulation (TMS). These currents depolarize neurons in a selected part of the brain, leading to changes in the patterns of neural activity. In repeated pulse TMS therapy or rTMS, the presence of incompatible EEG electrodes can result in electrode heating and, in severe cases, skin burns. A number of scientists and clinicians are attempting to use TMS to replace electroconvulsive therapy (ECT) to treat disorders such as severe depression and hallucinations. Instead of one strong electric shock through the head as in ECT, a large number of relatively weak pulses are delivered in TMS therapy, typically at the rate of about 10 pulses per second. If very strong pulses at a rapid rate are delivered to the brain, the induced currents can cause convulsions much like in the original electroconvulsive therapy. Sometimes, this is done deliberately in order to treat depression, such as in ECT.
Effects of electromagnetic radiation on human health
While health effects from extremely low frequency (ELF) electric and magnetic fields (0 to 300 Hz) generated by power lines, and radio/microwave frequencies (RF) (10 MHz - 300 GHz) emitted by radio antennas and wireless networks have been well studied, the intermediate range (300 Hz to 10 MHz) has been studied far less. Direct effects of low power radiofrequency electromagnetism on human health have been difficult to prove, and documented life-threatening effects from radiofrequency electromagnetic fields are limited to high power sources capable of causing significant thermal effects and medical devices such as pacemakers and other electronic implants. However, many studies have been conducted with electromagnetic fields to investigate their effects on cell metabolism, apoptosis, and tumor growth.
Electromagnetic radiation in the intermediate frequency range has found a place in modern medical practice for the treatment of bone healing and for nerve stimulation and regeneration. It is also approved as cancer therapy in the form of tumor treating fields, using alternating electric fields in the frequency range of 100–300 kHz. However, the efficacy of this method remains contentious among medical experts. Since some of these methods involve magnetic fields that induce electric currents in biological tissues and others only involve electric fields, they are strictly speaking electrotherapies, although their modes of application with modern electronic equipment have placed them in the category of bioelectromagnetic interactions.
See also
Bioelectrogenesis
Biomagnetism
Bioelectricity
Bioelectrochemistry
Bioelectrodynamics
Biophotonics
Biophysics
Electric fish
Electrical brain stimulation
Electroencephalography
Electromagnetic radiation and health
Electromyography
Electrotaxis
Kirlian photography
Magnetobiology
Magnetoception
Magnetoelectrochemistry
Mobile phone radiation and health
Radiobiology
Specific absorption rate
Transcutaneous electrical nerve stimulation
Notes
References
Organizations
The Bioelectromagnetics Society (BEMS)
European BioElectromagnetics Association (EBEA)
Society for Physical Regulation in Biology and Medicine (SPRBM) (formerly the Bioelectrical Repair and Growth Society, BRAGS)
International Society for Bioelectromagnetism (ISBEM)
The Bioelectromagnetics Lab at University College Cork, Ireland
Institute of Bioelectromagnetism
Vanderbilt University, Living State Physics Group, archived page
Ragnar Granit Institute.
Institute of Photonics and Electronics AS CR, Department of Bioelectrodynamics.
Books
Becker, Robert O.; Andrew A. Marino, Electromagnetism and Life, State University of New York Press, Albany, 1982.
Becker, Robert O.; The Body Electric: Electromagnetism and the Foundation of Life, William Morrow & Co, 1985.
Becker, Robert O.; Cross Currents: The Promise of Electromedicine, the Perils of Electropollution, Tarcher, 1989.
Binhi, V.N., Magnetobiology: Underlying Physical Problems. San Diego: Academic Press, 2002.
Brodeur, Paul; Currents of Death, Simon & Schuster, 2000.
Carpenter, David O.; Sinerik Ayrapetyan, Biological Effects of Electric and Magnetic Fields, Volume 1: Sources and Mechanisms, Academic Press, 1994.
Carpenter, David O.; Sinerik Ayrapetyan, Biological Effects of Electric and Magnetic Fields: Beneficial and Harmful Effects (Vol 2), Academic Press, 1994.
Chiabrera, A. (Editor), Interactions Between Electromagnetic Fields and Cells, Springer, 1985.
Habash, Riadh W. Y.; Electromagnetic Fields and Radiation: Human Bioeffects and Safety, Marcel Dekker, 2001.
Horton, William F.; Saul Goldberg, Power Frequency Magnetic Fields and Public Health, CRC Press, 1995.
Mae-Wan, Ho; et al., Bioelectrodynamics and Biocommunication, World Scientific, 1994.
Malmivuo, Jaakko; Robert Plonsey, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, Oxford University Press, 1995.
O'Connor, Mary E. (Editor), et al., Emerging Electromagnetic Medicine, Springer, 1990.
Journals
Bioelectromagnetics
Bioelectrochemistry
European Biophysics Journal
International Journal of Bioelectromagnetism, ISBEM, 1999–present
BioMagnetic Research and Technology archive (no longer publishing)
Biophysics, English version of the Russian "Biofizika"
Radiatsionnaya Biologiya Radioecologia ("Radiation Biology and Radioecology", in Russian)
External links
A brief history of Bioelectromagnetism, by Jaakko and Plonsey.
Direct and Inverse Bioelectric Field Problems
Human body meshes for MATLAB, Ansoft/ANSYS HFSS, Octave (surface meshes from real subjects, meshes for Visible Human Project)
Physiology
Radiobiology
Electrophysiology
| Bioelectromagnetics | [
"Chemistry",
"Biology"
] | 1,914 | [
"Radiobiology",
"Radioactivity",
"Physiology"
] |
3,108,937 | https://en.wikipedia.org/wiki/Landauer%27s%20principle | Landauer's principle is a physical principle pertaining to a lower theoretical limit of energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings. It is hypothesized that energy consumption below this lower bound would require the development of reversible computing.
The principle was first proposed by Rolf Landauer in 1961.
Statement
Landauer's principle states that the minimum energy needed to erase one bit of information is proportional to the temperature at which the system is operating. Specifically, the energy needed for this computational task is given by
$$E \geq k_B T \ln 2,$$
where $k_B$ is the Boltzmann constant and $T$ is the temperature in kelvins. At room temperature, the Landauer limit represents an energy of approximately 0.018 eV ($2.9 \times 10^{-21}$ J). Modern computers use about a billion times as much energy per operation.
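For concreteness, a short check of the limit at room temperature:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0                   # room temperature, K
E = k_B * T * math.log(2)   # Landauer limit per erased bit

print(f"{E:.3g} J  ({E / 1.602176634e-19 * 1000:.1f} meV)")
# ~2.87e-21 J, about 17.9 meV
```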
History
Rolf Landauer first proposed the principle in 1961 while working at IBM. He justified and stated important limits to an earlier conjecture by John von Neumann. This refinement is sometimes called the Landauer bound, or Landauer limit.
In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.
In 2011, the principle was generalized to show that while information erasure requires an increase in entropy, this increase could theoretically occur at no energy cost. Instead, the cost can be taken in another conserved quantity, such as angular momentum.
In a 2012 article published in Nature, a team of physicists from the École normale supérieure de Lyon, University of Augsburg and the University of Kaiserslautern described that for the first time they have measured the tiny amount of heat released when an individual bit of data is erased.
In 2014, physical experiments tested Landauer's principle and confirmed its predictions.
In 2016, researchers used a laser probe to measure the amount of energy dissipation that resulted when a nanomagnetic bit flipped from off to on. Flipping the bit required about 26 meV (roughly $4.2 \times 10^{-21}$ J) at 300 K, which is just 44% above the Landauer minimum.
A 2018 article published in Nature Physics features a Landauer erasure performed at cryogenic temperatures on an array of high-spin (S = 10) quantum molecular magnets. The array is made to act as a spin register where each nanomagnet encodes a single bit of information. The experiment has laid the foundations for the extension of the validity of the Landauer principle to the quantum realm. Owing to the fast dynamics and low "inertia" of the single spins used in the experiment, the researchers also showed how an erasure operation can be carried out at the lowest possible thermodynamic cost—that imposed by the Landauer principle—and at a high speed.
Challenges
The principle is widely accepted as physical law, but it has been challenged for using circular reasoning and faulty assumptions. Others have defended the principle, and Sagawa and Ueda (2008) and Cao and Feito (2009) have shown that Landauer's principle is a consequence of the second law of thermodynamics and the entropy reduction associated with information gain.
On the other hand, recent advances in non-equilibrium statistical physics have established that there is no a priori relationship between logical and thermodynamic reversibility. It is possible that a physical process is logically reversible but thermodynamically irreversible. It is also possible that a physical process is logically irreversible but thermodynamically reversible. At best, the benefits of implementing a computation with a logically reversible system are nuanced.
In 2016, researchers at the University of Perugia claimed to have demonstrated a violation of Landauer’s principle, though their conclusions were disputed.
See also
Quantum speed limit
Bremermann's limit
Bekenstein bound
Kolmogorov complexity
Entropy in thermodynamics and information theory
Information theory
Jarzynski equality
Limits of computation
Extended mind thesis
Maxwell's demon
Koomey's law
No-deleting theorem
References
Further reading
External links
Public debate on the validity of Landauer's principle (conference Hot Topics in Physical Informatics, November 12, 2013)
Introductory article on Landauer's principle and reversible computing
Maroney, O.J.E. " Information Processing and Thermodynamic Entropy" The Stanford Encyclopedia of Philosophy.
Eurekalert.org: "Magnetic memory and logic could achieve ultimate energy efficiency", July 1, 2011
Thermodynamic entropy
Entropy and information
Philosophy of thermal and statistical physics
Principles
Limits of computation | Landauer's principle | [
"Physics",
"Chemistry",
"Mathematics"
] | 973 | [
"Physical phenomena",
"Philosophy of thermal and statistical physics",
"Physical quantities",
"Thermodynamic entropy",
"Entropy and information",
"Entropy",
"Thermodynamics",
"Statistical mechanics",
"Limits of computation",
"Dynamical systems"
] |
3,112,392 | https://en.wikipedia.org/wiki/Nuclear%20reactor%20core | A nuclear reactor core is the portion of a nuclear reactor containing the nuclear fuel components where the nuclear reactions take place and the heat is generated. Typically, the fuel will be low-enriched uranium contained in thousands of individual fuel pins. The core also contains structural components, the means to both moderate the neutrons and control the reaction, and the means to transfer the heat from the fuel to where it is required, outside the core.
Water-moderated reactors
Inside the core of a typical pressurized water reactor or boiling water reactor are fuel rods, each about the diameter of a large gel-type ink pen and about 4 m long, which are grouped by the hundreds in bundles called "fuel assemblies". Inside each fuel rod, pellets of uranium, or more commonly uranium oxide, are stacked end to end. Also inside the core are control rods, filled with pellets of substances like boron or hafnium or cadmium that readily capture neutrons. When the control rods are lowered into the core, they absorb neutrons, which thus cannot take part in the chain reaction. Conversely, when the control rods are lifted out of the way, more neutrons strike the fissile uranium-235 (U-235) or plutonium-239 (Pu-239) nuclei in nearby fuel rods, and the chain reaction intensifies. The core shroud, also located inside the reactor, directs the water flow that cools the core. The heat of the fission reaction is removed by the water, which also acts to moderate the neutron reactions.
Graphite-moderated reactors
There are also graphite moderated reactors in use.
One type uses solid nuclear graphite for the neutron moderator and ordinary water for the coolant. See the Soviet-made RBMK nuclear-power reactor. This was the type of reactor involved in the Chernobyl disaster.
In the Advanced Gas-cooled Reactor, a British design, the core is made of a graphite neutron moderator where the fuel assemblies are located. Carbon dioxide gas acts as a coolant and it circulates through the core, removing heat.
There have also been several experimental reactors that use graphite for moderation, such as the pebble bed reactor concepts and the molten-salt reactor experiment.
See also
Nuclear meltdown
Lists of nuclear disasters and radioactive incidents
Nuclear power
Nuclear reactor technology
References
Nuclear Reactor Analysis, John Wiley & Sons Canada, Ltd.
Nuclear power plant components
Nuclear technology | Nuclear reactor core | [
"Physics"
] | 500 | [
"Nuclear technology",
"Nuclear physics"
] |
17,549,172 | https://en.wikipedia.org/wiki/Burnishing%20%28metal%29 | Burnishing is the plastic deformation of a surface due to sliding contact with another object. It smooths the surface and makes it shinier. Burnishing may occur on any sliding surface if the contact stress locally exceeds the yield strength of the material. The phenomenon can occur both unintentionally as a failure mode, and intentionally as part of a metalworking or manufacturing process. It is a squeezing operation under cold working.
Failure mode (unintentionally)
The action of a hardened ball against a softer, flat plate illustrates the process of burnishing. If the ball is pushed directly into the plate, stresses develop in both objects around the area where they contact. As this normal force increases, both the ball and the plate's surfaces deform.
The deformation caused by the hardened ball increases with the magnitude of the force pressing against it. If the force on it is small, when the force is released both the ball and plate's surface will return to their original, undeformed shape. In that case, the stresses in the plate are always less than the yield strength of the material, so the deformation is purely elastic. Since it was given that the flat plate is softer than the ball, the plate's surface will always deform more.
If a larger force is used, there will also be plastic deformation and the plate's surface will be permanently altered. A bowl-shaped indentation will be left behind, surrounded by a ring of raised material that was displaced by the ball. The stresses between the ball and the plate are described in more detail by Hertzian stress theory.
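As an illustration of the Hertzian picture, a short sketch computes the contact radius and peak pressure for a sphere pressed into a flat. The material values are placeholder numbers for a steel ball on an aluminium plate, chosen only for illustration:

```python
import math

def hertz_sphere_on_flat(F, R, E1, nu1, E2, nu2):
    """Hertzian contact of a sphere (radius R) pressed on a flat plate.

    Returns (contact radius a, peak pressure p_max).
    """
    # Effective contact modulus of the pair
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    a = (3 * F * R / (4 * E_star)) ** (1 / 3)   # contact radius
    p_max = 3 * F / (2 * math.pi * a**2)        # peak contact pressure
    return a, p_max

# Placeholder values: 5 mm steel ball pressed on aluminium with 100 N
a, p_max = hertz_sphere_on_flat(F=100.0, R=5e-3,
                                E1=210e9, nu1=0.30,   # steel
                                E2=70e9,  nu2=0.33)   # aluminium
print(f"a = {a*1e6:.0f} um, p_max = {p_max/1e9:.2f} GPa")
# If p_max exceeds the plate's yield strength (tens to hundreds of MPa
# for aluminium alloys), the ball plows rather than merely rubs.
```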
Dragging the ball across the plate will have a different effect than pressing. In that case, the force on the ball can be decomposed into two component forces: one normal to the plate's surface, pressing it in, and the other tangential, dragging it along. As the tangential component is increased, the ball will start to slide along the plate. At the same time, the normal force will deform both objects, just as with the static situation. If the normal force is low, the ball will rub against the plate but not permanently alter its surface. The rubbing action will create friction and heat, but it will not leave a mark on the plate. However, as the normal force increases, eventually the stresses in the plate's surface will exceed its yield strength. When this happens the ball will plow through the surface and create a trough behind it. The plowing action of the ball is burnishing. Burnishing also occurs when the ball can rotate, as would happen in the above scenario if another flat plate was brought down from above to induce downwards loading, and at the same time to cause rotation and translation of the ball, as in the case of a ball bearing.
Burnishing also occurs on surfaces that conform to each other, such as between two flat plates, but it happens on a microscopic scale. Even the smoothest of surfaces will have imperfections if viewed at a high enough magnification. The imperfections that extend above the general form of a surface are called asperities, and they can plow material on another surface just like the ball dragging along the plate. The combined effect of many of these asperities produce the smeared texture that is associated with burnishing.
Effects on sliding contact
Burnishing is normally undesirable in mechanical components for a variety of reasons, sometimes simply because its effects are unpredictable. Even light burnishing will significantly alter the surface finish of a part. Initially the finish will be smoother, but with repetitive sliding action, grooves will develop on the surface along the sliding direction. The plastic deformation associated with burnishing will harden the surface and generate compressive residual stresses. Although these properties are usually advantageous, excessive burnishing leads to sub-surface cracks which cause spalling, a phenomenon where the upper layer of a surface flakes off of the bulk material.
Burnishing may also affect the performance of a machine. The plastic deformation associated with burnishing creates greater heat and friction than from rubbing alone. This reduces the efficiency of the machine and limits its speed. Furthermore, plastic deformation alters the form and geometry of the part. This reduces the precision and accuracy of the machine. The combination of higher friction and degraded form often leads to a runaway situation that continually worsens until the component fails.
To prevent destructive burnishing, sliding must be avoided, and in rolling situations, loads must be beneath the spalling threshold. In the areas of a machine that slide with respect to each other, roller bearings can be inserted so that the components are in rolling contact instead of sliding. If sliding cannot be avoided, then a lubricant should be added between the components. The purpose of the lubricant in this case is to separate the components with a lubricant film so they cannot contact. The lubricant also distributes the load over a larger area, so that the local contact forces are not as high. If there was already a lubricant, its film thickness must be increased; usually this can be accomplished by increasing the viscosity of the lubricant.
In manufacturing (intentionally)
Burnishing is not always unwanted. If it occurs in a controlled manner, it can have desirable effects. Burnishing processes are used in manufacturing to improve the size, shape, surface finish, or surface hardness of a workpiece. It is essentially a forming operation that occurs on a small scale. The benefits of burnishing often include combatting fatigue failure, preventing corrosion and stress corrosion, texturing surfaces to eliminate visual defects, closing porosity, and creating surface compressive residual stress.
There are several forms of burnishing processes, the most common are roller burnishing and ball burnishing (a subset of which is also referred to as ballizing). In both cases, a burnishing tool runs against the workpiece and plastically deforms its surface. In some instances of the latter case (and always in ballizing), it rubs, in the former it generally rotates and rolls. The workpiece may be at ambient temperature, or heated to reduce the forces and wear on the tool. The tool is usually hardened and coated with special materials to increase its life.
Ball burnishing, or ballizing, is a replacement for other bore finishing operations such as grinding, honing, or polishing. A ballizing tool consists of one or more over-sized balls that are pushed through a hole. The tool is similar to a broach, but instead of cutting away material, it plows it out of the way.
Ball burnishing is also used as a deburring operation. It is especially useful for removing the burr in the middle of a through hole that was drilled from both sides.
Ball burnishing tools of another type are sometimes used in CNC milling centres to follow a ball-nosed milling operation: the hardened ball is applied along a zig-zag toolpath in a holder similar to a ball-point pen, except that the 'ink' is pressurised, recycled lubricant. This combines the productivity of a machined finish achieved with a 'semi-finishing' cut with a better finish than is obtainable from slow and time-consuming finish cuts. The feed rate for burnishing is that associated with 'rapid traverse' rather than finish machining.
Roller burnishing, or surface rolling, is used on cylindrical, conical, or disk shaped workpieces. The tool resembles a roller bearing, but the rollers are generally very slightly tapered so that their envelope diameter can be accurately adjusted. The rollers typically rotate within a cage, as in a roller bearing. Typical applications for roller burnishing include hydraulic system components, shaft fillets, and sealing surfaces.
Very close control of size can be exercised.
Burnishing also occurs to some extent in machining processes. In turning, burnishing occurs if the cutting tool is not sharp, if a large negative rake angle is used, if a very small depth of cut is used, or if the workpiece material is gummy. As a cutting tool wears, it becomes more blunt and the burnishing effect becomes more pronounced. In grinding, since the abrasive grains are randomly oriented and some are not sharp, there is always some amount of burnishing. This is one reason grinding is less efficient and generates more heat than turning. In drilling, burnishing occurs with drills that have extra lands to burnish the material as they drill into it. Regular twist drills or straight-fluted drills have 2 lands to guide them through the hole; burnishing drills have 4 or more lands, similar to reamers.
Burnish setting, also known as flush, gypsy, or shot setting, is a setting technique used in stonesetting. A space is drilled, into which a stone is inserted such that the girdle of the stone, the point of maximum diameter, is just below the surface of the metal. A burnishing tool is used to push metal all around the stone to hold the stone and give a flush appearance, with a burnished edge around it. This type of setting has a long history but is gaining a resurgence in contemporary jewelry.
See also
Low plasticity burnishing
References
External links
Metal Burnishing (Cutlery, Pewter, Silver) Spons' Workshop
Mechanical engineering
Mechanical failure modes
Metalworking | Burnishing (metal) | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,911 | [
"Structural engineering",
"Mechanical failure modes",
"Applied and interdisciplinary physics",
"Technological failures",
"Mechanical engineering",
"Mechanical failure"
] |
17,550,575 | https://en.wikipedia.org/wiki/Earthquake%20simulation | Earthquake simulation applies a real or simulated vibrational input to a structure that possesses the essential features of a real seismic event. Earthquake simulations are generally performed to study the effects of earthquakes on man-made engineered structures, or on natural features which may present a hazard during an earthquake.
Dynamic experiments on building and non-building structures may be physical – as with shake-table testing – or virtual (based on computer simulation). In all cases, to verify a structure's expected seismic performance, researchers prefer to deal with so-called 'real time histories', though these cannot be truly 'real' for a hypothetical earthquake specified by either a building code or by particular research requirements.
Shake-table testing
Studying a building's response to an earthquake is performed by putting a model of the structure on a shake-table that simulates the seismic loading. The earliest such experiments were performed more than a century ago.
Computational approaches
Another way is to evaluate the earthquake performance analytically.
The very first earthquake simulations were performed by statically applying some horizontal inertia forces, based on scaled peak ground accelerations, to a mathematical model of a building. With the further development of computational technologies, static approaches began to give way to dynamic ones.
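As a minimal sketch of that static approach, the horizontal inertia force on a lumped mass is simply the mass times a scaled peak ground acceleration. The floor mass and seismic coefficient below are hypothetical, not values from any particular building code.

```python
# Equivalent static lateral force: F = m * (c * g), where c is a scaled
# peak-ground-acceleration coefficient applied to the structural mass.

G = 9.81  # gravitational acceleration, m/s^2

def equivalent_static_force(mass_kg: float, seismic_coefficient: float) -> float:
    """Horizontal inertia force (N) applied to the mathematical model."""
    return mass_kg * seismic_coefficient * G

floor_mass = 10_000.0  # kg, hypothetical lumped floor mass
coefficient = 0.3      # fraction of g, hypothetical scaled PGA
print(f"Lateral force: {equivalent_static_force(floor_mass, coefficient) / 1e3:.1f} kN")
```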
Traditionally, numerical simulation and physical tests have been uncoupled and performed separately. So-called hybrid testing systems employ rapid, parallel analyses using both physical and computational tests.
See also
Seismic analysis
References
External links
Network for Earthquake Engineering Simulation (NEES)
AEM Earthquake Simulation
Building
Earthquake engineering | Earthquake simulation | [
"Engineering"
] | 307 | [
"Structural engineering",
"Building",
"Construction",
"Civil engineering",
"Earthquake engineering"
] |
17,553,405 | https://en.wikipedia.org/wiki/Material%20failure%20theory | Material failure theory is an interdisciplinary field of materials science and solid mechanics which attempts to predict the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions (such as temperature, state of stress, loading rate) most materials can fail in a brittle or ductile manner or both. However, for most practical situations, a material may be classified as either brittle or ductile.
In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate "failed" states from "unfailed" states. A precise physical definition of a "failed" state is not easily quantified and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of the same form are used to predict both brittle failure and ductile yield.
Material failure
In materials science, material failure is the loss of load-carrying capacity of a material unit. Under this definition, material failure can be examined at different scales, from microscopic to macroscopic. In structural problems, where the structural response may be beyond the initiation of nonlinear material behaviour, material failure is of profound importance for the determination of the integrity of the structure. On the other hand, due to the lack of globally accepted fracture criteria, the determination of structural damage due to material failure is still under intensive research.
Types of material failure
Material failure can be divided into two broad categories depending on the scale at which the material is examined:
Microscopic failure
Microscopic material failure is defined in terms of crack initiation and propagation. Such methodologies are useful for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack. Failure criteria in this case are related to microscopic fracture. Some of the most popular failure models in this area are the micromechanical failure models, which combine the advantages of continuum mechanics and classical fracture mechanics. Such models are based on the concept that during plastic deformation, microvoids nucleate and grow until a local plastic neck or fracture of the intervoid matrix occurs, which causes the coalescence of neighbouring voids. Such a model, proposed by Gurson and extended by Tvergaard and Needleman, is known as GTN. Another approach, proposed by Rousselier, is based on continuum damage mechanics (CDM) and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar damage quantity, which represents the void volume fraction of cavities, the porosity f.
Macroscopic failure
Macroscopic material failure is defined in terms of load carrying capacity or energy storage capacity, equivalently. Li presents a classification of macroscopic failure criteria in four categories:
Stress or strain failure
Energy type failure (S-criterion, T-criterion)
Damage failure
Empirical failure
Five general levels are considered, at which the meaning of deformation and failure is interpreted differently: the structural element scale, the macroscopic scale where macroscopic stress and strain are defined, the mesoscale which is represented by a typical void, the microscale and the atomic scale. The material behavior at one level is considered as a collective of its behavior at a sub-level. An efficient deformation and failure model should be consistent at every level.
Brittle material failure criteria
Failure of brittle materials can be determined using several approaches:
Phenomenological failure criteria
Linear elastic fracture mechanics
Elastic-plastic fracture mechanics
Energy-based methods
Cohesive zone methods
Phenomenological failure criteria
The failure criteria that were developed for brittle solids were the maximum stress/strain criteria. The maximum stress criterion assumes that a material fails when the maximum principal stress $\sigma_1$ in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress $\sigma_3$ is less than the uniaxial compressive strength of the material. If the uniaxial tensile strength of the material is $\sigma_t$ and the uniaxial compressive strength is $\sigma_c$, then the safe region for the material is assumed to be

$$-\sigma_c < \sigma_3 \quad \text{and} \quad \sigma_1 < \sigma_t$$

Note that the convention that tension is positive has been used in the above expression.

The maximum strain criterion has a similar form except that the principal strains are compared with experimentally determined uniaxial strains at failure, i.e.,

$$-\epsilon_c < \epsilon_3 \quad \text{and} \quad \epsilon_1 < \epsilon_t$$
The maximum principal stress and strain criteria continue to be widely used in spite of severe shortcomings.
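A direct way to use the maximum principal stress criterion is to check that every principal stress lies inside the safe interval given above. This is a minimal sketch; the stress state and strengths are hypothetical values for a brittle solid.

```python
# Maximum principal stress criterion (tension positive): the material is
# considered safe while all principal stresses lie in (-sigma_c, sigma_t).

def max_stress_safe(principal_stresses, sigma_t: float, sigma_c: float) -> bool:
    return all(-sigma_c < s < sigma_t for s in principal_stresses)

# Hypothetical state: 40 and 10 MPa tension, 120 MPa compression, for a
# material with 50 MPa tensile and 200 MPa compressive strength.
print(max_stress_safe([40.0, 10.0, -120.0], sigma_t=50.0, sigma_c=200.0))  # True
```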
Numerous other phenomenological failure criteria can be found in the engineering literature. The degree of success of these criteria in predicting failure has been limited. Some popular failure criteria for various type of materials are:
criteria based on invariants of the Cauchy stress tensor
the Tresca or maximum shear stress failure criterion
the von Mises or maximum elastic distortional energy criterion
the Mohr-Coulomb failure criterion for cohesive-frictional solids
the Drucker-Prager failure criterion for pressure-dependent solids
the Bresler-Pister failure criterion for concrete
the Willam-Warnke failure criterion for concrete
the Hankinson criterion, an empirical failure criterion that is used for orthotropic materials such as wood
the Hill yield criteria for anisotropic solids
the Tsai-Wu failure criterion for anisotropic composites
the Johnson–Holmquist damage model for high-rate deformations of isotropic solids
the Hoek-Brown failure criterion for rock masses
the Cam-Clay failure theory for soil
Linear elastic fracture mechanics
The approach taken in linear elastic fracture mechanics is to estimate the amount of energy needed to grow a preexisting crack in a brittle material. The earliest fracture mechanics approach for unstable crack growth is Griffiths' theory. When applied to the mode I opening of a crack, Griffiths' theory predicts that the critical stress ($\sigma_c$) needed to propagate the crack is given by

$$\sigma_c = \sqrt{\frac{2 E \gamma}{\pi a}}$$

where $E$ is the Young's modulus of the material, $\gamma$ is the surface energy per unit area of the crack, and $a$ is the crack length for edge cracks ($2a$ is the crack length for plane cracks). The quantity $\sigma_c\sqrt{\pi a}$ is postulated as a material parameter called the fracture toughness. The mode I fracture toughness for plane strain is defined as

$$K_{Ic} = Y \sigma_c \sqrt{\pi a}$$

where $\sigma_c$ is a critical value of the far-field stress and $Y$ is a dimensionless factor that depends on the geometry, material properties, and loading condition. The quantity $K_{Ic}$ is related to the stress intensity factor and is determined experimentally. Similar quantities $K_{IIc}$ and $K_{IIIc}$ can be determined for mode II and mode III loading conditions.
The state of stress around cracks of various shapes can be expressed in terms of their stress intensity factors. Linear elastic fracture mechanics predicts that a crack will extend when the stress intensity factor at the crack tip is greater than the fracture toughness of the material. Therefore, the critical applied stress can also be determined once the stress intensity factor at a crack tip is known.
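As a sketch of how the toughness relation above is used, the critical far-field stress follows from a known fracture toughness, crack size, and geometry factor. The values are hypothetical (loosely glass-like), not data from the article.

```python
import math

# Inverting K_Ic = Y * sigma_c * sqrt(pi * a) for the critical stress.

def critical_stress(k_ic: float, a: float, y: float = 1.0) -> float:
    """Critical far-field stress (MPa) for toughness k_ic (MPa*sqrt(m)),
    crack length a (m), and dimensionless geometry factor y."""
    return k_ic / (y * math.sqrt(math.pi * a))

# Hypothetical brittle solid: K_Ic = 0.7 MPa*sqrt(m), 1 mm crack.
print(f"{critical_stress(0.7, a=1e-3):.1f} MPa")
```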
Energy-based methods
The linear elastic fracture mechanics method is difficult to apply for anisotropic materials (such as composites) or for situations where the loading or the geometry are complex. The strain energy release rate approach has proved quite useful for such situations. The strain energy release rate for a mode I crack which runs through the thickness of a plate is defined as

$$G_I = \frac{P}{2b}\,\frac{du}{da}$$

where $P$ is the applied load, $b$ is the thickness of the plate, $u$ is the displacement at the point of application of the load due to crack growth, and $a$ is the crack length for edge cracks ($2a$ is the crack length for plane cracks). The crack is expected to propagate when the strain energy release rate exceeds a critical value $G_{Ic}$ – called the critical strain energy release rate.

The fracture toughness and the critical strain energy release rate for plane stress are related by

$$G_{Ic} = \frac{K_{Ic}^2}{E}$$

where $E$ is the Young's modulus. If an initial crack size is known, then a critical stress can be determined using the strain energy release rate criterion.
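A quick numeric use of the plane-stress relation above converts a measured fracture toughness into a critical strain energy release rate. The aluminium-alloy-like values are assumptions for illustration only.

```python
# G_Ic = K_Ic**2 / E (plane stress). With K_Ic in MPa*sqrt(m) and E in MPa,
# the result is in MPa*m = MJ/m^2; multiply by 1e3 for kJ/m^2.

def critical_energy_release_rate(k_ic: float, e_mod: float) -> float:
    return k_ic**2 / e_mod

# Hypothetical values: K_Ic = 24 MPa*sqrt(m), E = 70 GPa = 70_000 MPa.
g_ic = critical_energy_release_rate(24.0, 70_000.0)
print(f"G_Ic = {g_ic * 1e3:.2f} kJ/m^2")
```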
Ductile material failure (yield) criteria
A yield criterion, often expressed as a yield surface or yield locus, is a hypothesis concerning the limit of elasticity under any combination of stresses. There are two interpretations of yield criterion: one is purely mathematical, taking a statistical approach, while other models attempt to provide a justification based on established physical principles. Since stress and strain are tensor quantities they can be described on the basis of three principal directions; in the case of stress these are denoted by $\sigma_1$, $\sigma_2$, and $\sigma_3$.
The following represent the most common yield criterion as applied to an isotropic material (uniform properties in all directions). Other equations have been proposed or are used in specialist situations.
Isotropic yield criteria
Maximum principal stress theory – by William Rankine (1850). Yield occurs when the largest principal stress exceeds the uniaxial tensile yield strength. Although this criterion allows for a quick and easy comparison with experimental data it is rarely suitable for design purposes. This theory gives good predictions for brittle materials.
Maximum principal strain theory – by St. Venant. Yield occurs when the maximum principal strain reaches the strain corresponding to the yield point during a simple tensile test. In terms of the principal stresses this is determined by the equation:

$$\epsilon_1 = \frac{1}{E}\left[\sigma_1 - \nu\left(\sigma_2 + \sigma_3\right)\right] \ge \frac{\sigma_y}{E}$$
Maximum shear stress theory – Also known as the Tresca yield criterion, after the French scientist Henri Tresca. This assumes that yield occurs when the shear stress $\tau$ exceeds the shear yield strength $\tau_y$:

$$\tau = \frac{\sigma_1 - \sigma_3}{2} \ge \tau_y = \frac{\sigma_y}{2}$$
Total strain energy theory – This theory assumes that the stored energy associated with elastic deformation at the point of yield is independent of the specific stress tensor. Thus yield occurs when the strain energy per unit volume is greater than the strain energy at the elastic limit in simple tension. For a 3-dimensional stress state this is given by:

$$\sigma_1^2 + \sigma_2^2 + \sigma_3^2 - 2\nu\left(\sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1\right) = \sigma_y^2$$
Maximum distortion energy theory (von Mises yield criterion) also referred to as octahedral shear stress theory. – This theory proposes that the total strain energy can be separated into two components: the volumetric (hydrostatic) strain energy and the shape (distortion or shear) strain energy. It is proposed that yield occurs when the distortion component exceeds that at the yield point for a simple tensile test. This theory is also known as the von Mises yield criterion.
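As an illustration of the distortion energy criterion, the von Mises equivalent stress can be computed from the principal stresses and compared with the tensile yield strength; the stresses and yield strength below are hypothetical.

```python
import math

# Von Mises equivalent stress from principal stresses; yield is predicted
# when it reaches the uniaxial tensile yield strength.

def von_mises(s1: float, s2: float, s3: float) -> float:
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2.0)

sigma_y = 250.0  # MPa, hypothetical yield strength
sv = von_mises(200.0, 50.0, -30.0)
print(f"sigma_vm = {sv:.1f} MPa -> {'yields' if sv >= sigma_y else 'elastic'}")
```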
The yield surfaces corresponding to these criteria have a range of forms. However, most isotropic yield criteria correspond to convex yield surfaces.
Anisotropic yield criteria
When a metal is subjected to large plastic deformations the grain sizes and orientations change in the direction of deformation. As a result, the plastic yield behavior of the material shows directional dependency. Under such circumstances, the isotropic yield criteria such as the von Mises yield criterion are unable to predict the yield behavior accurately. Several anisotropic yield criteria have been developed to deal with such situations.
Some of the more popular anisotropic yield criteria are:
Hill's quadratic yield criterion
Generalized Hill yield criterion
Hosford yield criterion
Yield surface
The yield surface of a ductile material usually changes as the material experiences increased deformation. Models for the evolution of the yield surface with increasing strain, temperature, and strain rate are used in conjunction with the above failure criteria for isotropic hardening, kinematic hardening, and viscoplasticity. Some such models are:
the Johnson-Cook model
the Steinberg-Guinan model
the Zerilli-Armstrong model
the Mechanical threshold stress model
the Preston-Tonks-Wallace model
There is another important aspect to ductile materials - the prediction of the ultimate failure strength of a ductile material. Several models for predicting the ultimate strength have been used by the engineering community with varying levels of success. For metals, such failure criteria are usually expressed in terms of a combination of porosity and strain to failure or in terms of a damage parameter.
See also
Fracture mechanics
Fracture
Stress intensity factor
Yield (engineering)
Yield surface
Plasticity (physics)
Structural failure
Strength of materials
Ultimate failure
Damage mechanics
Size effect on structural strength
Concrete fracture analysis
References
Mechanical failure
Plasticity (physics)
Solid mechanics
Mechanics
Materials science
Materials degradation
Fracture mechanics | Material failure theory | [
"Physics",
"Materials_science",
"Engineering"
] | 2,421 | [
"Structural engineering",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Deformation (mechanics)",
"Materials science",
"Plasticity (physics)",
"Mechanics",
"nan",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
17,555,375 | https://en.wikipedia.org/wiki/Tsai%E2%80%93Wu%20failure%20criterion | The Tsai–Wu failure criterion is a phenomenological material failure theory which is widely used for anisotropic composite materials which have different strengths in tension and compression. The Tsai-Wu criterion predicts failure when the failure index in a laminate reaches 1. This failure criterion is a specialization of the general quadratic failure criterion proposed by Gol'denblat and Kopnov and can be expressed in the form
where and repeated indices indicate summation, and are experimentally determined material strength parameters. The stresses are expressed in Voigt notation. If the failure surface is to be closed and convex, the interaction terms must satisfy
which implies that all the terms must be positive.
Tsai–Wu failure criterion for orthotropic materials
For orthotropic materials with three planes of symmetry oriented with the coordinate directions, if we assume that $F_{ij} = F_{ji}$ and that there is no coupling between the normal and shear stress terms (and between the shear terms), the general form of the Tsai–Wu failure criterion reduces to

$$F_1\sigma_1 + F_2\sigma_2 + F_3\sigma_3 + F_{11}\sigma_1^2 + F_{22}\sigma_2^2 + F_{33}\sigma_3^2 + F_{44}\sigma_4^2 + F_{55}\sigma_5^2 + F_{66}\sigma_6^2 + 2F_{12}\sigma_1\sigma_2 + 2F_{13}\sigma_1\sigma_3 + 2F_{23}\sigma_2\sigma_3 \le 1$$

Let the failure strengths in uniaxial tension and compression in the three directions of anisotropy be $\sigma_{1t}, \sigma_{1c}, \sigma_{2t}, \sigma_{2c}, \sigma_{3t}, \sigma_{3c}$ (compressive strengths taken as positive magnitudes). Also, let us assume that the shear strengths in the three planes of symmetry are $\tau_{23}, \tau_{31}, \tau_{12}$ (and have the same magnitude on a plane even if the signs are different). Then the coefficients of the orthotropic Tsai–Wu failure criterion are

$$F_1 = \frac{1}{\sigma_{1t}} - \frac{1}{\sigma_{1c}}, \quad F_2 = \frac{1}{\sigma_{2t}} - \frac{1}{\sigma_{2c}}, \quad F_3 = \frac{1}{\sigma_{3t}} - \frac{1}{\sigma_{3c}}$$
$$F_{11} = \frac{1}{\sigma_{1t}\,\sigma_{1c}}, \quad F_{22} = \frac{1}{\sigma_{2t}\,\sigma_{2c}}, \quad F_{33} = \frac{1}{\sigma_{3t}\,\sigma_{3c}}, \quad F_{44} = \frac{1}{\tau_{23}^2}, \quad F_{55} = \frac{1}{\tau_{31}^2}, \quad F_{66} = \frac{1}{\tau_{12}^2}$$

The coefficients $F_{12}, F_{13}, F_{23}$ can be determined using equibiaxial tests. If the failure strength in equibiaxial tension in the 1–2 plane is $\sigma_{b12}$, then substituting $\sigma_1 = \sigma_2 = \sigma_{b12}$ into the criterion gives

$$F_{12} = \frac{1 - \left(F_1 + F_2\right)\sigma_{b12} - \left(F_{11} + F_{22}\right)\sigma_{b12}^2}{2\,\sigma_{b12}^2}$$

and similarly for $F_{13}$ and $F_{23}$.
The near impossibility of performing these equibiaxial tests has led to there being a severe lack of experimental data on the parameters .
It can be shown that the Tsai-Wu criterion is a particular case of the generalized Hill yield criterion.
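The sketch below evaluates a Tsai–Wu failure index from uniaxial strengths using the coefficient expressions above, with normal stresses only and the interaction terms F12, F13, F23 set to zero (a common simplification given the scarcity of equibiaxial data just noted). All strengths and stresses are hypothetical.

```python
# Tsai-Wu failure index for normal stresses, ignoring interaction terms.
# Failure is predicted when the index reaches 1.

def tsai_wu_index(stresses, tensile, compressive):
    """stresses: (s1, s2, s3); tensile/compressive: strengths (magnitudes)."""
    index = 0.0
    for s, t, c in zip(stresses, tensile, compressive):
        f_i = 1.0 / t - 1.0 / c   # linear coefficient F_i
        f_ii = 1.0 / (t * c)      # quadratic coefficient F_ii
        index += f_i * s + f_ii * s**2
    return index

stress = (600.0, 20.0, 0.0)           # MPa, hypothetical load state
tensile = (1500.0, 40.0, 40.0)        # MPa, fiber vs. matrix directions
compressive = (1200.0, 150.0, 150.0)  # MPa, magnitudes
print(f"Tsai-Wu index: {tsai_wu_index(stress, tensile, compressive):.2f}")  # ~0.53
```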
Tsai-Wu failure criterion for transversely isotropic materials
For a transversely isotropic material, if the plane of isotropy is 1–2, then $F_1 = F_2$, $F_{11} = F_{22}$, $F_{13} = F_{23}$, $F_{44} = F_{55}$, and $F_{66} = 2\left(F_{11} - F_{12}\right)$.
Then the Tsai–Wu failure criterion reduces to

$$F_1\left(\sigma_1 + \sigma_2\right) + F_3\sigma_3 + F_{11}\left(\sigma_1^2 + \sigma_2^2\right) + F_{33}\sigma_3^2 + F_{44}\left(\sigma_4^2 + \sigma_5^2\right) + F_{66}\sigma_6^2 + 2F_{12}\sigma_1\sigma_2 + 2F_{13}\left(\sigma_1 + \sigma_2\right)\sigma_3 \le 1$$

This theory is applicable to a unidirectional composite lamina where the fiber direction is in the '3'-direction.
In order to maintain closed and ellipsoidal failure surfaces for all stress states, Tsai and Wu also proposed stability conditions which take the following form for transversely isotropic materials
Tsai–Wu failure criterion in plane stress
For the case of plane stress with , the Tsai–Wu failure criterion reduces to
The strengths in the expressions for may be interpreted, in the case of a lamina, as
= transverse compressive strength, = transverse tensile strength, = longitudinal compressive strength, = longitudinal strength, = longitudinal shear strength, = transverse shear strength.
Tsai–Wu criterion for foams
The Tsai–Wu criterion for closed cell PVC foams under plane strain conditions may be expressed as
where
For DIAB Divinycell H250 PVC foam (density 250 kg/cu.m.), the values of the strengths are MPa, MPa, MPa, MPa.
For aluminum foams in plane stress, a simplified form of the Tsai–Wu criterion may be used if we assume that the tensile and compressive failure strengths are the same and that there are no shear effects on the failure strength. This criterion may be written as
where
Tsai–Wu criterion for bone
The Tsai–Wu failure criterion has also been applied to trabecular bone/cancellous bone with varying degrees of success. The quantity has been shown to have a nonlinear dependence on the density of the bone.
See also
Material failure theory
Yield (engineering)
References
Engineering failures
Plasticity (physics)
Solid mechanics
Mechanics | Tsai–Wu failure criterion | [
"Materials_science",
"Technology",
"Engineering"
] | 747 | [
"Systems engineering",
"Reliability engineering",
"Deformation (mechanics)",
"Technological failures",
"Plasticity (physics)",
"Engineering failures",
"Civil engineering"
] |
17,560,083 | https://en.wikipedia.org/wiki/CoRoT-4b | CoRoT-4b (formerly known as CoRoT-Exo-4b) is an extrasolar planet orbiting the star CoRoT-4. It is probably in synchronous orbit with stellar rotation. It was discovered by the French CoRoT mission in 2008.
References
External links
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2008
Giant planets
Monoceros
4b
de:Extrasolarer Planet#CoRoT-4 b | CoRoT-4b | [
"Astronomy"
] | 96 | [
"Monoceros",
"Constellations"
] |
17,560,128 | https://en.wikipedia.org/wiki/Chloroformate | Chloroformates are a class of organic compounds with the formula ROC(O)Cl. They are formally esters of chloroformic acid. Most are colorless, volatile liquids that degrade in moist air. A simple example is methyl chloroformate, which is commercially available.
Chloroformates are used as reagents in organic chemistry. For example, benzyl chloroformate is used to introduce the Cbz (carboxybenzyl) protecting group and fluorenylmethyloxycarbonyl chloride is used to introduce the FMOC protecting group. Chloroformates are popular in the field of chromatography as derivatization agents. They convert polar compounds into less polar more volatile derivatives. In this way, chloroformates enable relatively simple transformation of large array of metabolites (aminoacids, amines, carboxylic acids, phenols) for analysis by gas chromatography / mass spectrometry.
Reactions
The reactivity of chloroformates and acyl chlorides are similar. Representative reactions are:
Reaction with amines to form carbamates:
ROC(O)Cl + H2NR' → ROC(O)-N(H)R' + HCl
Reaction with alcohols to form carbonate esters:
ROC(O)Cl + HOR' → ROC(O)-OR' + HCl
Reaction with carboxylic acids to form mixed anhydrides:
ROC(O)Cl + HO2CR' → ROC(O)-O-C(O)R' + HCl
Typically these reactions would be conducted in the presence of a base which serves to absorb the HCl.
Alkyl chloroformate esters degrade to give the alkyl chloride, with retention of configuration:
ROC(O)Cl → RCl + CO2
The reaction is proposed to proceed via a substitution nucleophilic internal (SNi) mechanism.
References
Functional groups | Chloroformate | [
"Chemistry"
] | 382 | [
"Functional groups"
] |
17,561,319 | https://en.wikipedia.org/wiki/Electrochemical%20gas%20sensor | Electrochemical gas sensors are gas detectors that measure the concentration of a target gas by oxidizing or reducing the target gas at an electrode and measuring the resulting current.
History
Beginning his research in 1962, Naoyoshi Taguchi became the first person in the world to develop a semiconductor device that could detect low concentrations of combustible and reducing gases when used with a simple electrical circuit. Devices based on this technology are often called "TGS" (Taguchi Gas Sensors).
Construction
The sensors contain two or three electrodes, occasionally four, in contact with an electrolyte. The electrodes are typically fabricated by fixing a high-surface-area precious metal onto a porous hydrophobic membrane. The working electrode contacts both the electrolyte and the ambient air to be monitored, usually via a porous membrane. The electrolyte most commonly used is a mineral acid, but organic electrolytes are also used for some sensors. The electrodes and electrolyte are usually enclosed in a plastic housing which contains a gas entry hole and electrical contacts.
Theory of operation
The gas diffuses into the sensor, through the back of the porous membrane to the working electrode, where it is oxidized or reduced. This electrochemical reaction results in an electric current that passes through the external circuit. In addition to measuring, amplifying, and performing other signal processing functions, the external circuit maintains the voltage across the sensor between the working and counter electrodes for a two-electrode sensor, or between the working and reference electrodes for a three-electrode cell. At the counter electrode an equal and opposite reaction occurs, such that if oxidation occurs at the working electrode, reduction occurs at the counter electrode.
Diffusion controlled response
The magnitude of the current is controlled by how much of the target gas is oxidized at the working electrode. Sensors are usually designed so that the gas supply is limited by diffusion, and thus the output from the sensor is linearly proportional to the gas concentration. This linear output is one of the advantages of electrochemical sensors over other sensor technologies (e.g. infrared), whose output must be linearized before they can be used. A linear output allows for more precise measurement of low concentrations and much simpler calibration (only a baseline and one point are needed).
Diffusion control offers another advantage. Changing the diffusion barrier allows the sensor manufacturer to tailor the sensor to a particular target gas concentration range. In addition, since the diffusion barrier is primarily mechanical, the calibration of electrochemical sensors tends to be more stable over time and so electrochemical sensor-based instruments require much less maintenance than some other detection technologies. In principle, the sensitivity can be calculated based on the diffusion properties of the gas path into the sensor, though experimental errors in the measurement of the diffusion properties make the calculation less accurate than calibrating with test gas.
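The two-point calibration that this linearity permits can be stated in a few lines. The currents and span concentration below are hypothetical, not specifications of any real sensor.

```python
# Linear sensor: one zero-gas baseline and one span point fix the whole curve.

baseline_uA = 0.05  # output in clean air (zero gas), hypothetical
span_uA = 2.45      # output at the span concentration, hypothetical
span_ppm = 100.0    # known test-gas concentration

sensitivity = (span_uA - baseline_uA) / span_ppm  # uA per ppm

def concentration(reading_uA: float) -> float:
    """Convert a raw current reading to gas concentration (ppm)."""
    return (reading_uA - baseline_uA) / sensitivity

print(f"{concentration(1.25):.1f} ppm")  # -> 50.0 ppm
```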
Cross sensitivity
For some gases, such as ethylene oxide, cross-sensitivity can be a problem because ethylene oxide requires a very active working electrode catalyst and high operating potential for its oxidation. Therefore, gases that are more easily oxidized, such as alcohols and carbon monoxide will also give a response. Cross-sensitivity problems can be eliminated through the use of a chemical filter, for example, filters that allow the target gas to pass through unimpeded but which reacts with and removes common interferences.
While electrochemical sensors offer many advantages, they are not suitable for every gas. Since the detection mechanism involves the oxidation or reduction of the gas, electrochemical sensors are usually only suitable for electrochemically active gases, though it is possible to detect electrochemically inert gases indirectly if the gas interacts with another species in the sensor that then produces a response. Sensors for carbon dioxide are an example of this approach and they have been commercially available for several years.
Cross-sensitivity of electronic chemical sensors may also be utilized to design chemical sensor arrays, which utilize a variety of specific sensors that are cross-reactive for fingerprint detection of target gases in complex mixtures.
See also
Carbon monoxide detector
Karlsruhe Institute of Technology (KIT) - Forschungsstelle für Brandschutztechnik: KAMINA - gas sensor microarrays for rapid smoke analysis
Gas diffusion electrode
References
Gas sensors
Measuring instruments
Safety equipment | Electrochemical gas sensor | [
"Technology",
"Engineering"
] | 872 | [
"Measuring instruments"
] |
17,561,681 | https://en.wikipedia.org/wiki/SAGEM%20Sigma%2030 | The Sigma 30 is an inertial navigation system produced by SAGEM for use with artillery applications including howitzers, multiple rocket launchers, mortars and light guns. It is currently produced for more than 40 international programs, including France (CAESAR, 2R2M, M270 MLRS), Serbia (Nora B 52), Sweden (FH77 BD, Archer), Germany (PzH2000, M270 MLRS), Italy (M270 MLRS), India (Pinaka MBRL), Polish PT-91M tank (build for Malaysia), and the United States (topographic survey).
Sigma 30 can also be integrated in more complex systems (Positioning and Azimuth Determination System).
References
External links
Jane's
Sagem Défense Sécurité Navigation Unit website
Avionics
Missile guidance
Navigational equipment | SAGEM Sigma 30 | [
"Technology"
] | 176 | [
"Avionics",
"Aircraft instruments"
] |
617,307 | https://en.wikipedia.org/wiki/Rotameter | A rotameter is a device that measures the volumetric flow rate of fluid in a closed tube.
It belongs to a class of meters called variable-area flowmeters, which measure flow rate by allowing the cross-sectional area the fluid travels through to vary, causing a measurable effect.
History
The first variable area meter with rotating float was invented by Karl Kueppers (1874–1933) in Aachen in 1908. This is described in the German patent 215225. Felix Meyer founded the company "Deutsche Rotawerke GmbH" in Aachen recognizing the fundamental importance of this invention. They improved this invention with new shapes of the float and of the glass tube. Kueppers invented the special shape for the inside of the glass tube that realized a symmetrical flow scale.
The brand name Rotameter was registered by the British company GEC Elliot automation, Rotameter Co. In many other countries the brand name Rotameter is registered by Rota Yokogawa GmbH & Co. KG in Germany which is now owned by Yokogawa Electric Corp.
Description
A rotameter consists of a tapered tube, typically made of glass, with a 'float' inside (a shaped weight, made either of anodized aluminum or a ceramic) that is pushed up by the drag force of the flow and pulled down by gravity. The drag force for a given fluid and float cross section is a function of flow speed squared only; see the drag equation.
A higher volumetric flow rate through a given area increases flow speed and drag force, so the float will be pushed upwards. However, as the inside of the rotameter is cone shaped (widens), the area around the float through which the medium flows increases, the flow speed and drag force decrease until there is mechanical equilibrium with the float's weight.
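That equilibrium can be turned into a back-of-the-envelope flow estimate: setting drag equal to the float's weight minus buoyancy gives the flow speed past the float, and multiplying by the annular area gives the volumetric rate. The geometry, densities, and drag coefficient below are hypothetical round numbers, not data for any real rotameter.

```python
import math

# Float equilibrium: drag = (rho_float - rho_fluid) * V_float * g.
# Solving the drag equation for speed and multiplying by the annular
# flow area around the float gives the volumetric flow rate.

def flow_rate(annulus_area, float_volume, float_area, rho_float, rho_fluid,
              drag_coeff=0.8, g=9.81):
    """Volumetric flow rate (m^3/s) at which the float is in equilibrium."""
    v = math.sqrt(2.0 * float_volume * g * (rho_float - rho_fluid)
                  / (drag_coeff * float_area * rho_fluid))
    return v * annulus_area

# Hypothetical aluminium float in water.
q = flow_rate(annulus_area=5e-5, float_volume=2e-6, float_area=1e-4,
              rho_float=2700.0, rho_fluid=1000.0)
print(f"{q * 60_000:.2f} L/min")
```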
Floats are made in many different shapes, with spheres and ellipsoids being the most common. The float may be diagonally grooved and partially colored so that it rotates axially as the fluid passes. This shows if the float is stuck since it will only rotate if it is free. Readings are usually taken at the top of the widest part of the float; the center for an ellipsoid, or the top for a cylinder. Some manufacturers use a different standard.
The "float" must not float in the fluid: it has to have a higher density than the fluid, otherwise it will float to the top even if there is no flow.
The mechanical nature of the measuring principle provides a flow measurement device that does not require any electrical power. If the tube is made of metal, the float position is transferred to an external indicator via a magnetic coupling. This capability has considerably expanded the range of applications for the variable area flowmeter, since the measurement can observed remotely from the process or used for automatic control.
Advantages
A rotameter requires no external power or fuel, it uses only the inherent properties of the fluid, along with gravity, to measure flow rate.
A rotameter is also a relatively simple device that can be mass manufactured out of cheap materials, allowing for its widespread use.
Since the area of the flow passage increases as the float moves up the tube, the scale is approximately linear.
Clear glass is used which is highly resistant to thermal shock and chemical action.
Disadvantages
Due to its reliance on the ability of the fluid or gas to displace the float, graduations on a given rotameter will only be accurate for a given substance at a given temperature. The main property of importance is the density of the fluid; however, viscosity may also be significant. Floats are ideally designed to be insensitive to viscosity; however, this is seldom verifiable from manufacturers' specifications. Either separate rotameters for different densities and viscosities may be used, or multiple scales on the same rotameter can be used.
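A standard density correction follows from the same float-equilibrium balance, letting a reading taken on a scale calibrated for one fluid be rescaled for a fluid of different density. The sketch assumes a fixed float density and negligible viscosity effects; the numbers are hypothetical.

```python
import math

# Rescaling an indicated flow for a fluid density other than the one the
# scale was calibrated for (float density rho_float unchanged).

def corrected_flow(q_indicated, rho_cal, rho_actual, rho_float):
    return q_indicated * math.sqrt(
        rho_cal * (rho_float - rho_actual) / (rho_actual * (rho_float - rho_cal)))

# Scale calibrated for water (1000 kg/m^3), used with a 900 kg/m^3 oil and
# a stainless-steel float (8000 kg/m^3):
print(f"{corrected_flow(10.0, 1000.0, 900.0, 8000.0):.2f} L/min")  # ~10.62
```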
Because operation of a rotameter depends on the force of gravity for operation, a rotameter must be oriented vertically. Significant error can result if the orientation deviates significantly from the vertical.
Due to the direct flow indication the resolution is relatively poor compared to other measurement principles. Readout uncertainty gets worse near the bottom of the scale. Oscillations of the float and parallax may further increase the uncertainty of the measurement.
Since the float must be read through the flowing medium, some fluids may obscure the reading. A transducer may be required for electronically measuring the position of the float.
Rotameters are not easily adapted for reading by machine, although magnetic floats that drive a follower outside the tube are available.
Rotameters are not generally manufactured in sizes greater than 6 inches/150 mm, but bypass designs are sometimes used on very large pipes.
See also
Thorpe tube flowmeter
References
External links
Rota Yokogawa GmbH & Co. KG: Rotameter measuring devices
Rota Yokogawa GmbH & Co. KG: Company history of the founder of Rotameter
eFunda: Introduction to Variable Area Flowmeters
KROHNE: Measuring Principle
Fluid dynamics
Flow meters | Rotameter | [
"Chemistry",
"Technology",
"Engineering"
] | 1,033 | [
"Chemical engineering",
"Measuring instruments",
"Piping",
"Fluid dynamics",
"Flow meters"
] |
617,379 | https://en.wikipedia.org/wiki/Retrotransposon | Retrotransposons (also called Class I transposable elements) are mobile elements which move in the host genome by converting their transcribed RNA into DNA through reverse transcription. Thus, they differ from Class II transposable elements, or DNA transposons, in utilizing an RNA intermediate for the transposition and leaving the transposition donor site unchanged.
Through reverse transcription, retrotransposons amplify themselves quickly to become abundant in eukaryotic genomes such as maize (49–78%) and humans (42%). They are only present in eukaryotes but share features with retroviruses such as HIV, for example, discontinuous reverse transcriptase-mediated extrachromosomal recombination.
There are two main types of retrotransposons, long terminal repeats (LTRs) and non-long terminal repeats (non-LTRs). Retrotransposons are classified based on sequence and method of transposition. Most retrotransposons in the maize genome are LTR, whereas in humans they are mostly non-LTR.
LTR retrotransposons
LTR retrotransposons are characterized by their long terminal repeats (LTRs), which are present at both the 5' and 3' ends of their sequences. These LTRs contain the promoters for these transposable elements (TEs), are essential for TE integration, and can vary in length from just over 100 base pairs (bp) to more than 1,000 bp. On average, LTR retrotransposons span several thousand base pairs, with the largest known examples reaching up to 30 kilobases (kb).
LTRs are highly functional sequences, and for that reason LTR and non-LTR retrotransposons differ greatly in their reverse transcription and integration mechanisms. Non-LTR retrotransposons use a target-primed reverse transcription (TPRT) process, which requires the RNA of the TE to be brought to the cleavage site of the retrotransposon's integrase, where it is reverse transcribed. In contrast, LTR retrotransposons undergo reverse transcription in the cytoplasm, utilizing two rounds of template switching and the formation of a pre-integration complex (PIC) composed of double-stranded DNA and an integrase dimer bound to LTRs. This complex then moves into the nucleus for integration into a new genomic location.
LTR retrotransposons typically encode the proteins gag and pol, which may be combined into a single open reading frame (ORF) or separated into distinct ORFs. Similar to retroviruses, the gag protein is essential for capsid assembly and the packaging of the TE's RNA and associated proteins. The pol protein is necessary for reverse transcription and includes these crucial domains: PR (protease), RT (reverse transcriptase), RH (RNase H), and INT (integrase). Additionally, some LTR retrotransposons have an ORF for an envelope (env) protein that is incorporated into the assembled capsid, facilitating attachment to cellular surfaces.
Endogenous retrovirus
An endogenous retrovirus is a retrovirus without viral pathogenic effects that has been integrated into the host genome by inserting its inheritable genetic information into cells that can be passed on to the next generation, like a retrotransposon. Because of this, endogenous retroviruses share features with both retroviruses and retrotransposons. When retroviral DNA is integrated into the host genome it evolves into an endogenous retrovirus that influences the eukaryotic genome. So many endogenous retroviruses have inserted themselves into eukaryotic genomes that they provide insight into virus–host interactions and into the role of retrotransposons in evolution and disease.
Many retrotransposons share features with endogenous retroviruses, notably the ability to recognise and integrate into the host genome. However, a key difference between retroviruses and retrotransposons is indicated by the env gene: although similar to the gene carrying out the same function in retroviruses, the env gene is used to determine whether an element is retroviral or a retrotransposon, and an element carrying a functional env gene can evolve from a retrotransposon into a retrovirus. They also differ in the order of sequences in their pol genes. Env genes are found in the LTR retrotransposon types Ty1-copia (Pseudoviridae), Ty3-gypsy (Metaviridae) and BEL/Pao. They encode glycoproteins on the retrovirus envelope needed for entry into the host cell. Retroviruses can move between cells, whereas LTR retrotransposons can only copy themselves into the genome of the same cell. The observation that a given endogenous retrovirus or LTR retrotransposon has the same function and genomic location in different species suggests a role in evolution.
Non-LTR retrotransposons
Like LTR retrotransposons, non-LTR retrotransposons contain genes for reverse transcriptase, RNA-binding protein, nuclease, and sometimes a ribonuclease H domain, but they lack the long terminal repeats. RNA-binding proteins bind the RNA transposition intermediate, and nucleases are enzymes that break phosphodiester bonds between nucleotides in nucleic acids. Instead of LTRs, non-LTR retrotransposons have short repeats that can be inverted relative to each other, as opposed to the direct repeats of LTR retrotransposons, in which a single sequence of bases repeats itself.
Although they are retrotransposons, they cannot carry out reverse transcription using an RNA transposition intermediate in the same way as LTR retrotransposons. Those two key components of the retrotransposon are still necessary but the way they are incorporated into the chemical reactions is different. This is because unlike LTR retrotransposons, non-LTR retrotransposons do not contain sequences that bind tRNA.
They mostly fall into two types – LINEs (Long interspersed nuclear elements) and SINEs (Short interspersed nuclear elements). SVA elements are the exception between the two as they share similarities with both LINEs and SINEs, containing Alu elements and different numbers of the same repeat. SVAs are shorter than LINEs but longer than SINEs.
While historically viewed as "junk DNA", research suggests that in some cases both LINEs and SINEs were incorporated into novel genes to form new functions.
LINEs
When a LINE is transcribed, the transcript contains an RNA polymerase II promoter that ensures LINEs can be copied into whichever location it inserts itself into. RNA polymerase II is the enzyme that transcribes genes into mRNA transcripts. The ends of LINE transcripts are rich in multiple adenines, the bases that are added at the end of transcription so that LINE transcripts would not be degraded. This transcript is the RNA transposition intermediate.
The RNA transposition intermediate moves from the nucleus into the cytoplasm for translation. This produces the proteins encoded by the LINE's two coding regions, which in turn bind back to the RNA they were translated from. The LINE RNA then moves back into the nucleus to insert into the eukaryotic genome.
LINEs insert themselves into regions of the eukaryotic genome that are rich in the bases A and T. At an AT-rich region the LINE uses its nuclease to cut one strand of the eukaryotic double-stranded DNA. The adenine-rich sequence in the LINE transcript base-pairs with the cut strand, marking the insertion site with exposed hydroxyl groups. Reverse transcriptase recognises these hydroxyl groups and synthesises the LINE retrotransposon where the DNA is cut. As with LTR retrotransposons, this newly inserted LINE contains eukaryotic genome information, so it can be copied and pasted into other genomic regions easily. The information sequences are longer and more variable than those in LTR retrotransposons.
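As a toy illustration of locating candidate AT-rich target sites, the scan below searches one strand for the hexamer TTAAAA, a commonly cited L1 endonuclease consensus; treating it as a fixed motif is an assumption for illustration, since real target selection is considerably looser.

```python
# Naive scan for a nicking motif on one strand of a DNA sequence.

def candidate_sites(seq: str, motif: str = "TTAAAA"):
    """Return 0-based start positions where the motif occurs."""
    seq, motif = seq.upper(), motif.upper()
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

print(candidate_sites("CGTTAAAAGGATTTAAAACC"))  # [2, 12] (made-up sequence)
```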
Most LINE copies have variable length at the start because reverse transcription usually stops before DNA synthesis is complete. In some cases this causes RNA polymerase II promoter to be lost so LINEs cannot transpose further.
Human L1
LINE-1 (L1) retrotransposons make up a significant portion of the human genome, with an estimated 500,000 copies per genome. Genes encoding human LINE1 usually have their transcription inhibited by methyl groups bound to their DNA, deposited by PIWI proteins and DNA methyltransferase enzymes. L1 retrotransposition can disrupt the genes it lands in or near by inserting copies inside or close to them, which can in turn lead to human disease. In some cases L1 retrotransposition produces new chromosome structures, contributing to genetic differences between individuals. There is an estimate of 80–100 active L1s in the reference genome of the Human Genome Project, and an even smaller number of those active L1s retrotranspose often. L1 insertions have been associated with tumorigenesis by activating cancer-related genes (oncogenes) and diminishing tumor suppressor genes.
Each human LINE1 contains two regions from which gene products can be encoded. The first coding region contains a leucine zipper protein involved in protein-protein interactions and a protein that binds to the terminus of nucleic acids. The second coding region has a purine/pyrimidine nuclease, reverse transcriptase and protein rich in amino acids cysteines and histidines. The end of the human LINE1, as with other retrotransposons is adenine-rich.
Human L1 actively retrotransposes in the human genome. A recent study identified 1,708 somatic L1 retrotransposition events, especially in colorectal epithelial cells. These events occur from early embryogenesis and retrotransposition rate is substantially increased during colorectal tumourigenesis.
SINEs
SINEs are much shorter (300bp) than LINEs. They share similarity with genes transcribed by RNA polymerase II, the enzyme that transcribes genes into mRNA transcripts, and the initiation sequence of RNA polymerase III, the enzyme that transcribes genes into ribosomal RNA, tRNA and other small RNA molecules. SINEs such as mammalian MIR elements have tRNA gene at the start and adenine-rich at the end like in LINEs.
SINEs do not encode a functional reverse transcriptase protein and rely on other mobile transposons, especially LINEs. SINEs exploit LINE transposition components, even though LINE-binding proteins preferentially bind LINE RNA. SINEs cannot transpose by themselves because they cannot encode the necessary proteins. They usually consist of parts derived from tRNA and from LINEs. The tRNA-derived portion contains an RNA polymerase III promoter, which ensures that copies of the element are transcribed into RNA for further transposition. The LINE-derived component remains so that LINE-binding proteins can recognise the LINE part of the SINE.
Alu elements
Alus are the most common SINE in primates. They are approximately 350 base pairs long, do not encode proteins, and can be recognized by the restriction enzyme AluI (hence the name). Their distribution may be important in some genetic diseases and cancers. Copying and pasting Alu RNA requires the Alu's adenine-rich end, with the rest of the sequence bound to a signal. The signal-bound Alu can then associate with ribosomes. LINE RNA associates with the same ribosomes as the Alu. Binding to the same ribosome allows the Alu of a SINE to interact with the LINE. This simultaneous translation of the Alu element and LINE allows SINE copying and pasting.
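Since Alu elements take their name from the AluI recognition site, a minimal scan for that four-base sequence (AGCT) looks like the following; the input sequence is made up for the example.

```python
# Find AluI recognition sites (AGCT) in a DNA sequence.

def alui_sites(seq: str):
    """Return 0-based positions where the AluI site AGCT begins."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 3) if seq[i:i + 4] == "AGCT"]

example = "GGCCAGCTTTAGCTAACAGCTGG"  # made-up sequence
print(alui_sites(example))  # [4, 10, 17]
```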
SVA elements
SVA elements are present at lower levels than SINES and LINEs in humans. The starts of SVA and Alu elements are similar, followed by repeats and an end similar to endogenous retrovirus. LINEs bind to sites flanking SVA elements to transpose them. SVA are one of the youngest transposons in great apes genome and among the most active and polymorphic in the human population. SVA was created by a fusion between an Alu element, a VNTR (variable number tandem repeat), and an LTR fragment.
Role in human disease
Retrotransposons ensure they are not lost by chance by occurring only in the genetic material of cells that can be passed on from one generation to the next, i.e. the parental gametes. However, LINEs can transpose in the human embryo cells that eventually develop into the nervous system, raising the question of whether this LINE retrotransposition affects brain function. LINE retrotransposition is also a feature of several cancers, but it is unclear whether retrotransposition itself causes cancer or is just a symptom. Uncontrolled retrotransposition is bad both for the host organism and for the retrotransposons themselves, so it has to be regulated. Retrotransposons are regulated by RNA interference, which is carried out by a set of short non-coding RNAs. The short non-coding RNAs interact with the Argonaute protein to degrade retrotransposon transcripts and to change the histone structure of their DNA, reducing their transcription.
Role in evolution
LTR retrotransposons came about later than non-LTR retrotransposons, possibly from an ancestral non-LTR retrotransposon acquiring an integrase from a DNA transposon. Retroviruses gained additional properties to their virus envelopes by taking the relevant genes from other viruses using the power of LTR retrotransposon.
Due to their retrotransposition mechanism, retrotransposons amplify in number quickly, composing 40% of the human genome. The insertion rates for LINE1, Alu and SVA elements are 1/200 – 1/20, 1/20 and 1/900 respectively. The LINE1 insertion rates have varied a lot over the past 35 million years, so they indicate points in genome evolution.
Notably a large number of 100 kilobases in the maize genome show variety due to the presence or absence of retrotransposons. However since maize is unusual genetically as compared to other plants it cannot be used to predict retrotransposition in other plants.
Mutations caused by retrotransposons include:
Gene inactivation
Changing gene regulation
Changing gene products
Acting as DNA repair sites
Role in biotechnology
See also
Copy-number variation
Genomic organization
Insertion sequences
Interspersed repeat
Paleogenetics
Paleovirology
RetrOryza
Retrotransposon markers, a powerful method of reconstructing phylogenies.
Tn3 transposon
Transposon
Retron
References
Mobile genetic elements
Molecular biology
Non-coding DNA | Retrotransposon | [
"Chemistry",
"Biology"
] | 3,110 | [
"Biochemistry",
"Molecular genetics",
"Mobile genetic elements",
"Molecular biology"
] |
617,565 | https://en.wikipedia.org/wiki/GenBank | The GenBank sequence database is an open access, annotated collection of all publicly available nucleotide sequences and their protein translations. It is produced and maintained by the National Center for Biotechnology Information (NCBI; a part of the National Institutes of Health in the United States) as part of the International Nucleotide Sequence Database Collaboration (INSDC).
In October 2024, GenBank contained 34 trillion base pairs from over 4.7 billion nucleotide sequences and more than 580,000 formally described species.
The database was started in 1982 by Walter Goad at Los Alamos National Laboratory. GenBank has become an important database for research in biological fields and has grown in recent years at an exponential rate, doubling roughly every 18 months.
GenBank is built by direct submissions from individual laboratories, as well as from bulk submissions from large-scale sequencing centers.
Submissions
Only original sequences can be submitted to GenBank. Direct submissions are made to GenBank using BankIt, which is a Web-based form, or the stand-alone submission program, Sequin. Upon receipt of a sequence submission, the GenBank staff examines the originality of the data and assigns an accession number to the sequence and performs quality assurance checks. The submissions are then released to the public database, where the entries are retrievable by Entrez or downloadable by FTP. Bulk submissions of Expressed Sequence Tag (EST), Sequence-tagged site (STS), Genome Survey Sequence (GSS), and High-Throughput Genome Sequence (HTGS) data are most often submitted by large-scale sequencing centers. The GenBank direct submissions group also processes complete microbial genome sequences.
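Once released, entries can also be fetched programmatically through Entrez. The sketch below uses Biopython's Entrez and SeqIO wrappers (a third-party library, not part of GenBank itself); the accession number and e-mail address are placeholders.

```python
# Fetch a GenBank flat-file record and parse it with Biopython.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI asks callers to identify themselves

# U49845 is a commonly used NCBI sample accession; substitute your own.
handle = Entrez.efetch(db="nucleotide", id="U49845", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp")
print(record.description)
```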
History
Walter Goad of the Theoretical Biology and Biophysics Group at Los Alamos National Laboratory (LANL) and others established the Los Alamos Sequence Database in 1979, which culminated in 1982 with the creation of the public GenBank. Funding was provided by the National Institutes of Health, the National Science Foundation, the Department of Energy, and the Department of Defense. LANL collaborated on GenBank with the firm Bolt, Beranek, and Newman, and by the end of 1983 more than 2,000 sequences were stored in it.
In the mid-1980s, the Intelligenetics bioinformatics company at Stanford University managed the GenBank project in collaboration with LANL. As one of the earliest bioinformatics community projects on the Internet, the GenBank project started BIOSCI/Bionet news groups for promoting open access communications among bioscientists. During 1989 to 1992, the GenBank project transitioned to the newly created National Center for Biotechnology Information (NCBI).
Growth
The GenBank release notes for release 250.0 (June 2022) state that "from 1982 to the present, the number of bases in GenBank has doubled approximately every 18 months". As of 15 June 2022, GenBank release 250.0 contains over 239 million loci and 1.39 trillion nucleotide bases from 239 million reported sequences.
The GenBank database includes additional data sets that are constructed mechanically from the main sequence data collection, and therefore are excluded from this count.
Limitations
An analysis of GenBank and other services for the molecular identification of clinical blood culture isolates using 16S rRNA sequences showed that such analyses were more discriminative when GenBank was combined with other services such as EzTaxon-e and the BIBI databases.
GenBank may contain sequences wrongly assigned to a particular species, because the initial identification of the organism was wrong. A recent study showed that 75% of mitochondrial Cytochrome c oxidase subunit I sequences were wrongly assigned to the fish Nemipterus mesoprion resulting from continued usage of sequences of initially misidentified individuals. The authors provide recommendations how to avoid further distribution of publicly available sequences with incorrect scientific names.
Numerous published manuscripts have identified erroneous sequences on GenBank. These are not only incorrect species assignments (which can have different causes) but also include chimeras and accession records with sequencing errors. A recent manuscript on the quality of all Cytochrome b records of birds further showed that 45% of the identified erroneous records lack a voucher specimen that prevents a reassessment of the species identification.
Another problem is that sequence records are often submitted as anonymous sequences without species names (e.g., as "Pelomedusa sp. A CK-2014") because the species are either unknown or withheld for publication purposes. However, even after the species have been identified or published, these sequence records are often not updated and thus may cause ongoing confusion.
See also
Ensembl
Human Protein Reference Database (HPRD)
Sequence analysis
UniProt
List of sequenced eukaryotic genomes
List of sequenced archaeal genomes
RefSeq — the Reference Sequence Database
Geneious — includes a GenBank Submission Tool
Open science data
Open Standard
References
External links
GenBank
Example sequence record, for hemoglobin beta
BankIt
Sequin — a stand-alone software tool developed by the NCBI for submitting and updating entries to the GenBank sequence database.
EMBOSS — free, open source software for molecular biology
GenBank, RefSeq, TPA and UniProt: What's in a Name?
National Institutes of Health
Genetics databases
Genome databases
Bioinformatics
Biological databases | GenBank | [
"Engineering",
"Biology"
] | 1,094 | [
"Bioinformatics",
"Biological engineering",
"Biological databases"
] |
617,624 | https://en.wikipedia.org/wiki/Cerussite | Cerussite (also known as lead carbonate or white lead ore) is a mineral consisting of lead carbonate with the chemical formula PbCO3, and is an important ore of lead. The name is from the Latin cerussa, white lead. Cerussa nativa was mentioned by Conrad Gessner in 1565, and in 1832 F. S. Beudant applied the name céruse to the mineral, whilst the present form, cerussite, is due to W. Haidinger (1845). Miners' names in early use were lead-spar and white-lead-ore.
Cerussite crystallizes in the orthorhombic crystal system and is isomorphous with aragonite. Like aragonite it is very frequently twinned, the compound crystals being pseudo-hexagonal in form. Three crystals are usually twinned together on two faces of the prism, producing six-rayed stellate groups with the individual crystals intercrossing at angles of nearly 60°. Crystals are of frequent occurrence and they usually have very bright and smooth faces. The mineral also occurs in compact granular masses, and sometimes in fibrous forms. The mineral is usually colorless or white, sometimes grey or greenish in tint and varies from transparent to translucent with an adamantine lustre. It is very brittle, and has a conchoidal fracture. It has a Mohs hardness of 3 to 3.75 and a specific gravity of 6.5. A variety containing 7% of zinc carbonate, replacing lead carbonate, is known as iglesiasite, from Iglesias in Sardinia, where it is found.
The mineral may be readily recognized by its characteristic twinning, in conjunction with the adamantine lustre and high specific gravity. It dissolves with effervescence in dilute nitric acid. A blowpipe test will cause it to fuse very readily, and gives indications for lead.
Finely crystallized specimens have been obtained from the Friedrichssegen mine in Lahnstein in Rhineland-Palatinate, Johanngeorgenstadt in Saxony, Stříbro in the Czech Republic, Phoenixville in Pennsylvania, Broken Hill in New South Wales, and several other localities. Delicate acicular crystals of considerable length were found long ago in the Pentire Glaze mine near St Minver in Cornwall. Cerussite is often found in considerable quantities, and has a lead content of up to 77.5%.
Lead(II) carbonate is practically insoluble in neutral water (solubility product Ksp = [Pb²⁺][CO₃²⁻] ≈ 1.5×10⁻¹³ at 25 °C), but will dissolve in dilute acids.
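For a rough sense of scale, the quoted solubility product implies a molar solubility of (neglecting carbonate protonation and other side equilibria, so this is only an order-of-magnitude sketch):

```latex
s = \sqrt{K_{\mathrm{sp}}} = \sqrt{1.5\times10^{-13}} \approx 3.9\times10^{-7}\ \mathrm{mol\,L^{-1}}
```

With a molar mass of about 267 g/mol for PbCO3, this corresponds to roughly 1×10⁻⁴ g/L (about 0.1 mg/L), consistent with "practically insoluble".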
Commercial uses
"White lead" is the key ingredient in (now discontinued) lead paints. Ingestion of lead-based paint chips is the most common cause of lead poisoning in children.
Both "white lead" and lead acetate have been used in cosmetics throughout history, though this practice has ceased in Western countries.
Gallery
See also
Venetian ceruse – Cerussite-based cosmetic popularly thought to be worn by Elizabeth I of England
References
External links
Mineral galleries
Carbonate minerals
Gemstones
Lead minerals
Luminescent minerals
Minerals in space group 62
Orthorhombic minerals
Aragonite group
Minerals described in 1845 | Cerussite | [
"Physics",
"Chemistry"
] | 676 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
617,777 | https://en.wikipedia.org/wiki/Electron%20deficiency | In chemistry, electron deficiency (and electron-deficient) is jargon that is used in two contexts: chemical species that violate the octet rule because they have too few valence electrons and species that happen to follow the octet rule but have electron-acceptor properties, forming donor-acceptor charge-transfer salts.
Octet rule violations
Traditionally, "electron-deficiency" is used as a general descriptor for boron hydrides and other molecules which do not have enough valence electrons to form localized (2-centre 2-electron) bonds joining all atoms. For example, diborane (B2H6) would require a minimum of 7 localized bonds with 14 electrons to join all 8 atoms, but there are only 12 valence electrons. A similar situation exists in trimethylaluminium. The electron deficiency in such compounds is similar to metallic bonding.
Electron-acceptor molecules
Alternatively, electron-deficiency describes molecules or ions that function as electron acceptors. Such electron-deficient species obey the octet rule, but they have (usually mild) oxidizing properties. 1,3,5-Trinitrobenzene and related polynitrated aromatic compounds are often described as electron-deficient. Electron deficiency can be measured by linear free-energy relationships: "a strongly negative ρ value indicates a large electron demand at the reaction center, from which it may be concluded that a highly electron-deficient center, perhaps an incipient carbocation, is involved."
References
Chemical bonding | Electron deficiency | [
"Physics",
"Chemistry",
"Materials_science"
] | 315 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
618,076 | https://en.wikipedia.org/wiki/Opticks | Opticks: or, A Treatise of the Reflexions, Refractions, Inflexions and Colours of Light is a collection of three books by Isaac Newton that was published in English in 1704 (a scholarly Latin translation appeared in 1706). The treatise analyzes the fundamental nature of light by means of the refraction of light with prisms and lenses, the diffraction of light by closely spaced sheets of glass, and the behaviour of color mixtures with spectral lights or pigment powders. Opticks was Newton's second major work on physical science and it is considered one of the three major works on optics during the Scientific Revolution (alongside Johannes Kepler's Astronomiae Pars Optica and Christiaan Huygens' Treatise on Light).
Overview
The publication of Opticks represented a major contribution to science, different from but in some ways rivalling the Principia, yet Isaac Newton's name did not appear on the cover page of the first edition. Opticks is largely a record of experiments and the deductions made from them, covering a wide range of topics in what was later to be known as physical optics. That is, this work is not a geometric discussion of catoptrics or dioptrics, the traditional subjects of reflection of light by mirrors of different shapes and the exploration of how light is "bent" as it passes from one medium, such as air, into another, such as water or glass. Rather, the Opticks is a study of the nature of light and colour and the various phenomena of diffraction, which Newton called the "inflexion" of light.
Newton sets forth in full his experiments, first reported to the Royal Society of London in 1672, on dispersion, or the separation of light into a spectrum of its component colours. He demonstrates how the appearance of color arises from selective absorption, reflection, or transmission of the various component parts of the incident light.
The major significance of Newton's work is that it overturned the dogma, attributed to Aristotle or Theophrastus and accepted by scholars in Newton's time, that "pure" light (such as the light attributed to the Sun) is fundamentally white or colourless, and is altered into color by mixture with darkness caused by interactions with matter. Newton showed the opposite was true: light is composed of different spectral hues (he describes seven – red, orange, yellow, green, blue, indigo and violet), and all colours, including white, are formed by various mixtures of these hues. He demonstrates that color arises from a physical property of light – each hue is refracted at a characteristic angle by a prism or lens – but he clearly states that color is a sensation within the mind and not an inherent property of material objects or of light itself. For example, he demonstrates that a red violet (magenta) color can be mixed by overlapping the red and violet ends of two spectra, although this color does not appear in the spectrum and therefore is not a "color of light". By connecting the red and violet ends of the spectrum, he organised all colours as a color circle that both quantitatively predicts color mixtures and qualitatively describes the perceived similarity among hues.
Newton's contribution to prismatic dispersion was the first to outline multiple-prism arrays. Multiple-prism configurations, as beam expanders, became central to the design of the tunable laser more than 275 years later and set the stage for the development of the multiple-prism dispersion theory.
Comparison to the Principia
Opticks differs in many respects from the Principia. It was first published in English rather than in the Latin used by European philosophers, contributing to the development of a vernacular science literature. The books were a model of popular science exposition: although Newton's English is somewhat dated (he shows a fondness for lengthy sentences with many embedded qualifications), the book can still be easily understood by a modern reader. In contrast, few readers of Newton's time found the Principia accessible or even comprehensible. His formal but flexible style shows colloquialisms and metaphorical word choice.
Unlike the Principia, Opticks is not developed using the geometric convention of propositions proved by deduction from either previous propositions, lemmas or first principles (or axioms). Instead, axioms define the meaning of technical terms or fundamental properties of matter and light, and the stated propositions are demonstrated by means of specific, carefully described experiments. The first sentence of Book I declares "My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiments." In an Experimentum crucis or "critical experiment" (Book I, Part II, Theorem ii), Newton showed that the color of light corresponded to its "degree of refrangibility" (angle of refraction), and that this angle cannot be changed by additional reflection or refraction or by passing the light through a coloured filter.
The work is a vade mecum of the experimenter's art, displaying in many examples how to use observation to propose factual generalisations about the physical world and then exclude competing explanations by specific experimental tests. Unlike the Principia, which vowed "Hypotheses non fingo" ("I frame no hypotheses") outside the deductive method, the Opticks develops conjectures about light that go beyond the experimental evidence: for example, that the physical behaviour of light was due to its "corpuscular" nature as small particles, or that perceived colours were harmonically proportioned like the tones of a diatonic musical scale.
Queries
Newton originally considered writing four books, but he dropped the last book, on action at a distance. Instead he concluded Opticks with a set of unanswered questions and positive assertions, referred to as queries, in Book III. The first queries were brief, but the later ones became short essays, filling many pages. In the first edition there were sixteen such queries; the number was increased to 23 in the Latin edition, published in 1706, and then in the revised English edition, published in 1717/18. In the fourth edition of 1730, there were 31 queries.
These queries, especially the later ones, deal with a wide range of physical phenomena that go beyond the topic of optics. They concern the nature and transmission of heat; the possible cause of gravity; electrical phenomena; the nature of chemical action; the way in which God created matter; the proper way to do science; and even the ethical conduct of human beings. These queries are not really questions in the ordinary sense; almost all are posed in the negative, as rhetorical questions. That is, Newton does not ask whether light "is" or "may be" a "body." Rather, he declares: "Is not Light a Body?" Stephen Hales, a firm Newtonian of the early eighteenth century, declared that this was Newton's way of explaining "by Quaere."
The first query reads: "Do not Bodies act upon Light at a distance, and by their action bend its Rays; and is not this action (caeteris paribus) strongest at the least distance?", anticipating an effect of gravity on the trajectory of light rays. This query predates the prediction of gravitational lensing by Albert Einstein's general relativity by two centuries; the effect was later confirmed by the Eddington experiment in 1919. The first part of query 30 reads "Are not gross Bodies and Light convertible into one another", thereby anticipating mass–energy equivalence. Query 6 of the book reads "Do not black Bodies conceive heat more easily from Light than those of other Colours do, by reason that the Light falling on them is not reflected outwards, but enters into the Bodies, and is often reflected and refracted within them, until it be stifled and lost?", thereby introducing the concept of a black body.
The last query (number 31) wonders whether a corpuscular theory could explain how different substances react more with certain substances than with others, in particular how aqua fortis (nitric acid) reacts more with calamine than with iron. This 31st query has often been linked to the origin of the concept of affinity in chemical reactions. Various 18th-century historians and chemists, such as William Cullen and Torbern Bergman, credited Newton with the development of affinity tables.
Reception
The Opticks was widely read and debated in England and on the Continent. The early presentation of the work to the Royal Society stimulated a bitter dispute between Newton and Robert Hooke over the "corpuscular" or particle theory of light, which prompted Newton to postpone publication of the work until after Hooke's death in 1703. On the Continent, and in France in particular, both the Principia and the Opticks were initially rejected by many natural philosophers, who continued to defend Cartesian natural philosophy and the Aristotelian version of color, and claimed to find Newton's prism experiments difficult to replicate. Indeed, the Aristotelian theory of the fundamental nature of white light was defended into the 19th century, for example by the German writer Johann Wolfgang von Goethe in his 1810 Theory of Colours (Zur Farbenlehre).
Newtonian science became a central issue in the assault waged by the philosophes in the Age of Enlightenment against a natural philosophy based on the authority of ancient Greek or Roman naturalists or on deductive reasoning from first principles (the method advocated by French philosopher René Descartes), rather than on the application of mathematical reasoning to experience or experiment. Voltaire popularised Newtonian science, including the content of both the Principia and the Opticks, in his Éléments de la philosophie de Newton (1738), and after about 1750 the combination of the experimental methods exemplified by the Opticks and the mathematical methods exemplified by the Principia was established as a unified and comprehensive model of Newtonian science. Some of the primary adepts in this new philosophy were such prominent figures as Benjamin Franklin, Antoine-Laurent Lavoisier, and Joseph Black.
Subsequent to Newton, much has been amended. Thomas Young and Augustin-Jean Fresnel showed that the wave theory Christiaan Huygens described in his Treatise on Light (1690) could prove that colour is the visible manifestation of light's wavelength. Science also slowly came to recognize the difference between perception of colour and mathematisable optics. The German poet Goethe, with his epic diatribe Theory of Colours, could not shake the Newtonian foundation – but "one hole Goethe did find in Newton's armour... Newton had committed himself to the doctrine that refraction without colour was impossible. He therefore thought that the object-glasses of telescopes must for ever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong." (John Tyndall, 1880)
See also
Color theory
Luminiferous aether
Prism (optics)
Theory of Colours
Book of Optics (Ibn al-Haytham)
Elements of the Philosophy of Newton (Voltaire)
Multiple-prism dispersion theory
Notes
References
External links
Full and free online editions of Newton's Opticks
Rarebookroom, First edition
ETH-Bibliothek, First edition
Gallica, First edition
Internet Archive, Fourth edition
Project Gutenberg digitized text & images of the Fourth Edition
Cambridge University Digital Library, Papers on Hydrostatics, Optics, Sound and Heat – Manuscript papers by Isaac Newton containing draft of Opticks
1704 non-fiction books
1704 in science
English non-fiction literature
Books by Isaac Newton
History of optics
Mathematics books
Physics books
Treatises
Light | Opticks | [
"Physics"
] | 2,411 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
618,077 | https://en.wikipedia.org/wiki/Power%20engineering | Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering.
History
Pioneering years
Electricity became a subject of scientific interest in the late 17th century. Over the next two centuries a number of important discoveries were made including the incandescent light bulb and the voltaic pile. Probably the greatest discovery with respect to power engineering came from Michael Faraday who in 1831 discovered that a change in magnetic flux induces an electromotive force in a loop of wire—a principle known as electromagnetic induction that helps explain how generators and transformers work.
In 1881 two electricians built the world's first power station at Godalming in England. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and thirty-four incandescent lamps at 40 volts. However supply was intermittent and in 1882 Thomas Edison and his company, The Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station consisted of several generators and initially powered around 3,000 lamps for 59 customers. The power station used direct current and operated at a single voltage. Since the direct current power could not be easily transformed to the higher voltages necessary to minimise power loss during transmission, the possible distance between the generators and load was limited to around half-a-mile (800 m).
That same year in London Lucien Gaulard and John Dixon Gibbs demonstrated the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up forty kilometres (25 miles) of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that switching one lamp on or off would affect other lamps further down the line. Following the demonstration George Westinghouse, an American entrepreneur, imported a number of the transformers along with a Siemens generator and set his engineers to experimenting with them in the hopes of improving them for use in a commercial power system.
One of Westinghouse's engineers, William Stanley, recognised the problem with connecting transformers in series as opposed to parallel and also realised that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built the world's first practical transformer based alternating current power system at Great Barrington, Massachusetts in 1886. In 1885 the Italian physicist and electrical engineer Galileo Ferraris demonstrated an induction motor and in 1887 and 1888 the Serbian-American engineer Nikola Tesla filed a range of patents related to power systems including one for a practical two-phase induction motor which Westinghouse licensed for his AC system.
By 1890 the power industry had flourished and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe – these networks were effectively dedicated to providing electric lighting. During this time a fierce rivalry in the US known as the "war of the currents" emerged between Edison and Westinghouse over which form of transmission (direct or alternating current) was superior. In 1891, Westinghouse installed the first major power system that was designed to drive an electric motor and not just provide electric lighting. The installation powered a synchronous motor at Telluride, Colorado with the motor being started by a Tesla induction motor. On the other side of the Atlantic, Oskar von Miller built a 20 kV 176 km three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt. In 1895, after a protracted decision-making process, the Adams No. 1 generating station at Niagara Falls began transmitting three-phase alternating current power to Buffalo at 11 kV. Following completion of the Niagara Falls project, new power systems increasingly chose alternating current as opposed to direct current for electrical transmission.
Twentieth century
Power engineering and Bolshevism
The generation of electricity was regarded as particularly important following the Bolshevik seizure of power. Lenin stated "Communism is Soviet power plus the electrification of the whole country." He was subsequently featured on many Soviet posters, stamps etc. presenting this view. The GOELRO plan was initiated in 1920 as the first Bolshevik experiment in industrial planning and in which Lenin became personally involved. Gleb Krzhizhanovsky was another key figure involved, having been involved in the construction of a power station in Moscow in 1910. He had also known Lenin since 1897 when they were both in the St. Petersburg chapter of the Union of Struggle for the Liberation of the Working Class.
Power engineering in the USA
In 1936 the first commercial high-voltage direct current (HVDC) line using mercury-arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by installing direct current generators in series (a system known as the Thury system) although this suffered from serious reliability issues. In 1957 Siemens demonstrated the first solid-state rectifier (solid-state rectifiers are now the standard for HVDC systems) however it was not until the early 1970s that this technology was used in commercial power systems. In 1959 Westinghouse demonstrated the first circuit breaker that used SF6 as the interrupting medium. SF6 is a far superior dielectric to air and, in recent times, its use has been extended to produce far more compact switching equipment (known as switchgear) and transformers. Many important developments also came from extending innovations in the ICT field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently allowing for much better planning of power systems. Advances in information technology and telecommunication also allowed for much better remote control of the power system's switchgear and generators.
Power
Power engineering deals with the generation, transmission, distribution and utilization of electricity, as well as the design of a range of related devices. These include transformers, electric generators, electric motors and power electronics.
Power engineers may also work on systems that do not connect to the grid. These systems are called off-grid power systems and may be used in preference to on-grid systems for a variety of reasons. For example, in remote locations it may be cheaper for a mine to generate its own power rather than pay for connection to the grid and in most mobile applications connection to the grid is simply not practical.
Fields
Electricity generation covers the selection, design and construction of facilities that convert energy from primary forms to electric power.
Electric power transmission requires the engineering of high voltage transmission lines and substation facilities to interface to generation and distribution systems. High voltage direct current systems are one of the elements of an electric power grid.
Electric power distribution engineering covers those elements of a power system from a substation to the end customer.
Power system protection is the study of the ways an electrical power system can fail, and the methods to detect and mitigate for such failures.
In most projects, a power engineer must coordinate with many other disciplines such as civil and mechanical engineers, environmental experts, and legal and financial personnel. Major power system projects such as a large generating station may require scores of design professionals in addition to the power system engineers. At most levels of professional power system engineering practice, the engineer will require as much in the way of administrative and organizational skills as electrical engineering knowledge.
Professional societies and international standards organizations
In both the UK and the US, professional societies had long existed for civil and mechanical engineers. The Institution of Electrical Engineers (IEE) was founded in the UK in 1871, and the AIEE in the United States in 1884. These societies contributed to the exchange of electrical knowledge and the development of electrical engineering education.
On an international level, the International Electrotechnical Commission (IEC), which was founded in 1906, prepares standards for power engineering, with 20,000 electrotechnical experts from 172 countries developing global specifications based on consensus.
See also
Energy economics
Industrial ecology
Power electronics
Power system simulation
Power engineering software
References
External links
IEEE Power Engineering Society
Jadavpur University, Department of Power Engineering
Power Engineering International Magazine Articles
Power Engineering Magazine Articles
American Society of Power Engineers, Inc.
National Institute for the Uniform Licensing of Power Engineer Inc.
Worcester Polytechnic Institute Power Systems Engineering
| Power engineering | [
"Physics",
"Engineering"
] | 1,795 | [
"Applied and interdisciplinary physics",
"Energy engineering",
"Mechanical engineering",
"Power engineering",
"Electrical engineering"
] |
618,120 | https://en.wikipedia.org/wiki/Electric%20power%20conversion | In electrical engineering, power conversion is the process of converting electric energy from one form to another.
A power converter is an electrical device for converting electrical energy between alternating current (AC) and direct current (DC). It can also change the voltage or frequency of the current.
Power converters include simple devices such as transformers, and more complex ones like resonant converters. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation.
Power converters are classified based on the type of power conversion they perform. One way of classifying power conversion systems is based on whether the input and output is alternating or direct current.
DC power conversion
DC to DC
The following devices can convert DC to DC:
Linear regulator
Voltage regulator
Motor–generator
Rotary converter
Switched-mode power supply
DC to AC
The following devices can convert DC to AC:
Power inverter
Motor–generator
Rotary converter
Switched-mode power supply
Chopper (electronics)
AC power conversion
AC to DC
The following devices can convert AC to DC:
Rectifier
Mains power supply unit (PSU)
Motor–generator
Rotary converter
Switched-mode power supply
AC to AC
The following devices can convert AC to AC:
Transformer or autotransformer
Voltage converter
Voltage regulator
Cycloconverter
Variable-frequency transformer
Motor–generator
Rotary converter
Switched-mode power supply
Other systems
There are also devices and methods to convert between power systems designed for single and three-phase operation.
The standard power voltage and frequency vary from country to country and sometimes within a country. In North America and northern South America, it is usually 120 volts, 60 hertz (Hz), but in Europe, Asia, Africa, and many other parts of the world, it is usually 230 volts, 50 Hz. Aircraft often use 400 Hz power internally, so 50 Hz or 60 Hz to 400 Hz frequency conversion is needed in the ground power unit used to power the airplane while it is on the ground. Conversely, the aircraft's internal 400 Hz power may be converted to 50 Hz or 60 Hz for convenience power outlets available to passengers during flight.
Certain specialized circuits can also be considered power converters, such as the flyback transformer subsystem powering a CRT, generating high voltage at approximately 15 kHz.
Consumer electronics usually include an AC adapter (a type of power supply) to convert mains-voltage AC current to low-voltage DC suitable for consumption by microchips. Consumer voltage converters (also known as "travel converters") are used when traveling between countries that use ~120 V versus ~240 V AC mains power. (There are also consumer "adapters" which merely form an electrical connection between two differently shaped AC power plugs and sockets, but these change neither voltage nor frequency.)
Why use transformers in power converters
Transformers are used in power converters to incorporate electrical isolation and voltage step-down or step up.
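As a reminder of the relations involved, an ideal (lossless) transformer with N_p primary turns and N_s secondary turns obeys:

```latex
\frac{V_s}{V_p} = \frac{N_s}{N_p}, \qquad \frac{I_s}{I_p} = \frac{N_p}{N_s}
```

For example, a 20:1 turns ratio steps 230 V mains down to 11.5 V while stepping the available current up by the same factor. Real converters deviate from this ideal through winding resistance, leakage inductance, and core losses.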
The secondary circuit is floating: touching it merely drags its potential to that of your body or of the earth, so no current flows through your body. This is why you can safely use a cellphone while it is being charged, even if the phone has a metal shell connected to the secondary circuit.
Operating at high frequency and supplying low power, power converters have much smaller transformers compared with those of fundamental-frequency, high-power applications.
The current in the primary winding of a transformer helps to set up the mutual flux in accordance with Ampère's law and balances the demagnetizing effect of the load current in the secondary winding.
A flyback converter's transformer works differently, like an inductor. In each cycle, the transformer is first charged and then releases its energy to the load. Accordingly, the air gap of a flyback transformer has two functions: it not only determines the inductance but also stores energy. In a flyback converter, the transformer gap thus enables energy transfer through cycles of charging and discharging.
The core's relative permeability can exceed 1,000, or even 10,000, while the air gap has a much lower permeability and accordingly a higher energy-storage density.
See also
Power supply
Cascade converter
Motor-generator
Resonant converter
Rotary converter
References
Abraham I. Pressman (1997). Switching Power Supply Design. McGraw-Hill.
Ned Mohan, Tore M. Undeland, William P. Robbins (2002). Power Electronics: Converters, Applications, and Design. Wiley.
Fang Lin Luo, Hong Ye, Muhammad H. Rashid (2005). Digital Power Electronics and Applications. Elsevier.
Fang Lin Luo, Hong Ye (2004). Advanced DC/DC Converters. CRC Press.
Mingliang Liu (2006). Demystifying Switched-Capacitor Circuits. Elsevier.
External links
A general description of DC-DC converters
U.S. based 50 Hz, 60 Hz, and 400 Hz frequency converter manufacturer
GlobTek, Inc. Glossary of electric power supply and power conversion terms
Electric power systems components
Electronic engineering | Electric power conversion | [
"Technology",
"Engineering"
] | 1,096 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
618,227 | https://en.wikipedia.org/wiki/Superpartner | In particle physics, a superpartner (also sparticle) is a class of hypothetical elementary particles predicted by supersymmetry, which, among other applications, is one of the well-studied ways to extend the standard model of high-energy physics.
When considering extensions of the Standard Model, the s- prefix from sparticle is used to form the names of superpartners of the Standard Model fermions (sfermions), e.g. the stop squark. The superpartners of Standard Model bosons have -ino appended to their names (bosinos), e.g. gluino; the set of all gauge superpartners is called the gauginos.
Theoretical predictions
According to the supersymmetry theory, each fermion should have a partner boson, the fermion's superpartner, and each boson should have a partner fermion. Exact unbroken supersymmetry would predict that a particle and its superpartner have the same mass. No superpartners of the Standard Model particles have yet been found. This may indicate that supersymmetry is incorrect, or it may reflect the fact that supersymmetry is not an exact, unbroken symmetry of nature. If superpartners are found, their masses would indicate the scale at which supersymmetry is broken.
For particles that are real scalars (such as an axion), there is a fermion superpartner as well as a second, real scalar field. For axions, these particles are often referred to as axinos and saxions.
In extended supersymmetry there may be more than one superparticle for a given particle. For instance, with two copies of supersymmetry in four dimensions, a photon would have two fermion superpartners and a scalar superpartner.
In zero dimensions it is possible to have supersymmetry, but no superpartners. However, this is the only situation where supersymmetry does not imply the existence of superpartners.
Recreating superpartners
If the supersymmetry theory is correct, it should be possible to recreate these particles in high-energy particle accelerators. Doing so will not be an easy task; these particles may have masses up to a thousand times greater than their corresponding "real" particles.
Some researchers have hoped the Large Hadron Collider at CERN might produce evidence for the existence of superpartner particles. However, as of 2018, no such evidence has been found.
See also
Chargino
Gluino – as a superpartner of the Gluon
Gravitino – as a superpartner of the hypothetical graviton
Higgsino – as a superpartner of the Higgs Field
Neutralino
References
Supersymmetric quantum field theory
Particle physics | Superpartner | [
"Physics"
] | 610 | [
"Supersymmetric quantum field theory",
"Supersymmetry",
"Symmetry",
"Particle physics"
] |
618,241 | https://en.wikipedia.org/wiki/Absorber | In high energy physics experiments, an absorber is a block of material used to absorb some of the energy of an incident particle in an experiment. Absorbers can be made of a variety of materials, depending on the purpose; lead, tungsten and liquid hydrogen are common choices. Most absorbers are used as part of a particle detector; particle accelerators use absorbers to reduce the radiation damage on accelerator components.
Other uses of the same word
Absorbers are used in ionization cooling, as in the International Muon Ionization Cooling Experiment.
In solar power, a high degree of efficiency is achieved by using black absorbers, which reflect much less of the incoming energy.
In sunscreen formulations, ingredients which absorb UVA/UVB rays, such as avobenzone and octyl methoxycinnamate, are known as absorbers. They are contrasted with physical "blockers" of UV radiation such as titanium dioxide and zinc oxide.
References
Particle detectors
Accelerator physics | Absorber | [
"Physics",
"Technology",
"Engineering"
] | 200 | [
"Applied and interdisciplinary physics",
"Measuring instruments",
"Particle detectors",
"Experimental physics",
"Particle physics",
"Particle physics stubs",
"Accelerator physics"
] |
619,200 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase | Cyclin-dependent kinases (CDKs) are a predominant group of serine/threonine protein kinases involved in the regulation of the cell cycle and its progression, ensuring the integrity and functionality of cellular machinery. These regulatory enzymes play a crucial role in the regulation of eukaryotic cell cycle and transcription, as well as DNA repair, metabolism, and epigenetic regulation, in response to several extracellular and intracellular signals. They are present in all known eukaryotes, and their regulatory function in the cell cycle has been evolutionarily conserved. The catalytic activities of CDKs are regulated by interactions with CDK inhibitors (CKIs) and regulatory subunits known as cyclins. Cyclins have no enzymatic activity themselves, but they become active once they bind to CDKs. Without cyclin, CDK is less active than in the cyclin-CDK heterodimer complex. CDKs phosphorylate proteins on serine (S) or threonine (T) residues. The specificity of CDKs for their substrates is defined by the S/T-P-X-K/R sequence, where S/T is the phosphorylation site, P is proline, X is any amino acid, and the sequence ends with lysine (K) or arginine (R). This motif ensures CDKs accurately target and modify proteins, crucial for regulating cell cycle and other functions. Deregulation of the CDK activity is linked to various pathologies, including cancer, neurodegenerative diseases, and stroke.
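As an illustration of the S/T-P-X-K/R consensus motif described above, the following sketch scans a protein sequence for candidate full-consensus CDK sites with a regular expression. The function name and example sequence are hypothetical, and a motif match is only a candidate site, not evidence of actual phosphorylation:

```python
import re

# Full CDK consensus: phospho-acceptor S or T, then P, any residue (X),
# then a basic residue K or R.
FULL_CONSENSUS = re.compile(r"[ST]P.[KR]")

def find_cdk_sites(sequence: str):
    """Return (position, motif) pairs for candidate CDK phosphorylation sites.

    Positions are 1-based indices of the phospho-acceptor S/T residue.
    """
    return [(m.start() + 1, m.group())
            for m in FULL_CONSENSUS.finditer(sequence.upper())]

# Hypothetical example sequence, for illustration only.
for pos, motif in find_cdk_sites("MASTPLKGGSPNRQQTPAK"):
    print(f"candidate site at residue {pos}: {motif}")
# -> residues 4 (TPLK), 10 (SPNR) and 16 (TPAK)
```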
Evolutionary history
CDKs were initially identified through studies in model organisms such as yeasts and frogs, underscoring their pivotal role in cell cycle progression. These enzymes operate by forming complexes with cyclins, whose levels fluctuate throughout the cell cycle, thereby ensuring timely cell cycle transitions. Over the years, the understanding of CDKs has expanded beyond cell division to include roles in gene transcription and the integration of cellular signals.
The evolutionary journey of CDKs has led to a diverse family with specific members dedicated to cell cycle phases or transcriptional control. For instance, budding yeast expresses six distinct CDKs, with some binding multiple cyclins for cell cycle control and others binding with a single cyclin for transcription regulation. In humans, the expansion to 20 CDKs and 29 cyclins illustrates their complex regulatory roles. Key CDKs such as CDK1 are indispensable for cell cycle control, while others like CDK2 and CDK3 are not. Moreover, transcriptional CDKs, such as CDK7 in humans, play crucial roles in initiating transcription by phosphorylating RNA polymerase II (RNAPII), indicating the intricate link between cell cycle regulation and transcriptional management. This evolutionary expansion from simple regulators to multifunctional enzymes underscores the critical importance of CDKs in the complex regulatory networks of eukaryotic cells.
Notable people
In 2001, the scientists Leland H. Hartwell, Tim Hunt and Sir Paul M. Nurse were awarded the Nobel Prize in Physiology or Medicine for their discovery of key regulators of the cell cycle.
Leland H. Hartwell (b. 1929): Through studies of yeast in 1971, Hartwell identified crucial genes for cell division, outlining the cell cycle's stages and essential checkpoints to prevent cancerous cell division.
Tim Hunt (b. 1943): Through studies of sea urchins in the 1980s, Hunt discovered the role of cyclins in the regulation of cell cycle phases through their cyclical synthesis and degradation.
Sir Paul M. Nurse (b. 1949): In the mid-1970s, Nurse's studies uncovered the cdc2 gene in fission yeast, which is crucial for the progression of the cell cycle from G1 to S phase and from G2 to M phase. In 1987, he identified the corresponding gene in humans, CDK1, highlighting the conservation of cell cycle control mechanisms across species.
CDKs and cyclins in the cell cycle
CDK is one of the estimated 800 human protein kinases. CDKs have low molecular weight, and they are known to be inactive by themselves. They are characterized by their dependency on the regulatory subunit, cyclin. The activation of CDKs also requires post-translational modifications involving phosphorylation reactions. This phosphorylation typically occurs on a specific threonine residue, leading to a conformational change in the CDK that enhances its kinase activity. The activation forms a cyclin-CDK complex which phosphorylates specific regulatory proteins that are required to initiate steps in the cell cycle. In human cells, the CDK family comprises 20 different members that play a crucial role in the regulation of the cell cycle and transcription. These are usually separated into cell-cycle CDKs, which regulate cell-cycle transitions and cell division, and transcriptional CDKs, which mediate gene transcription. CDK1, CDK2, CDK3, CDK4, CDK6, and CDK7 are directly related to the regulation of cell-cycle events, while CDK7–11 are associated with transcriptional regulation. Different cyclin-CDK complexes regulate different phases of the cell cycle, known as the G0/G1, S, G2, and M phases, featuring several checkpoints to maintain genomic stability and ensure accurate DNA replication. Cyclin-CDK complexes of an earlier cell-cycle phase help activate cyclin-CDK complexes of a later phase.
CDK structure and activation
Cyclin-dependent kinases (CDKs) mainly consist of a two-lobed configuration, which is characteristic of all kinases in general. CDKs have specific features in their structure that play a major role in their function and regulation.
N-terminal lobe (N-lobe): In this part, the inhibitory element known as the glycine-rich G-loop is located. The inhibitory element is found within the beta-sheets in this N-terminal lobe. Additionally, there is a helix known as the C-helix. This helix contains the PSTAIRE sequence in CDK1. This region plays a crucial role in regulating the binding between cyclin-dependent kinases (CDKs) and cyclins.
C-terminal lobe (C-lobe): This part contains α-helices and the activation segment, which extends from the DFG motif (D145 in CDK2) to the APE motif (E172 in CDK2). This segment also includes a phosphorylation-sensitive residue (T160 in CDK2) in the so-called T-loop. The activation segment in the C-lobe serves as a platform for the binding of the phospho-acceptor Ser/Thr region of substrates.
Cyclin binding
The active site, or ATP-binding site, in all kinases is a cleft located between a smaller amino-terminal lobe and a larger carboxy-terminal lobe. Research on the structure of human CDK2 has shown that CDKs have a specially adapted ATP-binding site that can be regulated through the binding of cyclin. Phosphorylation by CDK-activating kinase (CAK) at Thr160 in the T-loop helps to increase the complex's activity. Without cyclin, a flexible loop known as the activation loop or T-loop blocks the cleft, and the positioning of several key amino acids is not optimal for ATP binding. With cyclin, two alpha helices change position to enable ATP binding. One of them, the L12 helix located just before the T-loop in the primary sequence, is transformed into a beta strand and helps to reorganize the T-loop so that it no longer blocks the active site. The other alpha helix, known as the PSTAIRE helix, is reorganized and helps to change the position of the key amino acids in the active site.
There is considerable specificity in which cyclins bind to which CDKs. Furthermore, the cyclin binding determines the specificity of the cyclin-CDK complex for certain substrates, highlighting the importance of distinct activation pathways that confer cyclin-binding specificity on CDK1. This illustrates the complexity and fine-tuning in the regulation of the cell cycle through selective binding and activation of CDKs by their respective cyclins.
Cyclins can directly bind the substrate or localize the CDK to a subcellular area where the substrate is found. The RXL-binding site was crucial in revealing how CDKs selectively enhance activity toward specific substrates by facilitating substrate docking. Substrate specificity of S cyclins is imparted by the hydrophobic patch, which has affinity for substrate proteins that contain a hydrophobic RXL (or Cy) motif. Cyclin B1 and B2 can localize CDK1 to the nucleus and the Golgi, respectively, through a localization sequence outside the CDK-binding region.
Phosphorylation
To achieve full kinase activity, an activating phosphorylation on a threonine adjacent to the CDK's active site is required. The identity of the CDK-activating kinase (CAK) that carries out this phosphorylation varies among different model organisms. The timing of this phosphorylation also varies; in mammalian cells, the activating phosphorylation occurs after cyclin binding, while in yeast cells, it occurs before cyclin binding. CAK activity is not regulated by known cell cycle pathways, and it is the cyclin binding that is the limiting step for CDK activation.
Unlike activating phosphorylation, CDK inhibitory phosphorylation is crucial for cell cycle regulation. Various kinases and phosphatases control their phosphorylation state. For instance, the activity of CDK1 is controlled by the balance between WEE1 kinases, Myt1 kinases, and the phosphorylation of Cdc25c phosphatases. Wee1, a kinase preserved across all eukaryotes, phosphorylates CDK1 at Tyr 15. Myt1 can phosphorylate both the threonine (Thr 14) and the tyrosine (Tyr 15). The dephosphorylation is performed by Cdc25c phosphatases, which remove the phosphate groups from both the threonine and the tyrosine. This inhibitory phosphorylation helps prevent cell-cycle progression in response to events like DNA damage. The phosphorylation does not significantly alter the CDK structure, but reduces its affinity for the substrate, thereby inhibiting its activity. For the cell cycle to progress, the inhibitory phosphate groups must be removed by the Cdc25 phosphatases to reactivate the CDKs.
CDK inhibitors
A cyclin-dependent kinase inhibitor (CKI) is a protein that interacts with a cyclin-CDK complex to inhibit kinase activity, often during G1 phase or in response to external signals or DNA damage. In animal cells, two primary CKI families exist: the INK4 family (p16, p15, p18, p19) and the CIP/KIP family (p21, p27, p57). The INK4 family proteins specifically bind to CDK4 and CDK6 and prevent their activation by D-type cyclins or by CAK, while the CIP/KIP family proteins prevent the activation of CDK-cyclin heterodimers, disrupting both cyclin binding and kinase activity. These inhibitors have a KID (kinase inhibitory domain) at the N-terminus, facilitating their attachment to cyclins and CDKs. Their primary function occurs in the nucleus, supported by a C-terminal sequence that enables their nuclear translocation.
In yeast and Drosophila, CKIs are strong inhibitors of S- and M-CDK, but do not inhibit G1/S-CDKs. During G1, high levels of CKIs prevent cell cycle events from occurring out of order, but do not prevent transition through the Start checkpoint, which is initiated through G1/S-CDKs. Once the cell cycle is initiated, phosphorylation by early G1/S-CDKs leads to destruction of CKIs, relieving inhibition on later cell cycle transitions. In mammalian cells, the CKI regulation works differently. Mammalian protein p27 (Dacapo in Drosophila) inhibits G1/S- and S-CDKs but does not inhibit S- and M-CDKs.
Ligand-based inhibition methods involve the use of small molecules or ligands that specifically bind to CDK2, which is a crucial regulator of the cell cycle. The ligands bind to the active site of CDK2, thereby blocking its activity. These inhibitors can either mimic the structure of ATP, competing for the active site and preventing protein phosphorylation needed for cell cycle progression, or bind to allosteric sites, altering the structure of CDK2 to decrease its efficiency.
CDK subunits (CKS)
CDKs are essential for the control and regulation of the cell cycle. They are associated with small regulatory subunits (CKSs). In mammalian cells, two CKSs are known: CKS1 and CKS2. These proteins are necessary for the proper functioning of CDKs, although their exact functions are not yet fully known. An interaction occurs between CKS1 and the carboxy-terminal lobe of CDKs, where they bind together. This binding increases the affinity of the cyclin-CDK complex for its substrates, especially those with multiple phosphorylation sites, thus contributing to the promotion of cell proliferation.
Non-cyclin activators
Viral cyclins
Viruses can encode proteins with sequence homology to cyclins. One much-studied example is K-cyclin (or v-cyclin) from Kaposi sarcoma herpes virus (see Kaposi's sarcoma), which activates CDK6. The vCyclin-CDK6 complex promotes an accelerated transition from G1 to S phase in the cell by phosphorylating pRb and releasing E2F. This leads to the removal of inhibition on Cyclin E–CDK2's enzymatic activity. vCyclin has been shown to promote transformation and tumorigenesis, mainly through its effect on p27 pSer10 phosphorylation and cytoplasmic sequestration.
CDK5 activators
Two protein types, p35 and p39, are responsible for increasing the activity of CDK5 during neuronal differentiation in postnatal development. p35 and p39 play a crucial role in a unique mechanism for regulating CDK5 activity in neuronal development and network formation. The activation of CDK with these cofactors (p35 and p39) does not require phosphorylation of the activation loop, which is different from the traditional activation of many other kinases. This highlights the importance of CDK5 activation, which is critical for proper neuronal development, dendritic spine and synapse formation, and the response to epileptic events.
RINGO/Speedy
Proteins in the RINGO/Speedy group represent a standout group among proteins that do not share amino acid sequence homology with the cyclin family. They play a crucial role in activating CDKs. Originally identified in Xenopus, these proteins primarily bind to and activate CDK1 and CDK2, despite lacking homology to cyclins. What is particularly interesting is that CDKs activated by RINGO/Speedy can phosphorylate sites different from those targeted by cyclin-activated CDKs, indicating a unique mode of action for these non-cyclin CDK activators.
Medical significance
CDKs and cancer
The dysregulation of CDKs and cyclins disrupts the cell cycle coordination, which makes them involved in the pathogenesis of several diseases, mainly cancers. Thus, studies of cyclins and cyclin-dependent kinases (CDK) are essential for advancing the understanding of cancer characteristics. Research has shown that alterations in cyclins, CDKs, and CDK inhibitors (CKIs) are common in most cancers, involving chromosomal translocations, point mutations, insertions, deletions, gene overexpression, frame-shift mutations, missense mutations, or splicing errors.
The dysregulation of the CDK4/6-RB pathway is a common feature in many cancers, often resulting from various mechanisms that inactivate the cyclin D-CDK4/6 complex. Several signals can lead to overexpression of cyclin D and enhance CDK4/6 activity, contributing toward tumorigenesis. Additionally, the CDK4/6-RB pathway interacts with the p53 signaling pathway via p21CIP1 transcription, which can inhibit both cyclin D-CDK4/6 and cyclin E-CDK2 complexes. Mutations in p53 can deactivate the G1 checkpoint, further promoting uncontrolled proliferation.
CDK inhibitors and therapeutic potential
Due to their central role in regulating cell cycle progression and cell proliferation, CDKs are considered ideal therapeutic targets for cancer. The following CDK4/6 inhibitors mark a significant advancement in cancer treatment, offering targeted therapies that are effective and have a manageable side effect profile.
Palbociclib, one of the first CDK4/6 inhibitors approved by the FDA, has become essential in the treatment of HR+/HER2- advanced or metastatic breast cancer, often in combination with endocrine therapy.
Ribociclib, demonstrating similar efficacy to palbociclib, is also approved for HR+/HER2- advanced breast cancer and offers benefits for a younger patient demographic.
Abemaciclib stands out by being usable as monotherapy, in addition to combination treatment, for certain HR+/HER2- breast cancer patients. It has also shown effectiveness in treating patients with brain metastases.
Trilaciclib has proven its value by improving patients' quality of life during cancer treatment by reducing the risk of chemotherapy-induced myelosuppression, a common side effect that can lead to treatment delays and dose reductions.
Challenges and future potential
Complications of developing a CDK drug include the fact that many CDKs are involved not in the cell cycle but in other processes such as transcription, neural physiology, and glucose homeostasis. More research is required, however, because disruption of CDK-mediated pathways has potentially serious consequences; while CDK inhibitors seem promising, it has to be determined how side effects can be limited so that only target cells are affected. Such diseases are currently treated with glucocorticoids, and the comparison with glucocorticoids serves to illustrate the potential benefits of CDK inhibitors, assuming their side effects can be more narrowly targeted or minimized.
See also
Cell cycle
Protein kinase
Enzyme catalysis
Enzyme inhibitor
References
External links
Cell cycle regulators
Protein families
EC 2.7.11 | Cyclin-dependent kinase | [
"Chemistry",
"Biology"
] | 4,033 | [
"Protein families",
"Cell cycle regulators",
"Protein classification",
"Signal transduction"
] |
619,201 | https://en.wikipedia.org/wiki/Nebulizer | In medicine, a nebulizer (American English) or nebuliser (British English) is a drug delivery device used to administer medication in the form of a mist inhaled into the lungs. Nebulizers are commonly used for the treatment of asthma, cystic fibrosis, COPD and other respiratory diseases or disorders. They use oxygen, compressed air or ultrasonic power to break up solutions and suspensions into small aerosol droplets that are inhaled from the mouthpiece of the device. An aerosol is a mixture of gas and solid or liquid particles.
Medical uses
Guidelines
Various asthma guidelines, such as the Global Initiative for Asthma Guidelines [GINA], the British Guidelines on the management of Asthma, The Canadian Pediatric Asthma Consensus Guidelines, and United States Guidelines for Diagnosis and Treatment of Asthma each recommend metered dose inhalers in place of nebulizer-delivered therapies.
The European Respiratory Society acknowledge that although nebulizers are used in hospitals and at home they suggest much of this use may not be evidence-based.
Effectiveness
Recent evidence shows that nebulizers are no more effective than metered-dose inhalers (MDIs) with spacers. An MDI with a spacer may offer advantages to children who have acute asthma. These findings refer specifically to the treatment of asthma, not to the efficacy of nebulizers generally, for example in COPD. For COPD, especially when assessing exacerbations or lung attacks, there is no evidence to indicate that medicine delivered by MDI (with a spacer) is more effective than administration of the same medicine with a nebulizer.
The European Respiratory Society highlighted a risk relating to droplet size reproducibility caused by selling nebulizer devices separately from nebulized solution. They found this practice could vary droplet size 10-fold or more by changing from an inefficient nebulizer system to a highly efficient one.
Advantages attributed to nebulizers, compared to MDIs with spacers (inhalers), include their ability to deliver larger dosages at a faster rate, especially in acute asthma; however, recent data suggest that actual lung deposition rates are the same. In addition, another trial found that an MDI (with spacer) had a lower required dose for a clinical result compared to a nebulizer.
Beyond use in chronic lung disease, nebulizers may also be used to treat acute issues like the inhalation of toxic substances. One such example is the treatment of inhalation of toxic hydrofluoric acid (HF) vapors. Calcium gluconate is a first-line treatment for HF exposure to the skin. By using a nebulizer, calcium gluconate is delivered to the lungs as an aerosol to counteract the toxicity of inhaled HF vapors.
Aerosol deposition
The lung deposition characteristics and efficacy of an aerosol depend largely on the particle or droplet size. Generally, the smaller the particle the greater its chance of peripheral penetration and retention. However, for very fine particles below 0.5 μm in diameter there is a chance of avoiding deposition altogether and being exhaled. In 1966 the Task Group on Lung Dynamics, concerned mainly with the hazards of inhalation of environmental toxins, proposed a model for deposition of particles in the lung. This suggested that particles of more than 10 μm in diameter are most likely to deposit in the mouth and throat, for those of 5–10 μm diameter a transition from mouth to airway deposition occurs, and particles smaller than 5 μm in diameter deposit more frequently in the lower airways and are appropriate for pharmaceutical aerosols.
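The size bands just described can be summarized in a small classifier. This is a deliberate simplification (real deposition also depends on breathing pattern, airway geometry and particle density), and the function name and cut-offs simply follow the ranges quoted above:

```python
def deposition_region(diameter_um: float) -> str:
    """Rough lung-deposition region by droplet diameter (micrometres),
    following the 1966 Task Group on Lung Dynamics ranges quoted above."""
    if diameter_um < 0.5:
        return "may avoid deposition altogether and be exhaled"
    elif diameter_um < 5:
        return "lower airways (appropriate for pharmaceutical aerosols)"
    elif diameter_um <= 10:
        return "transition from mouth to airway deposition"
    else:
        return "mostly mouth and throat"

print(deposition_region(3.0))   # lower airways (appropriate for pharmaceutical aerosols)
print(deposition_region(12.0))  # mostly mouth and throat
```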
Nebulizing processes have been modeled using computational fluid dynamics.
Types
Pneumatic
Jet nebulizer
The most commonly used nebulizers are jet nebulizers, which are also called "atomizers". Jet nebulizers are connected by tubing to a supply of compressed gas, usually compressed air or oxygen, which flows at high velocity through a liquid medicine to turn it into an aerosol that is inhaled by the patient. Currently there seems to be a tendency among physicians to prefer prescribing a pressurized metered dose inhaler (pMDI) for their patients instead of a jet nebulizer, which generates much more noise (often 60 dB during use) and is less portable due to its greater weight. However, jet nebulizers are commonly used in hospitals for patients who have difficulty using inhalers, such as in serious cases of respiratory disease or severe asthma attacks. The main advantage of the jet nebulizer is its low operational cost: if the patient needs to inhale medicine on a daily basis, the use of a pMDI can be rather expensive. Today several manufacturers have also managed to lower the weight of the jet nebulizer to just over half a kilogram (just under one and a half pounds), and have therefore started to label it as a portable device. Compared to all the competing inhalers and nebulizers, the noise and heavy weight are still the biggest drawbacks of the jet nebulizer.
Mechanical
Soft mist inhaler
The medical company Boehringer Ingelheim also invented a device named the Respimat Soft Mist Inhaler in 1997. This technology provides a metered dose to the user: as the base of the inhaler is rotated clockwise 180 degrees by hand, tension builds up in a spring around the flexible liquid container. When the user activates the inhaler, the energy from the spring is released and imposes pressure on the flexible liquid container, causing liquid to spray out of two nozzles, thus forming a soft mist to be inhaled. The device features no gas propellant and needs no battery or mains power to operate. The average droplet size in the mist was measured at 5.8 micrometers, which could indicate potential efficiency problems for the inhaled medicine in reaching the lungs; subsequent trials have shown this was not the case. Due to the very low velocity of the mist, the Soft Mist Inhaler in fact has a higher efficiency than a conventional pMDI. In 2000, arguments were put to the European Respiratory Society (ERS) to clarify or expand its definition of a nebulizer, as the new Soft Mist Inhaler could in technical terms be classified both as a "hand driven nebulizer" and a "hand driven pMDI".
Electrical
Ultrasonic wave nebulizer
Ultrasonic wave nebulizers were invented in 1965 as a new type of portable nebulizer. In an ultrasonic wave nebulizer, an electronic oscillator generates a high-frequency ultrasonic wave, which causes the mechanical vibration of a piezoelectric element. This vibrating element is in contact with a liquid reservoir, and its high-frequency vibration is sufficient to produce a vapor mist via ultrasonic atomization. As they create aerosols from ultrasonic vibration instead of using a heavy air compressor, they are considerably lighter. Another advantage is that the ultrasonic vibration is almost silent. Examples of this more modern type of nebulizer are the Omron NE-U17 and the Beurer Nebulizer IH30.
Vibrating mesh technology
A significant innovation was made in the nebulizer market around 2005 with the creation of ultrasonic vibrating mesh technology (VMT). With this technology, a mesh/membrane with 1,000–7,000 laser-drilled holes vibrates at the top of the liquid reservoir and thereby presses a mist of very fine droplets out through the holes. This technology is more efficient than having a vibrating piezoelectric element at the bottom of the liquid reservoir, so shorter treatment times are also achieved. The old problems found with the ultrasonic wave nebulizer, excessive liquid waste and undesired heating of the medical liquid, have also been solved by the new vibrating mesh nebulizers. Available VMT nebulizers include the Pari eFlow, Respironics i-Neb, Beurer Nebulizer IH50, and Aerogen Aeroneb. As the price of ultrasonic VMT nebulizers is higher than that of models using previous technologies, most manufacturers continue to also sell the classic jet nebulizers.
Use and attachments
Nebulizers accept their medicine in the form of a liquid solution, which is often loaded into the device upon use. Corticosteroids and bronchodilators such as salbutamol (albuterol USAN) are often used, sometimes in combination with ipratropium. These pharmaceuticals are inhaled instead of ingested in order to target their effect to the respiratory tract, which speeds the onset of action of the medicine and reduces side effects compared with alternative intake routes.
Usually, the aerosolized medicine is inhaled through a tube-like mouthpiece, similar to that of an inhaler. The mouthpiece, however, is sometimes replaced with a face mask, similar to that used for inhaled anesthesia, for ease of use with young children or the elderly. Pediatric masks are often shaped like animals such as fish, dogs or dragons to make children less resistant to nebulizer treatments. Many nebulizer manufacturers also offer pacifier attachments for infants and toddlers. Mouthpieces are preferable if patients are able to use them, however, since face masks result in reduced lung delivery because of aerosol losses in the nose.
After use with a corticosteroid, it is theoretically possible for patients to develop a yeast infection in the mouth (thrush) or hoarseness of voice (dysphonia), although these conditions are clinically very rare. To avoid these adverse effects, some clinicians suggest that the person who used the nebulizer rinse his or her mouth. Rinsing is not needed after bronchodilators; however, patients may still wish to rinse their mouths due to the unpleasant taste of some bronchodilating drugs.
History
The first "powered" or pressurized inhaler was invented in France by Sales-Girons in 1858. This device used pressure to atomize the liquid medication. Its pump handle was operated like a bicycle pump: when the pump was pulled up, it drew liquid from the reservoir, and under the force of the user's hand the liquid was pressurized through an atomizer, to be sprayed out for inhalation near the user's mouth.
In 1864, the first steam-driven nebulizer was invented in Germany. This inhaler, known as "Siegle's steam spray inhaler", used the Venturi principle to atomize liquid medication, and this was the very beginning of nebulizer therapy. The importance of droplet size was not yet understood, so the efficacy of this first device was unfortunately mediocre for many of the medical compounds. The Siegle steam spray inhaler consisted of a spirit burner, which boiled water in the reservoir into steam that could then flow across the top and into a tube suspended in the pharmaceutical solution. The passage of steam drew the medicine into the vapor, and the patient inhaled this vapor through a mouthpiece made of glass.
The first pneumatic nebulizer fed from an electrically driven gas (air) compressor was invented in the 1930s and called a Pneumostat. With this device, a medical liquid (typically epinephrine chloride, used as a bronchial muscle relaxant to reverse constriction) was nebulized for inhalation. As an alternative to the expensive electrical nebulizer, many people in the 1930s continued to use the much simpler and cheaper hand-driven nebulizer, known as the Parke-Davis Glaseptic.
In 1956, a technology competing against the nebulizer was launched by Riker Laboratories (3M), in the form of pressurized metered-dose inhalers, with Medihaler-Iso (isoprenaline) and Medihaler-Epi (epinephrine) as the first two products. In these devices, the drug is cold-filled and delivered in exact doses through special metering valves, driven by a gas propellant (e.g., Freon or a less environmentally damaging HFA).
In 1964, a new type of electronic nebulizer was introduced: the "ultrasonic wave nebulizer". Today the nebulizing technology is not only used for medical purposes. Ultrasonic wave nebulizers are also used in humidifiers, to spray out water aerosols to moisten dry air in buildings.
Some of the first models of electronic cigarettes featured an ultrasonic wave nebulizer (a piezoelectric element vibrating at high ultrasonic frequency to cause vibration and atomization of liquid nicotine) in combination with a vaporizer (built as a spray nozzle with an electric heating element). The most common types of electronic cigarettes currently sold, however, omit the ultrasonic wave nebulizer, as it was not found to be efficient enough for this kind of device. Instead, electronic cigarettes now use an electric vaporizer, either in direct contact with the absorbent material in the "impregnated atomizer", or in combination with nebulization in a "spraying jet atomizer" (liquid droplets sprayed out by a high-speed air stream that passes through small venturi injection channels drilled in a material soaked with nicotine liquid).
See also
Heated humidified high-flow therapy
Inhaler
Humidifier
Vaporizer
List of medical inhalants
Spray bottle
References
Aerosols
Respiratory therapy
Drug delivery devices
Medical equipment
Dosage forms | Nebulizer | [
"Chemistry",
"Biology"
] | 2,831 | [
"Pharmacology",
"Drug delivery devices",
"Colloids",
"Medical equipment",
"Aerosols",
"Medical technology"
] |
619,215 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20complex | A cyclin-dependent kinase complex (CDKC, cyclin-CDK) is a protein complex formed by the association of an inactive catalytic subunit of a protein kinase, cyclin-dependent kinase (CDK), with a regulatory subunit, cyclin. Once cyclin-dependent kinases bind to cyclin, the formed complex is in an activated state. Substrate specificity of the activated complex is mainly established by the associated cyclin within the complex. Activity of CDKCs is controlled by phosphorylation of target proteins, as well as binding of inhibitory proteins.
Structure and Regulation
The structure of CDKs in complex with cyclin subunits (CDKCs) has long been a goal of structural and cellular biologists, starting in the 1990s when the structure of unbound cyclin A was solved by Brown et al.; in the same year, Jeffery et al. solved the structure of the human cyclin A-CDK2 complex to 2.3 Å resolution. Since that time, many CDK structures have been determined at higher resolution, including the structures of CDK2 and of CDK2 bound to a variety of substrates, as seen in Figure 1.
High-resolution structures exist for approximately 25 CDK-cyclin complexes in total within the Protein Data Bank. Based on function, there are two general populations of CDK-cyclin complex structures: open and closed form. The difference between the forms lies in the binding of cyclin partners: closed-form complexes have CDK-cyclin binding at both the C- and N-termini of the activation loop of the CDK, whereas in open-form complexes the partners bind only at the N-terminus. Open-form structures correspond most often to complexes involved in transcriptional regulation (CDK 8, 9, 12, and 13), while closed-form CDK-cyclin complexes are most often involved in cell cycle progression and regulation (CDK 1, 2, 6). These distinct roles, however, are not reflected in significant differences in sequence homology between the CDK components. In particular, among these known structures there appear to be four major conserved regions: an N-terminal glycine-rich loop, a hinge region, an αC-helix, and a T-loop regulation site.
Activation Loop
The activation loop, also referred to as the T-loop, is the region of the CDK (between the DFG and APE motifs in many CDKs) that is enzymatically active when the CDK is bound to its function-specific partner. In CDK-cyclin complexes, this activation region is composed of a conserved αL-12 helix and contains a key phosphorylatable residue (usually threonine for CDK-cyclin partners, but in some cases serine or tyrosine) that mediates the enzymatic activity of the CDK. It is at this essential residue (T160 in CDK2 complexes, T177 in CDK6 complexes) that ATP-dependent phosphorylation of CDK-cyclin complexes by CAK (cyclin-activating kinase, referring to the CDK7-cyclin H complex in human cells) takes place. After ATP is hydrolyzed to phosphorylate this site, the complexes are able to complete their intended function, the phosphorylation of cellular targets. It is important to note that in CDK 1, 2 and 6, the T-loop and a separate C-terminal region are the major sites of cyclin binding in the CDK, and which cyclins are bound to each of these CDKs is mediated by the particular sequence of the activation-site T-loop. These cyclin-binding sites are the regions of highest variability in CDKs, despite relatively high sequence homology surrounding the αL-12 helix motif of this structural component.
Glycine-rich region
The glycine-rich loop (Gly-rich loop), seen at residues 12-16 in CDK2, encodes a conserved GXGXXG motif across both yeast and animal models. This regulatory region is subject to differential phosphorylation at non-glycine residues within the motif, making it a target of Wee1 and/or Myt1 inhibitory kinase phosphorylation and of Cdc25 dephosphorylation in mammals. This reversible phosphorylation of the Gly-rich loop in CDK2 occurs at Y15, where the activity has been further studied. Study of this residue has shown that phosphorylation promotes a conformational change that prevents ATP and substrate binding by sterically interfering with the necessary binding sites in the activation loop of the CDK-cyclin complex. This activity is aided by the notable flexibility of the Gly-rich loop within the structure of most CDKs, which allows its rotation toward the activation loop to significantly reduce substrate affinity without major changes in the overall CDK-cyclin complex structure.
Hinge Region
The conserved hinge region of CDKs in eukaryotic cells acts as an essential bridge between the Gly-rich loop and the activation loop. CDKs are characterized by an N-terminal lobe, consisting primarily of twisted beta-sheet, connected via this hinge region to an alpha-helix-dominated C-terminal lobe. In discussing the T-loop and the Gly-rich loop, it is important to note that these regions, which must be able to interact spatially in order to carry out their biochemical functions, lie on opposite lobes of the CDK itself. The hinge region, which can vary slightly in length between CDK types and CDK-cyclin complexes, thus connects essential regulatory regions of the CDK by joining these lobes, and it plays a key role in the resulting structure of CDK-cyclin complexes by properly orienting ATP for catalysis of phosphorylation reactions by the assembled complex.
αC-Helix
The αC-helix region is highly conserved across much of the mammalian kinome (the family of kinases). Its main responsibility is to maintain allosteric control of the kinase active site. In CDK-cyclin complexes, this control manifests as prevention of CDK activity until the CDK binds its partner regulator (i.e., cyclin or another partner protein). This binding causes a conformational change in the αC-helix region of the CDK, moving it out of the active-site cleft and completing the initial process of T-loop activation. Given that this region is so conserved across the protein superfamily of kinases, this mechanism, in which the αC-helix folds out of the N-terminal lobe of the kinase and thereby increases access to the αL-12 helix within the T-loop, is considered a potential target for drug development.
The cell cycle
Yeast cell cycle
Although these complexes have a variety of functions, CDKCs are most known for their role in the cell cycle. Initially, studies were conducted in Schizosaccharomyces pombe and Saccharomyces cerevisiae (yeast). S. pombe and S. cerevisiae are most known for their association with a single Cdk, Cdc2 and Cdc28 respectively, which complexes with several different cyclins. Depending on the cyclin, various portions of the cell cycle are affected. For example, in S. pombe, Cdc2 associates with the cyclin Cdc13 to form the Cdc13-Cdc2 complex. In S. cerevisiae, the association of Cdc28 with the cyclins Cln1, Cln2, or Cln3 results in the transition from G1 phase to S phase. Once in S phase, Cln1 and Cln2 dissociate from Cdc28 and complexes between Cdc28 and Clb5 or Clb6 are formed. In G2 phase, complexes formed from the association between Cdc28 and Clb1, Clb2, Clb3, or Clb4 result in the progression from G2 phase to M (mitotic) phase. These complexes are present in early M phase as well. See Table 1 for a summary of yeast CDKCs.
Table 1. CDKCs Associated with Cell Cycle Phases in Yeast

Cell cycle phase | CDK | Cyclin(s) | Complex(es)
G1 → S (S. cerevisiae) | Cdc28 | Cln1, Cln2, Cln3 | Cln1/2/3-Cdc28
S (S. cerevisiae) | Cdc28 | Clb5, Clb6 | Clb5/6-Cdc28
G2 → M and early M (S. cerevisiae) | Cdc28 | Clb1, Clb2, Clb3, Clb4 | Clb1-4-Cdc28
M (S. pombe) | Cdc2 | Cdc13 | Cdc13-Cdc2
From what is known about the complexes formed during each phase of the cell cycle in yeast, proposed models have emerged based on important phosphorylation sites and transcription factors involved.
Mammalian cell cycle
Using the information discovered through yeast cell cycle studies, significant progress has been made in understanding the mammalian cell cycle. It has been determined that the cell cycles are similar and that CDKCs, either directly or indirectly, affect the progression of the cell cycle. As previously mentioned, in yeast only one cyclin-dependent kinase (CDK) associates with several different cyclins. In mammalian cells, however, several different CDKs bind various cyclins to form CDKCs. For instance, Cdk1 (also known as human Cdc2), the first human CDK to be identified, associates with cyclin A or B. Cyclin A/B-Cdk1 complexes drive the transition between G2 phase and M phase, as well as early M phase. Another mammalian CDK, Cdk2, can form complexes with cyclin D1, D2, D3, E, or A. Cdk4 and Cdk6 interact with cyclins D1, D2, and D3. Studies have indicated that there is no functional difference between the cyclin D1-Cdk4 and cyclin D1-Cdk6 complexes; therefore, any unique properties can possibly be linked to substrate specificity or activation. While levels of CDKs remain fairly constant throughout the cell cycle, cyclin levels fluctuate. This fluctuation controls the activation of the cyclin-CDK complexes and ultimately progression through the cycle. See Table 2 for a summary of mammalian CDKCs involved in the cell cycle.
Table 2. CDKCs Associated with Cell Cycle Phases in Mammalian Cells

Cell cycle phase | CDK | Cyclin(s) | Complex(es)
G1 | Cdk4, Cdk6 | D1, D2, D3 | cyclin D-Cdk4, cyclin D-Cdk6
G1 → S | Cdk2 | E | cyclin E-Cdk2
S | Cdk2, Cdk1 | A | cyclin A-Cdk2, cyclin A-Cdk1
G2 → M and early M | Cdk1 | A, B | cyclin A/B-Cdk1
G1 to S phase progression
During late G1 phase, CDKCs bind and phosphorylate members of the retinoblastoma (Rb) protein family. Members of the Rb protein family are tumor suppressors that prevent the uncontrolled cell proliferation that would occur during tumor formation. pRbs are also thought to repress the genes required for the transition from G1 phase to S phase. When the cell is ready to transition into the next phase, the CDKCs cyclin D1-Cdk4 and cyclin D1-Cdk6 phosphorylate pRb, followed by additional phosphorylation by the cyclin E-Cdk2 CDKC. Once this phosphorylation has irreversibly inactivated pRb, the transcription factors it holds are released and progression into the S phase of the cell cycle ensues. The cyclin E-Cdk2 CDKC formed in the G1 phase then aids in the initiation of DNA replication during S phase.
G2 to M phase progression
At the end of S phase, cyclin A is associated with Cdk1 and Cdk2. During G2 phase, cyclin A is degraded, while cyclin B is synthesized and cyclin B-Cdk1 complexes form. Not only are cyclin B-Cdk1 complexes important for the transition into M phase, but these CDKCs play a role in the following regulatory and structural processes:
Chromosomal condensation
Fragmentation of Golgi network
Breakdown of nuclear lamina
Inactivation of the cyclin B-Cdk1 complex through the degradation of cyclin B is necessary for exit out of the M phase of the cell cycle.
Other
Even though the majority of the known CDKCs are involved in the cell cycle, not all kinase complexes function in this manner. Studies have shown that other CDKCs, such as cyclin K-Cdk9 and cyclin T1-Cdk9, are involved in the replication stress response and influence transcription. Additionally, cyclin H-Cdk7 complexes may play a role in meiosis in male germ cells and have been shown to be involved in transcriptional activities as well.
See also
Cyclin
Cyclin-dependent kinase
Cyclin D/Cdk4
Cyclin E/Cdk2
References
Cell cycle | Cyclin-dependent kinase complex | [
"Biology"
] | 2,620 | [
"Cell cycle",
"Cellular processes"
] |
619,632 | https://en.wikipedia.org/wiki/Transfection | Transfection is the process of deliberately introducing naked or purified nucleic acids into eukaryotic cells. It may also refer to other methods and cell types, although other terms are often preferred: "transformation" is typically used to describe non-viral DNA transfer in bacteria and non-animal eukaryotic cells, including plant cells. In animal cells, transfection is the preferred term, as the term "transformation" is also used to refer to a cell's progression to a cancerous state (carcinogenesis). Transduction is often used to describe virus-mediated gene transfer into prokaryotic cells.
The word transfection is a portmanteau of the prefix trans- and the word "infection." Genetic material (such as supercoiled plasmid DNA or siRNA constructs), may be transfected. Transfection of animal cells typically involves opening transient pores or "holes" in the cell membrane to allow the uptake of material. Transfection can be carried out using calcium phosphate (i.e. tricalcium phosphate), by electroporation, by cell squeezing, or by mixing a cationic lipid with the material to produce liposomes that fuse with the cell membrane and deposit their cargo inside.
Transfection can result in unexpected morphologies and abnormalities in target cells.
Terminology
The meaning of the term has evolved. The original meaning of transfection was "infection by transformation", i.e., introduction of genetic material, DNA or RNA, from a prokaryote-infecting virus or bacteriophage into cells, resulting in an infection. For work with bacterial and archaeal cells transfection retains its original meaning as a special case of transformation. Because the term transformation had another sense in animal cell biology (a genetic change allowing long-term propagation in culture, or acquisition of properties typical of cancer cells), the term transfection acquired, for animal cells, its present meaning of a change in cell properties caused by introduction of DNA.
Methods
There are various methods of introducing foreign DNA into a eukaryotic cell: some rely on physical treatment (electroporation, cell squeezing, nanoparticles, magnetofection); others rely on chemical materials or biological particles (viruses) that are used as carriers. There are many different methods of gene delivery developed for various types of cells and tissues, from bacterial to mammalian. Generally, the methods can be divided into three categories: physical, chemical, and biological.
Physical methods include electroporation, microinjection, gene gun, impalefection, hydrostatic pressure, continuous infusion, and sonication. Chemicals include methods such as lipofection, which is a lipid-mediated DNA-transfection process utilizing liposome vectors. It can also include the use of polymeric gene carriers (polyplexes). Biological transfection is typically mediated by viruses, utilizing the ability of a virus to inject its DNA inside a host cell. A gene that is intended for delivery is packaged into a replication-deficient viral particle. Viruses used to date include retrovirus, lentivirus, adenovirus, adeno-associated virus, and herpes simplex virus.
Physical methods
Physical methods are the conceptually simplest, using some physical means to force the transfected material into the target cell's nucleus. The most widely used physical method is electroporation, where short electrical pulses disrupt the cell membrane, allowing the transfected nucleic acids to enter the cell. Other physical methods use different means to poke holes in the cell membrane: Sonoporation uses high-intensity ultrasound (attributed mainly to the cavitation of gas bubbles interacting with nearby cell membranes), optical transfection uses a highly focused laser to form a ~1 μm diameter hole.
Several methods use tools that force the nucleic acid into the cell, namely: microinjection of nucleic acid with a fine needle; biolistic particle delivery, in which nucleic acid is attached to heavy metal particles (usually gold) and propelled into the cells at high speed; and magnetofection, where nucleic acids are attached to magnetic iron oxide particles and driven into the target cells by magnets.
Hydrodynamic delivery is a method used in mice and rats, in which nucleic acids can be delivered to the liver by injecting a relatively large volume in the blood in less than 10 seconds; nearly all of the DNA is expressed in the liver by this procedure.
Chemical methods
Chemical-based transfection can be divided into several kinds: cyclodextrin, polymers, liposomes, or nanoparticles (with or without chemical or viral functionalization; see below).
One of the cheapest methods uses calcium phosphate, originally discovered by F. L. Graham and A. J. van der Eb in 1973. HEPES-buffered saline solution (HeBS) containing phosphate ions is combined with a calcium chloride solution containing the DNA to be transfected. When the two are combined, a fine precipitate of the positively charged calcium and the negatively charged phosphate will form, binding the DNA to be transfected on its surface. The suspension of the precipitate is then added to the cells to be transfected (usually a cell culture grown in a monolayer). By a process not entirely understood, the cells take up some of the precipitate, and with it, the DNA. This process has been a preferred method of identifying many oncogenes.
Another method is the use of cationic polymers such as DEAE-dextran or polyethylenimine (PEI). The negatively charged DNA binds to the polycation and the complex is taken up by the cell via endocytosis.
Lipofection (or liposome transfection) is a technique used to inject genetic material into a cell by means of liposomes, which are vesicles that can easily merge with the cell membrane since they are both made of a phospholipid bilayer. Lipofection generally uses a positively charged (cationic) lipid (cationic liposomes or mixtures) to form an aggregate with the negatively charged (anionic) genetic material. This transfection technology performs the same tasks as other biochemical procedures utilizing polymers, DEAE-dextran, calcium phosphate, and electroporation. The efficiency of lipofection can be improved by treating transfected cells with a mild heat shock.
Fugene is a series of widely used proprietary non-liposomal transfection reagents capable of directly transfecting a wide variety of cells with high efficiency and low toxicity.
Dendrimers are a class of highly branched molecules based on various building blocks and synthesized through a convergent or a divergent method. These dendrimers bind the nucleic acids to form dendriplexes that then penetrate the cells.
Viral methods
DNA can also be introduced into cells using viruses as a carrier. In such cases, the technique is called transduction, and the cells are said to be transduced. Adenoviral vectors can be useful for viral transfection methods because they can transfer genes into a wide variety of human cells and have high transfer rates. Lentiviral vectors are also helpful due to their ability to transduce cells not currently undergoing mitosis.
Protoplast fusion is a technique in which transformed bacterial cells are treated with lysozyme in order to remove the cell wall. Following this, fusogenic agents (e.g., Sendai virus, PEG, electroporation) are used in order to fuse the protoplast carrying the gene of interest with the target recipient cell. A major disadvantage of this method is that bacterial components are non-specifically introduced into the target cell as well.
Stable and transient transfection
Stable and transient transfection differ in their long term effects on a cell; a stably transfected cell will continuously express transfected DNA and pass it on to daughter cells, while a transiently transfected cell will express transfected DNA for a short amount of time and not pass it on to daughter cells.
For some applications of transfection, it is sufficient if the transfected genetic material is only transiently expressed. Since the DNA introduced in the transfection process is usually not integrated into the nuclear genome, the foreign DNA will be diluted through mitosis or degraded. Cell lines expressing the Epstein–Barr virus (EBV) nuclear antigen 1 (EBNA1) or the SV40 large-T antigen allow episomal amplification of plasmids containing the viral EBV (293E) or SV40 (293T) origins of replication, greatly reducing the rate of dilution.
If it is desired that the transfected gene actually remain in the genome of the cell and its daughter cells, a stable transfection must occur. To accomplish this, a marker gene is co-transfected, which gives the cell some selectable advantage, such as resistance towards a certain toxin. Some (very few) of the transfected cells will, by chance, have integrated the foreign genetic material into their genome. If the toxin is then added to the cell culture, only those few cells with the marker gene integrated into their genomes will be able to proliferate, while other cells will die. After applying this selective stress (selection pressure) for some time, only the cells with a stable transfection remain and can be cultivated further.
Common agents for selecting stable transfection are:
Geneticin, or G418, neutralized by the product of the neomycin resistance gene
Puromycin
Zeocin
Hygromycin B
Blasticidin S
RNA transfection
RNA can also be transfected into cells to transiently express its coded protein, or to study RNA decay kinetics. RNA transfection is often used in primary cells that do not divide.
siRNAs can also be transfected to achieve RNA silencing (i.e., loss of RNA and protein from the targeted gene). This has become a major application in research to achieve "knock-down" of proteins of interest (e.g., endothelin-1), with potential applications in gene therapy. Limitations of the silencing approach are the toxicity of the transfection for cells and potential "off-target" effects on the expression of other genes/proteins.
RNA can be purified from cells after lysis or synthesized from free nucleotides either chemically, or enzymatically using an RNA polymerase to transcribe a DNA template. As with DNA, RNA can be delivered to cells by a variety of means including microinjection, electroporation, and lipid-mediated transfection. If the RNA encodes a protein, transfected cells may translate the RNA into the encoded protein. If the RNA is a regulatory RNA (such as a miRNA), the RNA may cause other changes in the cell (such as RNAi-mediated knockdown).
Encapsulating the RNA molecule in lipid nanoparticles was a breakthrough for producing viable RNA vaccines, solving a number of key technical barriers in delivering the RNA molecule into the human cell.
RNA molecules shorter than about 25nt (nucleotides) largely evade detection by the innate immune system, which is triggered by longer RNA molecules. Most cells of the body express proteins of the innate immune system, and upon exposure to exogenous long RNA molecules, these proteins initiate signaling cascades that result in inflammation. This inflammation hypersensitizes the exposed cell and nearby cells to subsequent exposure. As a result, while a cell can be repeatedly transfected with short RNA with few non-specific effects, repeatedly transfecting cells with even a small amount of long RNA can cause cell death unless measures are taken to suppress or evade the innate immune system (see "Long-RNA transfection" below).
Short-RNA transfection is routinely used in biological research to knock down the expression of a protein of interest (using siRNA) or to express or block the activity of a miRNA (using short RNA that acts independently of the cell's RNAi machinery, and therefore is not referred to as siRNA). While DNA-based vectors (viruses, plasmids) that encode a short RNA molecule can also be used, short-RNA transfection does not risk modification of the cell's DNA, a characteristic that has led to the development of short RNA as a new class of macromolecular drugs.
Long-RNA transfection is the process of deliberately introducing RNA molecules longer than about 25nt into living cells. A distinction is made between short- and long-RNA transfection because exogenous long RNA molecules elicit an innate immune response in cells that can cause a variety of nonspecific effects including translation block, cell-cycle arrest, and apoptosis.
Endogenous vs. exogenous long RNA
The innate immune system has evolved to protect against infection by detecting pathogen-associated molecular patterns (PAMPs), and triggering a complex set of responses collectively known as "inflammation". Many cells express specific pattern recognition receptors (PRRs) for exogenous RNA including toll-like receptors 3, 7, and 8 (TLR3, TLR7, TLR8), the RNA helicase RIG-I (DDX58), protein kinase R (PKR, a.k.a. EIF2AK2), members of the oligoadenylate synthetase family of proteins (OAS1, OAS2, OAS3), and others. All of these proteins can specifically bind to exogenous RNA molecules and trigger an immune response.
The specific chemical, structural or other characteristics of long RNA molecules that are required for recognition by PRRs remain largely unknown despite intense study. At any given time, a typical mammalian cell may contain several hundred thousand mRNA and other, regulatory long RNA molecules. How cells distinguish exogenous long RNA from the large amount of endogenous long RNA is an important open question in cell biology. Several reports suggest that phosphorylation of the 5'-end of a long RNA molecule can influence its immunogenicity, and specifically that 5'-triphosphate RNA, which can be produced during viral infection, is more immunogenic than 5'-diphosphate RNA, 5'-monophosphate RNA or RNA containing no 5' phosphate. However, in vitro-transcribed (ivT) long RNA containing a 7-methylguanosine cap (present in eukaryotic mRNA) is also highly immunogenic despite having no 5' phosphate, suggesting that characteristics other than 5'-phosphorylation can influence the immunogenicity of an RNA molecule.
Eukaryotic mRNA contains chemically modified nucleotides such as N6-methyladenosine, 5-methylcytidine, and 2'-O-methylated nucleotides. Although only a very small number of these modified nucleotides are present in a typical mRNA molecule, they may help prevent mRNA from activating the innate immune system by disrupting secondary structure that would resemble double-stranded RNA (dsRNA), a type of RNA thought to be present in cells only during viral infection.
The immunogenicity of long RNA has been used to study both innate and adaptive immunity.
Repeated long-RNA transfection
Inhibiting only three proteins, interferon-β, STAT2, and EIF2AK2, is sufficient to rescue human fibroblasts from the cell death caused by frequent transfection with long, protein-encoding RNA. Inhibiting interferon signaling disrupts the positive-feedback loop that normally hypersensitizes cells exposed to exogenous long RNA. Researchers have recently used this technique to express reprogramming proteins in primary human fibroblasts.
See also
Gene targeting
Minicircle
Protofection
Transformation
Transduction
Transgene
Vector (molecular biology)
Viral vector
References
Further reading
External links
Biology Research Resource — Articles and Forums about Transfection
Research in optical transfection at the University of St Andrews
The 10th US-Japan Symposium on Drug Delivery Systems
Molecular biology
Gene delivery
Applied genetics
Biotechnology | Transfection | [
"Chemistry",
"Biology"
] | 3,393 | [
"Genetics techniques",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Gene delivery"
] |
620,355 | https://en.wikipedia.org/wiki/RuvABC | RuvABC is a complex of three proteins that mediate branch migration and resolve the Holliday junction created during homologous recombination in bacteria. As such, RuvABC is critical to bacterial DNA repair.
RuvA and RuvB bind to the four strand DNA structure formed in the Holliday junction intermediate, and migrate the strands through each other, using a putative spooling mechanism. The RuvAB complex can carry out DNA helicase activity, which helps unwind the duplex DNA. The binding of the RuvC protein to the RuvAB complex is thought to cleave the DNA strands, thereby resolving the Holliday junction.
Protein complex
RuvABC is a complex of three proteins that resolve the Holliday junction formed during bacterial homologous recombination. In Escherichia coli bacteria, DNA replication forks stall at least once per cell cycle, so that DNA replication must be restarted if the cell is to survive. Replication restart is a multi-step process in E. coli that requires the sequential action of several proteins. When the progress of the replication fork is impeded, the single-stranded DNA-binding protein SSB and the RecG helicase, along with the RuvABC complex, are required for rescue. The resolution of Holliday junctions that accumulate following replication on damaged DNA templates in E. coli requires the RuvABC complex.
RuvA
RuvA (Holliday junction branch migration complex subunit RuvA) is a DNA-binding protein that binds Holliday junctions with high affinity. The structure of the complex has been variously elucidated through X-ray crystallography and EM data, which suggest that the complex consists of either one or two RuvA tetramers, with charge-lined grooves through which the incoming DNA is channelled. The structure also showed the presence of so-called 'acidic pins' in the centre of the tetramer, which serve to separate the DNA duplexes. Its crystal structure has been solved at 1.9 Å resolution.
RuvB
RuvB (Holliday junction branch migration complex subunit RuvB) is an ATPase that is active only in the presence of DNA; compared to RuvA, RuvB has a low affinity for DNA. The RuvB proteins are thought to form hexameric rings at the exit points of the newly formed DNA duplexes, and it is proposed that they 'spool' the emerging DNA through the RuvA tetramer.
RuvC
RuvC (Crossover junction endodeoxyribonuclease RuvC) is the resolvase, which cleaves the Holliday junction. RuvC proteins have been shown to form dimers in solution, and the structure has been solved at 2.5 Å resolution. RuvC is thought to bind either on the open, DNA-exposed face of a single RuvA tetramer, or to replace one of the two tetramers. Binding is proposed to be mediated by an unstructured loop on RuvC, which becomes structured on binding RuvA. RuvC can be bound to the complex in either orientation, and can therefore resolve Holliday junctions in either a horizontal or vertical manner.
See also
RecBCD
References
Further reading
Eggleston AK, Mitchell AH, and West SC (1997). "In Vitro Reconstitution of the Late Steps of Genetic Recombination in E. coli". Cell. 89: 607–617.
External links
Proteins | RuvABC | [
"Chemistry"
] | 711 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
1,606,353 | https://en.wikipedia.org/wiki/Kroll%20process | The Kroll process is a pyrometallurgical industrial process used to produce metallic titanium from titanium tetrachloride. As of 2001 William Justin Kroll's process replaced the Hunter process for almost all commercial production.
Process
In the Kroll process, titanium tetrachloride is reduced by liquid magnesium to give titanium metal:
TiCl4 + 2 Mg → Ti + 2 MgCl2    (825 °C)
The reduction is conducted at 800–850 °C in a stainless steel retort. Complications result from partial reduction of the TiCl4, giving the lower chlorides TiCl2 and TiCl3. The MgCl2 can be further refined back to magnesium.
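Because the equation fixes a 2:1 molar ratio of magnesium to titanium, the magnesium requirement can be estimated directly from standard atomic weights. The following is a back-of-the-envelope sketch (assuming complete reaction and no excess reagent; the variable names are illustrative):

# Molar masses in g/mol (standard atomic weights)
M_TI, M_MG, M_CL = 47.87, 24.31, 35.45
M_MGCL2 = M_MG + 2 * M_CL            # ~95.2 g/mol

# TiCl4 + 2 Mg -> Ti + 2 MgCl2: 2 mol Mg consumed per mol Ti produced
mol_ti_per_kg = 1000.0 / M_TI        # ~20.9 mol Ti in 1 kg
mg_consumed = 2 * mol_ti_per_kg * M_MG / 1000.0      # kg Mg per kg Ti
mgcl2_formed = 2 * mol_ti_per_kg * M_MGCL2 / 1000.0  # kg MgCl2 per kg Ti

print(f"Per 1 kg Ti: about {mg_consumed:.2f} kg Mg consumed "
      f"and {mgcl2_formed:.2f} kg MgCl2 formed (available for recycling)")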
Appurtenant processes
The resulting porous metallic titanium sponge is purified by leaching or vacuum distillation. The sponge is crushed and pressed before it is melted in a consumable carbon electrode vacuum arc furnace, "backfilled with pure gettered argon of a pressure high enough to avoid a glow discharge". The melted ingot is allowed to solidify under vacuum. It is often remelted to remove inclusions and ensure uniformity. These melting steps add to the cost of the product. Titanium is about six times as expensive as stainless steel: Potter noted in 2023 that "Titanium is just fundamentally difficult and expensive to deal with"; turning titanium ingots into bars and sheets is a challenge due to titanium's reactivity, since it readily absorbs impurities, requiring "frequent surface removal and trimming to eliminate surface defects", which are "costly and involve significant yield loss". The appurtenant processes that turn Kroll's sponge into useful metal have "changed little since the 1950s."
History and subsequent developments
Many methods had been applied to the production of titanium metal, beginning with a report in 1887 by Nilson and Pettersson using sodium, which was optimized into the commercial Hunter process. In this process (which ceased to be commercial in the 1990s), TiCl4 is reduced to the metal by sodium.
In the 1920s Anton Eduard van Arkel working for Philips NV had described the thermal decomposition of titanium tetraiodide to give highly pure titanium.
Titanium tetrachloride was found to reduce with hydrogen at high temperatures to give hydrides that can be thermally processed to the pure metal.
With these three ideas as background, Kroll, working in Luxembourg, developed both new reductants and new apparatus for the reduction of titanium tetrachloride. The compound's high reactivity toward trace amounts of water and other metal oxides presented challenges. Significant success came with the use of calcium as a reductant, but the resulting mixture still contained significant oxide impurities. Major success, using magnesium at 1000 °C in a molybdenum-clad reactor, was reported by Kroll to the Electrochemical Society in Ottawa. Kroll's titanium was highly ductile, reflecting its high purity.
The Kroll process displaced the Hunter process and continues to be the dominant technology for the production of titanium metal, as well as driving the majority of the world's production of magnesium metal.
After moving to the United States, Kroll further developed the method for the production of zirconium at the Albany Research Center.
See also
Chloride process
References
Further reading
P.Kar, Mathematical modeling of phase change electrodes with application to the FFC process, PhD thesis; UC, Berkeley, 2007.
External links
Titanium: Kroll Method: YouTube video uploaded by Innovations in Manufacturing at Oak Ridge National Laboratory
Industrial processes
Chemical processes
Zirconium
Titanium processes
Metallurgical processes
Materials science
20th-century inventions
Luxembourgish inventions | Kroll process | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 770 | [
"Applied and interdisciplinary physics",
"Metallurgical processes",
"Metallurgy",
"Materials science",
"Titanium processes",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
1,607,648 | https://en.wikipedia.org/wiki/Postulates%20of%20special%20relativity | Albert Einstein derived the theory of special relativity in 1905, from principle now called the postulates of special relativity. Einstein's formulation is said to only require two postulates, though his derivation implies a few more assumptions.
The idea that special relativity depended only on two postulates, both of which seemed to follow from the theory and experiment of the day, was one of the most compelling arguments for the correctness of the theory (Einstein 1912: "This theory is correct to the extent to which the two principles upon which it is based are correct. Since these seem to be correct to a great extent, ...")
Postulates of special relativity
1. First postulate (principle of relativity)
The laws of physics take the same form in all inertial frames of reference.
2. Second postulate (invariance of c)
As measured in any inertial frame of reference, light is always propagated in empty space with a definite velocity c that is independent of the state of motion of the emitting body. Or: the speed of light in free space has the same value c in all inertial frames of reference.
The two-postulate basis for special relativity is the one historically used by Einstein, and it is sometimes the starting point today. As Einstein himself later acknowledged, the derivation of the Lorentz transformation tacitly makes use of some additional assumptions, including spatial homogeneity, isotropy, and memorylessness. Hermann Minkowski also implicitly used both postulates when he introduced the Minkowski space formulation, even though he showed that c can be seen as a space-time constant, and the identification with the speed of light is derived from optics.
Alternative derivations of special relativity
Historically, Hendrik Lorentz and Henri Poincaré (1892–1905) derived the Lorentz transformation from Maxwell's equations, which served to explain the negative result of all aether drift measurements. By that the luminiferous aether becomes undetectable, in agreement with what Poincaré called the principle of relativity (see History of Lorentz transformations and Lorentz ether theory). A more modern example of deriving the Lorentz transformation from electrodynamics (without using the historical aether concept at all) was given by Richard Feynman.
George Francis FitzGerald already made an argument similar to Einstein's in 1889, in response to the Michelson-Morley experiment seeming to show both postulates to be true. He wrote that a length contraction is "almost the only hypothesis that can reconcile" the apparent contradictions. Lorentz independently came to similar conclusions, and later wrote "the chief difference being that Einstein simply postulates what we have deduced".
Following these derivations, many alternative derivations have been proposed, based on various sets of assumptions. It has often been argued (such as by Vladimir Ignatowski in 1910, or Philipp Frank and Hermann Rothe in 1911, and many others in subsequent years) that a formula equivalent to the Lorentz transformation, up to a non-negative free parameter, follows from just the relativity postulate itself, without first postulating the universal light speed. These formulations rely on the aforementioned various assumptions such as isotropy. The numerical value of the parameter in these transformations can then be determined by experiment, just as the numerical values of the parameter pair c and the vacuum permittivity are left to be determined by experiment even when using Einstein's original postulates. Experiment rules out the validity of the Galilean transformations. When the numerical values in both Einstein's and other approaches have been found, these different approaches result in the same theory.
Insufficiency of the two standard postulates
Einstein's 1905 derivation is not complete. A break in Einstein's logic occurs where, after having established "the law of the constancy of the speed of light" for empty space, he invokes the law in situations where space is no longer empty. For the derivation to apply to physical objects requires an additional postulate or "bridging hypothesis", that the geometry derived for empty space also applies when a space is populated. This would be equivalent to stating that we know that the introduction of matter into a region, and its relative motion, have no effect on lightbeam geometry.
Such a statement would be problematic, as Einstein rejected the notion that a process such as light-propagation could be immune to other factors (1914: "There can be no doubt that this principle is of far-reaching significance; and yet, I cannot believe in its exact validity. It seems to me unbelievable that the course of any process (e.g., that of the propagation of light in a vacuum) could be conceived of as independent of all other events in the world.")
Including this "bridge" as an explicit third postulate might also have damaged the theory's credibility, as refractive index and the Fizeau effect would have suggested that the presence and behaviour of matter does seem to influence light-propagation, contra the theory. If this bridging hypothesis had been stated as a third postulate, it could have been claimed that the third postulate (and therefore the theory) were falsified by the experimental evidence.
The 1905 system as "null theory"
Without a "bridging hypothesis" as a third postulate, the 1905 derivation is open to the criticism that its derived relationships may only apply in vacuo, that is, in the absence of matter.
The controversial suggestion that the 1905 theory, derived by assuming empty space, might only apply to empty space, appears in Edwin F. Taylor and John Archibald Wheeler's book "Spacetime Physics" (Box 3-1: "The Principle of Relativity Rests on Emptiness").
A similar suggestion that the reduction of GR geometry to SR's flat spacetime over small regions may be "unphysical" (because flat pointlike regions cannot contain matter capable of acting as physical observers) was acknowledged but rejected by Einstein in 1914 ("The equations of the new theory of relativity reduce to those of the original theory in the special case where the gμν can be considered constant ... the sole objection that can be raised against the theory is that the equations we have set up might, perhaps, be void of any physical content. But no one is likely to think in earnest that this objection is justified in the present case").
Einstein revisited the problem in 1919 ("It is by no means settled a priori that a limiting transition of this kind has any possible meaning. For if gravitational fields do play an essential part in the structure of the particles of matter, the transition to the limiting case of constant gμν would, for them, lose its justification, for indeed, with constant gμν there could not be any particles of matter.")
A further argument for unphysicality can be gleaned from Einstein's solution to the "hole problem" under general relativity, in which Einstein rejects the physicality of coordinate-system relationships in truly empty space.
Alternative relativistic models
Einstein's special theory is not the only theory that combines a form of light speed constancy with the relativity principle. A theory along the lines of that proposed by Heinrich Hertz (in 1890) allows for light to be fully dragged by all objects, giving local c-constancy for all physical observers. The logical possibility of a Hertzian theory shows that Einstein's two standard postulates (without the bridging hypothesis) are not sufficient to allow us to arrive uniquely at the solution of special relativity (although special relativity might be considered the most minimalist solution).
Einstein agreed that the Hertz theory was logically consistent ("It is on the basis of this hypothesis that Hertz developed an electrodynamics of moving bodies that is free of contradictions."), but dismissed it on the grounds of a poor agreement with the Fizeau result, leaving special relativity as the only remaining option. Given that SR was similarly unable to reproduce the Fizeau result without introducing additional auxiliary rules (to address the different behaviour of light in a particulate medium), this was perhaps not a fair comparison.
Mathematical formulation of the postulates
In the rigorous mathematical formulation of special relativity, we suppose that the universe exists on a four-dimensional spacetime M. Individual points in spacetime are known as events; physical objects in spacetime are described by worldlines (if the object is a point particle) or worldsheets (if the object is larger than a point). The worldline or worldsheet only describes the motion of the object; the object may also have several other physical characteristics such as energy-momentum, mass, charge, etc.
In addition to events and physical objects, there are a class of inertial frames of reference. Each inertial frame of reference provides a coordinate system for events in the spacetime M. Furthermore, this frame of reference also gives coordinates to all other physical characteristics of objects in the spacetime; for instance, it will provide coordinates for the momentum and energy of an object, coordinates for an electromagnetic field, and so forth.
We assume that given any two inertial frames of reference, there exists a coordinate transformation that converts the coordinates from one frame of reference to the coordinates in another frame of reference. This transformation not only provides a conversion for the spacetime coordinates (t, x) → (t′, x′), but will also provide a conversion for all other physical coordinates, such as a conversion law for momentum and energy (p, E) → (p′, E′), etc. (In practice, these conversion laws can be efficiently handled using the mathematics of tensors.)
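As a concrete illustration, a boost along one axis can be written as a single matrix that transforms the spacetime coordinates (ct, x, y, z) and the energy-momentum coordinates (E/c, px, py, pz) in exactly the same way. The sketch below uses arbitrary illustrative numbers and is not drawn from any of the sources cited here:

import numpy as np

def boost_x(beta: float) -> np.ndarray:
    # Lorentz boost along x with velocity v = beta * c, acting on
    # four-vectors (ct, x, y, z) or (E/c, px, py, pz).
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

L = boost_x(0.6)                          # frame moving at 0.6 c along x
event = np.array([2.0, 1.0, 0.0, 0.0])    # (ct, x, y, z)
p4 = np.array([5.0, 3.0, 0.0, 0.0])       # (E/c, px, py, pz); invariant mass 4
print("event in moving frame:        ", L @ event)
print("four-momentum in moving frame:", L @ p4)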
We also assume that the universe obeys a number of physical laws. Mathematically, each physical law can be expressed with respect to the coordinates given by an inertial frame of reference by a mathematical equation (for instance, a differential equation) which relates the various coordinates of the various objects in the spacetime. A typical example is Maxwell's equations. Another is Newton's first law.
1. First Postulate (Principle of relativity)
Under transitions between inertial reference frames, the equations of all fundamental laws of physics stay form-invariant, while all the numerical constants entering these equations preserve their values. Thus, if a fundamental physical law is expressed with a mathematical equation in one inertial frame, it must be expressed by an identical equation in any other inertial frame, provided both frames are parameterised with charts of the same type. (The caveat on charts is relaxed, if we employ connections to write the law in a covariant form.)
2. Second Postulate (Invariance of c)
There exists an absolute constant 0 < c < ∞ with the following property. If A, B are two events which have coordinates (t_A, x_A) and (t_B, x_B) in one inertial frame F, and have coordinates (t′_A, x′_A) and (t′_B, x′_B) in another inertial frame F′, then
|x_A − x_B| = c |t_A − t_B| if and only if |x′_A − x′_B| = c |t′_A − t′_B|.
Informally, the Second Postulate asserts that objects travelling at speed c in one reference frame will necessarily travel at speed c in all reference frames. This postulate is a subset of the postulates that underlie Maxwell's equations in the interpretation given to them in the context of special relativity. However, Maxwell's equations rely on several other postulates, some of which are now known to be false (e.g., Maxwell's equations cannot account for the quantum attributes of electromagnetic radiation).
The second postulate can be used to imply a stronger version of itself, namely that the spacetime interval is invariant under changes of inertial reference frame. In the above notation, this means that
c² |t_A − t_B|² − |x_A − x_B|² = c² |t′_A − t′_B|² − |x′_A − x′_B|²
for any two events A, B. This can in turn be used to deduce the transformation laws between reference frames; see Lorentz transformation.
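This invariance, and the "if and only if" statement of the second postulate for light-like separations, can be checked numerically by applying a concrete boost to coordinate differences (in units where c = 1). The numbers below are arbitrary illustrations:

import numpy as np

def boost_x(beta: float) -> np.ndarray:
    # One-dimensional Lorentz boost matrix, in units with c = 1
    g = 1.0 / np.sqrt(1.0 - beta ** 2)
    return np.array([[g, -g * beta, 0, 0],
                     [-g * beta, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

def interval(d):
    # d = (dt, dx, dy, dz); returns dt^2 - |dx|^2 (c = 1)
    return d[0] ** 2 - d[1] ** 2 - d[2] ** 2 - d[3] ** 2

L = boost_x(0.8)
timelike = np.array([3.0, 1.0, 0.5, 0.0])
lightlike = np.array([5.0, 3.0, 4.0, 0.0])  # |dx| = 5 = dt, so interval is 0

for sep in (timelike, lightlike):
    # Both printed values agree for each separation; the light-like one stays 0
    print(f"interval: {interval(sep):.3f} -> boosted: {interval(L @ sep):.3f}")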
The postulates of special relativity can be expressed very succinctly using the mathematical language of pseudo-Riemannian manifolds. The second postulate is then an assertion that the four-dimensional spacetime M is a pseudo-Riemannian manifold equipped with a metric g of signature (1,3), which is given by the Minkowski metric when measured in each inertial reference frame. This metric is viewed as one of the physical quantities of the theory; thus it transforms in a certain manner when the frame of reference is changed, and it can be legitimately used in describing the laws of physics. The first postulate is an assertion that the laws of physics are invariant when represented in any frame of reference for which g is given by the Minkowski metric. One advantage of this formulation is that it is now easy to compare special relativity with general relativity, in which the same two postulates hold but the assumption that the metric is required to be Minkowski is dropped.
The theory of Galilean relativity is the limiting case of special relativity in the limit c → ∞ (which is sometimes referred to as the non-relativistic limit). In this theory, the first postulate remains unchanged, but the second postulate is modified to:
If A, B are two events which have coordinates (t_A, x_A) and (t_B, x_B) in one inertial frame F, and have coordinates (t′_A, x′_A) and (t′_B, x′_B) in another inertial frame F′, then t_B − t_A = t′_B − t′_A. Furthermore, if t_A = t_B, then
|x_A − x_B| = |x′_A − x′_B|.
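The passage to this limit can be made concrete by holding an event and a relative velocity fixed while letting c grow: the Lorentz-transformed coordinates approach the Galilean values t′ = t and x′ = x − vt. A small sketch with arbitrary illustrative numbers:

import math

def lorentz(t: float, x: float, v: float, c: float):
    # Standard one-dimensional Lorentz boost with relative velocity v
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * x / c ** 2), g * (x - v * t)

t, x, v = 1.0, 3.0e5, 100.0                  # seconds, meters, m/s
print("Galilean values:", (t, x - v * t))
for c in (1.0e6, 1.0e7, 3.0e8):              # larger c -> closer to Galilean
    print(f"c = {c:.0e}:", lorentz(t, x, v, c))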
The physical theory given by classical mechanics, and Newtonian gravity, is consistent with Galilean relativity, but not special relativity. Conversely, Maxwell's equations are not consistent with Galilean relativity unless one postulates the existence of a physical aether. In a number of cases, the laws of physics in special relativity (such as the equation E = mc²) can be deduced by combining the postulates of special relativity with the hypothesis that the laws of special relativity approach the laws of classical mechanics in the non-relativistic limit.
Notes | Postulates of special relativity | [
"Physics"
] | 2,783 | [
"Special relativity",
"Theory of relativity"
] |
1,608,886 | https://en.wikipedia.org/wiki/Tests%20of%20special%20relativity | Special relativity is a physical theory that plays a fundamental role in the description of all physical phenomena, as long as gravitation is not significant. Many experiments played (and still play) an important role in its development and justification. The strength of the theory lies in its unique ability to correctly predict to high precision the outcome of an extremely diverse range of experiments. Repeats of many of those experiments are still being conducted with steadily increased precision, with modern experiments focusing on effects such as at the Planck scale and in the neutrino sector. Their results are consistent with the predictions of special relativity. Collections of various tests were given by Jakob Laub, Zhang, Mattingly, Clifford Will, and Roberts/Schleif.
Special relativity is restricted to flat spacetime, i.e., to all phenomena without significant influence of gravitation. The latter lies in the domain of general relativity and the corresponding tests of general relativity must be considered.
Experiments paving the way to relativity
The predominant theory of light in the 19th century was that of the luminiferous aether, a stationary medium in which light propagates in a manner analogous to the way sound propagates through air. By analogy, it follows that the speed of light is constant in all directions in the aether and is independent of the velocity of the source. Thus an observer moving relative to the aether must measure some sort of "aether wind" even as an observer moving relative to air measures an apparent wind.
First-order experiments
Beginning with the work of François Arago (1810), a series of optical experiments had been conducted, which should have given a positive result for magnitudes of first order in v/c and which thus should have demonstrated the relative motion of the aether. Yet the results were negative. An explanation was provided by Augustin Fresnel (1818) with the introduction of an auxiliary hypothesis, the so-called "dragging coefficient", that is, matter is dragging the aether to a small extent. This coefficient was directly demonstrated by the Fizeau experiment (1851). It was later shown that all first-order optical experiments must give a negative result due to this coefficient. In addition, some electrostatic first-order experiments were conducted, again having negative results. In general, Hendrik Lorentz (1892, 1895) introduced several new auxiliary variables for moving observers, demonstrating why all first-order optical and electrostatic experiments have produced null results. For example, Lorentz proposed a location variable by which electrostatic fields contract in the line of motion and another variable ("local time") by which the time coordinates for moving observers depend on their current location.
Second-order experiments
The stationary aether theory, however, would give positive results when the experiments are precise enough to measure magnitudes of second order in v/c (i.e., of v²/c²). Albert A. Michelson conducted the first experiment of this kind in 1881, followed by the more sophisticated Michelson–Morley experiment in 1887. Two rays of light, traveling for some time in different directions, were brought to interfere, so that different orientations relative to the aether wind should lead to a displacement of the interference fringes. But the result was negative again. The way out of this dilemma was the proposal by George Francis FitzGerald (1889) and Lorentz (1892) that matter is contracted in the line of motion with respect to the aether (length contraction). That is, the older hypothesis of a contraction of electrostatic fields was extended to intermolecular forces. However, since there was no theoretical reason for that, the contraction hypothesis was considered ad hoc.
Besides the optical Michelson–Morley experiment, its electrodynamic equivalent was also conducted, the Trouton–Noble experiment, whose aim was to demonstrate that a moving condenser must be subjected to a torque. In addition, the Experiments of Rayleigh and Brace intended to measure some consequences of length contraction in the laboratory frame, for example the assumption that it would lead to birefringence. However, all of those experiments led to negative results. (The Trouton–Rankine experiment conducted in 1908 also gave a negative result when measuring the influence of length contraction on an electromagnetic coil.)
To explain all experiments conducted before 1904, Lorentz was forced to again expand his theory by introducing the complete Lorentz transformation. Henri Poincaré declared in 1905 that the impossibility of demonstrating absolute motion (principle of relativity) is apparently a law of nature.
Refutations of complete aether drag
The idea that the aether might be completely dragged within or in the vicinity of Earth, by which the negative aether drift experiments could be explained, was refuted by a variety of experiments.
Oliver Lodge (1893) found that rapidly whirling steel disks above and below a sensitive common path interferometric arrangement failed to produce a measurable fringe shift.
Gustaf Hammar (1935) failed to find any evidence for aether dragging using a common-path interferometer, one arm of which was enclosed by a thick-walled pipe plugged with lead, while the other arm was free.
The Sagnac effect showed that aether wind caused by earth drag cannot be demonstrated.
The existence of the aberration of light was inconsistent with aether drag hypothesis.
The assumption that aether drag is proportional to mass and thus only occurs with respect to Earth as a whole was refuted by the Michelson–Gale–Pearson experiment, which demonstrated the Sagnac effect through Earth's motion.
Lodge expressed the paradoxical situation in which physicists found themselves as follows: "...at no practicable speed does ... matter [have] any appreciable viscous grip upon the ether. Atoms must be able to throw it into vibration, if they are oscillating or revolving at sufficient speed; otherwise they would not emit light or any kind of radiation; but in no case do they appear to drag it along, or to meet with resistance in any uniform motion through it."
Special relativity
Overview
Eventually, Albert Einstein (1905) drew the conclusion that established theories and facts known at that time form a logically coherent system only when the concepts of space and time are subjected to a fundamental revision. For instance:
Maxwell-Lorentz's electrodynamics (independence of the speed of light from the speed of the source),
the negative aether drift experiments (no preferred reference frame),
Moving magnet and conductor problem (only relative motion is relevant),
the Fizeau experiment and the aberration of light (both implying modified velocity addition and no complete aether drag).
The result is special relativity theory, which is based on the constancy of the speed of light in all inertial frames of reference and the principle of relativity. Here, the Lorentz transformation is no longer a mere collection of auxiliary hypotheses but reflects a fundamental Lorentz symmetry and forms the basis of successful theories such as Quantum electrodynamics. There is a large number of possible tests of the predictions and the second postulate:
Fundamental experiments
The effects of special relativity can phenomenologically be derived from the following three fundamental experiments:
Michelson–Morley experiment, by which the dependence of the speed of light on the direction of the measuring device can be tested. It establishes the relation between longitudinal and transverse lengths of moving bodies.
Kennedy–Thorndike experiment, by which the dependence of the speed of light on the velocity of the measuring device can be tested. It establishes the relation between longitudinal lengths and the duration of time of moving bodies.
Ives–Stilwell experiment, by which time dilation can be directly tested.
From these three experiments and by using the Poincaré–Einstein synchronization, the complete Lorentz transformation follows, with $\gamma = 1/\sqrt{1 - v^2/c^2}$ being the Lorentz factor:
$$t' = \gamma\left(t - \frac{v x}{c^2}\right), \quad x' = \gamma (x - v t), \quad y' = y, \quad z' = z$$
Besides the derivation of the Lorentz transformation, the combination of these experiments is also important because they can be interpreted in different ways when viewed individually. For example, isotropy experiments such as Michelson–Morley can be seen as a simple consequence of the relativity principle, according to which any inertially moving observer can consider himself as at rest. Therefore, by itself, the MM experiment is compatible with Galilean-invariant theories like emission theory or the complete aether drag hypothesis, which also contain some sort of relativity principle. However, when other experiments that exclude the Galilean-invariant theories are considered (i.e. the Ives–Stilwell experiment, various refutations of emission theories and refutations of complete aether dragging), Lorentz-invariant theories and thus special relativity are the only theories that remain viable.
Constancy of the speed of light
Interferometers, resonators
Modern variants of Michelson–Morley and Kennedy–Thorndike experiments have been conducted in order to test the isotropy of the speed of light. Contrary to Michelson–Morley, the Kennedy–Thorndike experiments employ different arm lengths, and the evaluations last several months. In that way, the influence of different velocities during Earth's orbit around the Sun can be observed. Laser, maser and optical resonators are used, reducing the possibility of any anisotropy of the speed of light to the 10⁻¹⁷ level. In addition to terrestrial tests, Lunar Laser Ranging experiments have also been conducted as a variation of the Kennedy–Thorndike experiment.
Another type of isotropy experiments are the Mössbauer rotor experiments in the 1960s, by which the anisotropy of the Doppler effect on a rotating disc can be observed by using the Mössbauer effect (those experiments can also be utilized to measure time dilation, see below).
No dependence on source velocity or energy
Emission theories, according to which the speed of light depends on the velocity of the source, can conceivably explain the negative outcome of aether drift experiments. It was not until the mid-1960s that the constancy of the speed of light was definitively shown by experiment, since in 1965, J. G. Fox showed that the effects of the extinction theorem rendered the results of all experiments previous to that time inconclusive, and therefore compatible with both special relativity and emission theory. More recent experiments have definitely ruled out the emission model: the earliest were those of Filippas and Fox (1964), using moving sources of gamma rays, and Alväger et al. (1964), which demonstrated that photons did not acquire the speed of the high speed decaying mesons which were their source. In addition, the de Sitter double star experiment (1913) was repeated by Brecher (1977) under consideration of the extinction theorem, ruling out a source dependence as well.
Observations of Gamma-ray bursts also demonstrated that the speed of light is independent of the frequency and energy of the light rays.
One-way speed of light
A series of one-way measurements were undertaken, all of them confirming the isotropy of the speed of light. However, only the two-way speed of light (from A to B back to A) can unambiguously be measured, since the one-way speed depends on the definition of simultaneity and therefore on the method of synchronization. The Einstein synchronization convention makes the one-way speed equal to the two-way speed. However, there are many models having isotropic two-way speed of light, in which the one-way speed is anisotropic by choosing different synchronization schemes. They are experimentally equivalent to special relativity because all of these models include effects like time dilation of moving clocks, that compensate any measurable anisotropy. However, of all models having isotropic two-way speed, only special relativity is acceptable for the overwhelming majority of physicists since all other synchronizations are much more complicated, and those other models (such as Lorentz ether theory) are based on extreme and implausible assumptions concerning some dynamical effects, which are aimed at hiding the "preferred frame" from observation.
Isotropy of mass, energy, and space
Clock-comparison experiments (periodic processes and frequencies can be considered as clocks) such as the Hughes–Drever experiments provide stringent tests of Lorentz invariance. They are not restricted to the photon sector, as Michelson–Morley is, but directly determine any anisotropy of mass, energy, or space by measuring the ground state of nuclei. Upper limits on such anisotropies of 10⁻³³ GeV have been provided. Thus these experiments are among the most precise verifications of Lorentz invariance ever conducted.
Time dilation and length contraction
The transverse Doppler effect and consequently time dilation was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives–Stilwell experiments in heavy ion storage rings using saturated spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10⁻⁸. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. In addition, the Hafele–Keating experiment confirmed the resolution of the twin paradox, i.e. that a clock moving from A to B back to A is retarded with respect to the initial clock. However, in this experiment the effects of general relativity also play an essential role.
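As a rough numerical illustration of the muon measurements, the following Python sketch applies the time dilation factor; the 0.98c speed is a representative textbook value for cosmic-ray muons, not a figure taken from the experiments cited above:
```python
import math

def dilated_lifetime(tau0, beta):
    """Mean lifetime of a particle moving at speed beta*c, seen from the lab."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * tau0

tau_muon = 2.197e-6   # muon rest-frame mean lifetime, seconds
beta = 0.98           # representative (assumed) cosmic-ray muon speed
print(dilated_lifetime(tau_muon, beta))   # ~1.1e-5 s, about 5x the rest value
```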
Direct confirmation of length contraction is hard to achieve in practice since the dimensions of the observed particles are vanishingly small. However, there are indirect confirmations; for example, the behavior of colliding heavy ions can be explained if their increased density due to Lorentz contraction is considered. Contraction also leads to an increase of the intensity of the Coulomb field perpendicular to the direction of motion, whose effects already have been observed. Consequently, both time dilation and length contraction must be considered when conducting experiments in particle accelerators.
Relativistic momentum and energy
Starting in 1901, a series of measurements was conducted aimed at demonstrating the velocity dependence of the mass of electrons. The results actually showed such a dependency, but the precision necessary to distinguish between competing theories was disputed for a long time. Eventually, it was possible to definitely rule out all competing models except special relativity.
Today, special relativity's predictions are routinely confirmed in particle accelerators such as the Relativistic Heavy Ion Collider. For example, the increase of relativistic momentum and energy is not only precisely measured but also necessary to understand the behavior of cyclotrons and synchrotrons etc., by which particles are accelerated near to the speed of light.
Sagnac and Fizeau
Special relativity also predicts that two light rays traveling in opposite directions around a spinning closed path (e.g. a loop) require different flight times to come back to the moving emitter/receiver (this is a consequence of the independence of the speed of light from the velocity of the source, see above). This effect was actually observed and is called the Sagnac effect. Currently, the consideration of this effect is necessary for many experimental setups and for the correct functioning of GPS.
If such experiments are conducted in moving media (e.g. water, or glass optical fiber), it is also necessary to consider Fresnel's dragging coefficient as demonstrated by the Fizeau experiment. Although this effect was initially understood as giving evidence of a nearly stationary aether or a partial aether drag it can easily be explained with special relativity by using the velocity composition law.
Test theories
Several test theories have been developed to assess a possible positive outcome in Lorentz violation experiments by adding certain parameters to the standard equations. These include the Robertson-Mansouri-Sexl framework (RMS) and the Standard-Model Extension (SME). RMS has three testable parameters with respect to length contraction and time dilation. From that, any anisotropy of the speed of light can be assessed. On the other hand, SME includes many Lorentz violation parameters, not only for special relativity, but for the Standard model and General relativity as well; thus it has a much larger number of testable parameters.
Other modern tests
Due to the developments concerning various models of Quantum gravity in recent years, deviations of Lorentz invariance (possibly following from those models) are again the target of experimentalists. Because "local Lorentz invariance" (LLI) also holds in freely falling frames, experiments concerning the weak Equivalence principle belong to this class of tests as well. The outcomes are analyzed by test theories (as mentioned above) like RMS or, more importantly, by SME.
Besides the mentioned variations of Michelson–Morley and Kennedy–Thorndike experiments, Hughes–Drever experiments are continuing to be conducted for isotropy tests in the proton and neutron sector. To detect possible deviations in the electron sector, spin-polarized torsion balances are used.
Time dilation is confirmed in heavy ion storage rings, such as the TSR at the MPIK, by observation of the Doppler effect of lithium, and those experiments are valid in the electron, proton, and photon sector.
Other experiments use Penning traps to observe deviations of cyclotron motion and Larmor precession in electrostatic and magnetic fields.
Possible deviations from CPT symmetry (whose violation represents a violation of Lorentz invariance as well) can be determined in experiments with neutral mesons, Penning traps and muons, see Antimatter Tests of Lorentz Violation.
Astronomical tests are conducted in connection with the flight time of photons, where Lorentz violating factors could cause anomalous dispersion and birefringence, leading to a dependency of photon propagation on energy, frequency, or polarization.
With respect to threshold energy of distant astronomical objects, but also of terrestrial sources, Lorentz violations could lead to alterations in the standard values for the processes following from that energy, such as Vacuum Cherenkov radiation, or modifications of synchrotron radiation.
Neutrino oscillations (see Lorentz-violating neutrino oscillations) and the speed of neutrinos (see measurements of neutrino speed) are being investigated for possible Lorentz violations.
Other candidates for astronomical observations are the Greisen–Zatsepin–Kuzmin limit and Airy disks. The latter is investigated to find possible deviations of Lorentz invariance that could drive the photons out of phase.
Observations in the Higgs sector are under way.
See also
Tests of general relativity
History of special relativity
Test theories of special relativity
References
Physics experiments
Special relativity | Tests of special relativity | [
"Physics"
] | 3,895 | [
"Special relativity",
"Experimental physics",
"Physics experiments",
"Theory of relativity"
] |
1,609,504 | https://en.wikipedia.org/wiki/Vi%C3%A8te%27s%20formula | In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant π:
$$\frac{2}{\pi} = \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{2+\sqrt{2}}}{2} \cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} \cdots$$
It can also be represented as
$$\frac{2}{\pi} = \prod_{n=1}^{\infty} \cos\frac{\pi}{2^{n+1}}$$
The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression and marks the beginning of mathematical analysis. It has linear convergence and can be used for calculations of π, but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses and as a motivating example for the concept of statistical independence.
The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known.
Significance
François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating π to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation $3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}$.
By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of π. As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation, Eli Maor highlights Viète's formula as marking the beginning of mathematical analysis and Jonathan Borwein calls its appearance "the dawn of modern mathematics".
Using his formula, Viète calculated π to an accuracy of nine decimal digits. However, this was not the most accurate approximation to π known at the time, as the Persian mathematician Jamshīd al-Kāshī had calculated π to an accuracy of nine sexagesimal digits and 16 decimal digits in 1424. Not long after Viète published his formula, Ludolph van Ceulen used a method closely related to Viète's to calculate 35 digits of π, which were published only after van Ceulen's death in 1610.
Beyond its mathematical and historical significance, Viète's formula can be used to explain the different speeds of waves of different frequencies in an infinite chain of springs and masses, and the appearance of π in the limiting behavior of these speeds. Additionally, a derivation of this formula as a product of integrals involving the Rademacher system, equal to the integral of products of the same functions, provides a motivating example for the concept of statistical independence.
Interpretation and convergence
Viète's formula may be rewritten and understood as a limit expression
$$\frac{2}{\pi} = \lim_{n\to\infty} \prod_{i=1}^{n} \frac{a_i}{2}$$
where $a_1 = \sqrt{2}$ and $a_{i+1} = \sqrt{2 + a_i}$.
For each choice of $n$, the expression in the limit is a finite product, and as $n$ gets arbitrarily large, these finite products have values that approach the value of Viète's formula arbitrarily closely. Viète did his work long before the concepts of limits and rigorous proofs of convergence were developed in mathematics; the first proof that this limit exists was not given until the work of Ferdinand Rudio in 1891.
The rate of convergence of a limit governs the number of terms of the expression needed to achieve a given number of digits of accuracy. In Viète's formula, the numbers of terms and digits are proportional to each other: the product of the first $n$ terms in the limit gives an expression for π that is accurate to approximately $0.6n$ digits. This convergence rate compares very favorably with the Wallis product, a later infinite product formula for π. Although Viète himself used his formula to calculate π only with nine-digit accuracy, an accelerated version of his formula has been used to calculate π to hundreds of thousands of digits.
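The linear convergence is easy to observe numerically; the following Python sketch (an added illustration, not part of the original presentation) evaluates the partial products of the limit expression above:
```python
import math

def viete_pi(n):
    """Approximate pi from the first n terms of Viète's product."""
    a, product = 0.0, 1.0
    for _ in range(n):
        a = math.sqrt(2.0 + a)    # a_1 = sqrt(2), a_{i+1} = sqrt(2 + a_i)
        product *= a / 2.0        # partial product approximates 2/pi
    return 2.0 / product

for n in (5, 10, 20):
    print(n, viete_pi(n), abs(viete_pi(n) - math.pi))
# The error shrinks by roughly a factor of 4 per extra term, i.e. about
# 0.6 decimal digits gained per term, matching the rate quoted above.
```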
Related formulas
Viète's formula may be obtained as a special case of a formula for the sinc function that has often been attributed to Leonhard Euler, more than a century later:
$$\frac{\sin x}{x} = \prod_{n=1}^{\infty} \cos\frac{x}{2^n}$$
Substituting $x = \pi/2$ in this formula yields
$$\frac{2}{\pi} = \prod_{n=1}^{\infty} \cos\frac{\pi}{2^{n+1}}$$
Then, expressing each term of the product on the right as a function of earlier terms using the half-angle formula
$$\cos\frac{x}{2} = \sqrt{\frac{1 + \cos x}{2}}$$
gives Viète's formula.
It is also possible to derive from Viète's formula a related formula for π that still involves nested square roots of two, but uses only one multiplication:
$$\pi = \lim_{k\to\infty} 2^{k} \underbrace{\sqrt{2 - \sqrt{2 + \sqrt{2 + \cdots + \sqrt{2}}}}}_{k\ \text{square roots}}$$
which can be rewritten compactly as
$$\pi = \lim_{k\to\infty} 2^{k} \sqrt{2 - a_k}, \qquad a_1 = 0,\ a_{k+1} = \sqrt{2 + a_k}.$$
Many formulae for and other constants such as the golden ratio are now known, similar to Viète's in their use of either nested radicals or infinite products of trigonometric functions.
Derivation
Viète obtained his formula by comparing the areas of regular polygons with $2^n$ and $2^{n+1}$ sides inscribed in a circle. The first term in the product, $\frac{\sqrt{2}}{2}$, is the ratio of areas of a square and an octagon, the second term is the ratio of areas of an octagon and a hexadecagon, etc. Thus, the product telescopes to give the ratio of areas of a square (the initial polygon in the sequence) to a circle (the limiting case of a $2^n$-gon as $n$ grows without bound). Alternatively, the terms in the product may be instead interpreted as ratios of perimeters of the same sequence of polygons, starting with the ratio of perimeters of a digon (the diameter of the circle, counted twice) and a square, the ratio of perimeters of a square and an octagon, etc.
Another derivation is possible based on trigonometric identities and Euler's formula.
Repeatedly applying the double-angle formula
$$\sin x = 2 \sin\frac{x}{2} \cos\frac{x}{2}$$
leads to a proof by mathematical induction that, for all positive integers $n$,
$$\frac{\sin x}{2^n \sin(x/2^n)} = \prod_{i=1}^{n} \cos\frac{x}{2^i}$$
The term $2^n \sin(x/2^n)$ goes to $x$ in the limit as $n$ goes to infinity, from which Euler's formula follows. Viète's formula may be obtained from this formula by the substitution $x = \pi/2$.
See also
Morrie's law, a similar identity involving a finite product of cosines
List of trigonometric identities
References
External links
Viète's Variorum de rebus mathematicis responsorum, liber VIII (1593) on Google Books. The formula is on the second half of p. 30.
Articles containing proofs
Infinite products
Pi algorithms | Viète's formula | [
"Mathematics"
] | 1,341 | [
"Mathematical analysis",
"Pi algorithms",
"Infinite products",
"Articles containing proofs",
"Pi"
] |
1,609,767 | https://en.wikipedia.org/wiki/Laser-induced%20breakdown%20spectroscopy | Laser-induced breakdown spectroscopy (LIBS) is a type of atomic emission spectroscopy which uses a highly energetic laser pulse as the excitation source. The laser is focused to form a plasma, which atomizes and excites samples. The formation of the plasma only begins when the focused laser achieves a certain threshold for optical breakdown, which generally depends on the environment and the target material.
2000s developments
From 2000 to 2010, the U.S. Army Research Laboratory (ARL) researched potential extensions to LIBS technology, which focused on hazardous material detection. Applications investigated at ARL included the standoff detection of explosive residues and other hazardous materials, plastic landmine discrimination, and material characterization of various metal alloys and polymers. Results presented by ARL suggest that LIBS may be able to discriminate between energetic and non-energetic materials.
Research
Broadband high-resolution spectrometers were developed in 2000 and commercialized in 2003. Designed for material analysis, the spectrometer allowed the LIBS system to be sensitive to chemical elements in low concentration.
ARL LIBS applications studied from 2000 to 2010 included:
Tested for detection of Halon alternative agents
Tested a field-portable LIBS system for the detection of lead in soil and paint
Studied the spectral emission of aluminum and aluminum oxides from bulk aluminum in different bath gases
Performed kinetic modeling of LIBS plumes
Demonstrated the detection and discrimination of geological materials, plastic landmines, explosives, and chemical and biological warfare agent surrogates
ARL LIBS prototypes studied during this period included:
Laboratory LIBS setup
Commercial LIBS system
Man-portable LIBS device
Standoff LIBS system developed for 100+ m detection and discrimination of explosive residues.
2010s developments
LIBS is one of several analytical techniques that can be deployed in the field, as opposed to pure laboratory techniques, e.g. spark OES. Recent research on LIBS focuses on compact and (man-)portable systems. Some industrial applications of LIBS include the detection of material mix-ups, analysis of inclusions in steel, analysis of slags in secondary metallurgy, analysis of combustion processes, and high-speed identification of scrap pieces for material-specific recycling tasks. Armed with data analysis techniques, this technique is being extended to pharmaceutical samples.
LIBS using short laser pulses
Following multiphoton or tunnel ionization, the electron is accelerated by inverse bremsstrahlung and can collide with nearby molecules, generating new electrons through collisions. If the pulse duration is long, the newly ionized electrons can be accelerated and eventually avalanche or cascade ionization follows. Once the density of the electrons reaches a critical value, breakdown occurs and a high-density plasma is created which has no memory of the laser pulse. So, the criterion for the shortness of a pulse in dense media is as follows: a pulse interacting with dense matter is considered to be short if, during the interaction, the threshold for avalanche ionization is not reached. At first glance this definition may appear to be too limiting. Fortunately, due to the delicately balanced behavior of the pulses in dense media, the threshold cannot be reached easily. The phenomenon responsible for the balance is intensity clamping through the onset of the filamentation process during the propagation of strong laser pulses in dense media.
A potentially important development to LIBS involves the use of a short laser pulse as a spectroscopic source. In this method, a plasma column is created as a result of focusing ultrafast laser pulses in a gas. The self-luminous plasma is far superior in terms of low level of continuum and also smaller line broadening. This is attributed to the lower density of the plasma in the case of short laser pulses due to the defocusing effects which limits the intensity of the pulse in the interaction region and thus prevents further multiphoton/tunnel ionization of the gas.
Line intensity
For an optically thin plasma composed of a single, neutral atomic species in local thermal equilibrium (LTE), the density of photons emitted by a transition from level i to level j is
$$N_p = \frac{n_0\, A_{ij}\, g_i}{4\pi\, U(T)}\, e^{-E_i / k_B T}\, P(\lambda)$$
where:
is the emission rate density of photons (in m−3 sr−1 s−1)
is the number of neutral atoms in the plasma (in m−3)
is the transition probability between level i and level j (in s−1)
is the degeneracy of the upper level i (2J+1)
is the partition function (unitless)
is the energy level of the upper level i (in eV)
is the Boltzmann constant (in eV/K)
is the temperature (in K)
is the line profile, normalized such that $\int P(\lambda)\,\mathrm{d}\lambda = 1$
is the wavelength (in nm)
The partition function is the statistical occupation fraction of every level of the atomic species:
$$U(T) = \sum_i g_i\, e^{-E_i / k_B T}$$
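To make the Boltzmann factors concrete, the following Python sketch evaluates the line-integrated emission rate (so the profile P(λ) integrates out); every numerical input is a hypothetical round number, not a value from any particular LIBS measurement:
```python
import math

K_B_EV = 8.617333e-5   # Boltzmann constant in eV/K

def line_emission_rate(n0, a_ij, g_i, levels, e_i, temperature):
    """Photon emission rate density (m^-3 sr^-1 s^-1), integrated over the line.

    `levels` is a list of (degeneracy, energy_eV) pairs used to build the
    partition function U(T) of the species.
    """
    kt = K_B_EV * temperature
    partition = sum(g * math.exp(-e / kt) for g, e in levels)
    return n0 * a_ij * g_i * math.exp(-e_i / kt) / (4.0 * math.pi * partition)

# Hypothetical two-level species in a 10,000 K plasma:
levels = [(1, 0.0), (3, 2.0)]
print(line_emission_rate(n0=1e22, a_ij=1e8, g_i=3, levels=levels,
                         e_i=2.0, temperature=1e4))
```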
LIBS for food analysis
Recently, LIBS has been investigated as a fast, micro-destructive food analysis tool. It is considered a potential analytical tool for qualitative and quantitative chemical analysis, making it suitable as a PAT (Process Analytical Technology) or portable tool. Milk, bakery products, tea, vegetable oils, water, cereals, flour, potatoes, palm dates and different types of meat have been analyzed using LIBS. A few studies have shown its potential as an adulteration detection tool for certain foods. LIBS has also been evaluated as a promising elemental imaging technique in meat.
In 2019, researchers of the University of York and of the Liverpool John Moores University employed LIBS for studying 12 European oysters (Ostrea edulis, Linnaeus, 1758) from the Late Mesolithic shell midden at Conors Island (Republic of Ireland). The results highlighted the applicability of LIBS to determine prehistoric seasonality practices as well as biological age and growth at an improved rate and reduced cost than was previously achievable.
See also
Atomic spectroscopy
Laser ablation
Laser-induced fluorescence
List of surface analysis methods
Photoacoustic spectroscopy
Raman spectroscopy
Spectroscopy
References
Further reading
External links
NIST LIBS Database
Scientific techniques
Spectroscopy
Emission spectroscopy | Laser-induced breakdown spectroscopy | [
"Physics",
"Chemistry"
] | 1,215 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Emission spectroscopy",
"Spectroscopy"
] |
1,609,956 | https://en.wikipedia.org/wiki/Vortex%20shedding | In fluid dynamics, vortex shedding is an oscillating flow that takes place when a fluid such as air or water flows past a bluff (as opposed to streamlined) body at certain velocities, depending on the size and shape of the body. In this flow, vortices are created at the back of the body and detach periodically from either side of the body forming a Kármán vortex street. The fluid flow past the object creates alternating low-pressure vortices on the downstream side of the object. The object will tend to move toward the low-pressure zone.
If the bluff structure is not mounted rigidly and the frequency of vortex shedding matches the resonance frequency of the structure, then the structure can begin to resonate, vibrating with harmonic oscillations driven by the energy of the flow. This vibration is the cause for overhead power line wires humming in the wind, and for the fluttering of automobile whip radio antennas at some speeds. Tall chimneys constructed of thin-walled steel tubes can be sufficiently flexible that, in air flow with a speed in the critical range, vortex shedding can drive the chimney into violent oscillations that can damage or destroy the chimney.
Vortex shedding was one of the causes proposed for the failure of the original Tacoma Narrows Bridge (Galloping Gertie) in 1940, but was rejected because the frequency of the vortex shedding did not match that of the bridge. The bridge actually failed by aeroelastic flutter.
A thrill ride, "VertiGo" at Cedar Point in Sandusky, Ohio suffered vortex shedding during the winter of 2001, causing one of the three towers to collapse. The ride was closed for the winter at the time. In northeastern Iran, the Hashemi-Nejad natural gas refinery's flare stacks suffered vortex shedding seven times from 1975 to 2003. Some simulation and analyses were done, which revealed that the main cause was the interaction of the pilot flame and flare stack. The problem was solved by removing the pilot.
Governing equation
The frequency at which vortex shedding takes place for a cylinder is related to the Strouhal number by the following equation:
$$\mathrm{St} = \frac{f\, d}{U}$$
where $\mathrm{St}$ is the dimensionless Strouhal number, $f$ is the vortex shedding frequency (Hz), $d$ is the diameter of the cylinder (m), and $U$ is the flow velocity (m/s).
The Strouhal number depends on the Reynolds number, but a value of 0.22 is commonly used. As the Strouhal number is dimensionless, any consistent set of units can be used for the variables. Over four orders of magnitude in Reynolds number, from 10² to 10⁵, the Strouhal number varies only between 0.18 and 0.22.
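Rearranging the definition gives the shedding frequency directly, f = St·U/d; a minimal Python sketch with made-up example values:
```python
def shedding_frequency(strouhal, velocity, diameter):
    """Vortex shedding frequency f = St * U / d for a cylinder."""
    return strouhal * velocity / diameter

# A 10 cm cylinder in a 5 m/s flow, using the commonly quoted St = 0.22:
print(shedding_frequency(0.22, 5.0, 0.10))   # 11.0 Hz
```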
Mitigation of vortex shedding effects
Fairings can be fitted to a structure to streamline the flow past the structure, such as on an aircraft wing.
Tall metal smokestacks or other tubular structures such as antenna masts or tethered cables can be fitted with an external corkscrew fin (a strake) to deliberately introduce turbulence, so the load is less variable and resonant load frequencies have negligible amplitudes. The effectiveness of helical strakes for reducing vortex-induced vibration was discovered in 1957 by Christopher Scruton and D. E. J. Walshe at the National Physical Laboratory in Great Britain. They are therefore often described as Scruton strakes. For maximum effectiveness in suppression of vortices caused by air flow, each fin or strake should have a height of about 10 percent of the cylinder diameter. The pitch of each fin should be approximately 5 times the cylinder diameter.
A tuned mass damper can be used to mitigate vortex shedding in stacks and chimneys.
A Stockbridge damper is used to mitigate aeolian vibrations caused by vortex shedding on overhead power lines.
See also
Aeroelastic flutter - vibration-induced vortices - by way of contrast
Vortex
Vortex-induced vibration
Von Kármán vortex street
References
External links
Flow visualisation of the vortex shedding mechanism on circular cylinder using hydrogen bubbles illuminated by a laser sheet in a water channel. Courtesy of G.R.S. Assi.
Vortices
Fluid dynamics | Vortex shedding | [
"Chemistry",
"Mathematics",
"Engineering"
] | 854 | [
"Vortices",
"Chemical engineering",
"Piping",
"Fluid dynamics",
"Dynamical systems"
] |
1,610,231 | https://en.wikipedia.org/wiki/Energy%20density | In physics, energy density is the quotient between the amount of energy stored in a given system or contained in a given region of space and the volume of the system or region considered. Often only the useful or extractable energy is measured. It is sometimes confused with stored energy per unit mass, which is called specific energy or gravimetric energy density.
There are different types of energy stored, corresponding to a particular type of reaction. In order of the typical magnitude of the energy stored, examples of reactions are: nuclear, chemical (including electrochemical), electrical, pressure, material deformation or in electromagnetic fields. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles from the combustion of gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈ 15 kg of air). Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide. Electrochemical reactions are used by devices such as laptop computers and mobile phones to release energy from batteries.
Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as $u = \frac{B^2}{2\mu_0}$ and behaves like a physical pressure. The energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
In cosmological and other contexts in general relativity, the energy densities considered relate to the elements of the stress–energy tensor and therefore do include the rest mass energy as well as energy densities associated with pressure.
Chemical energy
When discussing the chemical energy contained, there are different types which can be quantified depending on the intended purpose. One is the theoretical total amount of thermodynamic work that can be derived from a system, at a given temperature and pressure imposed by the surroundings, called exergy. Another is the theoretical amount of electrical energy that can be derived from reactants that are at room temperature and atmospheric pressure. This is given by the change in standard Gibbs free energy. But as a source of heat or for use in a heat engine, the relevant quantity is the change in standard enthalpy or the heat of combustion.
There are two kinds of heat of combustion:
The higher value (HHV), or gross heat of combustion, includes all the heat released as the products cool to room temperature and whatever water vapor is present condenses.
The lower value (LHV), or net heat of combustion, does not include the heat which could be released by condensing water vapor, and may not include the heat released on cooling all the way down to room temperature.
A convenient table of HHV and LHV of some fuels can be found in the references.
In energy storage and fuels
For energy storage, the energy density relates the stored energy to the volume of the storage equipment, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called its specific energy.
The adjacent figure shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Some values may not be precise because of isomers or other irregularities. The heating values of the fuel describe their specific energies more comprehensively.
The density values for chemical fuels do not include the weight of the oxygen required for combustion. The atomic weights of carbon and oxygen are similar, while hydrogen is much lighter. Figures are presented in this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that contain their own oxidizer (such as gunpowder and TNT), where the mass of the oxidizer in effect adds weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite.
Given the high energy density of gasoline, the exploration of alternative media to store the energy of powering a car, such as hydrogen or battery, is strongly limited by the energy density of the alternative medium. The same mass of lithium-ion storage, for example, would result in a car with only 2% the range of its gasoline counterpart. If sacrificing the range is undesirable, much more storage volume is necessary. Alternative options are discussed for energy storage to increase energy density and decrease charging time, such as supercapacitors.
No single energy storage method boasts the best in specific power, specific energy, and energy density. Peukert's law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly it is pulled out.
Efficiency
In general an engine will generate less kinetic energy due to inefficiencies and thermodynamic considerations—hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Large scale, intensive energy use impacts and is impacted by climate, waste storage, and environmental consequences.
Nuclear energy
The greatest energy source by far is matter itself, according to the mass–energy equivalence. This energy is described by $E = mc^2$, where c is the speed of light. In terms of density, $E = \rho V c^2$, where ρ is the volumetric mass density, V is the volume occupied by the mass. This energy can be released by the processes of nuclear fission (~ 0.1%), nuclear fusion (~ 1%), or the annihilation of some or all of the matter in the volume V by matter–antimatter collisions (100%).
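For scale, a short Python sketch (illustrative only) evaluates the rest energy of one kilogram of matter and the roughly 0.1% fraction that fission releases, per the figures above:
```python
C = 299_792_458.0   # speed of light, m/s

def rest_energy(mass_kg):
    """E = m c^2, in joules."""
    return mass_kg * C**2

e_total = rest_energy(1.0)
print(e_total)            # ~9.0e16 J locked in 1 kg of matter
print(e_total * 0.001)    # ~9.0e13 J if ~0.1% is released, as in fission
```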
The most effective ways of accessing this energy, aside from antimatter, are fusion and fission. Fusion is the process by which the sun produces energy which will be available for billions of years (in the form of sunlight and heat). However as of 2024, sustained fusion power production continues to be elusive. Power from fission in nuclear power plants (using uranium and thorium) will be available for at least many decades or even centuries because of the plentiful supply of the elements on earth, though the full potential of this source can only be realized through breeder reactors, which are, apart from the BN-600 reactor, not yet used commercially.
Fission reactors
Nuclear fuels typically have volumetric energy densities at least tens of thousands of times higher than chemical fuels. A 1 inch tall uranium fuel pellet is equivalent to about 1 ton of coal, 120 gallons of crude oil, or 17,000 cubic feet of natural gas. In light-water reactors, 1 kg of natural uranium – following a corresponding enrichment and used for power generation – is equivalent to the energy content of nearly 10,000 kg of mineral oil or 14,000 kg of coal. Comparatively, coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density.
The density of thermal energy contained in the core of a light-water reactor (pressurized water reactor (PWR) or boiling water reactor (BWR)), whose electrical output is typically about one-third of its thermal output, is in the range of 10 to 100 MW of thermal energy per cubic meter of cooling water, depending on the location considered in the system (the core itself, the reactor pressure vessel, or the whole primary circuit). This represents a considerable density of energy that requires a continuous water flow at high velocity at all times in order to remove heat from the core, even after an emergency shutdown of the reactor.
The incapacity to cool the cores of three BWRs at Fukushima after the 2011 tsunami and the resulting loss of external electrical power and cold source caused the meltdown of the three cores in only a few hours, even though the three reactors were correctly shut down just after the Tōhoku earthquake. This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's.
Antimatter–matter annihilation
Because antimatter–matter interactions result in complete conversion of the rest mass to radiant energy, the energy density of this reaction depends on the density of the matter and antimatter used. A neutron star would approximate the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form, but would offer the same 100% conversion rate of mass to energy in the form of Hawking radiation. Even in the case of relatively small black holes (smaller than astronomical objects) the power output would be tremendous.
Electric and magnetic fields
Electric and magnetic fields can store energy and its density relates to the strength of the fields within a given volume. This (volumetric) energy density is given by
$$u = \frac{\varepsilon}{2} E^2 + \frac{1}{2\mu} B^2$$
where $E$ is the electric field, $B$ is the magnetic field, and $\varepsilon$ and $\mu$ are the permittivity and permeability of the surroundings respectively. The SI unit is the joule per cubic metre.
In ideal (linear and nondispersive) substances, the energy density is
$$u = \frac{1}{2}\left(\mathbf{E}\cdot\mathbf{D} + \mathbf{H}\cdot\mathbf{B}\right)$$
where $\mathbf{D}$ is the electric displacement field and $\mathbf{H}$ is the magnetizing field. In the case of absence of magnetic fields, by exploiting Fröhlich's relationships it is also possible to extend these equations to anisotropic and nonlinear dielectrics, as well as to calculate the correlated Helmholtz free energy and entropy densities.
In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma.
Pulsed sources
When a pulsed laser impacts a surface, the radiant exposure, i.e. the energy deposited per unit of surface, may also be called energy density or fluence.
Table of material energy densities
The following unit conversions may be helpful when considering the data in the tables: 3.6 MJ = 1 kW⋅h ≈ 1.34 hp⋅h. Since 1 J = 10−6 MJ and 1 m3 = 103 L, divide joule/m3 by 109 to get MJ/L = GJ/m3. Divide MJ/L by 3.6 to get kW⋅h/L.
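Applied in code, the same conversions look as follows; this Python sketch uses an illustrative gasoline-like value of 34.2 MJ/L, and the helper names are ad hoc:
```python
MJ_PER_KWH = 3.6

def mj_per_l_to_kwh_per_l(mj_per_l):
    """Convert MJ/L to kW*h/L by dividing by 3.6."""
    return mj_per_l / MJ_PER_KWH

def j_per_m3_to_mj_per_l(j_per_m3):
    """1 J/m^3 = 1e-6 MJ per 1e3 L, so divide by 1e9 (MJ/L = GJ/m^3)."""
    return j_per_m3 / 1e9

print(mj_per_l_to_kwh_per_l(34.2))    # ~9.5 kWh/L
print(j_per_m3_to_mj_per_l(3.42e10))  # 34.2 MJ/L, the same quantity
```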
Chemical reactions (oxidation)
Unless otherwise stated, the values in the following table are lower heating values for perfect combustion, not counting oxidizer mass or volume. When used to produce electricity in a fuel cell or to do work, it is the Gibbs free energy of reaction (ΔG) that sets the theoretical upper limit. If the produced water is vapor, this is generally greater than the lower heat of combustion, whereas if the produced water is liquid, it is generally less than the higher heat of combustion. But in the most relevant case of hydrogen, ΔG is 113 MJ/kg if water vapor is produced, and 118 MJ/kg if liquid water is produced, both being less than the lower heat of combustion (120 MJ/kg).
Electrochemical reactions (batteries)
Common battery formats
Nuclear reactions
In material deformation
The mechanical energy storage capacity, or resilience, of a Hookean material when it is deformed to the point of failure can be computed by multiplying the tensile strength by the maximum elongation and dividing by two. The maximum elongation of a Hookean material can be computed by dividing its ultimate tensile strength by its stiffness. The following table lists these values computed using the Young's modulus as measure of stiffness:
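Combining the two rules gives resilience = σ²/2E for an ideal Hookean material; the following Python sketch applies them with illustrative spring-steel-like numbers (assumed values, not taken from the table):
```python
def max_elongation(tensile_strength_pa, youngs_modulus_pa):
    """Failure strain of an ideal Hookean material: strength / stiffness."""
    return tensile_strength_pa / youngs_modulus_pa

def resilience(tensile_strength_pa, youngs_modulus_pa):
    """Energy stored per volume at failure: sigma * strain / 2 = sigma^2 / (2E)."""
    return tensile_strength_pa**2 / (2.0 * youngs_modulus_pa)

sigma, e_mod = 2.0e9, 200e9     # 2 GPa strength, 200 GPa Young's modulus
print(max_elongation(sigma, e_mod))   # 0.01, i.e. 1% strain at failure
print(resilience(sigma, e_mod))       # 1.0e7 J/m^3 = 10 MJ/m^3
```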
Other release mechanisms
See also
Energy content of biofuel
Energy density Extended Reference Table
Figure of merit
Food energy
Heat of combustion
High-energy-density matter
Power density and specifically Power-to-weight ratio
Rechargeable battery
Solid-state battery
Specific energy
Specific impulse
Orders of magnitude (energy)
References
Further reading
The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998)
Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000)
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
"Aircraft Fuels". Energy, Technology and the Environment Ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259
"Fuels of the Future for Cars and Trucks" – Dr. James J. Eberhardt – Energy Efficiency and Renewable Energy, U.S. Department of Energy – 2002 Diesel Engine Emissions Reduction (DEER) Workshop San Diego, California - August 25–29, 2002
Energy
Density
Volume-specific quantities
Physical cosmological concepts
Physical quantities | Energy density | [
"Physics",
"Mathematics"
] | 2,766 | [
"Physical cosmological concepts",
"Physical phenomena",
"Concepts in astrophysics",
"Physical quantities",
"Quantity",
"Intensive quantities",
"Mass",
"Volume-specific quantities",
"Energy (physics)",
"Energy",
"Density",
"Wikipedia categories named after physical quantities",
"Physical prop... |
9,564,150 | https://en.wikipedia.org/wiki/Jones%27%20stain | Jones' stain, also Jones stain, is a methenamine silver–periodic acid–Schiff stain used in pathology. It is also referred to as methenamine PAS which is commonly abbreviated MPAS.
It stains for basement membrane and is widely used in the investigation of medical kidney diseases.
The Jones stain demonstrates the spiked glomerular basement membrane (GBM), caused by subepithelial deposits, seen in membranous nephropathy.
See also
Staining
References
Staining | Jones' stain | [
"Chemistry",
"Biology"
] | 96 | [
"Staining",
"Microbiology techniques",
"Cell imaging",
"Microscopy"
] |
9,564,185 | https://en.wikipedia.org/wiki/Van%20Gieson%27s%20stain | Van Gieson's stain is a mixture of picric acid and acid fuchsin. It is the simplest method of differential staining of collagen and other connective tissue. It was introduced to histology by American neuropsychiatrist and pathologist Ira Van Gieson.
HvG stain generally refers to the combination of hematoxylin and Van Gieson's stain, but can also refer to a combination of an iron hematoxylin solution and Van Gieson's stain.
Other dyes
Other dyes used in connection with Van Gieson staining include:
Alcian blue
Amido black 10B
Verhoeff's stain
References
Histology
Staining | Van Gieson's stain | [
"Chemistry",
"Biology"
] | 144 | [
"Staining",
"Biotechnology stubs",
"Biochemistry stubs",
"Histology",
"Microbiology techniques",
"Microscopy",
"Biochemistry",
"Cell imaging"
] |
9,564,390 | https://en.wikipedia.org/wiki/Swiss%20cheese%20model | The Swiss cheese model of accident causation is a model used in risk analysis and risk management. It likens human systems to multiple slices of Swiss cheese, which has randomly placed and sized holes in each slice, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize (e.g. a hole in each slice in the stack aligning with holes in all other slices), since other defenses also exist (e.g. other slices of cheese), to prevent a single point of failure.
The model was originally formally propounded by James T. Reason of the University of Manchester, and has since gained widespread acceptance. It is sometimes called the "cumulative act effect". Applications include aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth.
Although the Swiss cheese model is respected and considered a useful method of relating concepts, it has been subject to criticism that it is used too broadly, and without enough other models or support.
Holes and slices
In the Swiss cheese model, an organization's defenses against failure are modeled as a series of imperfect barriers, represented as slices of cheese, specifically Swiss cheese with holes known as "eyes", such as Emmental cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting (in Reason's words) "a trajectory of accident opportunity", so that a hazard passes through holes in all of the slices, leading to a failure.
Frosch described Reason's model in mathematical terms as a model in percolation theory, which he analyses as a Bethe lattice.
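The aligned-holes picture can be caricatured numerically; the Python sketch below is a toy Monte Carlo illustration that assumes independent, equally leaky layers, an idealization that is not part of Reason's formulation (in which holes vary in size and position over time):
```python
import random

def failure_rate(p_hole, n_layers, trials=200_000):
    """Estimate the chance a hazard passes through every defensive layer.

    With independent layers, the exact probability is p_hole ** n_layers.
    """
    hits = sum(
        all(random.random() < p_hole for _ in range(n_layers))
        for _ in range(trials)
    )
    return hits / trials

print(failure_rate(0.1, 1))   # ~0.1: a single imperfect barrier
print(failure_rate(0.1, 4))   # ~1e-4: four barriers stacked in series
```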
Active and latent failures
The model includes active and latent failures. Active failures encompass the unsafe acts that can be directly linked to an accident, such as (in the case of aircraft accidents) a navigation error. Latent failures include contributory factors that may lie dormant for days, weeks, or months until they contribute to the accident. Latent failures span the first three domains of failure in Reason's model.
In the early days of the Swiss cheese model, from the late 1980s to about 1992, attempts were made to combine two theories: James Reason's multi-layer defence model and Willem Albert Wagenaar's tripod theory of accident causation. This resulted in a period in which the Swiss cheese diagram was represented with the slices of cheese labelled 'active failures', 'preconditions' and 'latent failures'.
These attempts to combine the theories still cause confusion today. A more correct version of the combined theories shows the active failures (now called immediate causes), preconditions and latent failures (now called underlying causes) as the reason each barrier (slice of cheese) has a hole in it, and the slices of cheese as the barriers.
Examples of applications
The framework has been applied to a range of areas including aviation safety, various engineering domains, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth.
The model was used in some areas of healthcare. For example, a latent failure could be the similar packaging of two drugs that are then stored close to each other in a pharmacy. This failure would be a contributory factor in the administration of the wrong drug to a patient. Such research led to the realization that medical error can be the result of "system flaws, not character flaws", and that greed, ignorance, malice or laziness are not the only causes of error.
The Swiss cheese model is nowadays widely used within process safety. Each slice of cheese is usually associated to a safety-critical system, often with the support of bow-tie diagrams. This use has become particularly common when applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.
Lubnau, Lubnau, and Okray apply the model to the engineering of firefighting systems, aiming to reduce human errors by "inserting additional layers of cheese into the system", namely the techniques of Crew Resource Management.
Olson and Raz apply the model to improve deception in the methodology of experimental studies, with multiple thin layers of cheese representing subtle components of deception which hide the study hypothesis.
See also
Chain of events (accident analysis)
Healthcare error proliferation model
Iteration
Latent human error
Mitigation
Proximate and ultimate causation
Proximate cause
Redundancy (engineering)
Root cause analysis
System accident
Systems engineering
Systems modelling
References
Aviation safety
Error
Failure
Metaphors referring to food and drink
Process safety
Safety engineering
Scientific models | Swiss cheese model | [
"Chemistry",
"Engineering"
] | 1,007 | [
"Chemical process engineering",
"Systems engineering",
"Safety engineering",
"Process safety"
] |
9,567,916 | https://en.wikipedia.org/wiki/Mechanical%20screening | Mechanical screening, often just called screening, is the practice of taking granulated or crushed ore material and separating it into multiple grades by particle size.
This practice occurs in a variety of industries such as mining and mineral processing, agriculture, pharmaceutical, food, plastics, and recycling.
A method of separating solid particles according to size alone is called screening.
General categories
Screening falls under two general categories: dry screening and wet screening. From these categories, screening separates a flow of material into grades; these grades are then either further processed to an intermediary product or a finished product. Additionally, the machines can be categorized into moving screen and static screen machines, as well as by whether the screens are horizontal or inclined.
Applications
The mining and mineral processing industry uses screening for a variety of processing applications. For example, after mining the minerals, the material is transported to a primary crusher. Before crushing, large boulders are scalped on a shaker with thick shielding screening. Further downstream after crushing, the material can pass through screens with openings or slots that continue to become smaller. Finally, screening is used to make a final separation to produce saleable products based on a grade or a size range.
Process
A screening machine consists of a drive that induces vibration, a screen media that causes particle separation, and a deck which holds the screen media and the drive and is the mode of transport for the vibration.
There are physical factors that make screening practical. For example, vibration, g-force, bed density, and material shape all affect the rate of screening and the quality of the cut. Electrostatic forces can also hinder screening efficiency: water attraction can cause sticking or plugging, while very dry material can generate a charge that attracts it to the screen itself.
As with any industrial process there is a group of terms that identify and define what screening is. Terms like blinding, contamination, frequency, amplitude, and others describe the basic characteristics of screening, and those characteristics in turn shape the overall method of dry or wet screening.
In addition, the way a deck is vibrated differentiates screens. Different types of motion have their advantages and disadvantages. Likewise, media types have their own properties that lead to advantages and disadvantages.
Finally, there are issues and problems associated with screening. Screen tearing, contamination, blinding, and dampening all affect screening efficiency.
Physical principles
Vibration - either sinusoidal vibration or gyratory vibration.
Sinusoidal Vibration occurs at an angled plane relative to the horizontal. The vibration is in a wave pattern determined by frequency and amplitude.
Gyratory Vibration occurs at a near-level plane, at low angles, in a reciprocating side-to-side motion.
Gravity - This physical interaction occurs after material is thrown from the screen, causing it to fall to a lower level. Gravity also pulls the particles through the screen media.
Density - The density of the material relates to material stratification.
Electrostatic Force - This force applies to screening when particles are extremely dry or wet.
Screening terminology
As with any mechanical and physical entity, there is scientific, industrial, and layman terminology. The following is a partial list of terms that are associated with mechanical screening.
Amplitude - This is a measurement of the screen cloth's vertical travel as it peaks to its tallest height and troughs to its lowest point.
Acceleration - The acceleration applied to the screen mesh in order to overcome the van der Waals forces. Measured in multiples of the acceleration constant g (g-force).
Blinding - When material plugs into the open slots of the screen cloth and inhibits overflowing material from falling through.
Brushing - This procedure is performed by an operator who uses a brush to brush over the screen cloth to dislodge blinded openings.
Contamination - This is unwanted material in a given grade. This occurs when there is oversize or fine size material relative to the cut or grade. Another type of contamination is foreign body contamination.
Oversize contamination occurs when there is a hole in the screen such that the hole is larger than the mesh size of the screen. Oversize also occurs when material overflow falls into the grade from overhead, or when the wrong mesh size screen is in place.
Fines contamination is when large sections of the screen cloth are blinded over, and material flowing over the screen does not fall through. The fines are then retained in the grade.
Foreign body contamination is unwanted material that differs from the virgin material going over and through the screen. It can be anything ranging from tree twigs, grass, and metal slag to other mineral types and compositions. This contamination occurs when there is a hole in the scalping screen or when a foreign material's mineralogy or chemical composition differs from the virgin material.
Deck - A deck is the frame or apparatus that holds the screen cloth in place. It also contains the screening drive. It can contain multiple sections as the material travels from the feed end to the discharge end. Multiple decks are screen decks placed in a configuration where a series of decks is attached vertically, each leaning at the same angle as the preceding and following decks. Multiple decks are often referred to as single deck, double deck, triple deck, etc.
Frequency - Measured in hertz (Hz) or revolutions per minute (RPM). Frequency is the number of times the screen cloth sinusoidally peaks and troughs within a second. As for a gyratory screening motion it is the number of revolutions the screens or screen deck takes in a time interval, such as revolution per minute (RPM).
Gradation, grading - Also called "cut" or "cutting." Given a feed material in an initial state, the material can be defined to have a particle size distribution. Grading is removing the maximum size material and minimum size material by way of mesh selection.
Screen Media (Screen cloth) - The material defined by mesh size, which can be made of any type of material, such as steel, stainless steel, rubber compounds, polyurethane, brass, etc.
Shaker - The whole assembly of any type of mechanical screening machine.
Stratification - This phenomenon occurs as vibration is passed through a bed of material. This causes coarse (larger) material to rise and finer (smaller) material to descend within the bed. The material in contact with screen cloth either falls through a slot or blinds the slot or contacts the cloth material and is thrown from the cloth to fall to the next lower level.
Mesh - The number of open slots per linear inch. Mesh is arranged in multiple configurations: it can be a square pattern, long-slotted rectangular pattern, circular pattern, or diamond pattern.
Scalp, scalping - This is the very first cut of the incoming material with the sum of all its grades. Scalping is removing the largest size particles, including enormously large particles relative to the other particles' sizes. Scalping also cleans the incoming material of foreign body contamination such as twigs, trash, glass, or other unwanted oversize material.
Types of mechanical screening
There are a number of types of mechanical screening equipment that cause segregation. These types are based on the motion of the machine through its motor drive.
Circle-throw vibrating equipment - This type of equipment has an eccentric shaft that causes the frame of the shaker to lurch at a given angle. This lurching action literally throws the material forward and up. As the machine returns to its base state, the material falls by gravity to a physically lower level. This type of screening is also used in mining operations for large material, with sizes that range from six inches to +20 mesh.
High frequency vibrating equipment - This type of equipment drives the screen cloth only. Unlike the above, the frame of the equipment is fixed and only the screen vibrates. However, this equipment is similar to the above in that it still throws material off of it and allows the particles to cascade down the screen cloth. These screens are for sizes smaller than 1/8 of an inch down to +150 mesh.
Gyratory equipment - This type of equipment differs from the above two such that the machine gyrates in a circular motion at a near level plane at low angles. The drive is an eccentric gear box or eccentric weights.
Trommel screens - These do not require vibration. Instead, material is fed into a horizontal rotating drum with screen panels around the diameter of the drum.
Tumbler screening technique
An improvement on vibrating, vibratory, and linear screeners, a tumbler screener uses elliptical action, which aids in screening even very fine material. As with panning for gold, the fine particles tend to stay towards the center and the larger ones move to the outside. This allows for segregation and unloads the screen surface so that it can effectively do its job. With the addition of multiple decks and ball cleaning decks, even difficult products can be screened at high capacity to very fine separations.
Circle-throw vibrating equipment
Circle-throw vibrating equipment is a shaker, or a series of shakers, where the drive causes the whole structure to move. The structure extends to a maximum throw or length and then contracts to a base state. A pattern of springs is situated below the structure to provide vibration and shock absorption as the structure returns to the base state.
This type of equipment is used for very large particles, sizes that range from pebble size on up to boulder size material. It is also designed for high volume output. As a scalper, this shaker will allow oversize material to pass over and fall into a crusher such as a cone crusher, jaw crusher, or hammer mill. The material that passes the screen bypasses the crusher and is conveyed and combined with the crushed material.
This equipment is also used in washing processes: as material passes under spray bars, finer material and foreign material are washed through the screen. This is one example of wet screening.
High frequency vibrating equipment
High-frequency vibrating screening equipment is a shaker whose frame is fixed; the drive vibrates only the screen cloth. High frequency vibration equipment is for particles in the size range of 1/8 in (3 mm) down to +150 mesh. Traditional shaker screeners have a difficult time making separations at sizes like 44 microns, while other high energy sieves, such as Elcan Industries' advanced screening technology, allow for much finer separations, down to as fine as 10 μm and even 5 μm.
These shakers usually make a secondary cut for further processing or make a finished product cut.
These shakers are usually set at a steep angle relative to the horizontal level plane. Angles range from 25 to 45 degrees relative to the horizontal level plane.
Gyratory equipment
This type of equipment has an eccentric drive or weights that cause the shaker to travel in an orbital path. The material rolls over the screen and falls with the induction of gravity and directional shifts. Rubber balls and trays provide an additional mechanical means to cause the material to fall through. The balls also provide a throwing action for the material to find an open slot to fall through.
The shaker is set at a shallow angle relative to the horizontal level plane, usually no more than 2 to 5 degrees.
These types of shakers are used for very clean cuts. Generally, a final material cut will not contain any oversize or any fines contamination.
These shakers are designed for the highest attainable quality at the cost of a reduced feed rate.
Trommel screens
Trommel screens have a rotating drum on a shallow angle with screen panels around the diameter of the drum. The feed material always sits at the bottom of the drum and, as the drum rotates, always comes into contact with clean screen. The oversize travels to the end of the drum as it does not pass through the screen, while the undersize passes through the screen into a launder below.
Screen Media Attachment Systems
There are many ways to install screen media into a screen box deck (shaker deck). Also, the type of attachment system has an influence on the dimensions of the media.
Tensioned screen media
Tensioned screen cloth is typically 4 feet by the width or the length of the screening machine depending on whether the deck is side or end tensioned. Screen cloth for tensioned decks can be made with hooks and are attached with clamp rails bolted on both sides of the screen box. When the clamp rail bolts are tightened, the cloth is tensioned or even stretched in the case of some types of self-cleaning screen media. To ensure that the center of the cloth does not tap repeatedly on the deck due to the vibrating shaker and that the cloth stays tensioned, support bars are positioned at different heights on the deck to create a crown curve from hook to hook on the cloth. Tensioned screen cloth is available in various materials: stainless steel, high carbon steel and oil tempered steel wires, as well as moulded rubber or polyurethane and hybrid screens (a self-cleaning screen cloth made of rubber or polyurethane and metal wires).
Commonly, vibratory-type screening equipment employs rigid, circular sieve frames to which woven wire mesh is attached. Conventional methods of producing tensioned meshed screens have given way in recent years to bonding, whereby the mesh is no longer tensioned and trapped between a sieve frame body and clamping ring; instead, developments in modern adhesive technologies have allowed the industry to adopt high strength structural adhesives to bond tensioned mesh directly to frames.
Modular screen media
Modular screen media is typically 1 foot wide by 1 or 2 feet long (4 feet long for ISEPREN WS 85) steel-reinforced polyurethane or rubber panels. They are installed on a flat deck (no crown) that normally has a larger surface than a tensioned deck. This larger surface design compensates for the fact that rubber and polyurethane modular screen media offer less open area than wire cloth. Over the years, numerous ways have been developed to attach modular panels to the screen deck stringers (girders). Some of these attachment systems have been or are currently patented. Self-cleaning screen media is also available on this modular system.
Types of Screen Media
There are several types of screen media manufactured with different types of material that use the two common types of screen media attachment systems, tensioned and modular.
Woven Wire Cloth (Mesh)
Woven wire cloth, typically produced from stainless steel, is commonly employed as a filtration medium for sieving in a wide range of industries. Most often woven with a plain weave, or a twill weave for the lightest of meshes, apertures can be produced from a few microns upwards (e.g. 25 microns), employing wires with diameters from as little as 25 microns. A twill weave allows a mesh to be woven when the wire diameter is too thick in proportion to the aperture. Other, less commonplace, weaves, such as Dutch/Hollander, allow the production of meshes that are stronger and/or having smaller apertures.
Today wire cloth is woven to strict international standards, e.g. ISO1944:1999, which dictates acceptable tolerance regarding nominal mesh count and blemishes. The nominal mesh count, by which mesh is generally defined, is a measure of the number of openings per lineal inch, determined by counting the number of openings from the centre of one wire to the centre of another wire one lineal inch away. For example, a 2 mesh woven with a wire of 1.6mm wire diameter has an aperture of 11.1mm. The formula for calculating the aperture of a mesh, with a known mesh count and wire diameter, is as follows:
a = (25.4 / b) − c
where a = aperture in mm, b = mesh count in openings per lineal inch and c = wire diameter in mm.
Other calculations regarding woven wire cloth/mesh can be made including weight and open area determination. Of note, wire diameters are often referred to by their standard wire gauge (swg); e.g. a 1.6mm wire is a 16 swg.
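A short numeric sketch of these mesh calculations follows. The aperture function implements the formula given above; the open-area expression is the standard plain-square-weave relation (aperture over pitch, squared), included here as an assumption rather than taken from the text.

```python
# Mesh arithmetic for plain square-weave wire cloth. Mesh count b is in
# openings per linear inch; aperture a and wire diameter c are in mm.

MM_PER_INCH = 25.4

def aperture_mm(mesh_count, wire_diameter_mm):
    """a = 25.4 / b - c: the opening left between adjacent wires."""
    return MM_PER_INCH / mesh_count - wire_diameter_mm

def open_area_percent(mesh_count, wire_diameter_mm):
    """(aperture / pitch)^2 * 100 for a plain square weave."""
    pitch = MM_PER_INCH / mesh_count
    return (aperture_mm(mesh_count, wire_diameter_mm) / pitch) ** 2 * 100

print(round(aperture_mm(2, 1.6), 1))        # 11.1 mm, as in the text
print(round(open_area_percent(2, 1.6), 1))  # about 76.4 % open area
```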
Traditionally, screen cloth was made with metal wires woven with a loom. Today, woven cloth is still widely used, primarily because it is less expensive than other types of screen media. Over the years, different weaving techniques have been developed, either to increase the open area percentage or to add wear-life. Slotted opening woven cloth is used where product shape is not a priority and where users need a higher open area percentage. Flat-top woven cloth is used when the consumer wants to increase wear-life. On regular woven wire, the crimps (knuckles on woven wires) wear out faster than the rest of the cloth, resulting in premature breakage. On flat-top woven wire, the cloth wears out evenly until half of the wire diameter is worn, resulting in a longer wear life. Unfortunately, flat-top woven wire cloth is not widely used because the lack of crimps causes a pronounced reduction in passing fines, resulting in premature wear of cone crushers.
Perforated & Punch Plate
On a crushing and screening plant, punch plates or perforated plates are mostly used on scalper vibrating screens, after raw products pass over grizzly bars. Most often installed on a tensioned deck, punch plates offer excellent wear life for high-impact and high material flow applications.
Synthetic screen media (typically rubber or polyurethane)
Synthetic screen media is used where wear life is an issue. Large producers such as mines or huge quarries use them to reduce the frequency of having to stop the plant for screen deck maintenance. Rubber is also used as a very resistant high-impact screen media material used on the top deck of a scalper screen. To compete with rubber screen media fabrication, polyurethane manufacturers developed screen media with lower Shore Hardness. To compete with self-cleaning screen media that is still primarily available in tensioned cloth, synthetic screen media manufacturers also developed membrane screen panels, slotted opening panels and diamond opening panels. Due to the 7-degree demoulding angle, polyurethane screen media users can experience granulometry changes of product during the wear life of the panel.
Self-Cleaning Screen Media
Self-cleaning screen media was initially engineered to resolve screen cloth blinding, clogging and pegging problems. The idea was to place crimped wires side by side on a flat surface, creating openings, and then, in some way, holding them together over the support bars (crown bars or bucker bars). This would allow the wires to vibrate freely between the support bars, preventing blinding, clogging and pegging of the cloth. Initially, crimped longitudinal wires on self-cleaning cloth were held together over support bars with woven wire. In the 1950s, some manufacturers started to cover the woven cross wires with caulking or rubber to prevent premature wear of the crimps (knuckles on woven wires). One of the pioneer products in this category was ONDAP GOMME, made by the French manufacturer Giron. During the mid-1990s, Major Wire Industries Ltd., a Quebec manufacturer, developed a “hybrid” self-cleaning screen cloth called Flex-Mat, without woven cross wires. In this product, the crimped longitudinal wires are held in place by polyurethane strips. Rather than locking (impeding) vibration over the support bars due to woven cross wires, polyurethane strips reduce vibration of longitudinal wires over the support bars, thus allowing vibration from hook to hook. Major Wire quickly started to promote this product as a high-performance screen that helped producers screen more in-specification material for less cost, and not simply as a problem solver. They claimed that the independent vibrating wires helped produce more product compared to a woven wire cloth with the same opening (aperture) and wire diameter. This higher throughput would be a direct result of the higher vibration frequency of each independent wire of the screen cloth (measured in hertz) compared to the shaker vibration (measured in RPM), accelerating the stratification of the material bed. Another benefit that helped increase throughput is that hybrid self-cleaning screen media offered a better open area percentage than woven wire screen media. Due to its flat surface (no knuckles), hybrid self-cleaning screen media can use a smaller wire diameter for the same aperture than woven wire and still last as long, resulting in a greater opening percentage.
References
Mining equipment
Plastics industry
Metallurgical processes
Industrial processes
Solid-solid separation | Mechanical screening | [
"Chemistry",
"Materials_science",
"Engineering"
] | 4,158 | [
"Solid-solid separation",
"Mining equipment",
"Separation processes by phases",
"Metallurgical processes",
"Metallurgy"
] |
12,033,440 | https://en.wikipedia.org/wiki/Pyrolysis%E2%80%93gas%20chromatography%E2%80%93mass%20spectrometry | Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
How it works
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application, even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest bonds, producing smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as a fingerprint to prove material identity, or the GC/MS data can be used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the use of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside programmable temperature vaporizer (PTV) injectors that provide quick heating (up to 60 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for many pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case quartz GC inlet liners can be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector have been published as well.
Applications
Pyrolysis gas chromatography is useful for the identification of involatile compounds. These materials include polymeric materials, such as acrylics or alkyds. The way in which the polymer fragments, before it is separated in the GC, can help in identification. Pyrolysis gas chromatography is also used for environmental samples, including fossils. Pyrolysis GC is used in forensic laboratories to analyze evidence found in crime scenes such as paints, adhesives, plastics, synthetic fibres and soil extracts.
References
Mass spectrometry | Pyrolysis–gas chromatography–mass spectrometry | [
"Physics",
"Chemistry"
] | 530 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
12,034,118 | https://en.wikipedia.org/wiki/Cambridge%20IT%20Skills%20Diploma | The Cambridge IT Skills Diploma is a certificate based on the Microsoft Office software. It assesses a range of the most important IT skills required and is available at two levels: Foundation and Standard.
Exam methodology
These online examinations are offered at two levels, from which the candidate can choose. Standard and Foundation assessments are computer-based and available on demand throughout the year to provide a high-quality and flexible assessment service for individuals and centers.
Diploma modules
The program's modules cover the following topics:
Introduction to IT.
PC Usage and Managing files.
Word Processing, Spreadsheets, Presentations and Databases using Microsoft Office.
Electronic Communication using Microsoft Internet Explorer.
Diploma types
The name of the certificate awarded to the successful candidate is the “Cambridge International Diploma in IT Skills”.
There are four types of Diploma:
Single-Module Diploma, the basic requirement for it is any of the seven applications.
Four-Module Diploma, the basic requirements for it are Introduction to IT, Windows, Word and Electronic communication.
Five-Module Diploma, the basic requirements for it are Windows, Word, Excel, (Access or PowerPoint) and Internet communication.
Seven-Module Diploma, the basic requirements for it are Introduction to IT, Windows, Microsoft Office and Internet communication.
Recognition and accreditation
Due to the importance of the Cambridge Diploma in IT Skills, many professional bodies and international organizations have given their support, ranging from official approval of the Diploma to requiring the Diploma for their employees.
The Cambridge Diploma in IT Skills is recognized by many organizations and governments such as Jordan, Kuwait, Kingdom of Bahrain, Lebanon, United Arab Emirates (UAE), United Nations Educational, Scientific and Cultural Organization (UNESCO) and United Nations Relief and Works Agency (UNRWA).
Abu-Ghazaleh Cambridge IT SkillCenter is the exclusive center in the Middle East that provides the Cambridge IT Skills Diploma in Arabic.
References
External links
Cambridge International Diploma in Information Technology
Abu-Ghazaleh Cambridge Information Technology Skills Center
Information technology qualifications
IT Skills Diploma | Cambridge IT Skills Diploma | [
"Technology"
] | 419 | [
"Computer occupations",
"Information technology qualifications"
] |
12,035,083 | https://en.wikipedia.org/wiki/Federal%20Signal%20Corporation | Federal Signal Corporation is an American manufacturer headquartered in Downers Grove, Illinois. Federal Signal manufactures street sweeper vehicles, public address systems, emergency vehicle equipment, and emergency vehicle lighting.
The company operates two groups: Federal Signal Environmental Solutions and Federal Signal Safety and Security Systems. Federal Signal Environmental Solutions Group manufactures street sweeper vehicles, sewer cleaner and vacuum loader trucks, hydro excavators, waterblasting equipment, dump truck bodies, and trailers. Federal Signal Safety and Security Systems Group manufactures campus alerting systems, emergency vehicle lighting, emergency sirens, alarm systems, outdoor warning sirens, and public address systems.
Currently, the company has 14 manufacturing facilities in 5 different countries.
History
Federal Signal was founded in Chicago, Illinois, as the Federal Electric Company in 1901 by brothers John and James Gilchrist and partner John Goehst, manufacturing and selling store signs lit by incandescent lamps. By 1915, they began manufacturing and selling electrically operated mechanical sirens (such as the Q Siren and the Model 66 Siren). During this time, Federal Electric came under the ownership of Commonwealth Edison, eventually becoming a part of the utility empire of Samuel Insull.
By the 1950s, the company was manufacturing outdoor warning sirens, most notably the Thunderbolt series, primarily intended for warning of air raid attacks or fallout during the Cold War. Many of these sirens have been removed, but some still are operating in tornado siren systems. Longtime engineer Earl Gosswiller patented the Beacon-Ray and TwinSonic products, which were popular emergency vehicle lightbars.
In 1955, the company became a corporation, renaming itself "Federal Sign and Signal Corporation". By this time, it made outdoor warning sirens, police sirens, fire alarms, and outdoor lighting.
By 1961, Federal Sign and Signal had gone public, trading on the NASDAQ market. This was when new products started being manufactured and sold, such as the Federal Signal STH-10. In 1976, the company became Federal Signal Corporation.
On February 22, 2000, Federal Signal Corporation announced the signing of a definitive agreement for the acquisition of P.C.S. Company.
On June 27, 2005, Federal Signal Corporation announced the signing of a joint venture agreement to establish a Chinese company, Federal Signal (Shanghai) Environmental & Sanitary Vehicle Company Limited, based near Shanghai, China.
On February 29, 2016, Federal Signal announced the signing of a definitive agreement for the acquisition of Canada's largest infrastructure-maintenance equipment supplier, Joe Johnson Equipment, and the rights to the name and company.
On May 8, 2017, Federal Signal announced the acquisition of Truck Bodies and Equipment International (TBEI), making it the owner of six dump body and trailer brands, including Crysteel, Duraclass, Rugby Manufacturing, Ox Bodies, Travis and J-Craft.
On July 2, 2019, Federal Signal completed the acquisition of the assets and operations of Mark Rite Lines Equipment Company, Inc., a manufacturer of road-marking equipment, along with HighMark Traffic Services, Inc., which provides road-marking services in Montana. The signing of the purchase agreement was previously announced on May 14, 2019.
On November 17, 2022, Federal Signal announced the signing of a definitive agreement to acquire substantially all the assets and operations of Blasters, Inc. (“Blasters”), a leading manufacturer of truck-mounted waterblasting equipment, for an initial purchase price of $14 million, subject to post-closing adjustments. In addition, there is a contingent earn-out payment of up to $8 million.
See also
Rumbler (siren)
Q2B
Federal Signal Model 2
References
Company History - Federal Signal Corp. (Funding Universe)
Companies listed on the New York Stock Exchange
Fire detection and alarm companies
Emergency population warning systems
Sirens
Oak Brook, Illinois
Companies based in DuPage County, Illinois
Emergency services equipment makers
Articles containing video clips | Federal Signal Corporation | [
"Technology"
] | 786 | [
"Warning systems",
"Emergency population warning systems"
] |
12,036,119 | https://en.wikipedia.org/wiki/Gromov%27s%20inequality%20for%20complex%20projective%20space | In Riemannian geometry, Gromov's optimal stable 2-systolic inequality is the inequality
$\operatorname{stsys}_2(g)^n \leq n!\,\operatorname{vol}_{2n}(g)$,
valid for an arbitrary Riemannian metric $g$ on the complex projective space $\mathbb{CP}^n$, where the optimal bound is attained
by the symmetric Fubini–Study metric, providing a natural geometrisation of quantum mechanics. Here $\operatorname{stsys}_2$ is the stable 2-systole, which in this case can be defined as the infimum of the areas of rational 2-cycles representing the class of the complex projective line in 2-dimensional homology.
The inequality first appeared in Gromov's work as Theorem 4.36.
The proof of Gromov's inequality relies on the Wirtinger inequality for exterior 2-forms.
Projective planes over division algebras
In the special case n=2, Gromov's inequality becomes $\operatorname{stsys}_2(g)^2 \leq 2\operatorname{vol}_4(g)$. This inequality can be thought of as an analog of Pu's inequality for the real projective plane $\mathbb{RP}^2$. In both cases, the boundary case of equality is attained by the symmetric metric of the projective plane. Meanwhile, in the quaternionic case, the symmetric metric on $\mathbb{HP}^2$ is not its systolically optimal metric. In other words, the manifold $\mathbb{HP}^2$ admits Riemannian metrics with a higher systolic ratio $\operatorname{stsys}_4{}^2/\operatorname{vol}_8$ than that of its symmetric metric.
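For comparison, the two optimal inequalities can be written out explicitly. The restatement of Pu's inequality below, with its classical constant $\pi/2$, is included here only as an illustration of the analogy:

```latex
\operatorname{stsys}_2(g)^2 \le 2\,\operatorname{vol}_4(g)
  \quad \text{for every metric } g \text{ on } \mathbb{CP}^2,
\qquad
\operatorname{sys}_1(g)^2 \le \tfrac{\pi}{2}\,\operatorname{area}(g)
  \quad \text{for every metric } g \text{ on } \mathbb{RP}^2,
```

with equality attained, respectively, by the Fubini–Study metric and by the round metric.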
See also
Loewner's torus inequality
Pu's inequality
Gromov's inequality (disambiguation)
Gromov's systolic inequality for essential manifolds
Systolic geometry
References
Geometric inequalities
Differential geometry
Riemannian geometry
Systolic geometry | Gromov's inequality for complex projective space | [
"Mathematics"
] | 314 | [
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
12,037,838 | https://en.wikipedia.org/wiki/Lens%20clock | A lens clock is a mechanical dial indicator that is used to measure the dioptric power of a lens. It is a specialized version of a spherometer. A lens clock measures the curvature of a surface, but gives the result as an optical power in diopters, assuming the lens is made of a material with a particular refractive index.
How it works
The lens clock has three pointed probes that make contact with the surface of the lens. The outer two probes are fixed while the center one moves, retracting as the instrument is pressed down on the lens's surface. As the probe retracts, the hand on the face of the dial turns by an amount proportional to the distance.
The optical power $F$ of the surface is given by
$F = \frac{2(n-1)\,s}{s^2 + (D/2)^2}$
where $n$ is the index of refraction of the glass, $s$ is the vertical distance (sagitta) between the center and outer probes, and $D$ is the horizontal separation of the outer probes. To calculate $F$ in diopters, both $s$ and $D$ must be specified in meters.
A typical lens clock is calibrated to display the power of a crown glass surface, with a refractive index of 1.523. If the lens is made of some other material, the reading must be adjusted to correct for the difference in refractive index.
Measuring both sides of the lens and adding the surface powers together gives the approximate optical power of the whole lens. (This approximation relies on the assumption that the lens is relatively thin.)
Radius of curvature
The radius of curvature $R$ of the surface can be obtained from the optical power given by the lens clock using the formula
$R = \frac{n_0 - 1}{F_0}$
where $n_0$ is the index of refraction for which the lens clock is calibrated, regardless of the actual index of the lens being measured, and $F_0$ is the surface power read from the clock. If the lens is made of glass with some other index $n$, the true optical power of the surface can be obtained using
$F = \frac{n - 1}{R} = F_0\,\frac{n - 1}{n_0 - 1}$
Example—correcting for refractive index
A biconcave lens made of flint glass with an index of 1.7 is measured with a lens clock calibrated for crown glass with an index of 1.523. For this particular lens, the lens clock gives surface powers of −3.0 and −7.0 diopters (dpt). Because the clock is calibrated for a different refractive index, the optical power of the lens is not the sum of the surface powers given by the clock. The optical power of the lens is instead obtained as follows:
First, the radii of curvature are obtained:
$R_1 = \frac{1.523 - 1}{-3.0\ \mathrm{dpt}} \approx -0.174\ \mathrm{m}, \qquad R_2 = \frac{1.523 - 1}{-7.0\ \mathrm{dpt}} \approx -0.0747\ \mathrm{m}$
Next, the optical powers of each surface are obtained:
$F_1 = \frac{1.7 - 1}{-0.174\ \mathrm{m}} \approx -4.0\ \mathrm{dpt}, \qquad F_2 = \frac{1.7 - 1}{-0.0747\ \mathrm{m}} \approx -9.4\ \mathrm{dpt}$
Finally, if the lens is thin the powers of each surface can be added to give the approximate optical power of the whole lens: −13.4 diopters. The actual power, as read by a vertometer or lensometer, might differ by as much as 0.1 diopters.
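A minimal sketch of this correction follows, assuming a clock calibrated for crown glass ($n_0 = 1.523$) and the readings from the example; the function and variable names are invented for illustration.

```python
# Correcting lens-clock readings for a lens whose refractive index
# differs from the clock's calibration index (here crown glass, 1.523).

N_CAL = 1.523  # index the clock is calibrated for

def radius_m(reading_dpt, n_cal=N_CAL):
    """Radius of curvature (m) implied by a clock reading (diopters)."""
    return (n_cal - 1.0) / reading_dpt

def true_power_dpt(reading_dpt, n_actual, n_cal=N_CAL):
    """Surface power corrected for the lens's actual index."""
    return (n_actual - 1.0) / radius_m(reading_dpt, n_cal)

readings = [-3.0, -7.0]          # the two surfaces from the example
powers = [true_power_dpt(r, n_actual=1.7) for r in readings]
print([round(p, 1) for p in powers])   # [-4.0, -9.4]
print(round(sum(powers), 1))           # -13.4 dpt, the thin-lens total
```

Equivalently, each reading simply scales by $(n - 1)/(n_0 - 1) = 0.7/0.523 \approx 1.34$, so the clock total of −10.0 dpt becomes about −13.4 dpt.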
Estimating thickness
A lens clock can also be used to estimate the thickness of thin objects, such as a hard or gas-permeable contact lens. Ideally, a contact lens dial thickness gauge would be used for this, but a lens clock can be used if a dial thickness gauge is not available. To do this, the contact lens is placed concave side up on a table or other hard surface. The lens clock is then brought down on it such that the center prong contacts the lens as close to its center as possible, and the outer prongs rest on the table. The thickness of the lens is then the sagitta in the formula above, and can be calculated from the optical power reading, if the distance between the outer prongs is known.
See also
Astigmatism
Eyeglass prescription
Corrective lens
Galileo
Lapidary
George Ravenscroft
Optometry
Vertex (optics)
Clock
Gear ratio
References
Ophthalmic equipment
Dimensional instruments | Lens clock | [
"Physics",
"Mathematics"
] | 780 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
13,590,437 | https://en.wikipedia.org/wiki/Tenidap | Tenidap was a COX/5-LOX inhibitor and cytokine-modulating anti-inflammatory drug candidate that was under development by Pfizer as a promising potential treatment for rheumatoid arthritis. Pfizer halted development after marketing approval was rejected by the FDA in 1996 due to liver and kidney toxicity, which was attributed to metabolites of the drug with a thiophene moiety that caused oxidative damage.
References
Indoles
Thiophenes
Ureas
Nonsteroidal anti-inflammatory drugs
Chloroarenes
Aromatic ketones
Hydroxyarenes
Carboxamides
Abandoned drugs
Drugs developed by Pfizer | Tenidap | [
"Chemistry"
] | 137 | [
"Organic compounds",
"Ureas",
"Drug safety",
"Abandoned drugs"
] |
13,603,155 | https://en.wikipedia.org/wiki/Industrial%20process%20imaging | Industrial process imaging, or industrial process tomography or process tomography are methods used to form an image of a cross-section of vessel or pipe in a chemical engineering or mineral processing, or petroleum extraction or refining plant.
Process imaging is used for the development of process equipment such as filters, separators and conveyors, as well as for monitoring of production plant, including flow rate measurement. As well as conventional tomographic methods widely used in medicine, such as X-ray computed tomography, magnetic resonance imaging, gamma ray tomography, and ultrasound tomography, new and emerging methods such as electrical capacitance tomography, magnetic induction tomography and electrical resistivity tomography (similar to medical electrical impedance tomography) are also used.
Although such techniques are not in widespread deployment in industrial plants, there is an active research community, including a Virtual Centre for Industrial Process Tomography and a regular World Congress on Industrial Process Tomography, now organized by a learned society for this area, the International Society for Industrial Process Tomography.
A number of applications of tomography to process equipment were described in the 1970s, using ionising radiation from X-ray or isotope sources, but routine use was limited by the high cost involved and by safety constraints. Radiation-based methods used long exposure times, which meant that dynamic measurements of the real-time behaviour of process systems were not feasible. The use of electrical methods to image industrial processes was pioneered by Maurice Beck at UMIST in the mid-1980s.
See also
Industrial Tomography Systems
Process tomography
Imaging
References
Chemical process engineering | Industrial process imaging | [
"Chemistry",
"Engineering"
] | 314 | [
"Chemical process engineering",
"Chemical engineering"
] |
7,389,351 | https://en.wikipedia.org/wiki/MEMS%20electrothermal%20actuator | A MEMS electrothermal actuator is a microelectromechanical device that typically generates motion by thermal expansion. It relies on the equilibrium between the thermal energy produced by an applied electric current and the heat dissipated into the environment or the substrate. Its working principle is based on resistive heating. Fabrication processes for electrothermal actuators include deep X-ray lithography, LIGA (lithography, electroplating, and molding), and deep reactive ion etching (DRIE). These techniques allow for the creation of devices with high aspect ratios. Additionally, these actuators are relatively easy to fabricate and are compatible with standard Integrated Circuits (IC) and MEMS fabrication methods. These electrothermal actuators can be utilized in different kind of MEMS devices like microgrippers, micromirrors, tunable inductors and resonators.
Types of MEMS electrothermal actuators
Generally, there are three types of MEMS electrothermal actuators. One is the asymmetric thermal actuator, also known as the hot-and-cold-arm or U-shaped actuator. Its working principle is based on the unequal thermal expansion of its components. The second type of electrothermal actuator is the symmetric thermal actuator, also known as the chevron or bent-beam actuator. Its operation is based on the total thermal expansion, and its output motion is limited to one direction. The third type of MEMS electrothermal actuator is the bimorph actuator. Its motion relies on the varying coefficients of thermal expansion of the materials used in its fabrication.
Asymmetric (hot-and-cold-arm actuator, U-Shaped)
An asymmetric MEMS electrothermal actuator, often referred to as a hot-and-cold-arm or U-shaped thermal actuator, consists of a narrow "hot" arm and a wider "cold" arm connected in series to an electrical circuit. When current flows through the actuator, Joule heating occurs, producing more heat in the narrow arm due to its higher electrical resistance, resulting in greater thermal expansion compared to the wide arm. This differential thermal expansion creates a bending moment, causing the actuator to bend towards the cold arm. This design allows for precise actuation and is suitable for various MEMS applications, including micro and nano manipulation tools like microgrippers and micro positioners. These tools are essential for tasks such as micro assembly, biological cell manipulation, and material characterization, offering advantages such as low driving voltages and easy control. Various microgripper designs have been developed to enhance performance, including different arm widths and lengths, electro-thermo-compliant actuators, three-beam actuators, folded and meander heaters, sandwiched structures, inclined arms, and curved hot arms. These actuators are used in applications requiring precise control of temperature and force, such as handling fragile micro-particles and single-cell manipulation. Additionally, they are employed in switching mechanisms, optical devices, and bi-directional actuators for applications like RF MEMS switches and micro-positioning platforms, providing larger displacement ranges and improved functionality.
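The resistance argument in this paragraph can be made concrete with a small sketch; the geometry, resistivity, and drive current below are hypothetical values chosen only to show the asymmetry.

```python
# Why the narrow arm of a U-shaped actuator runs hotter: the two arms
# carry the same current in series, so each arm's Joule power is
# P = I^2 * R with R = rho * L / (w * t). Narrower arm -> higher R.

RHO = 2e-5  # ohm*m, assumed resistivity of doped polysilicon

def resistance(length, width, thickness, rho=RHO):
    return rho * length / (width * thickness)

L, t = 200e-6, 2e-6                    # shared length and thickness
r_hot = resistance(L, 2e-6, t)         # narrow "hot" arm, 2 um wide
r_cold = resistance(L, 10e-6, t)       # wide "cold" arm, 10 um wide

i = 1e-3                               # 1 mA drive current
print(i**2 * r_hot, i**2 * r_cold)     # 1.0 mW vs 0.2 mW of heating
```

The hot arm here dissipates five times the power of the cold arm (and far more per unit volume), which drives the differential expansion described above.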
Symmetric (Chevron, bent beam)
The symmetric or Chevron actuator, also known as the V-shape or bent-beam actuator, is a widely used in-plane electrothermal actuator. It features a V-shaped design but can also be found in other shapes. Unlike the differential expansion in hot-and-cold-arm actuators, the Chevron actuator relies on the total thermal expansion for actuation. It consists of two equal slanted beams connected at an apex and anchored to the substrate, forming a single conduction path. When current passes through the beams, resistive heating causes thermal expansion, pushing the apex forward. A comprehensive deflection model for this actuator involves solving a transcendental function numerically to determine the tip displacement, influenced by factors like beam length, pre-bending angle, and temperature increase. The critical parameters include the beam length, pre-bending angle, and thickness. Smaller inclination angles yield larger displacements but risk out-of-plane buckling and fabrication issues. The stiffness and output force can be increased by stacking multiple beams. Chevron actuators are versatile, being used in MEMS applications like micro-switches, microgrippers, and material characterization tools. They can produce substantial gripping force but with limited lateral displacement. To amplify displacement, mechanical amplifiers are often used. Applications include pick-and-place operations for nanomaterials, biological cell manipulation, and RF MEMS switches, where the actuator's stability and high force are advantageous. Variants like Z-shape and kink actuators offer alternative designs for specific needs, such as larger displacement or easier fabrication. Cascaded Chevron actuators enhance displacement further by connecting multiple stages, albeit with increased buckling risk. Applications include micro-engines and advanced microgrippers. These actuators provide significant advantages over other types due to their rectilinear motion, high output force, and low driving voltage, making them suitable for a wide range of precise, small-scale tasks.
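As a complement to the full transcendental model mentioned above, a rigid-link kinematic estimate of the apex motion is easy to compute. It neglects bending stiffness, so it overestimates the true displacement, and the dimensions, pre-bend angle, and temperature rise below are assumed for illustration only.

```python
# Rigid-link estimate of chevron (V-beam) apex displacement. Each
# half-beam of length L is pre-bent by theta from the anchor-to-anchor
# axis; heating stretches it by strain eps = alpha * dT while the
# anchors hold the horizontal projection fixed, pushing the apex out.

import math

def chevron_displacement(L, theta, alpha, dT):
    eps = alpha * dT                    # thermal strain
    grown = L * (1.0 + eps)             # expanded half-beam length
    x = L * math.cos(theta)             # fixed horizontal projection
    return math.sqrt(grown ** 2 - x ** 2) - L * math.sin(theta)

# Hypothetical silicon device: 200 um half-beams, 3 degree pre-bend,
# alpha ~ 2.6e-6 per K, 300 K average temperature rise.
d = chevron_displacement(200e-6, math.radians(3.0), 2.6e-6, 300.0)
print(f"{d * 1e6:.2f} um")  # ~2.6 um: roughly 17x the beam's 0.16 um
                            # free elongation, showing the geometric gain
```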
Bimorph
The bimorph design is a prominent type of electrothermal actuator consisting of two or more layers of different materials with varied coefficients of thermal expansion (CTE). When subjected to thermal stimuli, the differential expansion causes the actuator to bend, producing out-of-plane displacement. This makes bimorph actuators ideal for applications where in-plane actuators are unsuitable, offering a broad range of applications. The deflection mechanism relies on material properties, such as Young’s modulus and CTE mismatch, as well as the thickness ratio of the layers and the beam's geometrical parameters. A basic bimorph cantilever consists of two layers: one with a high CTE and another with a low CTE. Joule heating induces more expansion in the high-CTE layer, causing the structure to bend towards the low-CTE layer. The theoretical models for the behavior of bimorph actuators, such as tip deflection and output force, are well-established. For a simple two-layer cantilever, the curvature due to thermal expansion mismatch can be calculated using specific formulas involving temperature change, CTE, width, thickness, and Young’s modulus of each layer. The choice of materials for bimorph actuators is diverse, with metals and polymers commonly used for high-CTE layers, and dielectrics or semiconductors for low-CTE layers. Recent advancements include the use of carbon materials like graphene, which has a negative CTE, and graphene/polymer composites.
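One widely used instance of such a formula is the classical Timoshenko two-layer curvature expression (which, for layers of equal width, does not depend on the width). The sketch below implements it with typical handbook-style material values; these numbers are assumptions for illustration, not taken from the text.

```python
# Timoshenko's curvature formula for a two-layer (bimorph) beam under a
# uniform temperature rise dT. Layer 1 is the high-CTE film, layer 2
# the low-CTE film; m and n are the thickness and modulus ratios.

def bimorph_curvature(a1, a2, E1, E2, t1, t2, dT):
    """Return curvature in 1/m."""
    m = t1 / t2
    n = E1 / E2
    h = t1 + t2
    num = 6.0 * (a1 - a2) * dT * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2
               + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Assumed 1 um aluminum on 1 um SiO2, heated uniformly by 100 K.
k = bimorph_curvature(a1=23e-6, a2=0.5e-6,   # CTEs in 1/K
                      E1=70e9, E2=70e9,      # Young's moduli in Pa
                      t1=1e-6, t2=1e-6, dT=100.0)
L = 200e-6                                   # cantilever length in m
print(k, 0.5 * k * L ** 2)  # curvature and small-angle tip deflection
```

With these assumed values the tip of a 200 μm cantilever deflects by a few tens of microns, consistent with the large out-of-plane strokes described above.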
While bimorph actuators are typically designed for out-of-plane actuation due to the planar deposition of layers, innovative designs such as the "vertical bimorph" and lateral actuators have been developed to achieve in-plane actuation using techniques like angled electron-beam evaporation and post-CMOS micromachining. Bimorph actuators find applications in various fields. In micromanipulation, conventional bimorph actuators are less feasible for in-plane microgrippers, but novel designs like a four-finger microgripper provide stable and reliable gripping by curling upwards when open. In micromirrors, bimorph actuators enable large displacement with low power consumption, ideal for tilting and piston motion in applications like projection displays, optical switches, barcode readers, biomedical imaging, tunable lasers, spectroscopy, and adaptive optics. They are also used in atomic force microscopy (AFM) and scanning probe nanolithography (SPN), offering nanometer-scale resolution imaging and efficient patterning. Additionally, bimorph actuators are utilized in tunable RF devices due to their precise control and actuation capabilities. However, challenges such as shear stress at layer interfaces must be managed to ensure the longevity of bimorph devices.
Advantages
Electrothermal actuators offer several advantages over other types of actuators, making them valuable components for MEMS. They operate with relatively low driving voltages yet can generate large forces and displacements, either parallel or perpendicular to the substrate. Unlike actuators that rely on electrostatic or magnetic fields, electrothermal actuators are suitable for manipulating biological samples and electronic chips. These actuators are also easy to control, as they do not exhibit the significant hysteresis seen in piezoelectric and shape memory alloy (SMA) actuators. Electrothermal actuators are scalable in size and typically have a more compact structure compared to electrostatic actuators, which use large arrays of comb drives, or electromagnetic and SMA actuators, which are challenging to implement on a small scale. They are versatile in their operating environments, functioning well in air, vacuum, dusty conditions, liquid media, and under the electron beam in scanning electron microscopy (SEM). However, electrothermal actuators generally have low switching speeds due to the large time constants of thermal processes. Despite this, high-frequency thermal actuation has been demonstrated. The method of electrothermal excitation is also attractive for actuation in resonance mode, particularly for microcantilever-based sensing and probing applications. MEMS resonators using this method have shown high-quality factors and wide frequency tuning ranges.
Other types of MEMS Actuators
Electrostatic — parallel plate or comb drive
Piezoelectric
Magnetic
Thermostatic — linear motion, paraffin wax drive
See also
MEMS magnetic actuator
MEMS electrostatic actuator
References
Further reading
Experimental and numerical study of MEMS electrothermal actuators: Comparing dynamic behavior and heat transfer process in vacuum and non-vacuum environments
External links
Micro-particles manipulation and sorting
Electrothermal actuator simulation on Comsol
Actuators
Materials science
Nanoelectronics
Microtechnology | MEMS electrothermal actuator | [
"Physics",
"Materials_science",
"Engineering"
] | 2,098 | [
"Applied and interdisciplinary physics",
"Microtechnology",
"Materials science",
"Nanoelectronics",
"nan",
"Nanotechnology"
] |
7,391,012 | https://en.wikipedia.org/wiki/Combes%20quinoline%20synthesis | The Combes quinoline synthesis is a chemical reaction, which was first reported by Combes in 1888. Further studies and reviews of the Combes quinoline synthesis and its variations have been published by Alyamkina et al., Bergstrom and Franklin, Born, and Johnson and Mathews.
The Combes quinoline synthesis is often used to prepare the 2,4-substituted quinoline backbone and is unique in that it uses a β-diketone substrate, which is different from other quinoline preparation methods, such as the Conrad-Limpach synthesis and the Doebner reaction.
It involves the condensation of unsubstituted anilines (1) with β-diketones (2) to form substituted quinolines (4) after an acid-catalyzed ring closure of an intermediate Schiff base (3).
Mechanism
The reaction mechanism undergoes three major steps, the first one being the protonation of the oxygen on the carbonyl in the β-diketone, which then undergoes a nucleophilic addition reaction with the aniline. An intramolecular proton transfer is followed by an E2 mechanism, which causes a molecule of water to leave. Deprotonation at the nitrogen atom generates a Schiff base, which tautomerizes to form an enamine that gets protonated via the acid catalyst, which is commonly concentrated sulfuric acid (H2SO4). The second major step, which is also the rate-determining step, is the annulation of the molecule. Immediately following the annulation, there is a proton transfer, which eliminates the positive formal charge on the nitrogen atom. The alcohol is then protonated, followed by the dehydration of the molecule, resulting in the end product of a substituted quinoline.
Regioselectivity
The formation of the quinoline product is influenced by the interaction of both steric and electronic effects. In a recent study, Sloop investigated how substituents would influence the regioselectivity of the product as well as the rate of reaction during the rate-determining step in a modified Combes pathway, which produced trifluoromethylquinoline as the product. Sloop focused specifically on the influences that substituted trifluoro-methyl-β-diketones and substituted anilines would have on the rate of quinoline formation. One modification to the generic Combes quinoline synthesis was the use of a mixture of polyphosphoric acid (PPA) and various alcohols (Sloop used ethanol in his experiment). The mixture produced a polyphosphoric ester (PPE) catalyst that proved to be more effective as the dehydrating agent than concentrated sulfuric acid (H2SO4), which is commonly used in the Combes quinoline synthesis. Using the modified Combes synthesis, two possible regioisomers were found: 2-CF3- and 4-CF3-quinolines. It was observed that the steric effects of the substituents play a more important role in the electrophilic aromatic annulation step, which is the rate-determining step, compared to the initial nucleophilic addition of the aniline to the diketone. It was also observed that increasing the bulk of the R group on the diketone and using methoxy-substituted anilines leads to the formation of 2-CF3-quinolines. If chloro- or fluoroanilines are used, the major product would be the 4-CF3 regioisomer. The study concludes that the interaction of steric and electronic effects leads to the preferred formation of 2-CF3-quinolines, which provides us with some information on how to manipulate the Combes quinoline synthesis to form a desired regioisomer as the product.
Importance of Quinoline Synthesis
There are multiple ways to synthesize quinoline, one of which is the Combes quinoline synthesis. The synthesis of quinoline derivatives has been prevalent in biomedical studies due to the efficiency of the synthetic methods as well as the relative low-cost production of these compounds, which can also be produced in large scales. Quinoline is an important heterocyclic derivative that serves as a building block for many pharmacological synthetic compounds. Quinoline and its derivatives are commonly used in antimalarial drugs, fungicides, antibiotics, dyes, and flavoring agents. Quinoline and its derivatives also have important roles in other biological compounds that are involved in cardiovascular, anticancer, and anti-inflammatory activities. Additionally, researchers, such as Luo Zai-gang et al., recently looked at the synthesis and use of quinoline derivatives as HIV-1 integrase inhibitors. They also looked at how the substituent placement on the quinoline derivatives affected the primary anti-HIV inhibitory activity.
See also
Conrad-Limpach reaction
Doebner reaction
Doebner-Miller reaction
Skraup synthesis
References
Further reading
Bergstrom, F.W. and Franklin, E.C. Hexaacylic Compounds: Pyridine, Quinoline, and Isoquinoline In Heterocyclic Nitrogen Compounds. California: Department of Chemistry, Stanford University, 1944, 156.
Carbon-carbon bond forming reactions
Condensation reactions
Quinoline forming reactions
Name reactions | Combes quinoline synthesis | [
"Chemistry"
] | 1,122 | [
"Name reactions",
"Condensation reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
7,391,204 | https://en.wikipedia.org/wiki/Transport%20hub | A transport hub is a place where passengers and cargo are exchanged between vehicles and/or between transport modes. Public transport hubs include railway stations, rapid transit stations, bus stops, tram stops, airports, and ferry slips. Freight hubs include classification yards, airports, seaports, and truck terminals, or combinations of these. For private transport by car, the parking lot functions as an unimodal hub.
History
Historically, an interchange service in the scheduled passenger air transport industry involved a "through plane" flight operated by two or more airlines where a single aircraft was used with the individual airlines operating it with their own flight crews on their respective portions of a direct, no-change-of-plane multi-stop flight. In the U.S., a number of air carriers including Alaska Airlines, American Airlines, Braniff International Airways, Continental Airlines, Delta Air Lines, Eastern Airlines, Frontier Airlines (1950-1986), Hughes Airwest, National Airlines (1934-1980), Pan Am, Trans World Airlines (TWA), United Airlines and Western Airlines previously operated such cooperative "through plane" interchange flights on both domestic and/or international services with these schedules appearing in their respective system timetables.
Delta Air Lines pioneered the hub and spoke system for aviation in 1955 from its hub in Atlanta, Georgia, United States, in an effort to compete with Eastern Air Lines. FedEx adopted the hub and spoke model for overnight package delivery during the 1970s. When the United States airline industry was deregulated in 1978, Delta's hub and spoke paradigm was adopted by several airlines. Many airlines around the world operate hub-and-spoke systems facilitating passenger connections between their respective flights.
Public transport
Intermodal passenger transport hubs in public transport include bus stations, railway stations and metro stations, while a major transport hub, often multimodal (bus and rail), may be referred to as a transport centre or, in American English, as a transit center. Sections of city streets that are devoted to functioning as transit hubs are referred to as transit malls. In cities with a central station, that station often also functions as a transport hub in addition to being a railway station.
Journey planning involving transport hubs is more complicated than direct trips, as journeys will typically require a transfer at the hub. Modern electronic journey planners for public transport have a digital representation of both the stops and transport hubs in a network, to allow them to calculate journeys that include transfers at hubs.
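Modern planners implement this hub-and-transfer search with a shortest-path algorithm over a multimodal graph. The sketch below is a minimal illustration, not any particular planner's implementation: nodes are (stop, mode) pairs, a transfer at a hub is simply an edge carrying a time penalty, and Dijkstra's algorithm returns the fastest journey. All stops, modes, and times are invented.

```python
# Hub-aware journey planning on a tiny multimodal network. Nodes are
# (stop, mode) pairs; changing mode at a hub is just another edge,
# weighted with the transfer time.

import heapq

def fastest(graph, start, goal):
    """graph: node -> list of (neighbor, minutes). Returns minimum
    total minutes from start to goal, or None if unreachable."""
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > best[node]:
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            if t + w < best.get(nxt, float("inf")):
                best[nxt] = t + w
                heapq.heappush(queue, (t + w, nxt))
    return None

network = {
    ("Suburb", "bus"): [(("Hub", "bus"), 12)],   # bus leg to the hub
    ("Hub", "bus"):    [(("Hub", "rail"), 5)],   # 5-minute transfer
    ("Hub", "rail"):   [(("City", "rail"), 9)],  # rail leg onward
}
print(fastest(network, ("Suburb", "bus"), ("City", "rail")))  # 26
```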
Airports
Airports have a twofold hub function. First, they concentrate passenger traffic in one place for onward transportation, which makes it important for airports to be connected to the surrounding transport infrastructure, including roads, bus services, and railway and rapid transit systems. Secondly, some airports function as intramodal hubs for the airlines, or airline hubs. This is a common strategy among network airlines, which fly from a limited number of airports and usually require their customers to change planes at one of their hubs when travelling between two cities the airline does not connect directly.
Airlines have extended the hub-and-spoke model in various ways. One method is to create additional hubs on a regional basis, and to create major routes between the hubs. This reduces the need to travel long distances between nodes that are close together. Another method is to use focus cities to implement point-to-point service for high traffic routes, bypassing the hub entirely.
Freight
There are usually three kinds of freight hubs: sea-road, sea-rail, and road-rail, though they can also be sea-road-rail. With the growth of containerization, intermodal freight transport has become more efficient, often making multi-leg journeys cheaper than direct through services, which has increased the use of hubs.
See also
Central station
Infrastructure security
Intermodal journey planner
Junction (traffic)
Layover
Spoke-hub distribution paradigm
Transit desert
Transit mall
References
Hub
Transit centers | Transport hub | [
"Physics"
] | 795 | [
"Physical systems",
"Transport",
"Transportation geography"
] |
7,392,872 | https://en.wikipedia.org/wiki/Ensemble%20interpretation | The ensemble interpretation of quantum mechanics considers the quantum state description to apply only to an ensemble of similarly prepared systems, rather than supposing that it exhaustively represents an individual physical system.
The advocates of the ensemble interpretation of quantum mechanics claim that it is minimalist, making the fewest physical assumptions about the meaning of the standard mathematical formalism. It proposes to take to the fullest extent the statistical interpretation of Max Born, for which he won the Nobel Prize in Physics in 1954. On the face of it, the ensemble interpretation might appear to contradict the doctrine proposed by Niels Bohr, that the wave function describes an individual system or particle, not an ensemble, though he accepted Born's statistical interpretation of quantum mechanics. It is not quite clear exactly what kind of ensemble Bohr intended to exclude, since he did not describe probability in terms of ensembles. The ensemble interpretation is sometimes, especially by its proponents, called "the statistical interpretation", but it seems perhaps different from Born's statistical interpretation.
As is the case for "the" Copenhagen interpretation, "the" ensemble interpretation might not be uniquely defined. In one view, the ensemble interpretation may be defined as that advocated by Leslie E. Ballentine, Professor at Simon Fraser University. His interpretation does not attempt to justify, or otherwise derive, or explain quantum mechanics from any deterministic process, or make any other statement about the real nature of quantum phenomena; it intends simply to interpret the wave function. It does not propose to lead to actual results that differ from orthodox interpretations. It makes the statistical operator primary in reading the wave function, deriving the notion of a pure state from that.
History
In his 1926 paper introducing the concept of quantum scattering theory, Max Born proposed the view that "the motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws", where the causal laws are those of the Schrödinger equation. As related in his 1954 Nobel Prize in Physics lecture, Born viewed the statistical character of quantum mechanics as an empirical observation with philosophical implications.
Einstein consistently maintained that quantum mechanics supplied only a statistical view. In 1936 he wrote: "the ψ function does not in any way describe a condition which could be that of a single system; it relates rather to many systems, to 'an ensemble of systems' in the sense of statistical mechanics." However, Einstein did not provide a detailed study of the ensemble, ultimately because he considered quantum mechanics itself to be incomplete, primarily because it was only an ensemble theory. Einstein believed quantum mechanics was correct in the same sense that thermodynamics is correct, but that it was insufficient as a means of unifying physics.
Also in the years around 1936, Karl Popper published philosophical studies countering the work of Heisenberg and Bohr. Popper considered their work as essentially subjectivist, unfalsifiable, and thus unscientific. He held that the quantum state represented statistical assertions which have no predictive power for individual particles. Popper described "propensities" as the correct notion of probability for quantum mechanics.
Although several other notable physicists championed the ensemble concept, including John C. Slater, Edwin C. Kemble, and Dmitry Blokhintsev, Leslie Ballentine's 1970 paper 'The statistical interpretation of quantum mechanics" and his textbook have become the main sources. Ballentine followed up with axiomatic development of propensity theory, analysis of decoherence in the ensemble interpretation and other papers spanning 40 years.
States, systems, and ensembles
Perhaps the first expression of an ensemble interpretation was that of Max Born. In a 1968 article, he used the German words 'gleicher Haufen', which are often translated into English, in this context, as 'ensemble' or 'assembly'. The atoms in his assembly were uncoupled, meaning that they were an imaginary set of independent atoms that defines its observable statistical properties. Born did not develop a more detailed specification of ensembles to complete his scattering theory work.
Although Einstein described quantum mechanics as clearly an ensemble theory, he did not present a formal definition of an ensemble. Einstein sought a theory of individual entities, which he argued quantum mechanics was not.
Ballentine calls his particular ensemble interpretation the Statistical Interpretation.
According to Ballentine, the distinguishing difference between many of the Copenhagen-like interpretations (CI) and the Statistical Interpretation (EI) is the following:
CI: A pure state provides a complete description of an individual system, e.g. an electron.
EI: A pure state describes the statistical properties of an ensemble of identically prepared systems.
Ballentine defines a quantum state as an ensemble of similarly prepared systems. For example, the system may be a single electron, then the ensemble will be "the set of all single electrons subjected to the same state preparation technique." He uses the example of a low-intensity electron beam prepared with a narrow range of momenta. Each prepared electron is a system, the ensemble consists of many such systems.
Ballentine emphasizes that the meaning of the "Quantum State" or "State Vector" may be described, essentially, by a one-to-one correspondence with the probability distributions of measurement results, not the individual measurement results themselves. A mixed state is a description only of the probabilities of positions, not a description of actual individual positions. A mixed state is a mixture of probabilities of physical states, not a coherent superposition of physical states.
Probability; propensity
Quantum observations are inherently statistical. For example, the electrons in a low-intensity double slit experiment arrive at random times and seemingly random places and yet eventually show an interference pattern.
The theory of quantum mechanics offers only statistical results. Given that we have prepared a system in a state $|\psi\rangle$, the theory predicts a measurement result $a$ as a probability distribution:
$P(a|\psi) = |\langle a|\psi\rangle|^2$.
Different approaches to probability can be applied to connect the probability distribution of theory to the observed randomness.
Popper, Ballentine, Paul Humphreys, and others point to propensity as the correct interpretation of probability in science. Propensity, a form of causality that is weaker than determinism, is the tendency of a physical system to produce a result. Thus the mathematical statement
$P(a|C) = p$
means that the propensity for event $a$ to occur given the physical scenario $C$ is $p$. The physical scenario is viewed as a weakly causal condition.
The weak causation invalidates Bayes' theorem, and correlation is no longer symmetric. As noted by Paul Humphreys, many physical examples show the lack of reciprocal correlation; for example, the propensity for smokers to get lung cancer does not imply that lung cancer has a propensity to cause smoking.
Propensity closely matches the application of quantum theory: single event probability can be predicted by theory but only verified by repeated samples in experiment. Popper explicitly developed propensity theory to eliminate subjectivity in quantum mechanics.
Preparative and observing devices as origins of quantum randomness
An isolated quantum mechanical system, specified by a wave function, evolves in time in a deterministic way according to the Schrödinger equation that is characteristic of the system. Though the wave function can generate probabilities, no randomness or probability is involved in the temporal evolution of the wave function itself. This is agreed, for example, by Born, Dirac, von Neumann, London & Bauer, Messiah, and Feynman & Hibbs. An isolated system is not subject to observation; in quantum theory, this is because observation is an intervention that violates isolation.
The system's initial state is defined by the preparative procedure; this is recognized in the ensemble interpretation, as well as in the Copenhagen approach. The system's state as prepared, however, does not entirely fix all properties of the system. The fixing of properties goes only as far as is physically possible, and is not physically exhaustive; it is, however, physically complete in the sense that no physical procedure can make it more detailed. This is stated clearly by Heisenberg in his 1927 paper. It leaves room for further unspecified properties. For example, if the system is prepared with a definite energy, then the quantum mechanical phase of the wave function is left undetermined by the mode of preparation. The ensemble of prepared systems, in a definite pure state, then consists of a set of individual systems, all having one and the same definite energy, but each having a different quantum mechanical phase, regarded as probabilistically random. The wave function, however, does have a definite phase, and thus specification by a wave function is more detailed than specification by state as prepared. The members of the ensemble are logically distinguishable by their distinct phases, though the phases are not defined by the preparative procedure. The wave function can be multiplied by a complex number of unit magnitude without changing the state as defined by the preparative procedure.
The preparative state, with unspecified phase, leaves room for the several members of the ensemble to interact in respectively several various ways with other systems. An example is when an individual system is passed to an observing device so as to interact with it. Individual systems with various phases are scattered in various respective directions in the analyzing part of the observing device, in a probabilistic way. In each such direction, a detector is placed, in order to complete the observation. When the system hits the analyzing part of the observing device, that scatters it, it ceases to be adequately described by its own wave function in isolation. Instead it interacts with the observing device in ways partly determined by the properties of the observing device. In particular, there is in general no phase coherence between system and observing device. This lack of coherence introduces an element of probabilistic randomness to the system–device interaction. It is this randomness that is described by the probability calculated by the Born rule. There are two independent originative random processes, one that of preparative phase, the other that of the phase of the observing device. The random process that is actually observed, however, is neither of those originative ones. It is the phase difference between them, a single derived random process.
The Born rule describes that derived random process, the observation of a single member of the preparative ensemble. In the ordinary language of classical or Aristotelian scholarship, the preparative ensemble consists of many specimens of a species. The quantum mechanical technical term 'system' refers to a single specimen, a particular object that may be prepared or observed. Such an object, as is generally so for objects, is in a sense a conceptual abstraction, because, according to the Copenhagen approach, it is defined, not in its own right as an actual entity, but by the two macroscopic devices that should prepare and observe it. The random variability of the prepared specimens does not exhaust the randomness of a detected specimen. Further randomness is injected by the quantum randomness of the observing device. It is this further randomness that makes Bohr emphasize that there is randomness in the observation that is not fully described by the randomness of the preparation. This is what Bohr means when he says that the wave function describes "a single system". He is focusing on the phenomenon as a whole, recognizing that the preparative state leaves the phase unfixed, and therefore does not exhaust the properties of the individual system. The phase of the wave function encodes further detail of the properties of the individual system. The interaction with the observing device reveals that further encoded detail. It seems that this point, emphasized by Bohr, is not explicitly recognized by the ensemble interpretation, and this may be what distinguishes the two interpretations. It seems, however, that this point is not explicitly denied by the ensemble interpretation.
Einstein perhaps sometimes seemed to interpret the probabilistic "ensemble" as a preparative ensemble, recognizing that the preparative procedure does not exhaustively fix the properties of the system; therefore he said that the theory is "incomplete". Bohr, however, insisted that the physically important probabilistic "ensemble" was the combined prepared-and-observed one. Bohr expressed this by demanding that an actually observed single fact should be a complete "phenomenon", not a system alone, but always with reference to both the preparing and the observing devices. The Einstein–Podolsky–Rosen criterion of "completeness" is clearly and importantly different from Bohr's. Bohr regarded his concept of "phenomenon" as a major contribution that he offered for quantum theoretical understanding. The decisive randomness comes from both preparation and observation, and may be summarized in a single randomness, that of the phase difference between preparative and observing devices. The distinction between these two devices is an important point of agreement between Copenhagen and ensemble interpretations. Though Ballentine claims that Einstein advocated "the ensemble approach", a detached scholar would not necessarily be convinced by that claim of Ballentine. There is room for confusion about how "the ensemble" might be defined.
"Each photon interferes only with itself"
Niels Bohr famously insisted that the wave function refers to a single individual quantum system. He was expressing the idea that Dirac expressed when he famously wrote: "Each photon then interferes only with itself. Interference between different photons never occurs." Dirac clarified this by writing: "This, of course, is true only provided the two states that are superposed refer to the same beam of light, i.e. all that is known about the position and momentum of a photon in either of these states must be the same for each." Bohr wanted to emphasize that a superposition is different from a mixture. He seemed to think that those who spoke of a "statistical interpretation" were not taking that into account. To create, by a superposition experiment, a new and different pure state, from an original pure beam, one can put absorbers and phase-shifters into some of the sub-beams, so as to alter the composition of the re-constituted superposition. But one cannot do so by mixing a fragment of the original unsplit beam with component split sub-beams. That is because one photon cannot both go into the unsplit fragment and go into the split component sub-beams. Bohr felt that talk in statistical terms might hide this fact.
The physics here is that the effect of the randomness contributed by the observing apparatus depends on whether the detector is in the path of a component sub-beam, or in the path of the single superposed beam. This is not explained by the randomness contributed by the preparative device.
Measurement and collapse
Bras and kets
The ensemble interpretation is notable for its relative de-emphasis on the duality and theoretical symmetry between bras and kets. The approach emphasizes the ket as signifying a physical preparation procedure. There is little or no expression of the dual role of the bra as signifying a physical observational procedure. The bra is mostly regarded as a mere mathematical object, without very much physical significance. It is the absence of the physical interpretation of the bra that allows the ensemble approach to by-pass the notion of "collapse". Instead, the density operator expresses the observational side of the ensemble interpretation. It hardly needs saying that this account could be expressed in a dual way, with bras and kets interchanged, mutatis mutandis. In the ensemble approach, the notion of the pure state is conceptually derived by analysis of the density operator, rather than the density operator being conceived as conceptually synthesized from the notion of the pure state.
An attraction of the ensemble interpretation is that it appears to dispense with the metaphysical issues associated with reduction of the state vector, Schrödinger cat states, and other issues related to the concepts of multiple simultaneous states. The ensemble interpretation postulates that the wave function only applies to an ensemble of systems as prepared, but not observed. There is no recognition of the notion that a single specimen system could manifest more than one state at a time, as assumed, for example, by Dirac. Hence, the wave function is not envisaged as being physically required to be "reduced". This can be illustrated by an example:
Consider a quantum die. If this is expressed in Dirac notation, the "state" of the die can be represented by a "wave" function describing the probability of an outcome, given by:
$|\psi\rangle = \tfrac{1}{\sqrt{6}}\left(|1\rangle + |2\rangle + |3\rangle + |4\rangle + |5\rangle + |6\rangle\right)$
where the "+" sign of a probabilistic equation is not an addition operator but the standard probabilistic Boolean operator OR. The state vector is inherently defined as a probabilistic mathematical object such that the result of a measurement is one outcome OR another outcome.
It is clear that on each throw, only one of the states will be observed, but this is not expressed by a bra. Consequently, there appears to be no requirement for a notion of collapse of the wave function/reduction of the state vector, or for the die to physically exist in the summed state. In the ensemble interpretation, wave function collapse would make as much sense as saying that the number of children a couple produced, collapsed to 3 from its average value of 2.4.
The state function is not taken to be physically real, or to be a literal summation of states. The wave function is taken to be an abstract statistical function, applicable only to the statistics of repeated preparation procedures. The ket does not apply directly to a single particle detection, but only to the statistical results of many. This is why the account does not refer to bras, and mentions only kets.
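As a hedged numerical illustration of this ensemble reading (a sketch only: the equal weights and sample size are invented, and no quantum dynamics is simulated), the following Python fragment draws many identically prepared "throws" and confirms that the state's Born-rule probabilities describe the ensemble's relative frequencies, while each individual throw yields exactly one face:

```python
# Ensemble reading of the "quantum die": the state assigns an equal amplitude
# to each face; squaring (Born rule) gives the probabilities, which describe
# the statistics of many identically prepared throws, not any single throw.
import random
from collections import Counter

amp = 1 / 6 ** 0.5                                  # equal amplitude per face
probs = {face: amp ** 2 for face in range(1, 7)}    # Born rule: |amplitude|^2

throws = random.choices(list(probs), weights=list(probs.values()), k=60_000)
for face, count in sorted(Counter(throws).items()):
    print(face, round(count / len(throws), 3))      # relative frequency ≈ 1/6
```

On this view nothing "collapses" between throws; the distribution is a property of the ensemble of preparations, exactly as the 2.4-children average is a property of a population rather than of any one family.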
Diffraction
The ensemble approach differs significantly from the Copenhagen approach in its view of diffraction. The Copenhagen interpretation of diffraction, especially in the viewpoint of Niels Bohr, puts weight on the doctrine of wave–particle duality. In this view, a particle that is diffracted by a diffractive object, such as for example a crystal, is regarded as really and physically behaving like a wave, split into components, more or less corresponding to the peaks of intensity in the diffraction pattern. Though Dirac does not speak of wave–particle duality, he does speak of "conflict" between wave and particle conceptions. He indeed does describe a particle, before it is detected, as being somehow simultaneously and jointly or partly present in the several beams into which the original beam is diffracted. So does Feynman, who speaks of this as "mysterious".
The ensemble approach points out that this seems perhaps reasonable for a wave function that describes a single particle, but hardly makes sense for a wave function that describes a system of several particles. The ensemble approach demystifies this situation along the lines advocated by Alfred Landé, accepting Duane's hypothesis. In this view, the particle really and definitely goes into one or other of the beams, according to a probability given by the wave function appropriately interpreted. There is definite quantal transfer of translative momentum between particle and diffractive object. This is recognized also in Heisenberg's 1930 textbook, though usually not recognized as part of the doctrine of the so-called "Copenhagen interpretation". This gives a clear and utterly non-mysterious physical or direct explanation instead of the debated concept of wave function "collapse". It is presented in terms of quantum mechanics by other present day writers also, for example, Van Vliet. For those who prefer physical clarity rather than mysterianism, this is an advantage of the ensemble approach, though it is not the sole property of the ensemble approach. With a few exceptions, this demystification is not recognized or emphasized in many textbooks and journal articles.
Criticism
David Mermin sees the ensemble interpretation as being motivated by an adherence ("not always acknowledged") to classical principles.
"[...] the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation".
However, according to Einstein and others, a key motivation for the ensemble interpretation is not about any alleged, implicitly assumed probabilistic ignorance, but the removal of "…unnatural theoretical interpretations…". A specific example being the Schrödinger cat problem, but this concept applies to any system where there is an interpretation that postulates, for example, that an object might exist in two positions at once.
Mermin also emphasises the importance of describing single systems, rather than ensembles.
"The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are."
Schrödinger's cat
The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. That being the case, the state vector would not apply to individual cat experiments, but only to the statistics of many similar prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial non-issue. However, the application of state vectors to individual systems, rather than ensembles, has claimed explanatory benefits, in areas like single-particle twin-slit experiments and quantum computing (see Schrödinger's cat applications). As an avowedly minimalist approach, the ensemble interpretation does not offer any specific alternative explanation for these phenomena.
The frequentist probability variation
The claim that the wave functional approach fails to apply to single particle experiments cannot be taken as a claim that quantum mechanics fails in describing single-particle phenomena. In fact, it gives correct results within the limits of a probabilistic or stochastic theory.
Probability always requires a set of multiple data, and thus single-particle experiments are really part of an ensemble — an ensemble of individual experiments that are performed one after the other over time. In particular, the interference fringes seen in the double-slit experiment require repeated trials to be observed.
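The following minimal Python sketch illustrates that point under an assumed idealized cos² fringe intensity (positions are sampled from a fixed distribution; this is not a simulation of quantum dynamics): individual detections look scattered, and the fringe bands appear only in the accumulated histogram.

```python
# Rejection-sample single "detections" from an idealized double-slit fringe
# intensity I(x) ∝ cos²(π·fringes·x/width); the pattern is only visible in
# the ensemble of many events. Width and fringe count are invented.
import math, random

def sample_hit(rng, width=1.0, fringes=8):
    """Draw one detection position from the fringe intensity by rejection."""
    while True:
        x = rng.uniform(-width / 2, width / 2)
        if rng.random() < math.cos(math.pi * fringes * x / width) ** 2:
            return x

rng = random.Random(0)
hits = [sample_hit(rng) for _ in range(20_000)]

bins = [0] * 40
for x in hits:
    bins[min(39, int((x + 0.5) * 40))] += 1
print("".join("#" if b > max(bins) / 2 else "." for b in bins))  # fringe bands
```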
The quantum Zeno effect
Leslie Ballentine promoted the ensemble interpretation in his book Quantum Mechanics, A Modern Development. In it, he described what he called the "Watched Pot Experiment". His argument was that, under certain circumstances, a repeatedly measured system, such as an unstable nucleus, would be prevented from decaying by the act of measurement itself. He initially presented this as a kind of reductio ad absurdum of wave function collapse.
The effect has been shown to be real. Ballentine later wrote papers claiming that it could be explained without wave function collapse.
See also
Atomic electron transition
Interpretations of quantum mechanics
References
External links
Quantum mechanics as Wim Muynk sees it
Kevin Aylwards's account of the ensemble interpretation
Detailed ensemble interpretation by Marcel Nooijen
Pechenkin, A.A. The early statistical interpretations of quantum mechanics
Krüger, T. An attempt to close the Einstein–Podolsky–Rosen debate
Duda, J. Four-dimensional understanding of quantum mechanics
Ulf Klein's website on the statistical interpretation of quantum theory
Mamas, D.L. An intrinsic quantum state interpretation of quantum mechanics
Klein, U. From probabilistic mechanics to quantum theory
Quantum measurement
Interpretations of quantum mechanics | Ensemble interpretation | [
"Physics"
] | 4,851 | [
"Interpretations of quantum mechanics",
"Quantum measurement",
"Quantum mechanics"
] |
7,394,916 | https://en.wikipedia.org/wiki/Gun%20data%20computer | The gun data computer was a series of artillery computers used by the U.S. Army for coastal artillery, field artillery and anti-aircraft artillery applications. For antiaircraft applications they were used in conjunction with a director computer.
Variations
M1: This was used by seacoast artillery for major-caliber seacoast guns. It computed continuous firing data for a battery of two guns that were separated by not more than . It utilised the same type of input data furnished by a range section with the then-current (1940) types of position-finding and fire-control equipment.
M3: This was used in conjunction with the M9 and M10 directors to compute all required firing data, i.e. azimuth, elevation and fuze time. The computations were made continuously, so that the gun was at all times correctly pointed and the fuze correctly timed for firing at any instant. The computer was mounted in the M13 or M14 director trailer.
M4: This was identical to the M3 except for some mechanisms and parts which were altered to allow for different ammunition being used.
M8: This was an electronic computer (using vacuum tube technology) built by Bell Labs and used by coast artillery with medium-caliber guns (up to ). It made the following corrections: wind, drift, Earth's rotation, muzzle velocity, air density, height of site and spot corrections. (An illustrative sketch of the trajectory problem such computers solved appears after this list.)
M9: This was identical to the M8 except for some mechanisms and parts which were altered to accommodate anti-aircraft ammunition and guns.
M10: A ballistics computer, part of the M38 fire control system, for Skysweeper anti-aircraft guns.
M13: A ballistics computer for M48 tanks.
M14: A ballistics computer for M103 heavy tanks.
M15: A part of the M35 field artillery fire-control system, which included the M1 gunnery officer console and M27 power supply.
M16: A ballistics computer for M60A1 tanks.
M18: FADAC (field artillery digital automatic computer), an all-transistorized general-purpose digital computer manufactured by Amelco (Teledyne Systems, Inc.) and North American Aviation's Autonetics division. FADAC was first fielded during 1960, and was the first semiconductor-based digital electronics field-artillery computer.
M19: A ballistics computer for M60A2 tanks.
M21: A ballistics computer for M60A3 tanks.
M23: A mortar ballistics computer.
M26: A fire-control computer for AH-1 Cobra helicopters, (AH-1F).
M31: A mortar ballistics computer.
M32: A mortar ballistics computer, (handheld).
M1: A ballistics computer for M1 Abrams main battle tanks.
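As referenced in the M8 entry above, the common core of these machines is a firing solution computed from a trajectory model. The following Python fragment is an illustrative point-mass integration with quadratic drag, using invented constants; it is not the algorithm of the M8 or of any computer listed, which also corrected for wind, drift, Earth's rotation and air density:

```python
# Illustrative point-mass ballistics with quadratic drag (invented constants;
# not the algorithm of any computer listed above). Steps a 2-D trajectory
# until impact and reports the range for a muzzle velocity and elevation.
import math

def impact_range(v0, elevation_deg, drag_k=0.0001, g=9.81, dt=0.01):
    """Integrate with acceleration -g vertically and -k*|v|*v drag."""
    angle = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag_k * speed * vx * dt
        vy -= (g + drag_k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

print(round(impact_range(v0=820.0, elevation_deg=15.0)))  # range in metres
```

A real fire-control computer inverts this kind of model, searching for the elevation and fuze time that place the round on a given target.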
Systems
The Battery Computer System (BCS) AN/GYK-29 was a computer used by the United States Army for computing artillery fire mission data. It replaced the FADAC and was small enough to fit into the HMMWV combat vehicle.
The AN/GSG-10 TACFIRE (Tactical Fire) direction system automated field artillery command and control functions. It was composed of computers and remote devices such as the Variable Format Message Entry Device (VFMED), the AN/PSG-2 Digital Message Device (DMD) and the AN/TPQ-36 Firefinder field artillery target acquisition radar system linked by digital communications using existing radio and wire communications equipment. Later it also linked with the BCS which had more advanced targeting algorithms.
The last TACFIRE fielding was completed during 1987. Replacement of TACFIRE equipment began during 1994.
TACFIRE used the AN/GYK-12, a second-generation mainframe computer developed primarily by Litton Industries for Army divisional field artillery (DIVARTY) units. It had two configurations (division and battalion level) housed in mobile command shelters. Field artillery brigades also use the division configuration.
Components of the system were identified using acronyms:
CPU Central Processing Unit
IOU Input/Output Unit
MCMU Mass Core Memory Unit
DDT Digital Data Terminal
MTU Magnetic Tape Unit
PCG Power Converter Group
ELP Electronic Line Printer
DPM Digital Plotter Map
ACC Artillery Control Console
RCMU Remote Control Monitoring Unit
The successor to the TACFIRE system is the Advanced Field Artillery Tactical Data System (AFATDS). The AFATDS is the "Fires XXI" computer system for both tactical and technical fire control. It replaced both BCS (for technical fire solutions) and IFSAS/L-TACFIRE (for tactical fire control) systems in U.S. Field Artillery organizations, as well as in maneuver fire support elements at the battalion level and higher. As of 2009, the U.S. Army was transitioning from a version based on a Sun Microsystems SPARC computer running the Linux kernel to a version based on laptop computers running the Microsoft Windows operating system.
Surviving examples
One reason for the lack of surviving examples of early units was the use of radium on the dials. As a result, they were classified as hazardous waste and were disposed of by the United States Department of Energy. Currently there is one surviving example of FADAC at the Fort Sill artillery museum.
See also
Director (military)
Fire-control system
Kerrison Predictor
List of military electronics of the United States
Mark I Fire Control Computer – US Navy system for 5-inch guns
Numerical control
Project Manager Battle Command
Rangekeeper
References
Sources
TM 9-2300 Standard Artillery and Fire Control Materiel dated 1944
TM 9-2300 Artillery Materiel and Associated Equipment. dated May 1949
ST 9-159 Handbook of Ordnance materiel dated 1968
Gun Data Computers, Coast Artillery Journal March–April 1946, pp. 45–47
External links
http://www.globalsecurity.org/military/library/report/1988/MJR.htm
http://ed-thelen.org/comp-hist/BRL61.html#TOC
modern system
https://web.archive.org/web/20110617062042/http://sill-www.army.mil/famag/1960/sep_1960/SEP_1960_PAGES_8_15.pdf
https://web.archive.org/web/20040511174351/http://combatindex.com/mil_docs/pdf/hdbk/0700/MIL-HDBK-799.pdf
https://web.archive.org/web/20110720002347/https://rdl.train.army.mil/soldierPortal/atia/adlsc/view/public/12288-1/FM/3-22.91/chap1.htm
https://web.archive.org/web/20110617062233/http://sill-www.army.mil/famag/1958/FEB_1958/FEB_1958_PAGES_32_35.pdf
Bell labs patent
http://web.mit.edu/STS.035/www/PDFs/Newell.pdf
TACFIRE (archived)
BCS components
Military electronics of the United States
Artillery operation
Applications of control engineering
Analog computers
Ballistics
World War II American electronics
Fire-control computers of World War II | Gun data computer | [
"Physics",
"Engineering"
] | 1,535 | [
"Control engineering",
"Applied and interdisciplinary physics",
"Ballistics",
"Applications of control engineering"
] |
7,395,352 | https://en.wikipedia.org/wiki/Lixiviant | A lixiviant is a chemical used in hydrometallurgy to extract elements from its ore. One of the most famous lixiviants is cyanide, which is used in extracting 90% of mined gold. The combination of cyanide and air converts gold particles into a soluble salt. Once separated from the bulk gangue, the solution is processed in a series of steps to give the metal.
Etymology
The origin is the word lixiviate, meaning to leach or to dissolve out, deriving from the Latin lixivium. A lixiviant assists in rapid and complete leaching, for example during in situ leaching. The metal can be recovered from it in a concentrated form after leaching.
Further reading
References
Metallurgical processes | Lixiviant | [
"Chemistry",
"Materials_science",
"Engineering"
] | 161 | [
"Metallurgical processes",
"Materials science stubs",
"Materials science",
"Metallurgy"
] |
8,888,246 | https://en.wikipedia.org/wiki/NeSSI | NeSSI (for New Sampling/Sensor Initiative) is a global and open initiative sponsored by the Center for Process Analysis and Control (CPAC) at the University of Washington, in Seattle.
The NeSSI initiative was begun to simplify the tasks and reduce the overall costs associated with engineering, installing, and maintaining chemical process analytical systems. Process analytical systems are commonly used by the chemical, oil refining and petrochemical industries to measure and control both chemical composition as well as certain intrinsic physical properties (such as viscosity). The specific objectives of NeSSI are:
Increasing the reliability of these systems through the use of increased automation,
Shrinking their physical size and energy use by means of miniaturization,
Promoting the creation and use of industry standards for process analytical systems,
Helping create the infrastructure needed to support the use of the emerging class of robust and selective microAnalytical sensors.
To date, NeSSI has served as a forum for the adoption and improvement of an industrial standard which specifies the use of miniature and modular Lego-like flow components. NeSSI has also issued a specification which has been instrumental in spurring the development and commercialization of a plug and play low power communication bus (NeSSI-bus) specifically designed for use with process analytical sample systems in electrically hazardous environments. As part of its development road map, NeSSI has defined the electrical and mechanical interfaces, as well as compiled a list of automated (smart) software features, which are now beginning to be used by microanalytical manufacturers for industrial applications.
Background
Modern chemical and petrochemical processing plants are complex systems containing many steps (often called unit operations) involved in producing one or more products from various raw materials. In order to control the many processes, for both improved product quality and operational safety, many measurements are made at the different stages of processing. These measurements, either from simple sensors (such as temperature, pressure, flow, etc.) or from sophisticated chemical analyzers (providing composition of one or more components in the chemical stream), are typically used as inputs to process control algorithms to give a "snapshot" of the process operation and to control the process to ensure it is operating efficiently and safely.
Traditionally, most of the measurements (with the exception of temperature, pressure and flow) were performed "off-line" by taking a sample from the process and analyzing it in the laboratory. Beginning in the latter of part of the 1930s, a trend aimed at moving the analysis from the laboratory to the process plant began. With the advent of more sophisticated analyzers, this concept known as Process Analytics become much more prevalent in the 1980s and a new discipline called Process Analytical Chemistry (PAC) emerged which combined chemical engineering and analytical chemistry.
One of the main driving forces for PAC (See also: PAT) is to remove the bottleneck and time lag associated with sending the samples to the lab and waiting for the analysis results. By moving the analysis to the process, results can be obtained closer to real-time which effectively improves the ability for the control action to correct for process changes (i.e., feedback and feed forward control).
By far, the most common implementation of PAC (especially for more complex analyzers) utilizes what is known as extractive sampling. This typically involves the continuous (or sometimes periodic) removal of a small portion of sample from a much larger piping system or process vessel. This sample is then conditioned (filtered, pressure regulated, flow controlled, etc.) and introduced to the analyzer where the chemical composition or the intrinsic physical properties of process fluids (vapours and liquids) are measured. In industrial plants, the majority of sample systems and their related analyzers are installed in analyzer houses.
The hardware (traditionally metal tubing, compression fittings, valves, regulators, rotameters and filters) associated with extractive sampling is collectively referred to as the sampling system. Sample systems are used to condition or adjust the sample conditions (pressure, amount of particulate allowed, temperature and flow) to a level suitable for use with an analytical device (analyzer) such as a gas chromatograph, an oxygen analyzer or an infrared spectrometer. Despite the simple explanation just given, modern sampling systems can be quite large, complex, and expensive. The design features of analytical sample systems have changed little from when the discipline of Process Analytics began in Germany through to the present day; an early analyzer and sample system of this kind was used at the Buna Chemical Works (Schkopau, Germany). Process analytics remains exceptional in that it is the last outpost of low-level automation (retaining manual adjustments and visible checks) within the process industries.
History
The rationale for NeSSI originated from focus group meetings held in 1999 at the Center for Process Analytical Chemistry (CPAC), which called for more reliable sampling and analysis for manufacturing processes. Early work on NeSSI was started in July 2000 by Peter van Vuuren (of ExxonMobil Chemical) and Rob Dubois (of Dow Chemical), with the initial aim of adopting the new types of modular and miniature hardware being addressed in a standard under development by an ISA (Instrumentation, Systems and Automation Society) technical committee. (Reference 1)
The term NeSSI, along with the futuristic concepts of a communication/power bus specifically designed for process analytical (the NeSSI-bus) and fully automated sampling systems were first introduced outside of CPAC at a presentation given in January 2001 at the International Forum of Process Analytical Chemistry (IFPAC) at Amelia Island, Florida, USA. These new concepts were collected in the NeSSI Generation II Specification and released by CPAC in 2003 as an open publication. The specification is located on the CPAC website. (Reference 2)
NeSSI Technical Objectives
Facilitate the acceptance and implementation of modular, miniature and automated (smart) sample system technology using the mechanical design based on the ANSI/ISA SP76.00.02-2002 standard.
Provide the mechanical, electrical and software infrastructure needed to accelerate the use of microanalytical sensors within the process industries.
Move the analytical systems out of the analyzer houses by promoting the use of field-mounted analytical systems (similar to pressure transmitters) which are close-coupled to the major process equipment. (NeSSI refers to this concept as By-Line analysis)
Lay the groundwork for the adoption of an open communication standard(s) for process analytical. This includes communication between sample system components such as flow sensors, actuators, and microanalytical sensors, as well as communication to a Distributed Control System (DCS).
Comparison of Current Technology vs. NeSSI Technology (Extractive Systems)
Technology Development Roadmap
The NeSSI Technology Development Roadmap groups the technology into three generations, which are backward compatible. Generation I is a commercial product and proven in numerous industrial and laboratory applications. Generation II products have been proven in the laboratory but have yet to be commercialized. Generation III (microanalytical) is in development.
Technical Development Generations
Generation I Fluid Components
Generation I covers the commercially available mechanical systems associated with the fluid handling components. Generation I has adopted the ANSI/ISA SP76.00.2002 miniature, modular mechanical standard. This standard precisely defines inlet and outlet ports and overall dimensions which allow Lego-like interchangeability of components, between different manufacturers. The ANSI/ISA standard is referenced by the International Electrotechnical Commission in publication IEC 62339-1:2006.
Currently three manufacturers produce the mechanical mounting system (known as a substrate) which serves as the platform for attaching various components. Since the components are bolted to the surface of the substrate, sealed by O-rings, they are sometimes referred to as surface mount devices. (The semiconductor industry has a related system; however, the sealing is done by metallic seals rather than elastomeric O-rings.) There are currently over 60 different types of surface mount components available from various suppliers who provide valves, filters and regulators as well as pressure and flow sensing devices. Although the platform for mounting various components is common among the manufacturers, the interconnections below the surface are proprietary. The following figure shows three of the common designs. (From left to right) A Swagelok system which uses various lengths of tube connectors set in rigid channels; a CIRCORTech design which uses a single block with assorted flow-tubes; and a Parker Hannifin design which uses various blocks ported together with small connectors which also serve as flow paths.
Generation II Connectivity using NeSSI-bus and the SAM
The key elements of the NeSSI Generation II Specification are as follows.
Adoption of a digital communication bus (NeSSI-bus) that is specifically tailored for process analytics and intended to replace 4-20 mA systems. This bus can handle up to 30 devices. (This bus would be equivalent to a plug-and-play USB bus on a personal computer but with special requirements.)
For electrical equipment in hazardous areas, classifying the interior of an enclosure handling hazardous (flammable) fluids (e.g. hydrogen and ethylene) as Division 1/Zone 1 rather than Division/Zone 2.
Adopting the use of a safe low energy, globally accepted method of electrical protection called intrinsic safety for the NeSSI-bus.
Adopting the use of miniature smart/automated electronic devices including sensors (flow, pressure temperature), on/off and proportional actuators and enclosure heater controls.
A move away from the use of local indicating devices such as gauges and rotameters in order to reduce labor-intensive manual checking (rounds).
A move away from centralized control (automation) model to a local/field control model which is represented by a small computing device called the Sensor Actuator Manager (SAM).
Adopting the concept of portable, commercially available software smart applets for the purpose of automating specific sample system functions. These applets would be resident in the SAM (a hypothetical applet sketch follows this list).
Employing an Ethernet network between the SAM, the DCS and the Operator & Maintenance (O & M) user station. (NeSSI refers to this bus as the ANLAN)
Introduction of a graphical user interface (GUI) for better visualization of physically compact sampling systems.
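To make the smart-applet idea concrete, here is a hypothetical Python sketch (all device names, interfaces and numbers are invented; NeSSI does not define this API) of a SAM-resident applet that holds sample flow at a setpoint by nudging a proportional valve:

```python
# Hypothetical sketch of a SAM-resident "smart applet" (all device names and
# interfaces are invented; NeSSI does not define this API). The applet holds
# sample flow at a setpoint with a simple proportional correction.

class FlowControlApplet:
    def __init__(self, flow_sensor, valve, setpoint_mlpm, gain=0.005):
        self.flow_sensor = flow_sensor      # callable returning mL/min
        self.valve = valve                  # callable taking 0.0..1.0 opening
        self.setpoint = setpoint_mlpm
        self.gain = gain
        self.opening = 0.5

    def step(self):
        error = self.setpoint - self.flow_sensor()
        self.opening = min(1.0, max(0.0, self.opening + self.gain * error))
        self.valve(self.opening)

# Simulated hardware so the sketch runs stand-alone: flow tracks the valve.
state = {"opening": 0.5}
applet = FlowControlApplet(
    flow_sensor=lambda: 100.0 * state["opening"],     # fake flow sensor
    valve=lambda o: state.update(opening=o),          # fake actuator
    setpoint_mlpm=80.0,
)
for _ in range(50):
    applet.step()
print(round(100.0 * state["opening"], 1))  # converges near 80.0 mL/min
```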
The first prototype of a multi node/miniature Generation II system was demonstrated by Siemens Process Analytics in 2006. Siemens has adapted an existing bus system called I2C to operate in an intrinsically safe mode. This work was undertaken once it was determined that existing intrinsically safe capable digital communication systems such as Foundation Fieldbus and Profibus could not meet the requirements of reduced physical size as well as the lower cost and power draw defined by the NeSSI-bus. Whether or not this bus will go into wide commercial production is unknown at this time.
A nonprofit organization, CAN in Automation (CiA) released a 2007 Draft Standard Proposal (DSP-103), that specifies the physical layer of an intrinsically safe bus. [CAN = Controller Area Network] The specification has been developed by members of the CiA organization among them ABB, Pepperl+Fuchs, Texas Instruments, and Siemens. By using a lower voltage (9.5 V) for its power supply, this bus can provide more current (up to 1,000 mA) to power multiple devices in a hazardous environment. This group has standardized upon the 5-pin M8 pico connector for providing both power and signal to the devices. A commercial implementation of a process analytical system using this bus has yet to be demonstrated.
An interim development called Generation 1.5 uses both conventional 4-20 mA analogue sensors and discrete signals to actuate valves. A Programmable Logic Controller (PLC) is used as the Sensor Actuator Manager (SAM).
Generation III - microAnalytical
The introduction of new microAnalytical devices to the process industries can be enabled by employing standard physical, electrical and software interfaces. Generation III will allow tighter integration of the sample conditioning and analytical measurement devices.
Applications
NeSSI is used for process analytical measurements in the petrochemical, chemical and oil refining industries. These measurements may be for quality control of raw material or final product, environmental compliance, safety, energy reduction or process control purposes. Vapour applications may include hydrocarbon feed stocks and intermediates (ethylene, ethane, propylene, etc.), natural gas streams, liquefied petroleum gas (LPG) streams, hydrogen and air gas streams.
Liquid systems suitable for use with the Generation I mechanical portion of NeSSI are hydrocarbons such as diesel fuel as well as aqueous streams. Highly viscous fluids and solids are not suitable for use with NeSSI. Very dirty, high particulate streams need to be filtered. Some liquid service applications may be limited by pressure drops associated with components hooked up in a serial configuration. NeSSI systems have found applications in areas other than the process analytical environments including micro reactor, mini plant and laboratory environments where small size, unskilled assembly and flexible configuration is important.
Role of CPAC
The development of NeSSI has been a collaborative effort between industrial end-users, manufacturers who supply the industries, and academic researchers working in the area of process analytics. CPAC continues as the focal point for NeSSI development, and sponsor of the NeSSI steering team. CPAC provides a neutral umbrella under which interested companies have been able to meet, discuss needs and issues, and make progress towards defining the future of industrial sampling and analyzer systems. The NeSSI name is trademarked by the University of Washington to ensure that it remains freely associated with the open nature of the initiative: anyone can use the name NeSSI to refer to products or services that are consistent with the specifications and guidelines of NeSSI, as long as they refrain from exclusively tying the name to a proprietary product or service.
Criticism, Impact and Summary
Criticism of NeSSI mechanical systems has included higher initial cost, inability to troubleshoot at a component level (due to compact/intensive spacing), and the lack of performance data associated with the use of elastomeric seals in long-term installations. From a design perspective, it may be difficult to design a modular mechanical system which meets the needs of the diverse process applications found in industry. Development of the NeSSI-bus has been an iterative exercise, and it will need the close cooperation of both component and analyzer manufacturers to make their equipment NeSSI-bus compliant. At this time, there are missing elements such as a low-cost, low-power flow sensor capable of providing a continuous reading of sample system flow, as well as a proportional, miniature control valve.
The predicted impact of NeSSI systems are as follows:
Adoption of a universally accepted method of protection (intrinsic safety) for sample systems will globalize and harmonize system design and help overcome geographical restrictions currently mandated by various electrical certification/approval bodies such as Factory Mutual (FM) and Underwriters Laboratories (UL), ATEX (Europe), GOST (Russian Federation) and Canadian Standards Association (CSA).
Analyzer technical staff will have the capability of accessing the status of all the key indicators of analytical sample system performance both locally and remotely. Predictive rather than preventive maintenance can be performed and remote diagnostics and graphical user interfaces are the norm. Analyzer rounds will be eliminated. Analyzer systems will become more reliable and trustworthy. The analyzer technician will have the power to configure a sampling/analytical system, as he/she desires using the smart applets. The adjustable wrench and screwdriver will be replaced with software.
Molecular management meaning tighter process control by more analysis of the chemical processes - will become feasible with better, faster, less costly and more abundant analysis. This will help reduce manufacturing energy costs and minimize environmental emissions in the process industries.
Since its debut in 2000, the mechanical portion of NeSSI has seen gradual but steady acceptance in industry. Currently, there are three major commercial suppliers of NeSSI-compliant mechanical systems along with dozens of components available for mounting on these systems. There is also a growing list of companies implementing NeSSI systems in their manufacturing and pilot-plant facilities. Recently, two of the largest suppliers of process analyzers have committed to supporting NeSSI hardware and to developing the intrinsically safe NeSSI-bus communication into their products. NeSSI is gaining status as a de facto standard for many process sampling system applications.
NeSSI (Generation I) acceptance has spread beyond its initial chemical and petrochemical industry roots to find applications in the automotive, food, and pharmaceutical industries, as well as applications as an analytical development system in research laboratories. Generation II electrical systems are now close to commercialization with the first industrial systems scheduled for operation in 2008.
References
"ANSI/ISA 76.00.02-2002 Modular Component Interfaces for Surface-Mount Fluid Distribution Components – Part1: Elastomeric Seals," Instrumentation, Systems, and Automation Society (ISA), Compositional Analyzers Committee, (2002), www.isa.org
Dubois, Robert N.; van Vuuren, Peter; Gunnell, Jeffrey J. "NeSSI (New Sampling/Sensor Initiative) Generation II Specification", A Conceptual and Functional Specification Describing the Use of Miniature, Modular Electrical Components for Adaptation to the ANSI/ISA SP76 Substrate in Electrically Hazardous Areas. Center for Process Analytical Chemistry (CPAC), University of Washington, Seattle WA, (2003)
External links
— provides more technical information about NeSSI as well as a complete history of its development through a compendium of papers and talks presented at various meetings, workshops, and conferences since its inception.
AVENISENSE— provides NeSSI miniaturized fluid properties sensors & transmitters(liquid and gas) such as viscosity, density, pressure, temperature and molar mass.
Chemical engineering
Systems analysis | NeSSI | [
"Chemistry",
"Engineering"
] | 3,631 | [
"Chemical engineering",
"nan"
] |
8,890,014 | https://en.wikipedia.org/wiki/Perfect%20totient%20number | In number theory, a perfect totient number is an integer that is equal to the sum of its iterated totients. That is, one applies the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and adds together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number.
Examples
For example, there are six positive integers less than 9 and relatively prime to it, so the totient of 9 is 6; there are two numbers less than 6 and relatively prime to it, so the totient of 6 is 2; and there is one number less than 2 and relatively prime to it, so the totient of 2 is 1; and , so 9 is a perfect totient number.
The first few perfect totient numbers are
3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363, 471, 729, 2187, 2199, 3063, 4359, 4375, ... .
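A minimal, self-contained Python sketch that checks the definition directly and reproduces the start of this sequence:

```python
# Check the definition directly: apply Euler's totient repeatedly until
# reaching 1, and compare the running sum of iterated totients with n.

def phi(n):
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_perfect_totient(n):
    total, m = 0, n
    while m > 1:
        m = phi(m)
        total += m
    return total == n

print([n for n in range(2, 400) if is_perfect_totient(n)])
# -> [3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363]
```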
Notation
In symbols, one writes
$\varphi^i(n) = \begin{cases} \varphi(n) & \text{if } i = 1 \\ \varphi(\varphi^{i-1}(n)) & \text{otherwise} \end{cases}$
for the iterated totient function. Then if $c$ is the integer such that
$\varphi^c(n) = 1,$
one has that $n$ is a perfect totient number if
$n = \sum_{i=1}^{c} \varphi^i(n).$
Multiples and powers of three
It can be observed that many perfect totient numbers are multiples of 3; in fact, 4375 is the smallest perfect totient number that is not divisible by 3. All powers of 3 are perfect totient numbers, as may be seen by induction using the fact that
$\varphi(3^k) = 2 \times 3^{k-1}.$
Venkataraman (1975) found another family of perfect totient numbers: if $p = 4 \times 3^k + 1$ is prime, then $3p$ is a perfect totient number. The values of $k$ leading to perfect totient numbers in this way are
0, 1, 2, 3, 6, 14, 15, 39, 201, 249, 1005, 1254, 1635, ... .
More generally if p is a prime number greater than 3, and 3p is a perfect totient number, then p ≡ 1 (mod 4) (Mohan and Suryanarayana 1982). Not all p of this form lead to perfect totient numbers; for instance, 51 is not a perfect totient number. Iannucci et al. (2003) showed that if 9p is a perfect totient number then p is a prime of one of three specific forms listed in their paper. It is not known whether there are any perfect totient numbers of the form $3^k p$ where p is prime and k > 3.
References
Integer sequences | Perfect totient number | [
"Mathematics"
] | 551 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
8,891,344 | https://en.wikipedia.org/wiki/Gibbs%E2%80%93Duhem%20equation | In thermodynamics, the Gibbs–Duhem equation describes the relationship between changes in chemical potential for components in a thermodynamic system:
where is the number of moles of component the infinitesimal increase in chemical potential for this component, the entropy, the absolute temperature, volume and the pressure. is the number of different components in the system. This equation shows that in thermodynamics intensive properties are not independent but related, making it a mathematical statement of the state postulate. When pressure and temperature are variable, only of components have independent values for chemical potential and Gibbs' phase rule follows. The Gibbs−Duhem equation cannot be used for small thermodynamic systems due to the influence of surface effects and other microscopic phenomena.
The equation is named after Josiah Willard Gibbs and Pierre Duhem.
Derivation
Deriving the Gibbs–Duhem equation from the fundamental thermodynamic equation is straightforward. The total differential of the extensive Gibbs free energy $G$ in terms of its natural variables is
$$\mathrm{d}G = \left(\frac{\partial G}{\partial p}\right)_{T,N} \mathrm{d}p + \left(\frac{\partial G}{\partial T}\right)_{p,N} \mathrm{d}T + \sum_{i=1}^{I} \left(\frac{\partial G}{\partial N_i}\right)_{p,T,N_{j\neq i}} \mathrm{d}N_i.$$
Since the Gibbs free energy is the Legendre transformation of the internal energy, the derivatives can be replaced by their definitions, transforming the above equation into:
$$\mathrm{d}G = V\,\mathrm{d}p - S\,\mathrm{d}T + \sum_{i=1}^{I} \mu_i\,\mathrm{d}N_i.$$
The chemical potential is simply another name for the partial molar Gibbs free energy (or the partial Gibbs free energy, depending on whether N is in units of moles or particles). Thus the Gibbs free energy of a system can be calculated by collecting moles together carefully at a specified T, P and at a constant molar ratio composition (so that the chemical potential does not change as the moles are added together), i.e.
$$G = \sum_{i=1}^{I} \mu_i N_i.$$
The total differential of this expression is
$$\mathrm{d}G = \sum_{i=1}^{I} \mu_i\,\mathrm{d}N_i + \sum_{i=1}^{I} N_i\,\mathrm{d}\mu_i.$$
Combining the two expressions for the total differential of the Gibbs free energy gives
$$\sum_{i=1}^{I} \mu_i\,\mathrm{d}N_i + \sum_{i=1}^{I} N_i\,\mathrm{d}\mu_i = V\,\mathrm{d}p - S\,\mathrm{d}T + \sum_{i=1}^{I} \mu_i\,\mathrm{d}N_i,$$
which simplifies to the Gibbs–Duhem relation:
$$\sum_{i=1}^{I} N_i\,\mathrm{d}\mu_i = -S\,\mathrm{d}T + V\,\mathrm{d}p.$$
Alternative derivation
Another way of deriving the Gibbs–Duhem equation can be found by taking the extensivity of energy into account. Extensivity implies that
$$U(\lambda S, \lambda V, \lambda N_1, \ldots, \lambda N_I) = \lambda\, U(S, V, N_1, \ldots, N_I),$$
where the arguments are the extensive variables of the internal energy $U$. The internal energy is thus a first-order homogeneous function. Applying Euler's homogeneous function theorem, one finds the following relation when taking only volume, number of particles, and entropy as extensive variables:
$$U = TS - pV + \sum_{i=1}^{I} \mu_i N_i.$$
Taking the total differential, one finds
$$\mathrm{d}U = T\,\mathrm{d}S + S\,\mathrm{d}T - p\,\mathrm{d}V - V\,\mathrm{d}p + \sum_{i=1}^{I} \mu_i\,\mathrm{d}N_i + \sum_{i=1}^{I} N_i\,\mathrm{d}\mu_i.$$
Finally, one can equate this expression to the definition of $\mathrm{d}U$, namely $\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$, to find the Gibbs–Duhem equation
$$0 = S\,\mathrm{d}T - V\,\mathrm{d}p + \sum_{i=1}^{I} N_i\,\mathrm{d}\mu_i.$$
Applications
By normalizing the above equation by the extent of the system, such as the total number of moles, the Gibbs–Duhem equation provides a relationship between the intensive variables of the system. For a simple system with $I$ different components, there will be $I + 1$ independent parameters or "degrees of freedom". For example, if we know that a gas cylinder filled with pure nitrogen is at room temperature (298 K) and 25 MPa, we can determine the fluid density (258 kg/m3), enthalpy (272 kJ/kg), entropy (5.07 kJ/kg⋅K) or any other intensive thermodynamic variable. If instead the cylinder contains a nitrogen/oxygen mixture, we require an additional piece of information, usually the ratio of oxygen to nitrogen.
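As a numerical illustration of the degrees-of-freedom point above, one can query a thermophysical property library. The sketch below assumes the third-party CoolProp package is installed; note that enthalpy and entropy values depend on the library's chosen reference state, so only the density is directly comparable to the figures quoted above.

```python
from CoolProp.CoolProp import PropsSI  # third-party library, assumed installed

T, P = 298.0, 25e6  # fixing T (K) and P (Pa) fixes every other intensive property
rho = PropsSI('D', 'T', T, 'P', P, 'Nitrogen')      # density, kg/m^3 (~258)
h = PropsSI('H', 'T', T, 'P', P, 'Nitrogen') / 1e3  # enthalpy, kJ/kg (reference-dependent)
s = PropsSI('S', 'T', T, 'P', P, 'Nitrogen') / 1e3  # entropy, kJ/(kg*K) (reference-dependent)
print(rho, h, s)
```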
If multiple phases of matter are present, the chemical potentials across a phase boundary are equal. Combining expressions for the Gibbs–Duhem equation in each phase and assuming systematic equilibrium (i.e. that the temperature and pressure are constant throughout the system), we recover Gibbs' phase rule.
One particularly useful expression arises when considering binary solutions. At constant P (isobaric) and T (isothermal) it becomes:
$$N_1\,\mathrm{d}\mu_1 + N_2\,\mathrm{d}\mu_2 = 0$$
or, normalizing by the total number of moles in the system $N_1 + N_2$, substituting in the definition of the activity coefficient $\gamma_i$ through $\mu_i = \mu_i^\ominus + RT\ln(x_i \gamma_i)$ and using the identity $x_1 + x_2 = 1$:
$$x_1\,\mathrm{d}\ln\gamma_1 + x_2\,\mathrm{d}\ln\gamma_2 = 0.$$
This equation is instrumental in the calculation of thermodynamically consistent and thus more accurate expressions for the vapor pressure of a fluid mixture from limited experimental data.
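This consistency constraint can be checked numerically against any proposed activity-coefficient model. A minimal sketch, assuming the one-parameter Margules model ln γ1 = A·x2², ln γ2 = A·x1² (the parameter value is arbitrary, chosen for illustration):

```python
import numpy as np

A = 0.8  # assumed Margules interaction parameter, for illustration only
x1 = np.linspace(0.01, 0.99, 99)
x2 = 1.0 - x1
ln_g1 = A * x2**2   # ln(gamma_1)
ln_g2 = A * x1**2   # ln(gamma_2)

# Gibbs-Duhem at constant T and P: x1 d(ln gamma1) + x2 d(ln gamma2) = 0
residual = x1 * np.gradient(ln_g1, x1) + x2 * np.gradient(ln_g2, x1)
print(np.max(np.abs(residual)))  # ~0, up to finite-difference error
```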
Ternary and multicomponent solutions and mixtures
Lawrence Stamper Darken has shown that the Gibbs–Duhem equation can be applied to the determination of chemical potentials of components of a multicomponent system from experimental data regarding the chemical potential of only one component (here component 2) at all compositions. He deduced the following relation,
where the xi are the amount (mole) fractions of the components.
Making some rearrangements and dividing by (1 − x2)² gives:
The derivative with respect to one mole fraction x2 is taken at constant ratios of amounts (and therefore of mole fractions) of the other components of the solution, representable in a diagram like a ternary plot.
The last equality can be integrated to give:
Applying L'Hôpital's rule gives:
This simplifies further to:
Express the mole fractions of component 1 and 3 as functions of component 2 mole fraction and binary mole ratios:
and the sum of partial molar quantities
gives
The two integration constants can be determined from the binary systems 1–2 and 2–3. These constants can be obtained from the previous equality by putting the complementary mole fraction x3 = 0 for x1 and vice versa.
Thus
and
The final expression is given by substitution of these constants into the previous equation:
See also
Margules activity model
Darken's equations
Gibbs–Helmholtz equation
References
External links
J. Phys. Chem. Gokcen 1960
A lecture from www.chem.neu.edu
A lecture from www.chem.arizona.edu
Encyclopædia Britannica entry
Chemical thermodynamics
Thermodynamic equations
fr:Potentiel chimique#Relation de Gibbs-Duhem | Gibbs–Duhem equation | [
"Physics",
"Chemistry"
] | 1,146 | [
"Chemical thermodynamics",
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
8,892,966 | https://en.wikipedia.org/wiki/Hydrocarbon%20dew%20point | The hydrocarbon dew point is the temperature (at a given pressure) at which the hydrocarbon components of any hydrocarbon-rich gas mixture, such as natural gas, will start to condense out of the gaseous phase. It is often also referred to as the HDP or the HCDP. The maximum temperature at which such condensation takes place is called the cricondentherm. The hydrocarbon dew point is a function of the gas composition as well as the pressure.
The hydrocarbon dew point is universally used in the natural gas industry as an important quality parameter, stipulated in contractual specifications and enforced throughout the natural gas supply chain, from producers through processing, transmission and distribution companies to final end users.
The hydrocarbon dew point of a gas is a different concept from the water dew point, the latter being the temperature (at a given pressure) at which water vapor present in a gas mixture will condense out of the gas.
Relation to the term GPM
In the United States, the hydrocarbon dew point of processed, pipelined natural gas is related to and characterized by the term GPM, which is the gallons of liquefiable hydrocarbons contained in 1,000 standard cubic feet of natural gas at a stated temperature and pressure. When the liquefiable hydrocarbons are characterized as being hexane or higher molecular weight components, they are reported as GPM (C6+).
However, the quality of raw produced natural gas is also often characterized by the term GPM, here meaning the gallons of liquefiable hydrocarbons contained in 1,000 standard cubic feet of the raw natural gas. In such cases, when the liquefiable hydrocarbons in the raw natural gas are characterized as being ethane or higher molecular weight components, they are reported as GPM (C2+). Similarly, when characterized as being propane or higher molecular weight components, they are reported as GPM (C3+).
Care must be taken not to confuse the two different definitions of the term GPM.
Although GPM is an additional parameter of some value, most pipeline operators and others who process, transport, distribute or use natural gas are primarily interested in the actual HCDP, rather than GPM. Furthermore, GPM and HCDP are not interchangeable and one should be careful not to confuse what each one exactly means.
Methods of HCDP determination
There are two main categories of HCDP determination: "theoretical" methods and "experimental" methods.
Theoretical methods
The theoretical methods use the component analysis of the gas mixture (usually via gas chromatography, GC) and then use an equation of state (EOS) to calculate what the dew point of the mixture should be at a given pressure. The Peng–Robinson and Soave–Redlich–Kwong equations of state are the most commonly used for determining the HCDP in the natural gas industry.
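For orientation, the sketch below solves the far simpler ideal-solution (Raoult's law) dew-point problem for an assumed butane/pentane mixture with Antoine vapor-pressure correlations; production HCDP calculations replace Raoult's law with a cubic EOS such as Peng–Robinson, but the root-finding structure is similar. The compositions and constants here are illustrative assumptions.

```python
from scipy.optimize import brentq

# Antoine constants (log10 of P in mmHg, T in deg C) -- illustrative values
antoine = {
    "n-butane":  (6.80896, 935.86, 238.73),
    "n-pentane": (6.87632, 1075.78, 233.205),
}
y = {"n-butane": 0.6, "n-pentane": 0.4}  # assumed vapor mole fractions
P = 760.0                                # total pressure, mmHg

def psat(component, T):
    A, B, C = antoine[component]
    return 10 ** (A - B / (T + C))

# Ideal dew point satisfies: sum_i y_i * P / Psat_i(T) = 1
def dew_residual(T):
    return sum(yi * P / psat(c, T) for c, yi in y.items()) - 1.0

T_dew = brentq(dew_residual, -50.0, 100.0)
print(f"Ideal dew point: {T_dew:.1f} deg C")  # roughly 20 deg C
```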
The theoretical methods using GC analysis suffer from four sources of error:
The first source of error is the sampling error. Pipelines operate at high pressure. To do an analysis using a field GC, the pressure has to be regulated down to close to atmospheric pressure. In the process of reducing pressure, some of the heavier components may drop out, particularly if the pressure reduction is done in the retrograde region. Therefore, the gas reaching the GC is fundamentally different (usually leaner in the heavy components) from the actual gas in the pipeline. Alternatively, if a sample bottle is collected for delivery to a laboratory for analysis, significant care must be taken not to introduce any contaminants to the sample, to make sure that the sample bottle represents the actual gas in the pipeline, and to extract the complete sample correctly into the laboratory GC.
The second source is the error in the analysis of the gas mix components. A typical field GC will have at best (under ideal conditions and frequent calibration) ~2% (of range) error in the quantity of each gas analyzed. Since the range for most field GCs for C6 components is 0–1 mol%, there will be about 0.02 mol% uncertainty in the quantity of C6+ components. While this error does not change the heating value by much, it will introduce a significant error in the HCDP determination. Furthermore, since the exact distribution of C6+ components is unknown (the amounts of C6, C7, C8, ...), this introduces additional errors in any HCDP calculation. When using a C6+ GC these errors can be as high as 100 °F, depending on the gas mixture and the assumptions made regarding the composition of the C6+ fraction. For "pipeline quality" natural gas, a C9+ GC analysis may reduce the uncertainty, because it eliminates the C6–C8 distribution error. However, independent studies have shown that the cumulative error can still be very significant, in some cases in excess of 30 °C. A laboratory C12+ GC analysis using a flame ionization detector (FID) can reduce the error further. However, using a C12 laboratory system can introduce additional errors, namely sampling error: if the gas has to be collected in a sample bottle and shipped to a laboratory for C12 analysis, sampling errors can be significant, and there is also a lag-time error between the time the sample was collected and the time it was analyzed.
The third source of errors is calibration errors. All GCs have to be calibrated routinely with a calibration gas representative of the gas under analysis. If the calibration gas is not representative, or calibrations are not routinely performed, there will be errors introduced.
The fourth source of error relates to the errors embedded in the equation of state model used to calculate the dew point. Different models are prone to varying amounts of error at different pressure regimes and gas mixes. There is sometimes a significant divergence of the calculated dew point based solely on the choice of equation of state used.
The significant advantage of using the theoretical models is that the HCDP at several pressures (as well as the cricondentherm) can be determined from a single analysis. This provides for operational uses such as determining the phase of the stream flowing through the flow-meter, determining if the sample has been affected by ambient temperature in the sample system, and avoiding amine foaming from liquid hydrocarbons in the amine contactor. However, recent developments in combining experimental methods and software enhancements have eliminated this shortcoming (see combined experimental and theoretical approach below).
GC vendors with products targeting HCDP analysis include Emerson, ABB, and Thermo Fisher, as well as other companies.
Experimental methods
In the "experimental" methods, one actually cools a surface on which gas condenses and then measures the temperature at which the condensation takes place. The experimental methods can be divided into manual and automated systems. Manual systems, such as the Bureau of Mines dewpoint tester, depend on an operator to manually cool the chilled mirror slowly and to visually detect the onset of condensation. The automated methods use automatic mirror chilling controls and sensors to detect the amount of light reflected by the mirror and detect when condensation occurs through changes in the reflected light. The chilled mirror technique is a first principle measurement. Depending on the specific method used to establish the dew point temperature, some correction calculations may be necessary. As condensation must necessarily have already occurred for it to be detected, the reported temperature is lower than when using theoretical methods.
Similar to GC analysis, the experimental method is subject to potential sources of error. The first error is in the detection of condensation. A key component in chilled mirror dew point measurements is the subtlety with which condensate can be detected — in other words, the thinner the film is when detected, the better. A manual chilled mirror device relies on the operator to determine when a mist has formed on the mirror, and, depending on the device, can be highly subjective. It is also not always clear what is condensing: water or hydrocarbons. Because of the low resolution that has traditionally been available, the operator has been prone to under report the dew point, in other words, to report the dew point temperature as being below what it actually is. This is due to the fact that by the time condensation had accumulated enough to be visible, the dew point had already been reached and passed. The most modern manual devices make possible greatly improved reporting accuracy. There are two manufacturers of manual devices, and each of their devices meet the requirements for dew point measurement apparatus as defined in the ASTM Manual for Hydrocarbon Analysis. However, there are significant differences between the devices – including the optical resolution of the mirror and the method of mirror cooling – depending on the manufacturer.
Automated chilled mirror devices provide significantly more repeatable results, but these measurements can be affected by contaminants that may compromise the mirror's surface. In many instances it is important to incorporate an effective filtration system that prepares the gas for analysis. On the other hand, filtration may alter the gas composition slightly, and filter elements are subject to clogging and saturation. Advances in technology have led to analyzers that are less affected by contaminants, and certain devices can also measure the dew point of water that may be present in the gas. One recent innovation is the use of spectroscopy to determine the nature of the condensate at the dew point. Another device uses laser interferometry to register extremely tenuous amounts of condensation. It is asserted that these technologies are less affected by interference from contaminants. Another source of error is the speed of the cooling of the mirror and the measurement of the temperature of the mirror when the condensation is detected. This error can be minimized by controlling the cooling speed, or by having a fast condensation-detection system.
Experimental methods only provide an HCDP at the pressure at which the measurement is taken, and cannot provide the cricondentherm or the HCDP at other pressures. As the cricondentherm of natural gas typically occurs at a pressure of around 27 bar, there are gas preparation systems currently available which adjust the input pressure to this value. However, as pipeline operators often wish to know the HCDP at their current line pressure, the input pressure of many experimental systems can be adjusted by a regulator.
The Vympel company offers instruments that can be operated in either manual or automatic mode.
Companies who offer an automated chilled mirror system include: Vympel, Ametek, Michell Instruments, ZEGAZ Instruments and Bartec Benke (Model: Hygrophil HCDT).
Combined experimental and theoretical approach
A recent innovation is to combine the experimental method with the theoretical. If the composition of the gas is analyzed by a C6+ GC, and a dew point is experimentally measured at any pressure, then the experimental dew point can be used in combination with the GC analysis to provide a more exact phase diagram. This approach overcomes the main shortcoming of the experimental method, namely that it does not by itself yield the whole phase diagram. An example of this software is provided by Starling Associates.
See also
Natural-gas processing
Natural-gas condensate
References
https://www.bartec.de/en/products/analyzers-and-measurement-technology/trace-moisture-measurement-for-gases/hygrophil-hcdt/
External links
https://www.zegaz.com/blog
Pipeline and Gas Journal - Hydrocarbon Dew Point Measurement Using a Gas Chromatograph
Emerson Hydrocarbon Dew Point Application Note
Natural Gas Processing: The Crucial Link Between Natural Gas Production and Its Transportation
Identification of Hydrocarbon Dew Point, Cricondentherm, Cricondenbar and critical points
Hydrocarbon Dew-point – A Key Natural Gas Quality Parameter
(ISO 6570:2001) Natural gas -- Determination of potential hydrocarbon liquid content
Engineering thermodynamics
Hydrocarbons
Natural gas
Threshold temperatures | Hydrocarbon dew point | [
"Physics",
"Chemistry",
"Engineering"
] | 2,459 | [
"Hydrocarbons",
"Physical phenomena",
"Phase transitions",
"Engineering thermodynamics",
"Threshold temperatures",
"Organic compounds",
"Thermodynamics",
"Mechanical engineering"
] |
351,769 | https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour%20theorem | In graph theory, the Robertson–Seymour theorem (also called the graph minor theorem) states that the undirected graphs, partially ordered by the graph minor relationship, form a well-quasi-ordering. Equivalently, every family of graphs that is closed under minors can be defined by a finite set of forbidden minors, in the same way that Wagner's theorem characterizes the planar graphs as being the graphs that do not have the complete graph K5 or the complete bipartite graph K3,3 as minors.
The Robertson–Seymour theorem is named after mathematicians Neil Robertson and Paul D. Seymour, who proved it in a series of twenty papers spanning over 500 pages from 1983 to 2004. Before its proof, the statement of the theorem was known as Wagner's conjecture after the German mathematician Klaus Wagner, although Wagner said he never conjectured it.
A weaker result for trees is implied by Kruskal's tree theorem, which was conjectured in 1937 by Andrew Vázsonyi and proved in 1960 independently by Joseph Kruskal and S. Tarkowski.
Statement
A minor of an undirected graph G is any graph that may be obtained from G by a sequence of zero or more contractions of edges of G and deletions of edges and vertices of G. The minor relationship forms a partial order on the set of all distinct finite undirected graphs, as it obeys the three axioms of partial orders: it is reflexive (every graph is a minor of itself), transitive (a minor of a minor of G is itself a minor of G), and antisymmetric (if two graphs G and H are minors of each other, then they must be isomorphic). However, if graphs that are isomorphic may nonetheless be considered as distinct objects, then the minor ordering on graphs forms a preorder, a relation that is reflexive and transitive but not necessarily antisymmetric.
A preorder is said to form a well-quasi-ordering if it contains neither an infinite descending chain nor an infinite antichain. For instance, the usual ordering on the non-negative integers is a well-quasi-ordering, but the same ordering on the set of all integers is not, because it contains the infinite descending chain 0, −1, −2, −3... Another example is the set of positive integers ordered by divisibility, which has no infinite descending chains, but where the prime numbers constitute an infinite antichain.
The Robertson–Seymour theorem states that finite undirected graphs and graph minors form a well-quasi-ordering. The graph minor relationship does not contain any infinite descending chain, because each contraction or deletion reduces the number of edges and vertices of the graph (a non-negative integer). The nontrivial part of the theorem is that there are no infinite antichains, infinite sets of graphs that are all unrelated to each other by the minor ordering. If S is a set of graphs, and M is a subset of S containing one representative graph for each equivalence class of minimal elements (graphs that belong to S but for which no proper minor belongs to S), then M forms an antichain; therefore, an equivalent way of stating the theorem is that, in any infinite set S of graphs, there must be only a finite number of non-isomorphic minimal elements.
Another equivalent form of the theorem is that, in any infinite set S of graphs, there must be a pair of graphs one of which is a minor of the other. The statement that every infinite set has finitely many minimal elements implies this form of the theorem, for if there are only finitely many minimal elements, then each of the remaining graphs must belong to a pair of this type with one of the minimal elements. And in the other direction, this form of the theorem implies the statement that there can be no infinite antichains, because an infinite antichain is a set that does not contain any pair related by the minor relation.
Forbidden minor characterizations
A family F of graphs is said to be closed under the operation of taking minors if every minor of a graph in F also belongs to F. If F is a minor-closed family, then let S be the class of graphs that are not in F (the complement of F). According to the Robertson–Seymour theorem, there exists a finite set H of minimal elements in S. These minimal elements form a forbidden graph characterization of F: the graphs in F are exactly the graphs that do not have any graph in H as a minor. The members of H are called the excluded minors (or forbidden minors, or minor-minimal obstructions) for the family F.
For example, the planar graphs are closed under taking minors: contracting an edge in a planar graph, or removing edges or vertices from the graph, cannot destroy its planarity. Therefore, the planar graphs have a forbidden minor characterization, which in this case is given by Wagner's theorem: the set H of minor-minimal nonplanar graphs contains exactly two graphs, the complete graph K5 and the complete bipartite graph K3,3, and the planar graphs are exactly the graphs that do not have a minor in the set {K5, K3,3}.
The existence of forbidden minor characterizations for all minor-closed graph families is an equivalent way of stating the Robertson–Seymour theorem. For, suppose that every minor-closed family F has a finite set H of minimal forbidden minors, and let S be any infinite set of graphs. Define F from S as the family of graphs that do not have a minor in S. Then F is minor-closed and has a finite set H of minimal forbidden minors. Let C be the complement of F. S is a subset of C since S and F are disjoint, and H are the minimal graphs in C. Consider a graph G in H. G cannot have a proper minor in S since G is minimal in C. At the same time, G must have a minor in S, since otherwise G would be an element in F. Therefore, G is an element in S, i.e., H is a subset of S, and all other graphs in S have a minor among the graphs in H, so H is the finite set of minimal elements of S.
For the other implication, assume that every set of graphs has a finite subset of minimal graphs and let a minor-closed set F be given. We want to find a set H of graphs such that a graph is in F if and only if it does not have a minor in H. Let E be the graphs which are not minors of any graph in F, and let H be the finite set of minimal graphs in E. Now, let an arbitrary graph G be given. Assume first that G is in F. G cannot have a minor in H since G is in F and H is a subset of E. Now assume that G is not in F. Then G is not a minor of any graph in F, since F is minor-closed. Therefore, G is in E, so G has a minor in H.
Examples of minor-closed families
The following sets of finite graphs are minor-closed, and therefore (by the Robertson–Seymour theorem) have forbidden minor characterizations:
forests, linear forests (disjoint unions of path graphs), pseudoforests, and cactus graphs;
planar graphs, outerplanar graphs, apex graphs (formed by adding a single vertex to a planar graph), toroidal graphs, and the graphs that can be embedded on any fixed two-dimensional manifold;
graphs that are linklessly embeddable in Euclidean 3-space, and graphs that are knotlessly embeddable in Euclidean 3-space;
graphs with a feedback vertex set of size bounded by some fixed constant; graphs with Colin de Verdière graph invariant bounded by some fixed constant; graphs with treewidth, pathwidth, or branchwidth bounded by some fixed constant.
Obstruction sets
Some examples of finite obstruction sets were already known for specific classes of graphs before the Robertson–Seymour theorem was proved. For example, the obstruction for the set of all forests is the loop graph (or, if one restricts to simple graphs, the cycle with three vertices). This means that a graph is a forest if and only if none of its minors is the loop (or, the cycle with three vertices, respectively). The sole obstruction for the set of paths is the tree with four vertices, one of which has degree 3. In these cases, the obstruction set contains a single element, but in general this is not the case. Wagner's theorem states that a graph is planar if and only if it has neither K5 nor K3,3 as a minor. In other words, the set {K5, K3,3} is an obstruction set for the set of all planar graphs, and in fact the unique minimal obstruction set. A similar theorem states that K4 and K2,3 are the forbidden minors for the set of outerplanar graphs.
Although the Robertson–Seymour theorem extends these results to arbitrary minor-closed graph families, it is not a complete substitute for these results, because it does not provide an explicit description of the obstruction set for any family. For example, it tells us that the set of toroidal graphs has a finite obstruction set, but it does not provide any such set. The complete set of forbidden minors for toroidal graphs remains unknown, but it contains at least 17,535 graphs.
Polynomial time recognition
The Robertson–Seymour theorem has an important consequence in computational complexity, due to the proof by Robertson and Seymour that, for each fixed graph h, there is a polynomial time algorithm for testing whether a graph has h as a minor. This algorithm's running time is cubic (in the size of the graph to check), though with a constant factor that depends superpolynomially on the size of the minor h. The running time has been improved to quadratic by Kawarabayashi, Kobayashi, and Reed. As a result, for every minor-closed family F, there is a polynomial time algorithm for testing whether a graph belongs to F: simply check whether the given graph contains h for each forbidden minor h in F's obstruction set.
However, this method requires a specific finite obstruction set to work, and the theorem does not provide one. The theorem proves that such a finite obstruction set exists, and therefore the problem is polynomial because of the above algorithm. However, the algorithm can be used in practice only if such a finite obstruction set is provided. As a result, the theorem proves that the problem can be solved in polynomial time, but does not provide a concrete polynomial-time algorithm for solving it. Such proofs of polynomiality are non-constructive: they prove polynomiality of problems without providing an explicit polynomial-time algorithm. In many specific cases, checking whether a graph is in a given minor-closed family can be done more efficiently: for example, checking whether a graph is planar can be done in linear time.
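As a concrete illustration, planarity — the minor-closed family whose obstruction set is {K5, K3,3} — can be tested quickly with the networkx library (a third-party package, assumed installed):

```python
import networkx as nx

# Wagner's theorem: a graph is planar iff it has neither K5 nor K3,3 as a minor.
graphs = {
    "K5": nx.complete_graph(5),
    "K3,3": nx.complete_bipartite_graph(3, 3),
    "Petersen": nx.petersen_graph(),  # nonplanar: it has K5 as a minor
    "C4": nx.cycle_graph(4),          # planar
}
for name, G in graphs.items():
    is_planar, _certificate = nx.check_planarity(G)
    print(f"{name}: planar = {is_planar}")
```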
Fixed-parameter tractability
For graph invariants with the property that, for each k, the graphs with invariant at most k are minor-closed, the same method applies. For instance, by this result, treewidth, branchwidth, pathwidth, vertex cover, and the minimum genus of an embedding are all amenable to this approach, and for any fixed k there is a polynomial time algorithm for testing whether these invariants are at most k, in which the exponent in the running time of the algorithm does not depend on k. A problem that can be solved in polynomial time for any fixed k, with an exponent that does not depend on k, is known as fixed-parameter tractable.
However, this method does not directly provide a single fixed-parameter-tractable algorithm for computing the parameter value for a given graph with unknown k, because of the difficulty of determining the set of forbidden minors. Additionally, the large constant factors involved in these results make them highly impractical. Therefore, the development of explicit fixed-parameter algorithms for these problems, with improved dependence on k, has continued to be an important line of research.
Finite form of the graph minor theorem
Friedman, Robertson, and Seymour (1987) showed that the following theorem exhibits the independence phenomenon by being unprovable in various formal systems that are much stronger than Peano arithmetic, yet being provable in systems much weaker than ZFC:
Theorem: For every positive integer n, there is an integer m so large that if G1, ..., Gm is a sequence of finite undirected graphs,
where each Gi has size at most n+i, then Gj ≤ Gk for some j < k.
(Here, the size of a graph is the total number of its vertices and edges, and ≤ denotes the minor ordering.)
See also
Graph structure theorem
Notes
References
External links
Graph minor theory
Wellfoundedness
Theorems in graph theory | Robertson–Seymour theorem | [
"Mathematics"
] | 2,689 | [
"Mathematical induction",
"Wellfoundedness",
"Graph theory",
"Theorems in graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Order theory",
"Graph minor theory"
] |
351,806 | https://en.wikipedia.org/wiki/Fractionating%20column | A fractionating column or fractional column is equipment used in the distillation of liquid mixtures to separate the mixture into its component parts, or fractions, based on their differences in volatility. Fractionating columns are used in small-scale laboratory distillations as well as large-scale industrial distillations.
Laboratory fractionating columns
A laboratory fractionating column is a piece of glassware used to separate vaporized mixtures of liquid compounds with close volatility. Most commonly used is either a Vigreux column or a straight column packed with glass beads or metal pieces such as Raschig rings. Fractionating columns help to separate the mixture by allowing the mixed vapors to cool, condense, and vaporize again in accordance with Raoult's law. With each condensation-vaporization cycle, the vapors are enriched in a certain component. A larger surface area allows more cycles, improving separation. This is the rationale for a Vigreux column or a packed fractionating column. Spinning band distillation achieves the same outcome by using a rotating band within the column to force the rising vapors and descending condensate into close contact, achieving equilibrium more quickly.
In a typical fractional distillation, a liquid mixture is heated in the distilling flask, and the resulting vapor rises up the fractionating column (see Figure 1). The vapor condenses on glass spurs (known as theoretical trays or theoretical plates) inside the column, and returns to the distilling flask, refluxing the rising distillate vapor. The hottest tray is at the bottom of the column and the coolest tray is at the top. At steady-state conditions, the vapor and liquid on each tray reach an equilibrium. Only the most volatile of the vapors stays in gas form all the way to the top, where it may then proceed through a condenser, which cools the vapor until it condenses into a liquid distillate. The separation may be enhanced by the addition of more trays (to a practical limitation of heat, flow, etc.).
Industrial fractionating columns
Fractional distillation is one of the unit operations of chemical engineering. Fractionating columns are widely used in chemical process industries where large quantities of liquids have to be distilled. Such industries are petroleum processing, petrochemical production, natural gas processing, coal tar processing, brewing, liquefied air separation, and hydrocarbon solvents production. Fractional distillation finds its widest application in petroleum refineries. In such refineries, the crude oil feedstock is a complex, multicomponent mixture that must be separated. Yields of pure chemical compounds are generally not expected, however, yields of groups of compounds within a relatively small range of boiling points, also called fractions, are expected. This process is the origin of the name fractional distillation or fractionation.
Distillation is one of the most common and energy-intensive separation processes. Effectiveness of separation is dependent upon the height and diameter of the column, the ratio of the column's height to diameter, and the material that comprises the distillation column itself. In a typical chemical plant, it accounts for about 40% of the total energy consumption. Industrial distillation is typically performed in large, vertical cylindrical columns (as shown in Figure 2) known as "distillation towers" or "distillation columns" with diameters ranging from about 65 centimeters to 6 meters and heights ranging from about 6 meters to 60 meters or more.
Industrial distillation towers are usually operated at a continuous steady state. Unless disturbed by changes in feed, heat, ambient temperature, or condensing, the amount of feed being added normally equals the amount of product being removed.
The amount of heat entering the column from the reboiler and with the feed must equal the amount of heat removed by the overhead condenser and with the products. The heat entering a distillation column is a crucial operating parameter; addition of excess or insufficient heat to the column can lead to foaming, weeping, entrainment, or flooding.
Figure 3 depicts an industrial fractionating column separating a feed stream into one distillate fraction and one bottoms fraction. However, many industrial fractionating columns have outlets at intervals up the column so that multiple products having different boiling ranges may be withdrawn from a column distilling a multi-component feed stream. The "lightest" products with the lowest boiling points exit from the top of the columns and the "heaviest" products with the highest boiling points exit from the bottom.
Industrial fractionating columns use external reflux to achieve better separation of products. Reflux refers to the portion of the condensed overhead liquid product that returns to the upper part of the fractionating column as shown in Figure 3.
Inside the column, the downflowing reflux liquid provides cooling and condensation of upflowing vapors thereby increasing the efficacy of the distillation tower. The more reflux and/or more trays provided, the better is the tower's separation of lower boiling materials from higher boiling materials.
The design and operation of a fractionating column depends on the composition of the feed as well as the composition of the desired products. Given a simple, binary feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used for design, operation, and construction.
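For a binary feed, the Fenske equation estimates the minimum number of theoretical stages needed at total reflux. A minimal sketch with illustrative numbers (the purities and relative volatility are assumptions, not data for any particular column):

```python
import math

def fenske_min_stages(xD: float, xB: float, alpha: float) -> float:
    # Minimum theoretical stages at total reflux, assuming a constant
    # relative volatility alpha for the light/heavy key pair.
    separation = (xD / (1 - xD)) * ((1 - xB) / xB)
    return math.log(separation) / math.log(alpha)

# Example: 95 mol% light key overhead, 5 mol% in the bottoms, alpha = 2.5
n_min = fenske_min_stages(0.95, 0.05, 2.5)
print(f"Minimum stages: {n_min:.1f}")  # ~6.4; by convention one stage is often credited to the reboiler
```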
Bubble-cap "trays" or "plates" are one of the types of physical devices, which are used to provide good contact between the upflowing vapor and the downflowing liquid inside an industrial fractionating column. Such trays are shown in Figures 4 and 5.
The efficiency of a tray or plate is typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a fractionating column almost always needs more actual, physical plates than the required number of theoretical vapor–liquid equilibrium stages.
In industrial uses, sometimes a packing material is used in the column instead of trays, especially when low pressure drops across the column are required, as when operating under vacuum. This packing material can either be random dumped packing (typically 1–3 in or 25–76 mm wide) such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing, and the vapors pass across this wetted surface, where mass transfer takes place. Differently shaped packings have different surface areas and void space between packings. Both of these factors affect packing performance.
See also
Azeotropic distillation
Batch distillation
Continuous distillation
Extractive distillation
Laboratory glassware
Steam distillation
Theoretical plate
Vacuum distillation
Fractional distillation
References
External links
Use of distillation columns in Oil & Gas
More drawings of glassware including Vigreux columns
Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology, Norway
Distillation, An Introduction by Ming Tham, Newcastle University, UK
Distillation by the Distillation Group, USA
Distillation simulation software
Fractional Distillation Explained for High School Students
Distillation
Chemical equipment
Fractionation
fr:Distillation fractionnée
it:Colonna di distillazione | Fractionating column | [
"Chemistry",
"Engineering"
] | 1,501 | [
"Fractionation",
"Separation processes",
"Chemical equipment",
"Distillation",
"nan"
] |
352,267 | https://en.wikipedia.org/wiki/Lyapunov%20function | In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. The method of Lyapunov functions (also called Lyapunov's second method for stability) is important to the stability theory of dynamical systems and to control theory. A similar concept appears in the theory of general state-space Markov chains, usually under the name Foster–Lyapunov functions.
For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Whereas there is no general technique for constructing Lyapunov functions for ODEs, in many specific cases the construction of Lyapunov functions is known. For instance, quadratic functions suffice for systems with one state, the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems, and conservation laws can often be used to construct Lyapunov functions for physical systems.
Definition
A Lyapunov function for an autonomous dynamical system
$$\dot{y} = g(y), \qquad g: \mathbb{R}^n \to \mathbb{R}^n,$$
with an equilibrium point at $y = 0$ is a scalar function $V: \mathbb{R}^n \to \mathbb{R}$ that is continuous, has continuous first derivatives, is strictly positive for $y \neq 0$, and for which the time derivative $\dot{V} = \nabla V \cdot g$ is non-positive (these conditions are required on some region containing the origin). The (stronger) condition that $-\dot{V}$ is strictly positive for $y \neq 0$ is sometimes stated as "$-\dot{V}$ is locally positive definite", or "$\dot{V}$ is locally negative definite".
Further discussion of the terms arising in the definition
Lyapunov functions arise in the study of equilibrium points of dynamical systems. An arbitrary autonomous dynamical system can be written as
$$\dot{y} = g(y)$$
for some smooth $g: \mathbb{R}^n \to \mathbb{R}^n$.
An equilibrium point is a point $y^*$ such that $g(y^*) = 0$. Given an equilibrium point $y^*$, there always exists a coordinate transformation $x = y - y^*$ such that:
$$\dot{x} = \dot{y} = g(y) = g(x + y^*) = f(x), \qquad f(0) = 0.$$
Thus, in studying equilibrium points, it is sufficient to assume the equilibrium point occurs at $x = 0$.
By the chain rule, for any function $V$, the time derivative of the function evaluated along a solution of the dynamical system is
$$\dot{V}(x) = \frac{\mathrm{d}}{\mathrm{d}t} V(x(t)) = \nabla V \cdot \dot{x} = \nabla V \cdot f(x).$$
A function $V$ is defined to be a locally positive-definite function (in the sense of dynamical systems) if both $V(0) = 0$ and there is a neighborhood of the origin $\mathcal{B}$ such that:
$$V(x) > 0 \quad \text{for all } x \in \mathcal{B} \setminus \{0\}.$$
Basic Lyapunov theorems for autonomous systems
Let $x^* = 0$ be an equilibrium point of the autonomous system
$$\dot{x} = f(x),$$
and use the notation $\dot{V}(x)$ to denote the time derivative of the Lyapunov-candidate-function $V$:
$$\dot{V}(x) = \frac{\mathrm{d}}{\mathrm{d}t} V(x(t)) = \nabla V \cdot f(x).$$
Locally asymptotically stable equilibrium
If the equilibrium point is isolated, the Lyapunov-candidate-function $V$ is locally positive definite, and the time derivative of the Lyapunov-candidate-function is locally negative definite:
$$\dot{V}(x) < 0 \quad \text{for all } x \in \mathcal{B} \setminus \{0\}$$
for some neighborhood $\mathcal{B}$ of the origin, then the equilibrium is proven to be locally asymptotically stable.
Stable equilibrium
If $V$ is a Lyapunov function, then the equilibrium is Lyapunov stable. The converse is also true, and was proved by José Luis Massera.
Globally asymptotically stable equilibrium
If the Lyapunov-candidate-function $V$ is globally positive definite, radially unbounded, the equilibrium isolated, and the time derivative of the Lyapunov-candidate-function globally negative definite:
$$\dot{V}(x) < 0 \quad \text{for all } x \neq 0,$$
then the equilibrium is proven to be globally asymptotically stable.
The Lyapunov-candidate function $V(x)$ is radially unbounded if
$$\|x\| \to \infty \implies V(x) \to \infty.$$
(This is also referred to as norm-coercivity.)
Example
Consider the following differential equation on $\mathbb{R}$:
$$\dot{x} = -x.$$
Considering that $x^2$ is always positive around the origin, it is a natural candidate to be a Lyapunov function to help us study $x$. So let $V(x) = x^2$ on $\mathbb{R}$. Then,
$$\dot{V}(x) = V'(x)\,\dot{x} = 2x \cdot (-x) = -2x^2 < 0 \quad \text{for } x \neq 0.$$
This correctly shows that the above differential equation is asymptotically stable about the origin. Note that using the same Lyapunov candidate one can show that the equilibrium is also globally asymptotically stable, since $V$ is radially unbounded and $\dot{V} < 0$ for all $x \neq 0$.
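This check is easy to reproduce symbolically; a minimal sketch using the sympy library (a third-party package, assumed installed):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x        # right-hand side of the ODE x' = -x
V = x**2      # Lyapunov candidate

# Chain rule along solutions: dV/dt = V'(x) * x' = V'(x) * f(x)
V_dot = sp.diff(V, x) * f
print(sp.simplify(V_dot))  # prints -2*x**2, which is negative for all x != 0
```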
See also
Lyapunov stability
Ordinary differential equations
Control-Lyapunov function
Chetaev function
Foster's theorem
Lyapunov optimization
References
External links
Example of determining the stability of the equilibrium solution of a system of ODEs with a Lyapunov function
Stability theory | Lyapunov function | [
"Mathematics"
] | 819 | [
"Stability theory",
"Dynamical systems"
] |
352,353 | https://en.wikipedia.org/wiki/Spirit%20level | A spirit level, bubble level, or simply a level, is an instrument designed to indicate whether a surface is horizontal (level) or vertical (plumb).
Two basic designs exist: tubular (or linear) and bull's eye (or circular).
Different types of spirit levels may be used by carpenters, stonemasons, bricklayers, other building trades workers, surveyors, millwrights and other metalworkers, and in some photographic or videographic work.
History
The history of the spirit level was discussed in brief in an 1887 article appearing in Scientific American. Melchisédech Thévenot, a French scientist, invented the instrument some time before February 2, 1661. This date can be established from Thévenot's correspondence with scientist Christiaan Huygens. Within a year of this date the inventor circulated details of his invention to others, including Robert Hooke in London and Vincenzo Viviani in Florence. It is occasionally argued that these "bubble levels" did not come into widespread use until the beginning of the eighteenth century, the earliest surviving examples being from that time, but Adrien Auzout had recommended that the Académie Royale des Sciences take "levels of the Thevenot type" on its expedition to Madagascar in 1666. It is very likely that these levels were in use in France and elsewhere long before the turn of the century.
The Fell All-Way precision level, one of the first successful American-made bull's eye levels for machine tool use, was invented by William B. Fell of Rockford, Illinois in 1939. The device was unique in that it could be placed on a machine bed and show tilt on the x–y axes simultaneously, eliminating the need to rotate the level 90 degrees. The level was so accurate that it was restricted from export during World War II. The device set a new standard of 0.0005 inches per foot resolution (five ten-thousandths of an inch per foot, or five arc seconds of tilt). Production of the level stopped around 1970 and was restarted in the 1980s by Thomas Butler Technology, also of Rockford, Illinois, but finally ended in the mid-1990s. However, there are still hundreds of the devices in existence.
Design and construction
Early tubular spirit levels had very slightly curved glass vials with constant inner diameter at each viewing point. These vials are incompletely filled with a liquid, usually a colored spirit or alcohol, leaving a bubble in the tube. They have a slight upward curve, so that the bubble naturally rests in the center, the highest point. At slight inclinations the bubble travels away from the marked center position. Where a spirit level must also be usable upside-down or on its side, the curved constant-diameter tube is replaced by an uncurved barrel-shaped tube with a slightly larger diameter in its middle.
Alcohols such as ethanol are often used rather than water. Alcohols have low viscosity and surface tension, which allows the bubble to travel the tube quickly and settle accurately with minimal interference from the glass surface. Alcohols also have a much wider liquid temperature range, and will not break the vial as water could due to ice expansion. A colorant such as fluorescein, typically yellow or green, may be added to increase the visibility of the bubble.
A variant of the linear spirit level is the bull's eye level: a circular, flat-bottomed device with the liquid under a slightly convex glass face with a circle at the center. It serves to level a surface across a plane, while the tubular level only does so in the direction of the tube.
Calibration
To check the accuracy of a carpenter's type level, a perfectly horizontal surface is not needed. The level is placed on a flat and roughly level surface and the reading on the bubble tube is noted. This reading indicates to what extent the surface is parallel to the horizontal plane, according to the level, which at this stage is of unknown accuracy. The spirit level is then rotated through 180 degrees in the horizontal plane, and another reading is noted. If the level is accurate, it will indicate the same orientation with respect to the horizontal plane. A difference implies that the level is inaccurate.
Adjustment of the spirit level is performed by successively rotating the level and moving the bubble tube within its housing to take up roughly half of the discrepancy, until the magnitude of the reading remains constant when the level is flipped.
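The readings from this reversal test separate cleanly: the average of the two readings is the instrument error, and half their difference is the true tilt of the surface. A minimal sketch (the sign convention and names are ours):

```python
def reversal_test(reading_normal: float, reading_flipped: float):
    # Signed bubble deflections, in divisions, taken on the same spot,
    # with the level rotated 180 degrees between the two readings.
    instrument_error = (reading_normal + reading_flipped) / 2
    surface_tilt = (reading_normal - reading_flipped) / 2
    return instrument_error, surface_tilt

# Example: +1.5 divisions, then -0.5 divisions after rotating the level.
error, tilt = reversal_test(1.5, -0.5)
print(error, tilt)  # 0.5 divisions of instrument error, 1.0 division of true tilt
```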
A similar procedure is applied to more sophisticated instruments such as a surveyor's optical level or a theodolite and is a matter of course each time the instrument is set up. In this latter case, the plane of rotation of the instrument is levelled, along with the spirit level. This is done in two horizontal perpendicular directions.
Sensitivity
Sensitivity is an important specification for a spirit level, as the device's accuracy depends on its sensitivity. The sensitivity of a level is given as the change of angle or gradient required to move the bubble by unit distance. If the bubble housing has graduated divisions, then the sensitivity is the angle or gradient change that moves the bubble by one of these divisions. 2 mm is the usual spacing for graduations; on a surveyor's level, the bubble will move one division when the vial is tilted about 0.005 degrees. For a precision machinist level graduated in divisions of 0.0005 inches per foot, tilting the vial by one division raises the level by about 0.04 mm at one meter from the pivot point; machinists refer to this as "5 tenths per foot". This terminology is unique to machinists and indicates a length of 5 tenths of 1 thousandth of an inch.
Types
There are different types of spirit levels for different uses:
Carpenter's level (either wood, aluminium or composite materials)
Mason's level
Torpedo level
Post level
Line level
Engineer's precision level
Electronic level
Inclinometer
Slip or Skid Indicator
Bull's eye level
A spirit level is usually found on the head of combination squares.
Carpenter's level
A traditional carpenter's spirit level looks like a short plank of wood and often has a wide body to ensure stability, and that the surface is being measured correctly. In the middle of the spirit level is a small window where the bubble and the tube is mounted. Two notches (or rings) designate where the bubble should be if the surface is level. Often an indicator for a 45 degree inclination is included.
Line level
A line level is a level designed to hang on a builder's string line. The body of the level incorporates small hooks to allow it to attach and hang from the string line. The body is lightweight, so as not to weigh down the string line, it is also small in size as the string line in effect becomes the body; when the level is hung in the center of the string, each 'leg' of the string line extends the level's plane.
Engineer's precision levels
An engineer's precision level permits leveling items to greater accuracy than a plain spirit level. They are used to level the foundations, or beds of machines to ensure the machine can output workpieces to the accuracy pre-built in the machine.
Surveyor's leveling instrument
Combining a spirit level with an optical telescope results in a tilting level or dumpy level. These leveling instruments are used in surveying to measure height differences over larger distances. A surveyor's leveling instrument has a spirit level mounted on a telescope (perhaps 30 power) with cross-hairs, itself mounted on a tripod. The observer reads height values off two graduated vertical rods, one 'behind' and one 'in front', to obtain the height difference between the ground points on which the rods are resting. Starting from a point with a known elevation and going cross country (successive points being perhaps 100 metres apart), height differences can be measured cumulatively over long distances and elevations can be calculated. Precise levelling is supposed to give the difference in elevation between two points one kilometre apart correct to within a few millimetres.
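Each instrument setup yields a rise or fall equal to the backsight reading minus the foresight reading, and these differences accumulate along the run. A small sketch with invented staff readings:

```python
# (backsight, foresight) staff readings in metres for successive setups
readings = [(1.234, 0.567), (1.890, 1.112), (0.955, 1.640)]

elevation = 100.000  # starting benchmark elevation in metres (invented)
for backsight, foresight in readings:
    elevation += backsight - foresight  # positive = rise, negative = fall
print(f"Final elevation: {elevation:.3f} m")  # 100.760 m
```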
Alternatives
Alternatives include:
Reed level
Laser line level
Water level
Today, level tools are available in most smartphones, using the device's accelerometer. These mobile apps come with various features and simple designs. New web standards also allow websites to read the orientation of a device.
Digital spirit levels are increasingly replacing conventional spirit levels, particularly in civil engineering applications such as traditional building construction and steel structure erection, for on-site angle alignment and leveling tasks. Industry practitioners often refer to these levelling tools as a "construction level", "heavy duty level", "inclinometer", or "protractor". These modern electronic levels can display precise numeric angles within 360° with 0.1° to 0.05° accuracy, can be read from a distance with clarity, and are affordably priced due to mass adoption. They provide features that traditional levels are unable to match. Typically, these features enable steel beam frames under construction to be precisely aligned and levelled to the required orientation, which is vital to ensure the stability, strength and rigidity of steel structures on sites. Digital levels, embedded with angular MEMS technology, effectively improve the productivity and quality of many modern civil structures. Some recent models feature IP65 waterproofing and impact resistance for harsh working environments.
See also
Glossary of levelling terms
Horizontal and vertical
Inclinometer
Plumb bob
Theodolite
Turn and bank indicator
References
External links
Surveying
Geodesy
Inclinometers
Woodworking measuring instruments | Spirit level | [
"Mathematics",
"Engineering"
] | 1,901 | [
"Applied mathematics",
"Civil engineering",
"Surveying",
"Geodesy"
] |
352,905 | https://en.wikipedia.org/wiki/R-process | In nuclear astrophysics, the rapid neutron-capture process, also known as the r-process, is a set of nuclear reactions that is responsible for the creation of approximately half of the atomic nuclei heavier than iron, the "heavy elements", with the other half produced by the p-process and s-process. The r-process usually synthesizes the most neutron-rich stable isotopes of each heavy element. The r-process can typically synthesize the heaviest four isotopes of every heavy element; of these, the heavier two are called r-only nuclei because they are created exclusively via the r-process. Abundance peaks for the r-process occur near mass numbers A = 82 (elements Se, Br, and Kr), A = 130 (elements Te, I, and Xe) and A = 196 (elements Os, Ir, and Pt).
The r-process entails a succession of rapid neutron captures (hence the name) by one or more heavy seed nuclei, typically beginning with nuclei in the abundance peak centered on 56Fe. The captures must be rapid in the sense that the nuclei must not have time to undergo radioactive decay (typically via β− decay) before another neutron arrives to be captured. This sequence can continue up to the limit of stability of the increasingly neutron-rich nuclei (the neutron drip line) to physically retain neutrons as governed by the short range nuclear force. The r-process therefore must occur in locations where there exists a high density of free neutrons.
Early studies theorized that 10^24 free neutrons per cm^3 would be required, for temperatures of about 1 GK, in order to match the waiting points, at which no more neutrons can be captured, with the mass numbers of the abundance peaks for r-process nuclei. This amounts to almost a gram of free neutrons in every cubic centimeter, an astonishing number requiring extreme locations. Traditionally this suggested the material ejected from the reexpanded core of a core-collapse supernova, as part of supernova nucleosynthesis, or decompression of neutron star matter thrown off by a binary neutron star merger in a kilonova. The relative contribution of each of these sources to the astrophysical abundance of r-process elements is a matter of ongoing research.
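That mass density is easy to check to order of magnitude from the neutron mass (the one-line script is ours, not from the cited literature):

```python
m_neutron = 1.67492749804e-27  # neutron mass in kg (CODATA 2018)
n_density = 1e24               # free neutrons per cubic centimetre
mass_per_cc_g = n_density * m_neutron * 1e3  # convert kg to g
print(f"{mass_per_cc_g:.2f} g of free neutrons per cm^3")  # ~1.7 g, of order one gram
```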
A limited r-process-like series of neutron captures occurs to a minor extent in thermonuclear weapon explosions. These led to the discovery of the elements einsteinium (element 99) and fermium (element 100) in nuclear weapon fallout.
The r-process contrasts with the s-process, the other predominant mechanism for the production of heavy elements, which is nucleosynthesis by means of slow captures of neutrons. In general, isotopes involved in the s-process have half-lives long enough to enable their study in laboratory experiments, but this is not typically true for isotopes involved in the r-process. The s-process primarily occurs within ordinary stars, particularly AGB stars, where the neutron flux is sufficient to cause neutron captures to recur every 10–100 years, much too slow for the r-process, which requires 100 captures per second. The s-process is secondary, meaning that it requires pre-existing heavy isotopes as seed nuclei to be converted into other heavy nuclei by a slow sequence of captures of free neutrons. The r-process scenarios create their own seed nuclei, so they might proceed in massive stars that contain no heavy seed nuclei. Taken together, the r- and s-processes account for almost the entire abundance of chemical elements heavier than iron. The historical challenge has been to locate physical settings appropriate to their time scales.
History
Following pioneering research into the Big Bang and the formation of helium in stars, an unknown process responsible for producing heavier elements found on Earth from hydrogen and helium was suspected to exist. One early attempt at explanation came from Subrahmanyan Chandrasekhar and Louis R. Henrich, who postulated that elements were produced at temperatures between 6×10^9 and 8×10^9 K. Their theory accounted for elements up to chlorine, though there was no explanation for elements of atomic weight heavier than 40 amu at non-negligible abundances.
This became the foundation of a study by Fred Hoyle, who hypothesized that conditions in the core of collapsing stars would enable nucleosynthesis of the remainder of the elements via rapid capture of densely packed free neutrons. However, there remained unanswered questions about equilibrium in stars that was required to balance beta-decays and precisely account for abundances of elements that would be formed in such conditions.
The need for a physical setting providing rapid neutron capture, which was known to almost certainly have a role in element formation, was also seen in a table of abundances of isotopes of heavy elements by Hans Suess and Harold Urey in 1956. Their abundance table revealed larger than average abundances of natural isotopes containing magic numbers of neutrons as well as abundance peaks about 10 amu lighter than stable nuclei containing magic numbers of neutrons which were also in abundance, suggesting that radioactive neutron-rich nuclei having the magic neutron numbers but roughly ten fewer protons were formed. These observations also implied that rapid neutron capture occurred faster than beta decay, and the resulting abundance peaks were caused by so-called waiting points at magic numbers. This process, rapid neutron capture by neutron-rich isotopes, became known as the r-process, whereas the s-process was named for its characteristic slow neutron capture. A table apportioning the heavy isotopes phenomenologically between s-process and r-process isotopes was published in 1957 in the B2FH review paper, which named the r-process and outlined the physics that guides it. Alastair G. W. Cameron also published a smaller study about the r-process in the same year.
The stationary r-process as described by the B2FH paper was first demonstrated in a time-dependent calculation at Caltech by Phillip A. Seeger, William A. Fowler and Donald D. Clayton, who found that no single temporal snapshot matched the solar r-process abundances, but that a superposition of snapshots did achieve a successful characterization of the r-process abundance distribution. Shorter-time distributions emphasize abundances at atomic weights less than A = 140, whereas longer-time distributions emphasize those at atomic weights greater than A = 140. Subsequent treatments of the r-process reinforced those temporal features. Seeger et al. were also able to construct a more quantitative apportionment between s-process and r-process of the abundance table of heavy isotopes, thereby establishing a more reliable abundance curve for the r-process isotopes than B2FH had been able to define. Today, the r-process abundances are determined using their technique of subtracting the more reliable s-process isotopic abundances from the total isotopic abundances and attributing the remainder to r-process nucleosynthesis. That r-process abundance curve (vs. atomic weight) has provided for many decades the target for theoretical computations of abundances synthesized by the physical r-process.
The creation of free neutrons by electron capture during the rapid compression of a supernova core to high density, along with the quick assembly of some neutron-rich seed nuclei, makes the r-process a primary nucleosynthesis process, one that can occur even in a star initially of pure H and He. This is in contrast to the B2FH designation of the process as a secondary one building on preexisting iron. Primary stellar nucleosynthesis begins earlier in the galaxy than does secondary nucleosynthesis. Alternatively, the high density of neutrons within neutron stars would be available for rapid assembly into r-process nuclei if a collision were to eject portions of a neutron star, which then rapidly expands, freed from confinement. That sequence could also begin earlier in galactic time than would s-process nucleosynthesis; so each scenario fits the earlier growth of r-process abundances in the galaxy. Each of these scenarios is the subject of active theoretical research.
Observational evidence of the early r-process enrichment of interstellar gas and of subsequent newly formed stars, as applied to the abundance evolution of the galaxy of stars, was first laid out by James W. Truran in 1981. He and subsequent astronomers showed that the pattern of heavy-element abundances in the earliest metal-poor stars matched that of the shape of the solar r-process curve, as if the s-process component were missing. This was consistent with the hypothesis that the s-process had not yet begun to enrich interstellar gas when these young stars missing the s-process abundances were born from that gas, for it requires about 100 million years of galactic history for the s-process to get started whereas the r-process can begin after two million years. These s-process–poor, r-process–rich stellar compositions must have been born earlier than any s-process, showing that the r-process emerges from quickly evolving massive stars that become supernovae and leave neutron-star remnants that can merge with another neutron star. The primary nature of the early r-process thereby derives from observed abundance spectra in old stars that had been born early, when the galactic metallicity was still small, but that nonetheless contain their complement of r-process nuclei.
Either interpretation, though generally supported by supernova experts, has yet to achieve a totally satisfactory calculation of r-process abundances because the overall problem is numerically formidable. However, existing results are supportive; in 2017, new evidence emerged when the LIGO and Virgo gravitational-wave observatories detected the merger of two neutron stars ejecting r-process matter. See Astrophysical sites below.
Noteworthy is that the r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
Nuclear physics
There are three natural candidate sites for r-process nucleosynthesis where the required conditions are thought to exist: low-mass supernovae, Type II supernovae, and neutron star mergers.
Immediately after the severe compression of electrons in a Type II supernova, beta-minus decay is blocked. This is because the high electron density fills all available free electron states up to a Fermi energy which is greater than the energy of nuclear beta decay. However, nuclear capture of those free electrons still occurs, and causes increasing neutronization of matter. This results in an extremely high density of free neutrons which cannot decay, on the order of 10²⁴ neutrons per cm³, and high temperatures. As this material re-expands and cools, neutron capture by still-existing heavy nuclei occurs much faster than beta-minus decay. As a consequence, the r-process runs up along the neutron drip line and highly unstable neutron-rich nuclei are created.
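The Pauli-blocking argument can be sketched with the textbook Fermi energy of a cold, relativistic, degenerate electron gas, E_F = ħc(3π²n_e)^{1/3}. In the sketch below, the density of 10¹² g/cm³ and the electron fraction of 0.4 are assumed round-number inputs for a collapsing core, not values quoted in this article.

```python
import math

# Fermi energy of a relativistic degenerate electron gas:
#   E_F = hbar * c * (3 * pi^2 * n_e)**(1/3).
# Density and electron fraction below are assumed inputs.

HBARC_MEV_CM = 1.97327e-11     # hbar*c in MeV*cm
M_U = 1.6605e-24               # atomic mass unit, grams

rho = 1e12                     # assumed core density, g/cm^3
Y_e = 0.4                      # assumed electrons per baryon

n_e = Y_e * rho / M_U                                    # electrons/cm^3
E_F = HBARC_MEV_CM * (3 * math.pi ** 2 * n_e) ** (1 / 3)

print(f"n_e = {n_e:.2e} /cm^3, E_F = {E_F:.0f} MeV")     # ~38 MeV
print("exceeds typical few-MeV beta-decay energies:", E_F > 5)
```

At these assumed conditions the Fermi energy comes out near 40 MeV, far above the few-MeV energies of nuclear beta decay, so the decay electron has no empty state to occupy.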
Three processes that affect the climbing of the neutron drip line are a notable decrease in the neutron-capture cross section in nuclei with closed neutron shells, the inhibiting process of photodisintegration, and the degree of nuclear stability in the heavy-isotope region. Neutron captures in r-process nucleosynthesis lead to the formation of neutron-rich, weakly bound nuclei with neutron separation energies as low as 2 MeV. At this stage, closed neutron shells at N = 50, 82, and 126 are reached, and neutron capture is temporarily paused. These so-called waiting points are characterized by increased binding energy relative to heavier isotopes, leading to low neutron capture cross sections and a buildup of semi-magic nuclei that are more stable toward beta decay. In addition, nuclei beyond the shell closures are susceptible to quicker beta decay owing to their proximity to the drip line; for these nuclei, beta decay occurs before further neutron capture. Waiting point nuclei are then allowed to beta decay toward stability before further neutron capture can occur, resulting in a slowdown or freeze-out of the reaction.
Decreasing nuclear stability terminates the r-process when its heaviest nuclei become unstable to spontaneous fission, when the total number of nucleons approaches 270. The fission barrier may be low enough before 270 such that neutron capture might induce fission instead of continuing up the neutron drip line. After the neutron flux decreases, these highly unstable radioactive nuclei undergo a rapid succession of beta decays until they reach more stable, neutron-rich nuclei. While the s-process creates an abundance of stable nuclei having closed neutron shells, the r-process, in neutron-rich predecessor nuclei, creates an abundance of radioactive nuclei about 10 amu below the s-process peaks. These abundance peaks correspond to stable isobars produced from successive beta decays of waiting point nuclei having N = 50, 82, and 126—which are about 10 protons removed from the line of beta stability.
The r-process also occurs in thermonuclear weapons, and was responsible for the initial discovery of neutron-rich almost stable isotopes of actinides like plutonium-244 and the new elements einsteinium and fermium (atomic numbers 99 and 100) in the 1950s. It has been suggested that multiple nuclear explosions would make it possible to reach the island of stability, as the affected nuclides (starting with uranium-238 as seed nuclei) would not have time to beta decay all the way to the quickly spontaneously fissioning nuclides at the line of beta stability before absorbing more neutrons in the next explosion, thus providing a chance to reach neutron-rich superheavy nuclides like copernicium-291 and -293 which may have half-lives of centuries or millennia.
Astrophysical sites
The most probable candidate site for the r-process has long been suggested to be core-collapse supernovae (spectral types Ib, Ic and II), which may provide the necessary physical conditions for the r-process. However, the very low abundance of r-process nuclei in the interstellar gas limits the amount that each event can have ejected. It requires either that only a small fraction of supernovae eject r-process nuclei to the interstellar medium, or that each supernova ejects only a very small amount of r-process material. The ejected material must be relatively neutron-rich, a condition that has been difficult to achieve in models, so astrophysicists remain uneasy about their adequacy for successful r-process yields.
In 2017, new astronomical evidence for the r-process was found in data from the merger of two neutron stars. Using the gravitational wave data captured in GW170817 to identify the location of the merger, several teams observed and studied optical data of the merger, finding spectroscopic evidence of r-process material thrown off by the merging neutron stars. The bulk of this material seems to consist of two types: hot blue masses of highly radioactive r-process matter of lower-mass-range heavy nuclei (A < 140, such as strontium) and cooler red masses of higher mass-number r-process nuclei (A > 140) rich in actinides (such as uranium, thorium, and californium). When released from the huge internal pressure of the neutron star, these ejecta expand and form seed heavy nuclei that rapidly capture free neutrons, and radiate detected optical light for about a week. Such duration of luminosity would not be possible without heating by internal radioactive decay, which is provided by r-process nuclei near their waiting points. Two distinct mass regions (A < 140 and A > 140) for the r-process yields have been known since the first time-dependent calculations of the r-process. Because of these spectroscopic features it has been argued that such nucleosynthesis in the Milky Way has been primarily ejecta from neutron-star mergers rather than from supernovae.
These results offer a new possibility for clarifying six decades of uncertainty over the site of origin of r-process nuclei. The relevance to the r-process is confirmed by the fact that it is radiogenic power from the radioactive decay of r-process nuclei that maintains the visibility of these spun-off r-process fragments; otherwise they would dim quickly. Such alternative sites were first seriously proposed in 1974 as decompressing neutron star matter. It was proposed that such matter is ejected from neutron stars merging with black holes in compact binaries. In 1989 (and 1999) this scenario was extended to binary neutron star mergers (a binary star system of two neutron stars that collide). After preliminary identification of these sites, the scenario was confirmed in GW170817. Current astrophysical models suggest that a single neutron star merger event may have generated between 3 and 13 Earth masses of gold.
See also
HD 222925
Notes
References
Concepts in astrophysics
Neutron
Nuclear physics
Nucleosynthesis
Supernovae | R-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,442 | [
"Supernovae",
"Nuclear fission",
"Concepts in astrophysics",
"Astronomical events",
"Astrophysics",
"Nucleosynthesis",
"Explosions",
"Nuclear physics",
"Nuclear fusion"
] |
352,908 | https://en.wikipedia.org/wiki/S-process | The slow neutron-capture process, or s-process, is a series of reactions in nuclear astrophysics that occur in stars, particularly asymptotic giant branch stars. The s-process is responsible for the creation (nucleosynthesis) of approximately half the atomic nuclei heavier than iron.
In the s-process, a seed nucleus undergoes neutron capture to form an isotope with one higher atomic mass. If the new isotope is stable, a series of increases in mass can occur, but if it is unstable, then beta decay will occur, producing an element of the next higher atomic number. The process is slow (hence the name) in the sense that there is sufficient time for this radioactive decay to occur before another neutron is captured. A series of these reactions produces stable isotopes by moving along the valley of beta-decay stable isobars in the table of nuclides.
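This capture-then-decay logic can be sketched as a toy walk through the chart of nuclides. In the sketch below, the stability table is a tiny hand-picked excerpt beginning at 56Fe; a real s-process calculation uses a full nuclear reaction network, so this is a schematic illustration only.

```python
# Toy walk along the s-process path starting at 56Fe: capture a neutron,
# then beta-decay until a stable nuclide is reached. The stability table
# is a tiny hand-picked excerpt, not complete nuclear data.

STABLE = {("Fe", 56), ("Fe", 57), ("Fe", 58),
          ("Co", 59), ("Ni", 60), ("Ni", 61), ("Ni", 62)}
BETA_DAUGHTER = {"Fe": "Co", "Co": "Ni", "Ni": "Cu"}   # Z -> Z + 1

element, A = "Fe", 56
path = [f"{A}{element}"]
for _ in range(6):                       # six neutron captures
    A += 1                               # capture: mass number A -> A + 1
    while (element, A) not in STABLE:    # slow regime: decay wins the race
        element = BETA_DAUGHTER[element]
    path.append(f"{A}{element}")
print(" -> ".join(path))
# 56Fe -> 57Fe -> 58Fe -> 59Co -> 60Ni -> 61Ni -> 62Ni
```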
A range of elements and isotopes can be produced by the s-process, because of the intervention of alpha decay steps along the reaction chain. The relative abundances of elements and isotopes produced depend on the source of the neutrons and how their flux changes over time. Each branch of the s-process reaction chain eventually terminates at a cycle involving lead, bismuth, and polonium.
The s-process contrasts with the r-process, in which successive neutron captures are rapid: they happen more quickly than the beta decay can occur. The r-process dominates in environments with higher fluxes of free neutrons; it produces heavier elements and more neutron-rich isotopes than the s-process. Together the two processes account for most of the relative abundance of chemical elements heavier than iron.
History
The s-process was seen to be needed from the relative abundances of isotopes of heavy elements and from a newly published table of abundances by Hans Suess and Harold Urey in 1956. Among other things, these data showed abundance peaks for strontium, barium, and lead, which, according to quantum mechanics and the nuclear shell model, are particularly stable nuclei, much like the noble gases are chemically inert. This implied that some abundant nuclei must be created by slow neutron capture, and it was only a matter of determining how other nuclei could be accounted for by such a process. A table apportioning the heavy isotopes between s-process and r-process was published in the famous B2FH review paper in 1957. There it was also argued that the s-process occurs in red giant stars. In a particularly illustrative case, the element technetium, whose longest half-life is 4.2 million years, had been discovered in S-, M-, and N-type stars in 1952 by Paul W. Merrill. Since these stars were thought to be billions of years old, the presence of technetium in their outer atmospheres was taken as evidence of its recent creation there, probably unconnected with the nuclear fusion in the deep interior of the star that provides its power.
A calculable model for creating the heavy isotopes from iron seed nuclei in a time-dependent manner was not provided until 1961. That work showed that the large overabundances of barium observed by astronomers in certain red-giant stars could be created from iron seed nuclei if the total neutron flux (number of neutrons per unit area) was appropriate. It also showed that no one single value for neutron flux could account for the observed s-process abundances, but that a wide range is required. The numbers of iron seed nuclei that were exposed to a given flux must decrease as the flux becomes stronger. This work also showed that the curve of the product of neutron-capture cross section times abundance is not a smoothly falling curve, as B2FH had sketched, but rather has a ledge-precipice structure. A series of papers in the 1970s by Donald D. Clayton utilizing an exponentially declining neutron flux as a function of the number of iron seeds exposed became the standard model of the s-process and remained so until the details of AGB-star nucleosynthesis became sufficiently advanced that they became a standard model for s-process element formation based on stellar structure models. Important series of measurements of neutron-capture cross sections were reported from Oak Ridge National Lab in 1965 and by Karlsruhe Nuclear Physics Center in 1982 and subsequently; these measurements placed the s-process on the firm quantitative basis that it enjoys today.
The s-process in stars
The s-process is believed to occur mostly in asymptotic giant branch stars, seeded by iron nuclei left by a supernova during a previous generation of stars. In contrast to the r-process which is believed to occur over time scales of seconds in explosive environments, the s-process is believed to occur over time scales of thousands of years, passing decades between neutron captures. The extent to which the s-process moves up the elements in the chart of isotopes to higher mass numbers is essentially determined by the degree to which the star in question is able to produce neutrons. The quantitative yield is also proportional to the amount of iron in the star's initial abundance distribution. Iron is the "starting material" (or seed) for this neutron capture-beta minus decay sequence of synthesizing new elements.
The main neutron source reactions are:
:{|border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ ||
|- style="height:2em;"
| ||+ || ||→ || ||+ ||
|}
One distinguishes the main and the weak s-process component. The main component produces heavy elements beyond Sr and Y, and up to Pb in the lowest metallicity stars. The production sites of the main component are low-mass asymptotic giant branch stars. The main component relies on the 13C neutron source above. The weak component of the s-process, on the other hand, synthesizes s-process isotopes of elements from iron group seed nuclei (58Fe) on up to Sr and Y, and takes place at the end of helium- and carbon-burning in massive stars. It employs primarily the 22Ne neutron source. These stars will become supernovae at their demise and spew those s-process isotopes into interstellar gas.
The s-process is sometimes approximated over a small mass region using the so-called "local approximation", by which the ratio of abundances is inversely proportional to the ratio of neutron-capture cross-sections for nearby isotopes on the s-process path. This approximation is – as the name indicates – only valid locally, meaning for isotopes of nearby mass numbers, but it is invalid at magic numbers where the ledge-precipice structure dominates.
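As a minimal numerical illustration of the local approximation, the sketch below takes two neighbouring isotopes on the s-process path with assumed Maxwellian-averaged cross sections of 400 and 100 millibarn; these are placeholder values chosen only to show the arithmetic.

```python
# Local approximation: along the s-process path, sigma(A) * N(A) is
# roughly constant, so abundance ratios invert cross-section ratios.
# The two cross sections are placeholder values (millibarn).

sigma_A, sigma_A1 = 400.0, 100.0       # assumed values for isotopes A, A+1

ratio = sigma_A1 / sigma_A             # N(A) / N(A+1)
print(f"N(A)/N(A+1) = {ratio:.2f}")
# the isotope with the 4x smaller cross section is 4x more abundant
```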
Because of the relatively low neutron fluxes expected to occur during the s-process (on the order of 10⁵ to 10¹¹ neutrons per cm² per second), this process does not have the ability to produce any of the heavy radioactive isotopes such as thorium or uranium. The cycle that terminates the s-process is:
209Bi captures a neutron, producing 210Bi, which decays to 210Po by β− decay. 210Po in turn decays to 206Pb by α decay:
:{|border="0"
|- style="height:2em;"
| ||+ || ||→ || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ ||
|- style="height:2em;"
| || || ||→ || ||+ ||
|}
206Pb then captures three neutrons, producing 209Pb, which decays to 209Bi by β− decay, restarting the cycle:
:{|border="0"
|- style="height:2em;"
| ||+ ||3 ||→ ||
|- style="height:2em;"
| || || ||→ || ||+ || ||+ ||
|}
The net result of this cycle therefore is that 4 neutrons are converted into one alpha particle, two electrons, two anti-electron neutrinos and gamma radiation:
:{|border="0"
|- style="height:2em;"
| || ||4 ||→ || ||+ ||2 ||+ ||2 ||+ ||
|}
The process thus terminates in bismuth, the heaviest "stable" element, and polonium, the first non-primordial element after bismuth. Bismuth is actually slightly radioactive, but with a half-life so long—a billion times the present age of the universe—that it is effectively stable over the lifetime of any existing star. Polonium-210, however, decays with a half-life of 138 days to stable lead-206.
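As a bookkeeping exercise, the cycle written out above can be checked for conservation of mass number and charge at each step. In the sketch below, species are encoded as (A, Z) tuples; photons and neutrinos are omitted since they carry neither baryon number nor charge.

```python
# Conservation check for the terminating cycle written out above.
# Each species is (mass number A, charge Z); the electron is (0, -1).

n, alpha, e = (1, 0), (4, 2), (0, -1)
Bi209, Bi210, Po210 = (209, 83), (210, 83), (210, 84)
Pb206, Pb209 = (206, 82), (209, 82)

steps = [
    ([Bi209, n], [Bi210]),           # 209Bi + n -> 210Bi (+ gamma)
    ([Bi210], [Po210, e]),           # beta-minus decay
    ([Po210], [Pb206, alpha]),       # alpha decay
    ([Pb206, n, n, n], [Pb209]),     # three successive neutron captures
    ([Pb209], [Bi209, e]),           # beta-minus decay, cycle restarts
]

total = lambda side: tuple(sum(s[i] for s in side) for i in (0, 1))
for lhs, rhs in steps:
    assert total(lhs) == total(rhs)  # A and Z conserved at every step
print("A and Z balance in every step of the cycle")
```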
The s-process measured in stardust
Stardust is one component of cosmic dust. Stardust consists of individual solid grains that condensed during mass loss from various long-dead stars. Stardust existed throughout interstellar gas before the birth of the Solar System and was trapped in meteorites when they assembled from interstellar matter contained in the planetary accretion disk of the early Solar System. Today they are found in meteorites, where they have been preserved. Meteoriticists habitually refer to them as presolar grains. The s-process enriched grains are mostly silicon carbide (SiC). The origin of these grains is demonstrated by laboratory measurements of extremely unusual isotopic abundance ratios within the grain. The first experimental detection of s-process xenon isotopes was made in 1978, confirming earlier predictions that s-process isotopes would be enriched, nearly pure, in stardust from red giant stars. These discoveries launched new insight into astrophysics and into the origin of meteorites in the Solar System. Silicon carbide (SiC) grains condense in the atmospheres of AGB stars and thus trap isotopic abundance ratios as they existed in that star. Because AGB stars are the main site of the s-process in the galaxy, the heavy elements in the SiC grains contain almost pure s-process isotopes in elements heavier than iron. This fact has been demonstrated repeatedly by sputtering-ion mass spectrometer studies of these stardust presolar grains. Several surprising results have shown that within them the ratio of s-process and r-process abundances is somewhat different from that which was previously assumed. It has also been shown with trapped isotopes of krypton and xenon that the s-process abundances in the AGB-star atmospheres changed with time or from star to star, presumably with the strength of neutron flux in that star or perhaps the temperature. This is a frontier of s-process studies in the 2000s.
References
Nuclear physics
Neutron
Astrophysics
Nucleosynthesis | S-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,259 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion",
"Astronomical sub-disciplines"
] |
352,912 | https://en.wikipedia.org/wiki/P-process | The term p-process (p for proton) is used in two ways in the scientific literature concerning the astrophysical origin of the elements (nucleosynthesis). Originally it referred to a proton capture process which was proposed to be the source of certain, naturally occurring, neutron-deficient isotopes of the elements from selenium to mercury. These nuclides are called p-nuclei and their origin is still not completely understood. Although it was shown that the originally suggested process cannot produce the p-nuclei, later on the term p-process was sometimes used to generally refer to any nucleosynthesis process supposed to be responsible for the p-nuclei.
Often, the two meanings are confused. Recent scientific literature therefore suggests using the term p-process only for the actual proton capture process, as is customary with other nucleosynthesis processes in astrophysics.
The proton capture p-process
Proton-rich nuclides can be produced by sequentially adding one or more protons to an atomic nucleus. Such a nuclear reaction of type (p,γ) is called a proton capture reaction. By adding a proton to a nucleus, the element is changed because the chemical element is defined by the proton number of a nucleus. At the same time the ratio of protons to neutrons is changed, resulting in a more neutron-deficient isotope of the next element. This led to the original idea for the production of p-nuclei: free protons (the nuclei of hydrogen atoms), which are present in stellar plasmas, should be captured on heavy nuclei (seed nuclei) also already present in the stellar plasma (previously produced in the s-process and/or r-process).
Such proton captures on stable nuclides (or nearly stable), however, are not very efficient in producing p-nuclei, especially the heavier ones, because the electric charge increases with each added proton, leading to an increased repulsion of the next proton to be added, according to Coulomb's law. In the context of nuclear reactions this is called a Coulomb barrier. The higher the Coulomb barrier, the more kinetic energy a proton requires to get close to a nucleus and be captured by it. The average energy of the available protons is given by the temperature of the stellar plasma. Even if this temperature could be increased arbitrarily (which is not the case in stellar environments), protons would be removed faster from a nucleus by photodisintegration than they could be captured at high temperature. A possible alternative would be to have a very large number of protons available to increase the effective number of proton captures per second without having to raise the temperature too much. Such conditions, however, are not found in core-collapse supernovae which were supposed to be the site of the p-process.
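The size of the Coulomb barrier relative to stellar thermal energies can be estimated with the touching-spheres formula V ≈ 1.44 MeV·fm × Z₁Z₂/r with r = r₀(A₁^(1/3) + A₂^(1/3)). In the sketch below, the gold-like target (Z = 79, A = 197) and the temperature of 3×10⁹ K are example inputs, not values from this article.

```python
# Rough Coulomb-barrier estimate for proton capture on a heavy nucleus,
# compared with the thermal energy of a hot plasma. Target and
# temperature are example inputs only.

E2 = 1.44                  # e^2 / (4*pi*eps0) in MeV*fm
R0 = 1.2                   # nuclear radius parameter, fm
K_B = 8.617e-11            # Boltzmann constant, MeV/K

Z, A = 79, 197
r = R0 * (1 + A ** (1 / 3))        # touching radii of proton and target, fm
barrier = E2 * Z / r               # proton charge = 1
kT = K_B * 3e9                     # plasma at 3e9 K

print(f"Coulomb barrier ~ {barrier:.1f} MeV, kT ~ {kT:.2f} MeV")
print(f"barrier is ~{barrier / kT:.0f}x the thermal energy")
```

With these inputs the barrier (~14 MeV) is dozens of times the thermal energy (~0.26 MeV), illustrating why thermal proton captures on heavy nuclei are exponentially suppressed.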
Proton captures at extremely high proton densities are called rapid proton capture processes. They are distinct from the p-process not only by the required high proton density but also by the fact that very short-lived radionuclides are involved and the reaction path is located close to the proton drip line. Rapid proton capture processes are the rp-process, the νp-process, and the pn-process.
History
The term p-process was originally proposed in the famous B2FH paper in 1957. The authors assumed that this process was solely responsible for the p-nuclei and proposed that it occurs in the hydrogen-shell (see also stellar evolution) of a star exploding as a type II supernova. It was shown later that the required conditions are not found in such supernovae.
At the same time as B2FH, Alastair Cameron independently realized the necessity to add another nucleosynthesis process to neutron capture nucleosynthesis but simply mentioned proton captures without assigning a special name to the process. He also thought about alternatives, for example photodisintegration (called the γ-process today) or a combination of p-process and photodisintegration.
See also
p-nuclei
Nucleosynthesis
rp-process
References
Nuclear physics
Nucleosynthesis
Supernovae
Proton
Concepts in stellar astronomy | P-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 856 | [
"Nuclear fission",
"Supernovae",
"Concepts in astrophysics",
"Astronomical events",
"Astrophysics",
"Nucleosynthesis",
"Explosions",
"Nuclear physics",
"Concepts in stellar astronomy",
"Nuclear fusion"
] |
352,950 | https://en.wikipedia.org/wiki/List%20of%20general%20topology%20topics | This is a list of general topology topics.
Basic concepts
Topological space
Topological property
Open set, closed set
Clopen set
Closure (topology)
Boundary (topology)
Dense (topology)
G-delta set, F-sigma set
Closeness (mathematics)
Neighbourhood (mathematics)
Continuity (topology)
Homeomorphism
Local homeomorphism
Open and closed maps
Germ (mathematics)
Base (topology), subbase
Open cover
Covering space
Atlas (topology)
Limits
Limit point
Net (topology)
Filter (topology)
Ultrafilter
Topological properties
Baire category theorem
Nowhere dense
Baire space
Banach–Mazur game
Meagre set
Comeagre set
Compactness and countability
Compact space
Relatively compact subspace
Heine–Borel theorem
Tychonoff's theorem
Finite intersection property
Compactification
Measure of non-compactness
Paracompact space
Locally compact space
Compactly generated space
Axiom of countability
Sequential space
First-countable space
Second-countable space
Separable space
Lindelöf space
Sigma-compact space
Connectedness
Connected space
Separation axioms
T0 space
T1 space
Hausdorff space
Completely Hausdorff space
Regular space
Tychonoff space
Normal space
Urysohn's lemma
Tietze extension theorem
Paracompact
Separated sets
Topological constructions
Direct sum and the dual construction product
Subspace and the dual construction quotient
Topological tensor product
Examples
Discrete space
Locally constant function
Trivial topology
Cofinite topology
Finer topology
Product topology
Restricted product
Quotient space
Unit interval
Continuum (topology)
Extended real number line
Long line (topology)
Sierpinski space
Cantor set, Cantor space, Cantor cube
Space-filling curve
Topologist's sine curve
Uniform norm
Weak topology
Strong topology
Hilbert cube
Lower limit topology
Sorgenfrey plane
Real tree
Compact-open topology
Zariski topology
Kuratowski closure axioms
Unicoherent
Solenoid (mathematics)
Uniform spaces
Uniform continuity
Lipschitz continuity
Uniform isomorphism
Uniform property
Uniformly connected space
Metric spaces
Metric topology
Manhattan distance
Ultrametric space
P-adic numbers, p-adic analysis
Open ball
Bounded subset
Pointwise convergence
Metrization theorems
Complete space
Cauchy sequence
Banach fixed-point theorem
Polish space
Hausdorff distance
Intrinsic metric
Category of metric spaces
Topology and order theory
Stone duality
Stone's representation theorem for Boolean algebras
Specialization (pre)order
Sober space
Spectral space
Alexandrov topology
Upper topology
Scott topology
Scott continuity
Lawson topology
Descriptive set theory
Polish space
Cantor space
Dimension theory
Inductive dimension
Lebesgue covering dimension
Lebesgue's number lemma
Combinatorial topology
Polytope
Simplex
Simplicial complex
CW complex
Manifold
Triangulation
Barycentric subdivision
Sperner's lemma
Simplicial approximation theorem
Nerve of an open covering
Foundations of algebraic topology
Simply connected
Semi-locally simply connected
Path (topology)
Homotopy
Homotopy lifting property
Pointed space
Wedge sum
Smash product
Cone (topology)
Adjunction space
Topology and algebra
Topological algebra
Topological group
Topological ring
Topological vector space
Topological module
Topological abelian group
Properly discontinuous
Sheaf space
See also
Topology glossary
List of topology topics
List of geometric topology topics
List of algebraic topology topics
Publications in topology
Mathematics-related lists
Outlines of mathematics and logic
Outlines | List of general topology topics | [
"Mathematics"
] | 648 | [
"General topology",
"Topology",
"nan"
] |
353,021 | https://en.wikipedia.org/wiki/Homeomorphism%20%28graph%20theory%29 | In graph theory, two graphs and are homeomorphic if there is a graph isomorphism from some subdivision of to some subdivision of . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense.
Subdivision and smoothing
In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u,v } yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u,w } and {w,v }. For directed edges, this operation shall preserve their propagating direction.
For example, the edge e, with endpoints {u,v }:
can be subdivided into two edges, e1 and e2, connecting to a new vertex w of degree-2, or indegree-1 and outdegree-1 for the directed edge:
Determining whether, for graphs G and H, H is homeomorphic to a subgraph of G is an NP-complete problem.
Reversion
The reverse operation, smoothing out or smoothing a vertex w with regards to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is realized by the graph that has no more degree-2 vertices.
For example, the simple connected graph with two edges, e1 {u,w } and e2 {w,v }:
has a vertex (namely w) that can be smoothed away, resulting in the single edge {u,v }.
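Both operations are easy to express for an undirected simple graph stored as a set of edges. The sketch below is illustrative Python under simplifying assumptions: vertex names are arbitrary, and parallel edges and loops are not handled.

```python
# Sketch of subdivision and smoothing for an undirected simple graph
# stored as a set of frozenset edges.

def subdivide(edges, edge, w):
    """Replace edge {u, v} by {u, w} and {w, v} via a new vertex w."""
    u, v = tuple(edge)
    return (edges - {edge}) | {frozenset({u, w}), frozenset({w, v})}

def smooth(edges, w):
    """Remove a degree-2 vertex w, joining its two neighbours."""
    incident = [e for e in edges if w in e]
    assert len(incident) == 2, "only degree-2 vertices can be smoothed"
    (a,) = incident[0] - {w}
    (b,) = incident[1] - {w}
    return (edges - set(incident)) | {frozenset({a, b})}

e = frozenset({"u", "v"})
g = subdivide({e}, e, "w")                  # edges {u,w} and {w,v}
print(sorted(map(sorted, g)))               # [['u', 'w'], ['v', 'w']]
print(sorted(map(sorted, smooth(g, "w"))))  # [['u', 'v']] (back to e)
```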
Barycentric subdivisions
The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph. This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the n−1st barycentric subdivision of the graph. The second such subdivision is always a simple graph.
Embedding on a surface
It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that
a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on six vertices, three of which connect to each of the other three).
In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph.
A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g, there is a finite obstruction set L(g) of graphs such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the graphs in L(g). For example, L(0) consists of the Kuratowski subgraphs.
Example
In the following example, graph G and graph H are homeomorphic.
If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing:
Therefore, there exists an isomorphism between G′ and H′, meaning G and H are homeomorphic.
Mixed graph
The following mixed graphs are homeomorphic. The directed edges are shown to have an intermediate arrow head.
See also
Minor (graph theory)
Edge contraction
References
Further reading
Graph theory
Homeomorphisms
NP-complete problems | Homeomorphism (graph theory) | [
"Mathematics"
] | 817 | [
"Discrete mathematics",
"Homeomorphisms",
"Graph theory",
"Computational problems",
"Combinatorics",
"Topology",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
353,022 | https://en.wikipedia.org/wiki/CW%20complex | In mathematics, and specifically in topology, a CW complex (also cellular complex or cell complex) is a topological space that is built by gluing together topological balls (so-called cells) of different dimensions in specific ways. It generalizes both manifolds and simplicial complexes and has particular significance for algebraic topology. It was initially introduced by J. H. C. Whitehead to meet the needs of homotopy theory.
CW complexes have better categorical properties than simplicial complexes, but still retain a combinatorial nature that allows for computation (often with a much smaller complex).
The C in CW stands for "closure-finite", and the W for "weak" topology.
Definition
CW complex
A CW complex is constructed by taking the union of a sequence of topological spaces ∅ = X−1 ⊆ X0 ⊆ X1 ⊆ ⋯ such that each Xk is obtained from Xk−1 by gluing copies of k-cells, each homeomorphic to the open k-ball Bk, to Xk−1 by continuous gluing maps defined on the boundary spheres Sk−1. The maps are also called attaching maps. Thus as a set, X = ⋃k Xk.
Each Xk is called the k-skeleton of the complex.
The topology of X is the weak topology: a subset U ⊆ X is open iff U ∩ Xk is open for each k-skeleton Xk.
In the language of category theory, the topology on X is the direct limit of the diagram X0 ↪ X1 ↪ X2 ↪ ⋯ The name "CW" stands for "closure-finite weak topology", which is explained by the following characterization: the open cells partition X in such a way that the closure of each cell meets only finitely many other cells (closure-finiteness), and a subset of X is closed iff its intersection with each closed cell is closed (weak topology).
This partition of X is also called a cellulation.
The construction, in words
The CW complex construction is a straightforward generalization of the following process:
A 0-dimensional CW complex is just a set of zero or more discrete points (with the discrete topology).
A 1-dimensional CW complex is constructed by taking the disjoint union of a 0-dimensional CW complex with one or more copies of the unit interval. For each copy, there is a map that "glues" its boundary (its two endpoints) to elements of the 0-dimensional complex (the points). The topology of the CW complex is the topology of the quotient space defined by these gluing maps.
In general, an n-dimensional CW complex is constructed by taking the disjoint union of a k-dimensional CW complex (for some k < n) with one or more copies of the n-dimensional ball. For each copy, there is a map that "glues" its boundary (the (n−1)-dimensional sphere) to elements of the k-dimensional complex. The topology of the CW complex is the quotient topology defined by these gluing maps.
An infinite-dimensional CW complex can be constructed by repeating the above process countably many times. Since the topology of the union is indeterminate, one takes the direct limit topology, since the diagram is highly suggestive of a direct limit. This turns out to have great technical benefits.
Regular CW complexes
A regular CW complex is a CW complex whose gluing maps are homeomorphisms. Accordingly, the partition of X is also called a regular cellulation.
A loopless graph is represented by a regular 1-dimensional CW-complex. A closed 2-cell graph embedding on a surface is a regular 2-dimensional CW-complex. Finally, the 3-sphere regular cellulation conjecture claims that every 2-connected graph is the 1-skeleton of a regular CW-complex on the 3-dimensional sphere.
Relative CW complexes
Roughly speaking, a relative CW complex differs from a CW complex in that we allow it to have one extra building block that does not necessarily possess a cellular structure. This extra block can be treated as a (-1)-dimensional cell in the former definition.
Examples
0-dimensional CW complexes
Every discrete topological space is a 0-dimensional CW complex.
1-dimensional CW complexes
Some examples of 1-dimensional CW complexes are:
An interval. It can be constructed from two points (x and y), and the 1-dimensional ball B (an interval), such that one endpoint of B is glued to x and the other is glued to y. The two points x and y are the 0-cells; the interior of B is the 1-cell. Alternatively, it can be constructed just from a single interval, with no 0-cells.
A circle. It can be constructed from a single point x and the 1-dimensional ball B, such that both endpoints of B are glued to x. Alternatively, it can be constructed from two points x and y and two 1-dimensional balls A and B, such that the endpoints of A are glued to x and y, and the endpoints of B are glued to x and y too.
A graph. Given a graph, a 1-dimensional CW complex can be constructed in which the 0-cells are the vertices and the 1-cells are the edges of the graph. The endpoints of each edge are identified with the incident vertices to it. This realization of a combinatorial graph as a topological space is sometimes called a topological graph.
3-regular graphs can be considered as generic 1-dimensional CW complexes. Specifically, if X is a 1-dimensional CW complex, the attaching map for a 1-cell is a map from a two-point space to X, f : {0,1} → X. This map can be perturbed to be disjoint from the 0-skeleton of X if and only if f(0) and f(1) are not 0-valence vertices of X.
The standard CW structure on the real numbers R has as 0-skeleton the integers and as 1-cells the intervals [n, n + 1] for integer n. Similarly, the standard CW structure on Rn has cubical cells that are products of the 0- and 1-cells from R. This is the standard cubic lattice cell structure on Rn.
Finite-dimensional CW complexes
Some examples of finite-dimensional CW complexes are:
An n-dimensional sphere. It admits a CW structure with two cells, one 0-cell and one n-cell. Here the n-cell is attached by the constant mapping from its boundary to the single 0-cell. An alternative cell decomposition has one (n-1)-dimensional sphere (the "equator") and two n-cells that are attached to it (the "upper hemi-sphere" and the "lower hemi-sphere"). Inductively, this gives a CW decomposition with two cells in every dimension k such that 0 ≤ k ≤ n.
The n-dimensional real projective space. It admits a CW structure with one cell in each dimension.
The terminology for a generic 2-dimensional CW complex is a shadow.
A polyhedron is naturally a CW complex.
Grassmannian manifolds admit a CW structure called Schubert cells.
Differentiable manifolds, algebraic and projective varieties have the homotopy type of CW complexes.
The one-point compactification of a cusped hyperbolic manifold has a canonical CW decomposition with only one 0-cell (the compactification point) called the Epstein–Penner Decomposition. Such cell decompositions are frequently called ideal polyhedral decompositions and are used in popular computer software, such as SnapPea.
Infinite-dimensional CW complexes
The infinite-dimensional sphere S∞. It admits a CW structure with 2 cells in each dimension, which are assembled in such a way that the n-skeleton is precisely given by the n-sphere Sn.
The infinite-dimensional projective spaces RP∞, CP∞ and HP∞. RP∞ has one cell in every dimension, CP∞ has one cell in every even dimension, and HP∞ has one cell in every dimension divisible by 4. The respective skeletons are then given by RPn, CPn (2n-skeleton) and HPn (4n-skeleton).
Non CW-complexes
An infinite-dimensional Hilbert space is not a CW complex: it is a Baire space and therefore cannot be written as a countable union of n-skeletons, each of which being a closed set with empty interior. This argument extends to many other infinite-dimensional spaces.
The hedgehog space is homotopy equivalent to a CW complex (the point) but it does not admit a CW decomposition, since it is not locally contractible.
The Hawaiian earring has no CW decomposition, because it is not locally contractible at origin. It is also not homotopy equivalent to a CW complex, because it has no good open cover.
Properties
CW complexes are locally contractible.
If a space is homotopy equivalent to a CW complex, then it has a good open cover. A good open cover is an open cover, such that every nonempty finite intersection is contractible.
CW complexes are paracompact. Finite CW complexes are compact. A compact subspace of a CW complex is always contained in a finite subcomplex.
CW complexes satisfy the Whitehead theorem: a map between CW complexes is a homotopy equivalence if and only if it induces an isomorphism on all homotopy groups.
A covering space of a CW complex is also a CW complex.
The product of two CW complexes can be made into a CW complex. Specifically, if X and Y are CW complexes, then one can form a CW complex X × Y in which each cell is a product of a cell in X and a cell in Y, endowed with the weak topology. The underlying set of X × Y is then the Cartesian product of X and Y, as expected. In addition, the weak topology on this set often agrees with the more familiar product topology on X × Y, for example if either X or Y is finite. However, the weak topology can be finer than the product topology, for example if neither X nor Y is locally compact. In this unfavorable case, the product X × Y in the product topology is not a CW complex. On the other hand, the product of X and Y in the category of compactly generated spaces agrees with the weak topology and therefore defines a CW complex.
Let X and Y be CW complexes. Then the function spaces Hom(X,Y) (with the compact-open topology) are not CW complexes in general. If X is finite then Hom(X,Y) is homotopy equivalent to a CW complex by a theorem of John Milnor (1959). Note that X and Y are compactly generated Hausdorff spaces, so Hom(X,Y) is often taken with the compactly generated variant of the compact-open topology; the above statements remain true.
Cellular approximation theorem
Homology and cohomology of CW complexes
Singular homology and cohomology of CW complexes is readily computable via cellular homology. Moreover, in the category of CW complexes and cellular maps, cellular homology can be interpreted as a homology theory. To compute an extraordinary (co)homology theory for a CW complex, the Atiyah–Hirzebruch spectral sequence is the analogue of cellular homology.
Some examples:
For the sphere Sn, take the cell decomposition with two cells: a single 0-cell and a single n-cell. The cellular homology chain complex then has Ck = Z for k = 0 and k = n and Ck = 0 otherwise, so the homology is Hk(Sn) = Z for k = 0, n and 0 otherwise,
since all the differentials are zero.
Alternatively, if we use the equatorial decomposition with two cells in every dimension
and the differentials are 2×2 matrices of the form (1 −1; 1 −1). This gives the same homology computation as above, as the chain complex is exact at all terms except C0 and Cn.
For complex projective space CPn we get similarly Hk(CPn) = Z for even k with 0 ≤ k ≤ 2n, and 0 otherwise, since there is one cell in each even dimension and all differentials vanish.
Both of the above examples are particularly simple because the homology is determined by the number of cells—i.e.: the cellular attaching maps have no role in these computations. This is a very special phenomenon and is not indicative of the general case.
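The rank bookkeeping behind such computations fits in a few lines: over the rationals, b_k = dim C_k − rank ∂_k − rank ∂_(k+1). The sketch below applies this to the examples above; note that working over the rationals discards torsion (e.g. the Z/2 in H1 of the Klein bottle), and the Klein bottle CW structure is an added illustration not taken from this article.

```python
import numpy as np

# Rational Betti numbers from cell counts and boundary matrices:
#   b_k = dim C_k - rank d_k - rank d_(k+1).
# Torsion is invisible over the rationals; this is rank bookkeeping only.

def betti(dims, d):
    """dims[k] = number of k-cells; d maps k to the matrix of d_k."""
    rank = lambda m: 0 if m is None else np.linalg.matrix_rank(m)
    return [dims[k] - rank(d.get(k)) - rank(d.get(k + 1))
            for k in range(len(dims))]

print(betti([1, 0, 1], {}))   # sphere S^2, zero differentials -> [1, 0, 1]
print(betti([1, 2, 1], {}))   # torus, zero differentials      -> [1, 2, 1]
print(betti([1, 2, 1], {2: np.array([[2], [0]])}))  # Klein bottle -> [1, 1, 0]
```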
Modification of CW structures
There is a technique, developed by Whitehead, for replacing a CW complex with a homotopy-equivalent CW complex that has a simpler CW decomposition.
Consider, for example, an arbitrary CW complex. Its 1-skeleton can be fairly complicated, being an arbitrary graph. Now consider a maximal forest F in this graph. Since it is a collection of trees, and trees are contractible, consider the space X/∼ where the equivalence relation is generated by x ∼ y if x and y are contained in a common tree in the maximal forest F. The quotient map X → X/∼ is a homotopy equivalence. Moreover, X/∼ naturally inherits a CW structure, with cells corresponding to the cells of X that are not contained in F. In particular, the 1-skeleton of X/∼ is a disjoint union of wedges of circles.
Another way of stating the above is that a connected CW complex can be replaced by a homotopy-equivalent CW complex whose 0-skeleton consists of a single point.
Consider climbing up the connectivity ladder—assume X is a simply-connected CW complex whose 0-skeleton consists of a point. Can we, through suitable modifications, replace X by a homotopy-equivalent CW complex whose 1-skeleton X1 consists of a single point? The answer is yes. The first step is to observe that X1 and the attaching maps used to construct X2 from X1 form a group presentation. The Tietze theorem for group presentations states that there is a sequence of moves we can perform to reduce this group presentation to the trivial presentation of the trivial group. There are two Tietze moves:
1) Adding/removing a generator. Adding a generator, from the perspective of the CW decomposition, consists of adding a 1-cell and a 2-cell whose attaching map consists of the new 1-cell, with the remainder of the attaching map in X1. If we let Y denote the corresponding CW complex, then there is a homotopy equivalence Y → X given by sliding the new 2-cell into X.
2) Adding/removing a relation. The act of adding a relation is similar, only one is replacing X by Y, where the new 3-cell has an attaching map that consists of the new 2-cell and the remainder mapping into X2. A similar slide gives a homotopy equivalence Y → X.
If a CW complex X is n-connected one can find a homotopy-equivalent CW complex whose n-skeleton Xn consists of a single point. The argument for n ≥ 2 is similar to the n = 1 case, only one replaces Tietze moves for the fundamental group presentation by elementary matrix operations for the presentation matrices for Hn(X; Z) coming from cellular homology; i.e., one can similarly realize elementary matrix operations by a sequence of additions/removals of cells or suitable homotopies of the attaching maps.
'The' homotopy category
The homotopy category of CW complexes is, in the opinion of some experts, the best if not the only candidate for the homotopy category (for technical reasons the version for pointed spaces is actually used). Auxiliary constructions that yield spaces that are not CW complexes must be used on occasion. One basic result is that the representable functors on the homotopy category have a simple characterisation (the Brown representability theorem).
See also
Abstract cell complex
The notion of CW complex has an adaptation to smooth manifolds called a handle decomposition, which is closely related to surgery theory.
References
Notes
General references
Algebraic topology
Homotopy theory
Topological spaces | CW complex | [
"Mathematics"
] | 3,024 | [
"Mathematical structures",
"Algebraic topology",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology"
] |
353,070 | https://en.wikipedia.org/wiki/Epiglottis | The epiglottis (: epiglottises or epiglottides) is a leaf-shaped flap in the throat that prevents food and water from entering the trachea and the lungs. It stays open during breathing, allowing air into the larynx. During swallowing, it closes to prevent aspiration of food into the lungs, forcing the swallowed liquids or food to go along the esophagus toward the stomach instead. It is thus the valve that diverts passage to either the trachea or the esophagus.
The epiglottis is made of elastic cartilage covered with a mucous membrane, attached to the entrance of the larynx. It projects upwards and backwards behind the tongue and the hyoid bone.
The epiglottis may be inflamed in a condition called epiglottitis, which is most commonly due to the vaccine-preventable bacterium Haemophilus influenzae. Dysfunction may cause the inhalation of food, called aspiration, which may lead to pneumonia or airway obstruction. The epiglottis is also an important landmark for intubation.
The epiglottis has been identified as early as Aristotle, and gets its name from being above the glottis (epi- + glottis).
Structure
The epiglottis sits at the entrance of the larynx. It is shaped like a leaf of purslane and has a free upper part that rests behind the tongue, and a lower stalk. The stalk originates from the back surface of the thyroid cartilage, connected by a thyroepiglottic ligament. At the sides, the stalk is connected to the arytenoid cartilages at the walls of the larynx by folds.
The epiglottis originates at the entrance of the larynx, and is attached to the hyoid bone. From there, it projects upwards and backwards behind the tongue. The space between the epiglottis and the tongue is called the vallecula.
Microanatomy
The epiglottis has two surfaces: a forward-facing surface and a surface facing the larynx. The forward-facing surface is covered with several layers of thin cells (stratified squamous epithelium) and is not covered with keratin, the same type of surface as the back of the tongue. The back surface is covered in a layer of column-shaped cells with cilia, similar to the rest of the respiratory tract. It also has mucus-secreting goblet cells. There is an intermediate zone between these surfaces that contains cells that transition in shape. The body of the epiglottis consists of elastic cartilage.
Development
The epiglottis arises from the fourth pharyngeal arch. It can be seen as a distinct structure later than the other cartilage of the pharynx, visible around the fifth month of development. The position of the epiglottis also changes with ageing. In infants, it touches the soft palate, whereas in adults, its position is lower.
Variation
A high-rising epiglottis is a normal anatomical variation, visible during an examination of the mouth. It does not cause any serious problem apart from maybe a mild sensation of a foreign body in the throat. It is seen more often in children than adults and does not need any medical or surgical intervention. The front surface of the epiglottis is occasionally notched.
Function
The epiglottis is normally pointed upward during breathing with its underside functioning as part of the pharynx. There are taste buds on the epiglottis.
Swallowing
During swallowing, the epiglottis bends backwards, folding over the entrance to the trachea, and preventing food from going into it. The folding backwards is a complex movement the causes of which are not completely understood. It is likely that during swallowing the hyoid bone and the larynx move upwards and forwards, which increases passive pressure from the back of the tongue; the aryepiglottic muscles contract; the passive weight of the food pushes down; and the laryngeal and thyroarytenoid muscles contract. The consequence of this is that during swallowing the bent epiglottis blocks off the trachea, preventing food from going into it; food instead travels down the esophagus, which is behind it.
Speech sounds
In many languages, the epiglottis is not essential for producing sounds. In some languages, the epiglottis is used to produce epiglottal consonant speech sounds, though this sound-type is rather rare.
Clinical significance
Inflammation
Inflammation of the epiglottis is known as epiglottitis. Epiglottitis is mainly caused by Haemophilus influenzae. A person with epiglottitis may have a fever, sore throat, difficulty swallowing, and difficulty breathing. For this reason, acute epiglottitis is considered a medical emergency, because of the risk of obstruction of the pharynx. Epiglottitis is often managed with antibiotics, inhaled aerosolised epinephrine to act as a bronchodilator, and may require tracheal intubation or a tracheostomy if breathing is difficult.
The incidence of epiglottitis has decreased significantly in countries where vaccination against Haemophilus influenzae is administered.
Aspiration
When food or other objects travel down the respiratory tract rather than down the esophagus to the stomach, this is called aspiration. This can lead to the obstruction of airways, inflammation of lung tissue, and aspiration pneumonia; and in the long term, atelectasis and bronchiectasis. One reason aspiration can occur is failure of the epiglottis to close completely.
If food or liquid enters the airway due to the epiglottis failing to close properly, throat-clearing or a cough reflex may occur to protect the respiratory system and expel material from the airway. Where there is impairment in laryngeal vestibule sensation, silent aspiration (entry of material to the airway that does not result in a cough reflex) may occur.
Other
The epiglottis and vallecula are important anatomical landmarks in intubation. Abnormal positioning of the epiglottis is a rare cause of obstructive sleep apnoea.
Other animals
The epiglottis is present in mammals, including land mammals and cetaceans, also as a cartilaginous structure. Like in humans, it functions to prevent entry of food into the trachea during swallowing. The position of the larynx is flat in mice and other rodents, as well as rabbits. For this reason, because the epiglottis is located behind the soft palate in rabbits, they are obligate nose breathers, as are mice and other rodents. In rodents and mice, there is a unique pouch in front of the epiglottis, and the epiglottis is commonly injured by inhaled substances, particularly at the transition zone between the flattened and cuboidal epithelium. It is also common to see taste buds on the epiglottis in these species.
History
The epiglottis was noted by Aristotle, although the epiglottis' function was first defined by Vesalius in 1543. The word has Greek roots. The epiglottis gets its name from being above (epi-) the glottis.
Additional images
See also
Epiglottal consonant
Epiglotto-pharyngeal consonant
Pharyngeal consonant
References
External links
Where is the Epiglottis? at Study Sciences
Digestive system
Larynx
Human throat | Epiglottis | [
"Biology"
] | 1,605 | [
"Digestive system",
"Organ systems"
] |
353,697 | https://en.wikipedia.org/wiki/Sonic%20hedgehog%20protein | Sonic hedgehog protein (SHH) is encoded for by the SHH gene. The protein is named after the video game character Sonic the Hedgehog.
This signaling molecule is key in regulating embryonic morphogenesis in all animals. SHH controls organogenesis and the organization of the central nervous system, limbs, digits and many other parts of the body. Sonic hedgehog is a morphogen that patterns the developing embryo using a concentration gradient characterized by the French flag model. This model has a non-uniform distribution of SHH molecules which governs different cell fates according to concentration. Mutations in this gene can cause holoprosencephaly, a failure of splitting in the cerebral hemispheres, as demonstrated in an experiment using SHH knock-out mice in which the forebrain midline failed to develop and instead only a single fused telencephalic vesicle resulted.
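The threshold logic of the French flag model can be sketched numerically: an assumed exponential morphogen profile is read out into three cell fates at two threshold concentrations. All numbers below (decay length, thresholds, fate labels) are illustrative assumptions, not measured values for SHH.

```python
import numpy as np

# Toy French-flag readout of a morphogen gradient. The gradient shape
# and both thresholds are illustrative, not measured values.

x = np.linspace(0, 100, 11)          # position, arbitrary units
conc = np.exp(-x / 30.0)             # C(x) = C0 * exp(-x / lambda), C0 = 1

def fate(c, high=0.6, low=0.25):     # assumed threshold concentrations
    if c >= high:
        return "fate-1 (near source)"
    if c >= low:
        return "fate-2"
    return "fate-3 (far from source)"

for xi, ci in zip(x, conc):
    print(f"x = {xi:5.1f}  C = {ci:.2f}  {fate(ci)}")
```

Running this prints three contiguous bands of fates along x, the one-dimensional analogue of the non-uniform SHH distribution governing different cell fates by concentration.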
Sonic hedgehog still plays a role in differentiation, proliferation, and maintenance of adult tissues. Abnormal activation of SHH signaling in adult tissues has been implicated in various types of cancers including breast, skin, brain, liver, gallbladder and many more.
Discovery and naming
The hedgehog gene (hh) was first identified in the fruit fly Drosophila melanogaster in the classic Heidelberg screens of Christiane Nüsslein-Volhard and Eric Wieschaus, as published in 1980. These screens, which led to the researchers winning a Nobel Prize in 1995 along with developmental geneticist Edward B. Lewis, identified genes that control the segmentation pattern of the Drosophila embryos. The hh loss of function mutant phenotype causes the embryos to be covered with denticles, i.e. small pointy projections resembling the spikes of a hedgehog. Investigations aimed at finding a hedgehog equivalent in vertebrates by Philip Ingham, Andrew P. McMahon and Clifford Tabin revealed three homologous genes.
Two of these genes, desert hedgehog and Indian hedgehog, were named for species of hedgehogs, while sonic hedgehog was named after the video game character Sonic the Hedgehog. The gene was named by Robert Riddle, a postdoctoral fellow at the Tabin Lab, after his wife Betsy Wilder came home with a magazine containing an advert for the first game in the series, Sonic the Hedgehog (1991). In the zebrafish, two of the three vertebrate hh genes are duplicated: SHH a and SHH b (formerly described as tiggywinkle hedgehog, named for Mrs. Tiggy-Winkle, a character from Beatrix Potter's books for children) and ihha and ihhb (formerly described as echidna hedgehog, named for the spiny anteater and not for the character Knuckles the Echidna in the Sonic franchise).
Function
Of the hh homologues, SHH has been found to have the most critical roles in development, acting as a morphogen involved in patterning many systems—including the anterior pituitary, pallium of the brain, spinal cord, lungs, teeth and the thalamus by the zona limitans intrathalamica. In vertebrates, the development of limbs and digits depends on the secretion of sonic hedgehog by the zone of polarizing activity, located on the posterior side of the embryonic limb bud. Mutations in the human sonic hedgehog gene SHH cause holoprosencephaly type 3 (HPE3) as a result of the loss of the ventral midline. The sonic hedgehog transcription pathway has also been linked to the formation of specific kinds of cancerous tumors, including medulloblastoma, an embryonic tumor of the cerebellum, as well as the progression of prostate cancer tumours. For SHH to be expressed in the developing limbs of the embryo, morphogens called fibroblast growth factors must be secreted from the apical ectodermal ridge.
Sonic hedgehog has also been shown to act as an axonal guidance cue. It has been demonstrated that SHH attracts commissural axons at the ventral midline of the developing spinal cord. Specifically, SHH attracts retinal ganglion cell (RGC) axons at low concentrations and repels them at higher concentrations. The absence (non-expression) of SHH has been shown to control the growth of nascent hind limbs in cetaceans (whales and dolphins).
The SHH gene is a member of the hedgehog gene family with five variations of DNA sequence alterations or splice variants. SHH is located on chromosome seven and initiates the production of the Sonic hedgehog protein. This protein sends short- and long-range signals to embryonic tissues to regulate development. If the SHH gene is mutated or absent, the Sonic hedgehog protein cannot do its job properly. Sonic hedgehog contributes to cell growth, cell specification, and the formation, structuring and organization of the body plan. This protein functions as a vital morphogenic signaling molecule and plays an important role in the formation of many different structures in developing embryos. The SHH gene affects several major organ systems, such as the nervous system, cardiovascular system, respiratory system and musculoskeletal system. Mutations in the SHH gene can cause malformation of components of these systems, which can result in major problems in the developing embryo. The brain and eyes, for example, can be significantly affected by mutations in this gene, causing disorders such as microphthalmia and holoprosencephaly. Microphthalmia is a condition that affects the eyes, resulting in small, underdeveloped tissues in one or both eyes. This can lead to issues ranging from a coloboma to a single small eye to the absence of eyes altogether. Holoprosencephaly is a condition most commonly caused by a mutation of the SHH gene, producing improper separation of the left and right hemispheres of the brain and facial dysmorphia. Many systems and structures rely heavily on proper expression of the SHH gene and the subsequent sonic hedgehog protein, earning it the distinction of being an essential gene for development.
Patterning of the central nervous system
The sonic hedgehog (SHH) signaling molecule assumes various roles in patterning the central nervous system (CNS) during vertebrate development. One of the most characterized functions of SHH is its role in the induction of the floor plate and diverse ventral cell types within the neural tube. The notochord—a structure derived from the axial mesoderm—produces SHH, which travels extracellularly to the ventral region of the neural tube and instructs those cells to form the floor plate. Another view of floor plate induction hypothesizes that some precursor cells located in the notochord are inserted into the neural plate before its formation, later giving rise to the floor plate.
The neural tube itself is the initial groundwork of the vertebrate CNS, and the floor plate is a specialized structure, located at the ventral midpoint of the neural tube. Evidence supporting the notochord as the signaling center comes from studies in which a second notochord is implanted near a neural tube in vivo, leading to the formation of an ectopic floor plate within the neural tube.
Sonic hedgehog is the secreted protein that mediates signaling activities of the notochord and floor plate. Studies involving ectopic expression of SHH in vitro and in vivo result in floor plate induction and differentiation of motor neurons and ventral interneurons. On the other hand, mouse mutants for SHH lack ventral spinal cord characteristics. In vitro blocking of SHH signaling using antibodies against it shows similar phenotypes. SHH exerts its effects in a concentration-dependent manner, so that a high concentration of SHH results in a local inhibition of cellular proliferation. This inhibition causes the floor plate to become thin compared to the lateral regions of the neural tube. Lower concentrations of SHH result in cellular proliferation and induction of various ventral neural cell types. Once the floor plate is established, cells residing in this region will subsequently express SHH themselves, generating a concentration gradient within the neural tube.
Although there is no direct evidence of an SHH gradient, there is indirect evidence via the visualization of Patched (Ptc) gene expression, which encodes the ligand-binding domain of the SHH receptor, throughout the ventral neural tube. In vitro studies show that incremental two- and threefold changes in SHH concentration give rise to motor neurons and the different interneuronal subtypes found in the ventral spinal cord. These incremental changes in vitro correspond to the distance of domains from the signaling tissue (notochord and floor plate), which subsequently differentiate into different neuronal subtypes as occurs in vivo. Graded SHH signaling is suggested to be mediated through the Gli family of proteins, which are vertebrate homologues of the Drosophila zinc-finger-containing transcription factor Cubitus interruptus (Ci). Ci is a crucial mediator of hedgehog (Hh) signaling in Drosophila. In vertebrates, three different Gli proteins are present, viz. Gli1, Gli2 and Gli3, which are expressed in the neural tube. Mouse mutants for Gli1 show normal spinal cord development, suggesting that it is dispensable for mediating SHH activity. However, Gli2 mutant mice show abnormalities in the ventral spinal cord, with severe defects in the floor plate and ventral-most interneurons (V3). Gli3 antagonizes SHH function in a dose-dependent manner, promoting dorsal neuronal subtypes. SHH mutant phenotypes can be rescued in a SHH/Gli3 double mutant. Gli proteins have a C-terminal activation domain and an N-terminal repressive domain.
SHH is suggested to promote the activation function of Gli2 and inhibit the repressive activity of Gli3. SHH also seems to promote the activation function of Gli3, but this activation is weak. The graded concentration of SHH gives rise to graded activity of Gli2 and Gli3, which promote ventral and dorsal neuronal subtypes in the ventral spinal cord. Evidence from Gli3 and SHH/Gli3 mutants shows that SHH primarily regulates the spatial restriction of progenitor domains rather than being inductive, as SHH/Gli3 mutants show intermixing of cell types.
SHH also induces other proteins with which it interacts, and these interactions can influence the sensitivity of a cell towards SHH. Hedgehog-interacting protein (HHIP) is induced by SHH, which in turn attenuates its signaling activity. Vitronectin is another protein that is induced by SHH; it acts as an obligate co-factor for SHH signaling in the neural tube.
There are five distinct progenitor domains in the ventral neural tube: V3 interneurons, motor neurons (MN), V2, V1, and V0 interneurons (in ventral to dorsal order). These different progenitor domains are established by "communication" between different classes of homeobox transcription factors. (See Trigeminal Nerve.) These transcription factors respond to SHH gradient concentration. Depending upon the nature of their interaction with SHH, they are classified into two groups—class I and class II—and are composed of members from the Pax, Nkx, Dbx and Irx families. Class I proteins are repressed at different thresholds of SHH delineating ventral boundaries of progenitor domains, while class II proteins are activated at different thresholds of SHH delineating the dorsal limit of domains. Selective cross-repressive interactions between class I and class II proteins give rise to five cardinal ventral neuronal subtypes.
It is important to note that SHH is not the only signaling molecule exerting an effect on the developing neural tube. Many other molecules, pathways and mechanisms are active (e.g., RA, FGF, BMP), and complex interactions between SHH and other molecules are possible. BMPs are suggested to play a critical role in determining the sensitivity of neural cells to SHH signaling. Evidence supporting this comes from studies using BMP inhibitors that ventralize the fate of neural plate cells for a given SHH concentration. On the other hand, mutation in BMP antagonists (e.g., noggin) produces severe defects in the ventral-most characteristics of the spinal cord, accompanied by ectopic expression of BMP in the ventral neural tube. Interactions of SHH with Fgf and RA have not yet been studied in molecular detail.
Morphogenetic activity
The concentration- and time-dependent, cell-fate-determining activity of SHH in the ventral neural tube makes it a prime example of a morphogen. In vertebrates, SHH signaling in the ventral portion of the neural tube is most notably responsible for the induction of floor plate cells and motor neurons. SHH emanates from the notochord and ventral floor plate of the developing neural tube to create a concentration gradient that spans the dorso-ventral axis and is antagonized by an inverse Wnt gradient, which specifies the dorsal spinal cord. Higher concentrations of the SHH ligand are found in the most ventral aspects of the neural tube and notochord, while lower concentrations are found in the more dorsal regions of the neural tube. The SHH concentration gradient has been visualized in the neural tube of mice engineered to express a SHH::GFP fusion protein to show this graded distribution of SHH during the time of ventral neural tube patterning.
It is thought that the SHH gradient works to elicit multiple different cell fates by a concentration- and time-dependent mechanism that induces a variety of transcription factors in the ventral progenitor cells. Each of the ventral progenitor domains expresses a highly individualized combination of transcription factors—Nkx2.2, Olig2, Nkx6.1, Nkx6.2, Dbx1, Dbx2, Irx3, Pax6, and Pax7—that is regulated by the SHH gradient. These transcription factors are induced sequentially along the SHH concentration gradient with respect to the amount and time of exposure to SHH ligand. As each population of progenitor cells responds to the different levels of SHH protein, they begin to express a unique combination of transcription factors that leads to neuronal cell fate differentiation. This SHH-induced differential gene expression creates sharp boundaries between the discrete domains of transcription factor expression, which ultimately patterns the ventral neural tube.
The spatial and temporal aspect of the progressive induction of genes and cell fates in the ventral neural tube is illustrated by the expression domains of two of the most well-characterized transcription factors, Olig2 and Nkx2.2. Early in development, the cells at the ventral midline have only been exposed to a low concentration of SHH for a relatively short time and express the transcription factor Olig2. The expression of Olig2 rapidly expands in a dorsal direction concomitantly with the continuous dorsal extension of the SHH gradient over time. However, as the morphogenetic front of SHH ligand moves and begins to grow more concentrated, cells that are exposed to higher levels of the ligand respond by switching off Olig2 and turning on Nkx2.2, creating a sharp boundary between the cells expressing the transcription factor Nkx2.2 ventral to the cells expressing Olig2. It is in this way that each of the domains of the six progenitor cell populations is thought to be successively patterned throughout the neural tube by the SHH concentration gradient. Mutual inhibition between pairs of transcription factors expressed in neighboring domains contributes to the development of sharp boundaries; however, in some cases, an inhibitory relationship has been found even between pairs of transcription factors from more distant domains. In particular, NKX2-2 expressed in the V3 domain is reported to inhibit IRX3 expressed in V2 and more dorsal domains, although V3 and V2 are separated by a further domain termed MN.
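As a caricature of this concentration-and-duration readout, the sketch below assigns an identity from cumulative SHH exposure. The thresholds and units are hypothetical placeholders; only the qualitative behavior (sustained high exposure converts Olig2-expressing cells to Nkx2.2) follows the text.
```python
# Toy model: a cell integrates SHH level over time; crossing successive
# cumulative-exposure thresholds switches its transcription-factor identity.
def progenitor_identity(shh_level: float, hours: float) -> str:
    cumulative = shh_level * hours          # simplistic exposure integral
    if cumulative > 12.0:                   # hypothetical switching threshold
        return "Nkx2.2"                     # ventral-most identity
    if cumulative > 4.0:                    # hypothetical threshold
        return "Olig2"                      # motor-neuron progenitor identity
    return "dorsal (Olig2/Nkx2.2 off)"

print(progenitor_identity(2.0, 8.0))  # Nkx2.2: high level, long exposure
print(progenitor_identity(1.0, 6.0))  # Olig2: lower cumulative exposure
```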
SHH expression in the frontonasal ectodermal zone (FEZ), a signaling center responsible for the patterned development of the upper jaw, regulates craniofacial development through the miR-199 family in the FEZ. Specifically, SHH-dependent signals from the brain regulate genes of the miR-199 family: downregulation of the miR-199 genes increases SHH expression, resulting in wider faces, while upregulation decreases SHH expression, resulting in narrower faces.
Tooth development
SHH plays an important role in organogenesis and, most importantly, craniofacial development. As a signaling molecule, SHH works primarily by diffusion along a concentration gradient, affecting cells in different ways. In early tooth development, SHH is released from the primary enamel knot—a signaling center—to provide positional information in both a lateral and planar signaling pattern and to regulate tooth cusp growth. SHH in particular is needed for growth of the epithelial cervical loops, where the outer and inner epithelia join and form a reservoir for dental stem cells. After the primary enamel knots are apoptosed, the secondary enamel knots are formed. The secondary enamel knots secrete SHH in combination with other signaling molecules to thicken the oral ectoderm and begin patterning the complex shapes of the crown of a tooth during differentiation and mineralization. In knockout gene models, absence of SHH is associated with holoprosencephaly. SHH acts through the downstream molecules Gli2 and Gli3; mutant Gli2 and Gli3 embryos have abnormal development of incisors that is arrested in early tooth development, as well as small molars.
Lung development
Although SHH is most commonly associated with brain and limb digit development, it is also important in lung development. Studies using qPCR and knockouts have demonstrated that SHH contributes to embryonic lung development. Mammalian lung branching occurs in the epithelium of the developing bronchi and lungs. SHH is expressed throughout the foregut endoderm (the innermost of the three germ layers) in the distal epithelium, where the embryonic lungs are developing. This suggests that SHH is partially responsible for the branching of the lungs. Further evidence of SHH's role in lung branching has been seen with qPCR: SHH expression occurs in the developing lungs around embryonic day 11 and is strong in the buds of the fetal lungs but low in the developing bronchi. Mice that are deficient in SHH can develop tracheoesophageal fistula (an abnormal connection of the esophagus and trachea). Additionally, a double (SHH−/−) knockout mouse model exhibited poor lung development: the lungs of the SHH double knockout failed to undergo lobation and branching (i.e., the abnormal lungs developed only one branch, compared to the extensively branched phenotype of the wildtype).
Potential regenerative function
Sonic hedgehog may play a role in mammalian hair cell regeneration. By modulating retinoblastoma (Rb) protein activity in rat cochlea, sonic hedgehog allows mature hair cells that normally cannot return to a proliferative state to divide and differentiate. Retinoblastoma proteins suppress cell growth by preventing cells from returning to the cell cycle, thereby preventing proliferation. Inhibiting the activity of Rb seems to allow cells to divide. Therefore, sonic hedgehog—identified as an important regulator of Rb—may also prove to be an important feature in regrowing hair cells after damage.
SHH is important for regulating dermal adipogenesis by hair follicle transit-amplifying cells (HF-TACs). Specifically, SHH induces dermal adipogenesis by acting directly on adipocyte precursors and promoting their proliferation through their expression of the peroxisome proliferator-activated receptor γ (Pparg) gene.
Processing
SHH undergoes a series of processing steps before it is secreted from the cell. Newly synthesised SHH weighs 45 kDa and is referred to as the preproprotein. As a secreted protein, it contains a short signal sequence at its N-terminus, which is recognised by the signal recognition particle during translocation into the endoplasmic reticulum (ER), the first step in protein secretion. Once translocation is complete, the signal sequence is removed by signal peptidase in the ER. There, SHH undergoes autoprocessing to generate a 20 kDa N-terminal signaling domain (SHH-N) and a 25 kDa C-terminal domain with no known signaling role. The cleavage is catalysed by a protease within the C-terminal domain. During the reaction, a cholesterol molecule is added to the C-terminus of SHH-N. Thus, the C-terminal domain acts as an intein and a cholesterol transferase. Another hydrophobic moiety, a palmitate, is added to the alpha-amine of the N-terminal cysteine of SHH-N. This modification is required for efficient signaling, resulting in a 30-fold increase in potency over the non-palmitoylated form, and is carried out by a member of the membrane-bound O-acyltransferase family, Protein-cysteine N-palmitoyltransferase (HHAT).
Robotnikinin
A potential inhibitor of the Hedgehog signaling pathway has been found and dubbed "Robotnikinin"—after Sonic the Hedgehog's nemesis and the main antagonist of the Sonic the Hedgehog game series, Dr. Ivo "Eggman" Robotnik.
Former controversy surrounding name
The gene has been linked to a condition known as holoprosencephaly, which can result in severe brain, skull and facial defects, causing a few clinicians and scientists to criticize the name on the grounds that it sounds too frivolous. It has been noted that mention of a mutation in a sonic hedgehog gene might not be well received in a discussion of a serious disorder with a patient or their family. This controversy has largely died down, and the name is now generally seen as a humorous relic of the time before the rise of fast, cheap complete genome sequencing and standardized nomenclature. The problem of the "inappropriateness" of the names of genes such as "Mothers against decapentaplegic," "Lunatic fringe," and "Sonic hedgehog" is largely avoided by using standardized abbreviations when speaking with patients and their families.
Gallery
See also
Pikachurin, a retinal protein named after Pikachu
Zbtb7, an oncogene which was originally named "Pokémon"
References
Further reading
External links
An introductory article on SHH at Davidson College
Rediscovering Biology, Unit 7: Genetics of Development – expert interview transcript with John Incardona, PhD, explaining the discovery and naming of the sonic hedgehog gene
'Sonic Hedgehog' sounded funny at first – The New York Times, November 12, 2006
GeneReviews/NCBI/NIH/UW entry on Anophthalmia / Microphthalmia Overview
SHH – sonic hedgehog US National Library of Medicine
Proteins
Morphogens
HINT domain
Cell signaling
Ligands (biochemistry)
Genes on human chromosome 7
Sonic the Hedgehog | Sonic hedgehog protein | [
"Chemistry",
"Biology"
] | 4,899 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Ligands (biochemistry)",
"Molecular biology",
"Proteins",
"Morphogens",
"Induced stem cells"
] |
353,805 | https://en.wikipedia.org/wiki/Glycogenolysis | Glycogenolysis is the breakdown of glycogen (n) to glucose-1-phosphate and glycogen (n-1). Glycogen branches are catabolized by the sequential removal of glucose monomers via phosphorolysis, by the enzyme glycogen phosphorylase.
Mechanism
In the muscles, glycogenolysis begins when cAMP binds to and activates cAMP-dependent protein kinase, which phosphorylates phosphorylase kinase, converting the latter to its active form so it can convert phosphorylase b to phosphorylase a, which is responsible for catalyzing the breakdown of glycogen.
The overall reaction for the breakdown of glycogen to glucose-1-phosphate is:
glycogen(n residues) + Pi → glycogen(n−1 residues) + glucose-1-phosphate
Here, glycogen phosphorylase cleaves the bond linking a terminal glucose residue to a glycogen branch by substitution of a phosphoryl group for the α[1→4] linkage.
Glucose-1-phosphate is converted to glucose-6-phosphate (which often ends up in glycolysis) by the enzyme phosphoglucomutase.
Glucose residues are phosphorolysed from branches of glycogen until four residues remain before a glucose that is branched with an α[1→6] linkage. Glycogen debranching enzyme then transfers three of the remaining four glucose units to the end of another glycogen branch. This exposes the α[1→6] branching point, which is hydrolysed by α[1→6] glucosidase, removing the final glucose residue of the branch as a molecule of glucose and eliminating the branch. This is the only case in which a glycogen metabolite is not glucose-1-phosphate. The glucose is subsequently phosphorylated to glucose-6-phosphate by hexokinase.
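The per-branch product tally implied by this mechanism can be checked with a short sketch. The branch length is a hypothetical input; the three transferred residues are counted as eventual glucose-1-phosphate, since they are phosphorolysed after transfer to another branch.
```python
# Count the products of degrading one glycogen branch, per the mechanism above.
def degrade_branch(residues_in_branch: int) -> dict:
    # Phosphorylase releases glucose-1-phosphate until 4 residues remain.
    g1p_direct = max(residues_in_branch - 4, 0)
    transferred = 3        # moved to another chain; later released as G1P there
    free_glucose = 1       # the alpha[1->6]-linked residue, hydrolysed
    return {
        "glucose-1-phosphate": g1p_direct + transferred,
        "free glucose": free_glucose,
    }

print(degrade_branch(12))  # {'glucose-1-phosphate': 11, 'free glucose': 1}
```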
Enzymes
Glycogen phosphorylase with Pyridoxal phosphate as prosthetic group
Alpha-1,4 → alpha-1,4 glucan transferase
Alpha-1,6-glucosidase
Phosphoglucomutase
Glucose-6-phosphatase (absent in muscles)
Function
Glycogenolysis takes place in the cells of the muscle and liver tissues in response to hormonal and neural signals. In particular, glycogenolysis plays an important role in the fight-or-flight response and the regulation of glucose levels in the blood.
In myocytes (muscle cells), glycogen degradation serves to provide an immediate source of glucose-6-phosphate for glycolysis, to provide energy for muscle contraction. Glucose-6-phosphate cannot pass through the cell membrane and is therefore used solely by the myocytes that produce it.
In hepatocytes (liver cells), the main purpose of the breakdown of glycogen is for the release of glucose into the bloodstream for uptake by other cells. The phosphate group of glucose-6-phosphate is removed by the enzyme glucose-6-phosphatase, which is not present in myocytes, and the free glucose exits the cell via GLUT2 facilitated diffusion channels in the hepatocyte cell membrane.
Regulation
Glycogenolysis is regulated hormonally in response to blood sugar levels by glucagon and insulin, and stimulated by epinephrine during the fight-or-flight response. Insulin potently inhibits glycogenolysis.
In myocytes, glycogen degradation may also be stimulated by neural signals; glycogenolysis is regulated by epinephrine and calcium released by the sarcoplasmic reticulum.
Glucagon has no effect on muscle glycogenolysis.
Calcium binds with calmodulin and the complex activates phosphorylase kinase.
Clinical significance
Parenteral (intravenous) administration of glucagon is a common human medical intervention in diabetic emergencies when sugar cannot be given orally. It can also be administered intramuscularly.
Pathology
See also
Glycogenesis
References
External links
The chemical logic of glycogen degradation at ufp.pt
Biochemical reactions
Carbohydrate metabolism
Diabetes
Hepatology | Glycogenolysis | [
"Chemistry",
"Biology"
] | 928 | [
"Carbohydrate metabolism",
"Biochemical reactions",
"Carbohydrate chemistry",
"Biochemistry",
"Metabolism"
] |
353,849 | https://en.wikipedia.org/wiki/Hydraulic%20mining | Hydraulic mining is a form of mining that uses high-pressure jets of water to dislodge rock material or move sediment. In the placer mining of gold or tin, the resulting water-sediment slurry is directed through sluice boxes to remove the gold. It is also used in mining kaolin and coal.
Hydraulic mining developed from ancient Roman techniques that used water to excavate soft underground deposits. Its modern form, using pressurized water jets produced by a nozzle called a "monitor", came about in the 1850s during the California Gold Rush in the United States. Though successful in extracting gold-rich minerals, the widespread use of the process resulted in extensive environmental damage, such as increased flooding and erosion, and sediment blocking waterways and covering farm fields. These problems led to its legal regulation. Hydraulic mining has been used in various forms around the world.
History
Ground sluicing
Hydraulic mining had its precursor in the practice of ground sluicing, a development of which is also known as "hushing", in which surface streams of water were diverted so as to erode gold-bearing gravels. This technique was developed in the first centuries BC and AD by Roman miners to erode away alluvium. The Romans used ground sluicing to remove overburden and the gold-bearing debris in Las Médulas of Spain, and Dolaucothi in Great Britain. The method was also used in Elizabethan England and Wales (and rarely, Scotland) for developing lead, tin and copper mines.
Water was used on a large scale by Roman engineers in the first centuries BC and AD when the Roman empire was expanding rapidly in Europe. Using a process later known as hushing, the Romans stored a large volume of water in a reservoir immediately above the area to be mined; the water was then quickly released. The resulting wave of water removed overburden and exposed bedrock. Gold veins in the bedrock were then worked using a number of techniques, and water power was used again to remove debris. The remains at Las Médulas and in surrounding areas show badland scenery on a gigantic scale owing to hydraulicking of the rich alluvial gold deposits.
Las Médulas is now a UNESCO World Heritage Site. The site shows the remains of at least seven large aqueducts of up to in length feeding large supplies of water into the site. The gold-mining operations were described in vivid terms by Pliny the Elder in his Natural History published in the first century AD. Pliny was a procurator in Hispania Terraconensis in the 70s AD and witnessed the operations himself. The use of hushing has been confirmed by field survey and archaeology at Dolaucothi in South Wales, the only known Roman gold mine in Great Britain.
California Gold Rush
The modern form of hydraulic mining, using jets of water directed under very high pressure through hoses and nozzles at gold-bearing upland paleogravels, was first used by Edward Matteson near Nevada City, California in 1853 during the California Gold Rush. Matteson used canvas hose which was later replaced with crinoline hose by the 1860s. In California, hydraulic mining often brought water from higher locations for long distances to holding ponds several hundred feet above the area to be mined. California hydraulic mining exploited gravel deposits, making it a form of placer mining.
Early placer miners in California discovered that the more gravel they could process, the more gold they were likely to find. Instead of working with pans, sluice boxes, long toms, and rockers, miners collaborated to find ways to process larger quantities of gravel more rapidly. Hydraulic mining became the largest-scale, and most devastating, form of placer mining. Water was redirected into an ever-narrowing channel, through a large canvas hose, and out through a giant iron nozzle, called a "monitor". The extremely high pressure stream was used to wash entire hillsides through enormous sluices.
By the early 1860s, while hydraulic mining was at its height, small-scale placer mining had largely exhausted the rich surface placers, and the mining industry turned to hard rock (called quartz mining in California) or hydraulic mining, which required larger organizations and much more capital. By the mid-1880s, it is estimated that 11 million ounces of gold (worth approximately US$7.5 billion at mid-2006 prices) had been recovered by hydraulic mining.
Environmental impacts
While generating millions of dollars in tax revenues for the state and supporting a large population of miners in the mountains, hydraulic mining had a devastating effect on riparian natural environment and agricultural systems in California. Millions of tons of earth and water were delivered to mountain streams that fed rivers flowing into the Sacramento Valley. Once the rivers reached the relatively flat valley, the water slowed, the rivers widened, and the sediment was deposited in the floodplains and river beds causing them to rise, shift to new channels, and overflow their banks, causing major flooding, especially during the spring melt.
Cities and towns in the Sacramento Valley experienced an increasing number of devastating floods, while the rising riverbeds made navigation on the rivers increasingly difficult. Perhaps no other city experienced the boon and the bane of gold mining as much as Marysville. Situated at the confluence of the Yuba and Feather rivers, Marysville was the final "jumping off" point for miners heading to the northern foothills to seek their fortune. Steamboats from San Francisco, carrying miners and supplies, navigated up the Sacramento River, then the Feather River to Marysville where they would unload their passengers and cargo.
Marysville eventually constructed a complex levee system to protect the city from floods and sediment. Hydraulic mining greatly exacerbated the problem of flooding in Marysville and shoaled the waters of the Feather River so severely that few steamboats could navigate from Sacramento to the Marysville docks. The sediment left by such efforts were reprocessed by mining dredges at the Yuba Goldfields, located near Marysville.
The spectacular eroded landscape left at the site of hydraulic mining can be viewed at Malakoff Diggins State Historic Park in Nevada County, California.
The San Francisco Bay became an outlet for polluting byproducts during the Gold Rush. Hydraulic mining left a trail of toxic waste, called "slickens," that flowed from mine sites in the Sierras through the Sacramento River and into the San Francisco Bay. The slickens contained harmful metals such as mercury. During this period, the industrial mining industry released 1.5 billion cubic yards of toxic slickens into the Sacramento River. As the slickens traveled through California's water arteries, they deposited their toxins into local ecosystems and waterways.
Nearby farmland became contaminated, which led to political pushback against the use of hydraulic mining. The slickens flowed through the Sacramento River before depositing itself into the San Francisco Bay. Currently, the San Francisco Bay remains dangerously contaminated with mercury. Estimates suggest that it will be another century before the Bay naturally removes the mercury from its system.
Legal action landmark case
Vast areas of farmland in the Sacramento Valley were deeply buried by the mining sediment. Frequently devastated by flood waters, farmers demanded an end to hydraulic mining. In the most renowned legal fight of farmers against miners, the farmers sued the hydraulic mining operations and the landmark case of Woodruff v. North Bloomfield Mining and Gravel Company made its way to the United States District Court in San Francisco where Judge Lorenzo Sawyer decided in favor of the farmers and limited hydraulic mining on January 7, 1884, declaring that hydraulic mining was "a public and private nuisance" and enjoining its operation in areas tributary to navigable streams and rivers.
Hydraulic mining on a much smaller scale was recommenced after 1893, when the United States Congress passed the Caminetti Act, which allowed licensed mining operations if sediment retention structures were constructed. This led to a number of operations above sediment-catching brush dams and log crib dams. Most of the water-delivery hydraulic mining infrastructure had been destroyed by an 1891 flood, so this later stage of mining was carried on at a much smaller scale in California.
Beyond California
Although often associated with California due to its adoption and widespread use there, the technology was exported widely, to Oregon (Jacksonville in 1856), Colorado (Clear Creek, Central City and Breckenridge in 1860), Montana (Bannack in 1865), Arizona (Lynx Creek in 1868), Idaho (Idaho City in 1863), South Dakota (Deadwood in 1876), Alaska (Fairbanks in 1920), British Columbia (Canada), and overseas. It was used extensively in Dahlonega, Georgia and continues to be used in developing nations, often with devastating environmental consequences. The devastation caused by this method of mining caused Edwin Carter, the "Log Cabin Naturalist", to switch from mining to collecting wildlife specimens from 1875–1900 in Breckenridge, Colorado, US.
Hydraulic mining was used during the Australian gold rushes where it was called hydraulic sluicing. One notable location was at the Oriental Claims near Omeo in Victoria where it was used between the 1850s and early 1900s, with abundant evidence of the damage still being visible today.
Hydraulic mining was used extensively in the Central Otago gold rush that took place in the 1860s in the South Island of New Zealand, where it was also known as sluicing.
Starting in the 1870s, hydraulic mining became a mainstay of alluvial tin mining on the Malay Peninsula.
Hydraulicking was formerly used in Polk County, Florida to mine phosphate rock.
Contemporary usage
In addition to its use in true mining, hydraulic mining can be used as an excavation technique, principally to demolish hills. For example, the Denny Regrade in Seattle was largely accomplished by hydraulic mining.
Hydraulic mining is the principal way that kaolinite clay is mined in Cornwall and Devon, in South-West England.
Egypt used hydraulic mining methods to breach the Bar Lev Line sand wall at the Suez Canal, in Operation Badr (1973) which opened the Yom Kippur War.
Rand gold fields
On the South African Rand gold fields, a gold surface tailings re-treatment facility called East Rand Gold and Uranium Company (ERGO) has been in operation since 1977. The facility uses hydraulic monitors to create slurry from older (and consequently richer) tailings sites and pumps it long distances to a concentration plant.
The facility processes nearly two million tons of tailings each month at a processing cost of below US$3.00/t (2013). Gold is recovered at a rate of only 0.20 g/t, but the low yield is compensated for by the extremely low cost of processing, with no risky or expensive mining or milling required for recovery.
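A back-of-envelope check of these figures (using a hypothetical gold price, which is not given in the text) shows why the low grade remains economic:
```python
# Back-of-envelope economics for the ERGO tailings re-treatment figures.
tonnes_per_month = 2_000_000      # ~2 million tons processed monthly
recovery_g_per_t = 0.20           # recovered gold grade, g/t
cost_usd_per_t = 3.00             # processing cost ceiling, USD/t (2013)
gold_usd_per_g = 45.0             # hypothetical gold price, USD/g (assumed)

gold_kg = tonnes_per_month * recovery_g_per_t / 1000.0
cost_m = tonnes_per_month * cost_usd_per_t / 1e6
revenue_m = tonnes_per_month * recovery_g_per_t * gold_usd_per_g / 1e6
print(f"{gold_kg:.0f} kg Au/month; cost ${cost_m:.0f}M vs revenue ${revenue_m:.0f}M")
# -> 400 kg Au/month; cost $6M vs revenue $18M
```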
The resulting slimes are pumped further away from the built-up areas permitting the economic development of land close to commercially valuable areas and previously covered by the tailings. The historic yellow-coloured mine dumps around Johannesburg are now almost a rarity, seen only in older photographs.
Uranium and pyrite (for sulfuric acid production) are also available for recovery from the process stream as co-products under suitable economic conditions.
Underground hydraulic mining
High-pressure water jets have also been used in the underground mining of coal, to break up the coal seam and wash the resulting coal slurry toward a collection point. The high-pressure water nozzle is referred to as the 'hydro monitor'.
See also
Hydrology
Hydropower
Hydraulic fracturing, use of high-pressure water in oil and gas drilling
Pressure washer, similar use of high-pressure jets of water
Water jet cutter, similar use of high-pressure jets of water
Cigar Lake Mine uses a similar method of high-pressure water to mine uranium
Borehole mining, remotely operated with similar use of high-pressure water jets.
References
Hydraulic Mining in California: A Tarnished Legacy, by Powell Greenland, 2001
Battling the Inland Sea: American Political Culture, Public Policy, and the Sacramento Valley: 1850-1986., U.Calif Press; 395pp.
Gold vs. Grain: The Hydraulic Mining Controversy in California's Sacramento Valley, by Robert L. Kelley, 1959
Lewis, P. R. and G. D. B. Jones, Roman gold-mining in north-west Spain, Journal of Roman Studies 60 (1970): 169–85
California Gold Rush
History of mining
Hydraulic engineering
Surface mining | Hydraulic mining | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,512 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
353,890 | https://en.wikipedia.org/wiki/Universality%20class | In statistical mechanics, a universality class is a collection of mathematical models which share a single scale-invariant limit under the process of renormalization group flow. While the models within a class may differ dramatically at finite scales, their behavior will become increasingly similar as the limit scale is approached. In particular, asymptotic phenomena such as critical exponents will be the same for all models in the class.
Some well-studied universality classes are the ones containing the Ising model or the percolation theory at their respective phase transition points; these are both families of classes, one for each lattice dimension. Typically, a family of universality classes will have a lower and upper critical dimension: below the lower critical dimension, the universality class becomes degenerate (this dimension is 2d for the Ising model, or for directed percolation, but 1d for undirected percolation), and above the upper critical dimension the critical exponents stabilize and can be calculated by an analog of mean-field theory (this dimension is 4d for Ising or for directed percolation, and 6d for undirected percolation).
List of critical exponents
Critical exponents are defined in terms of the variation of certain physical properties of the system near its phase transition point. These physical properties will include its reduced temperature τ, its order parameter ψ measuring how much of the system is in the "ordered" phase, the specific heat, and so on.
The exponent α is the exponent relating the specific heat C to the reduced temperature: we have C ∝ τ^(−α). The specific heat will usually be singular at the critical point, but the minus sign in the definition of α allows it to remain positive.
The exponent β relates the order parameter to the temperature. Unlike most critical exponents it is assumed positive, since the order parameter will usually be zero at the critical point. So we have ψ ∝ (−τ)^β.
The exponent γ relates the temperature with the system's response to an external driving force, or source field. We have ∂ψ/∂J ∝ τ^(−γ), with J the driving force.
The exponent δ relates the order parameter to the source field at the critical temperature, where this relationship becomes nonlinear. We have J ∝ ψ^δ (hence ψ ∝ J^(1/δ)), with the same meanings as before.
The exponent ν relates the size of correlations (i.e. patches of the ordered phase) to the temperature; away from the critical point these are characterized by a correlation length ξ. We have ξ ∝ τ^(−ν).
The exponent η measures the size of correlations at the critical temperature. It is defined so that the correlation function scales as r^(−(d−2+η)), with d the lattice dimension.
The exponent σ, used in percolation theory, measures the size of the largest clusters (roughly, the largest ordered blocks) at 'temperatures' (connection probabilities) below the critical point. So s_max ∝ |p − p_c|^(−1/σ).
The exponent τ, also from percolation theory, measures the number of clusters of size s far from p_c (or the number of clusters at criticality): n_s ∝ s^(−τ) e^(−s/s_max), with the exponential factor removed at the critical probability.
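Since every model in a class shares these exponents, the exponents are further tied together by scaling relations. As a worked check, the exactly known exponents of the two-dimensional Ising universality class satisfy the Rushbrooke, Widom, Fisher and hyperscaling identities:
```python
# Verify standard scaling relations with the exact 2D Ising exponents.
from fractions import Fraction as F

d = 2                                        # lattice dimension
alpha, beta, gamma = F(0), F(1, 8), F(7, 4)  # specific heat, order param., response
delta, nu, eta = F(15), F(1), F(1, 4)        # critical isotherm, corr. length, corr. decay

assert alpha + 2 * beta + gamma == 2   # Rushbrooke identity
assert gamma == beta * (delta - 1)     # Widom identity
assert gamma == nu * (2 - eta)         # Fisher identity
assert 2 - alpha == d * nu             # hyperscaling relation
print("2D Ising exponents satisfy all four scaling relations")
```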
For symmetries, the group listed gives the symmetry of the order parameter. The group D_n is the dihedral group, the symmetry group of the n-gon, S_n is the n-element symmetric group, O_h is the octahedral group, and O(n) is the orthogonal group in n dimensions. 1 is the trivial group.
References
External links
Universality classes from Sklogwiki
Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002),
Critical phenomena
Renormalization group
Scale-invariant systems | Universality class | [
"Physics",
"Materials_science",
"Mathematics"
] | 714 | [
"Scaling symmetries",
"Physical phenomena",
"Critical phenomena",
"Renormalization group",
"Scale-invariant systems",
"Condensed matter physics",
"Statistical mechanics",
"Symmetry",
"Dynamical systems"
] |
354,320 | https://en.wikipedia.org/wiki/Institution%20of%20Civil%20Engineers | The Institution of Civil Engineers (ICE) is an independent professional association for civil engineers and a charitable body in the United Kingdom. Based in London, ICE has over 92,000 members, of whom three-quarters are located in the UK, while the rest are located in more than 150 other countries. The ICE aims to support the civil engineering profession by offering professional qualification, promoting education, maintaining professional ethics, and liaising with industry, academia and government. Under its commercial arm, it delivers training, recruitment, publishing and contract services. As a professional body, ICE aims to support and promote professional learning (both to students and existing practitioners), managing professional ethics and safeguarding the status of engineers, and representing the interests of the profession in dealings with government, etc. It sets standards for membership of the body; works with industry and academia to progress engineering standards and advises on education and training curricula.
History
The late 18th century and early 19th century saw the founding of many learned societies and professional bodies (for example, the Royal Society and the Law Society). Groups calling themselves civil engineers had been meeting for some years from the late 18th century, notably the Society of Civil Engineers formed in 1771 by John Smeaton (renamed the Smeatonian Society after his death). At that time, formal engineering in Britain was limited to the military engineers of the Corps of Royal Engineers, and in the spirit of self-help prevalent at the time and to provide a focus for the fledgling 'civilian engineers', the Institution of Civil Engineers was founded as the world's first professional engineering body.
The initiative to found the Institution was taken in 1818 by eight young engineers, Henry Robinson Palmer (23), William Maudslay (23), Thomas Maudslay (26), James Jones (28), Charles Collinge (26), John Lethbridge, James Ashwell (19) and Joshua Field (32), who held an inaugural meeting on 2 January 1818, at the Kendal Coffee House in Fleet Street. The institution made little headway until a key step was taken – the appointment of Thomas Telford as the first President of the body. Greatly respected within the profession and blessed with numerous contacts across the industry and in government circles, he was instrumental in drumming up membership and getting a Royal Charter for ICE in 1828. This official recognition helped establish ICE as the pre-eminent organisation for engineers of all disciplines.
Early definitions of a Civil Engineer can be found in the discussions held on 2 January 1818 and in the application for Royal Chartership. In 1818 Palmer said that:
The objects of such institution, as recited in the charter, and reported in The Times, were
After Telford's death in 1834, the organisation moved into premises in Great George Street in the heart of Westminster in 1839, and began to publish learned papers on engineering topics. Its members, notably William Cubitt, were also prominent in the organisation of the Great Exhibition of 1851.
For 29 years ICE provided the forum for engineers practising in all the disciplines recognised today. Mechanical engineer and tool-maker Henry Maudslay was an early member and Joseph Whitworth presented one of the earliest papers – it was not until 1847 that the Institution of Mechanical Engineers was established (with George Stephenson as its first President).
By the end of the 19th century, ICE had introduced examinations for professional engineering qualifications to help ensure and maintain high standards among its members – a role it continues today.
The ICE's Great George Street headquarters, designed by James Miller, was built by John Mowlem & Co and completed in 1911.
Membership and professional qualification
The institution is a membership organisation comprising 95,460 members worldwide (as of 31 December 2022); around three-quarters are located in the United Kingdom. Membership grades include:
Student
Graduate (GMICE)
Associate (AMICE)
Technician (MICE)
Member (MICE)
Fellow (FICE)
ICE is a licensed body of the Engineering Council and can award the Chartered Engineer (CEng), Incorporated Engineer (IEng) and Engineering Technician (EngTech) professional qualifications. Members who are Chartered Engineers can use the protected title Chartered Civil Engineer.
ICE is also licensed by the Society for the Environment to award the Chartered Environmentalist (CEnv) professional qualification.
Publishing
The Institution of Civil Engineers also publishes technical studies covering research and best practice in civil engineering. Under its commercial arm, Thomas Telford Ltd, it delivers training, recruitment, publishing and contract services, such as the NEC Engineering and Construction Contract. All the profits of Thomas Telford Ltd go back to the Institution to further its stated aim of putting civil engineers at the heart of society. The publishing division has existed since 1836 and is today called ICE Publishing. ICE Publishing produces roughly 30 books a year, including the ICE Manuals series, and 30 civil engineering journals, including the ICE Proceedings in nineteen parts, Géotechnique, and the Magazine of Concrete Research. The ICE Science series is now also published by ICE Publishing. ICE Science currently consists of five journals: Nanomaterials and Energy, Emerging Materials Research, Bioinspired, Biomimetic and Nanobiomaterials, Green Materials and Surface Innovations.
Nineteen individual parts now make up the Proceedings, as follows:
Proceedings of the Institution of Civil Engineers: Bridge Engineering
Proceedings of the Institution of Civil Engineers: Civil Engineering
Proceedings of the Institution of Civil Engineers: Construction Materials
Proceedings of the Institution of Civil Engineers: Energy
Proceedings of the Institution of Civil Engineers: Engineering and Computational Mechanics
Proceedings of the Institution of Civil Engineers: Engineering History and Heritage
Proceedings of the Institution of Civil Engineers: Engineering Sustainability
Proceedings of the Institution of Civil Engineers: Forensic Engineering
Proceedings of the Institution of Civil Engineers: Geotechnical Engineering
Proceedings of the Institution of Civil Engineers: Ground Improvement
Proceedings of the Institution of Civil Engineers: Management, Procurement and Law
Proceedings of the Institution of Civil Engineers: Maritime Engineering
Proceedings of the Institution of Civil Engineers: Municipal Engineer
Proceedings of the Institution of Civil Engineers: Smart Infrastructure and Construction
Proceedings of the Institution of Civil Engineers: Structures and Buildings
Proceedings of the Institution of Civil Engineers: Transport
Proceedings of the Institution of Civil Engineers: Urban Design and Planning
Proceedings of the Institution of Civil Engineers: Waste and Resource Management
Proceedings of the Institution of Civil Engineers: Water Management
ICE members, except for students, also receive the New Civil Engineer magazine (published weekly from 1995 to 2017 by Emap, now published monthly by Metropolis International).
Specialist Knowledge Societies
The ICE also administers 15 Specialist Knowledge Societies created at different times to support special interest groups within the civil engineering industry, some of which are British sections of international and/or European bodies. The societies provide continuing professional development and assist in the transfer of knowledge concerning specialist areas of engineering.
The Specialist Knowledge Societies are:
Governance
The institution is governed by the ICE Trustee Board, comprising the President, three Vice Presidents, four members elected from the membership, three ICE Council members, and one nominated member. The President is the public face of the institution and day-to-day management is the responsibility of the Director General.
President
The ICE President is elected annually and the holder for 2024–2025 is Jim Hall.
Each year a number of young engineers have been chosen as President's apprentices. The scheme was started in 2005 during the presidency of Gordon Masterton, who also initiated a President's blog, now the ICE Infrastructure blog. Each incoming President sets out the main theme of his or her year of office in a Presidential Address.
Many of the profession's greatest engineers have served as President of the ICE including:
One of Britain's greatest engineers, Isambard Kingdom Brunel died before he could take up the post (he was vice-president from 1850).
Female civil engineers
The first woman member of ICE was Dorothy Donaldson Buchanan in 1927. The first female Fellows elected were Molly Fergusson (1957), Marie Lindley (1972), Helen Stone (1991) and Joanna Kennedy (1992). In January 2025, 30-year-old Costain engineer Georgia Thompson became the youngest woman to be elected a Fellow of the ICE.
The three female Presidents (to date) are Jean Venables, who became the 144th holder of the office in 2008, Rachel Skinner, who became President in 2020, and Anusha Shah, the President in 2023.
In January 1969 the Council of the Institution set up a working party to consider the role of women in engineering. Among its conclusions were that 'while women have certainly established their competence throughout the professional engineering field, there is clearly a built-in or unconscious prejudice against them'. The WISE Campaign (Women into Science and Engineering) was launched in 1984; by 1992 3% of the total ICE membership of 79,000 was female, and only 0.8% of chartered civil engineers were women. By 2016 women comprised nearly 12% of total membership, almost 7% of chartered civil engineers and just over 2% of Fellows. In June 2015 a Presidential Commission on diversity was announced. By the start of 2023 women made up 16% of overall membership, with female fellows comprising 6% of the fellowship.
Awards
The Institution makes various awards to recognise the work of its members. In addition to awards for technical papers, reports and competition entries it awards medals for different achievements.
Gold Medal – The Gold Medal is awarded to an individual who has made valuable contributions to civil engineering over many years. This may cover contributions in one or more areas, such as, design, research, development, investigation, construction, management (including project management), education and training.
Garth Watson Medal – The Garth Watson Medal is awarded for dedicated and valuable service to ICE by an ICE Member or member of staff.
Brunel Medal – The Brunel Medal is awarded to teams, individuals or organisations operating within the built environment and recognises excellence in civil engineering.
Edmund Hambly Medal – The Edmund Hambly Medal awarded for creative design in an engineering project that makes a substantial contribution to sustainable development. It is awarded to projects, of any scale, which take into account such factors as full life-cycle effects, including de-commissioning, and show an understanding of the implications of infrastructure impact upon the environment. The medal is awarded in honour of past president Edmund Hambly who was a proponent of sustainable engineering.
International Medal – The International Medal is awarded annually to a civil engineer who has made an outstanding contribution to civil engineering outside the United Kingdom or an engineer who resides outside the United Kingdom.
Warren Medal – The Warren Medal is awarded annually to an ICE member in recognition of valuable services to his or her region.
Telford Medal – Telford Medal is the highest prize that can be awarded by the ICE for a paper.
James Alfred Ewing Medal – The James Alfred Ewing Medal is awarded by the council on the joint nomination of the president and the President of the Royal Society.
James Forrest Medal – The James Forrest Medal was established in honour of James Forrest upon his retirement as secretary in 1896.
Baker Medal – The Baker Medal was established in 1934 to recognise papers that promote or cover developments in engineering practice, or investigation into problems with which Sir Benjamin Baker was specially identified.
Jean Venables Medal – Since 2011, the Institution has awarded a Jean Venables Medal to its best Technician Professional Review candidate.
President's Medal
Emerging Engineer Award
James Rennie Medal – For the best Chartered Professional Review candidate of the year. Named after James Rennie, a civil engineer noted for his devotion to the training of new engineers.
Renée Redfern Hunt Memorial Prize – For the best chartered or member professional review written exercise of the year. Named for an ICE staff member who served as examinations officer from 1945 to 1981.
Tony Chapman Medal – For the best member professional review candidate of the year. Named after an ICE council member who played a key role in the integration of the Board of Incorporated Engineers and Technicians into the institution and in promoting incorporated engineer status.
Chris Binnie Award for Sustainable Water Management
The Bev Waugh Award – Since 2021, for productivity and culture, recognises a leader or individual who has had a positive impact on joint team working
Adrian Long Medal
Student chapters
The ICE has student chapters in several countries including Hong Kong, India, Indonesia, Malaysia, Malta, Pakistan, Poland, Sudan, Trinidad, and United Arab Emirates.
Arms
See also
Chartered Institution of Civil Engineering Surveyors
Construction Industry Council
References
Charles Matthew Norrie (1956). Bridging the Years – a short history of British Civil Engineering. Edward Arnold (Publishers) Ltd.
Garth Watson (1988). The Civils – The story of the Institution of Civil Engineers. Thomas Telford Ltd
Hugh Ferguson and Mike Chrimes (2011). The Civil Engineers – The story of the Institution of Civil Engineers and the People Who Made It. Thomas Telford Ltd
External links
Royal Charter and other documentation for governance of ICE
ICE Royal Charter, By-laws and Regulations
ICE Publishing website
ICE Science website (archived 11 April 2013)
Civil engineering professional associations
ECUK Licensed Members
Organisations based in the City of Westminster
Organizations established in 1818
1818 establishments in the United Kingdom | Institution of Civil Engineers | [
"Engineering"
] | 2,643 | [
"Civil engineering professional associations",
"Civil engineering organizations"
] |
5,663,769 | https://en.wikipedia.org/wiki/Enormous%20Toroidal%20Plasma%20Device | The Enormous Toroidal Plasma Device (ETPD) is an experimental physics device housed at the Basic Plasma Science Facility at University of California, Los Angeles (UCLA). It previously operated as the Electric Tokamak (ET) between 1999 and 2006 and was noted for being the world's largest tokamak before being decommissioned due to the lack of support and funding. The machine was renamed to ETPD in 2009. At present, the machine is undergoing upgrades to be re-purposed into a general laboratory for experimental plasma physics research.
As the Electric Tokamak
The Electric Tokamak (ET) was the last of a series of small tokamak machines built in 1998 under the direction of principal investigator and designer, Robert Taylor, a UCLA professor. The machine was designed to be a low field (0.25 T) magnetic confinement fusion device with a large aspect ratio. It is composed of 16 vacuum chambers made of 1-inch thick steel, with a major radius of 5 meters and a minor radius of 1 meter. The ET was the largest tokamak ever built at its time, with a vacuum vessel slightly bigger than that of the Joint European Torus.
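For scale, the quoted radii already fix the machine's basic geometry; a minimal sketch, assuming an ideal circular-cross-section torus for the volume estimate:
```python
# Geometric figures implied by the ET's quoted major/minor radii.
import math

R, a = 5.0, 1.0                       # major and minor radius, metres
aspect_ratio = R / a                  # 5.0 -- the "large aspect ratio"
volume = 2 * math.pi**2 * R * a**2    # torus volume: 2*pi^2*R*a^2
print(f"aspect ratio = {aspect_ratio:.1f}, torus volume ~ {volume:.0f} m^3")  # ~99 m^3
```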
The first plasma was achieved in January 1999. The ET is capable of producing a plasma current of 45 kiloamperes and can produce a core electron plasma temperature of 300 eV.
Four sets of independent coils are necessary for OH (ohmic heating) current drive, vertical equilibrium field, plasma elongation and plasma shaping (D or reverse-D). The OH system provides 10 V·s using a 10 kA power supply. Up to 0.1 T of vertical field can be applied for horizontal control and this is more than sufficient for all plasma configurations, including high beta. An additional set of coils provide a small horizontal field to correct for error field and to stabilize the plasma vertically. All the coils are located outside the vessel and are constructed out of aluminium.
A Rogowski probe outside the vessel and sets of Hall probes inside the vessel are used to monitor plasma current, position and shaping and are used in the control feedback loop. The poloidal system was designed using an in-house equilibrium code as well as a variety of other codes in order to cross-check computations and to assess the stability of the resulting plasma.
Like most tokamaks, the machine uses a combination of RF heating and neutral beam injection to drive and shape the plasma.
Decommission in 2006
In 2006, the ET ran out of funding and was decommissioned following the retirement of Taylor. The loss of funding has been attributed to the lack of extensive plasma diagnostics, the machine's large size, and its place in the politics of fusion. When it was operating, the ET was funded mostly by the Department of Energy (DOE).
As the Enormous Toroidal Plasma Device
In 2009, the Electric Tokamak (ET) was renamed to the Enormous Toroidal Plasma Device (ETPD) and was re-purposed for basic plasma research. A lanthanum hexaboride (LaB6) plasma source was developed for the ETPD (similar to the one used in the Large Plasma Device), and is capable of producing a long column of magnetized plasma (~100 m) that winds itself multiple times along the toroidal axis of the machine. The plasma column was shown to be current-free and terminates on the neutral gas within the chamber without touching the machine walls.
The typical operational parameters of the ETPD are:
Density: n ≤ 3 × 10¹³ cm⁻³
Electron temperature: 5 eV < Tₑ < 30 eV
Ion temperature: 1 eV < Tᵢ < 16 eV
Background field: B = 250 gauss (25 mT)
Plasma beta: β ~ 1
The ETPD is currently in the process of being upgraded (i.e. larger sources, better diagnostic capabilities) to support a wide range of plasma physics experiments.
See also
Large Plasma Device, a linear plasma device housed in the same facility as the ETPD
References
External links
"Installation and Initial Testing of the Electric Tokamak Folded Waveguide" with photos.
UCLA Electric Tokamak Homepage
UCLA Tokamak Research
Plasma physics facilities
Tokamaks
University of California, Los Angeles buildings and structures | Enormous Toroidal Plasma Device | ["Physics"] | 859 | ["Plasma physics facilities", "Plasma physics"] |
5,664,494 | https://en.wikipedia.org/wiki/Neutral-beam%20injection | Neutral-beam injection (NBI) is one method used to heat plasma inside a fusion device, consisting of a beam of high-energy neutral particles that can enter the magnetic confinement field. When these neutral particles are ionized by collision with the plasma particles, they are kept in the plasma by the confining magnetic field and can transfer most of their energy by further collisions with the plasma. By tangential injection in the torus, neutral beams also provide momentum to the plasma and current drive, an essential feature for long pulses of burning plasmas. Neutral-beam injection is a flexible and reliable technique, which has been the main heating system on a large variety of fusion devices. For many years, all NBI systems were based on positive-ion precursor beams. In the 1990s there was impressive progress in negative ion sources and accelerators with the construction of multi-megawatt negative-ion-based NBI systems at LHD (H0, 180 keV) and JT-60U (D0, 500 keV). The NBI designed for ITER is a substantial challenge (D0, 1 MeV, 40 A) and a prototype is being constructed to optimize its performance in view of ITER's future operations. Other ways to heat plasma for nuclear fusion include RF heating, electron cyclotron resonance heating (ECRH), ion cyclotron resonance heating (ICRH), and lower hybrid resonance heating (LH).
Mechanism
A neutral beam is typically produced by:
Making a plasma. This can be done by microwaving a low-pressure gas.
Electrostatic ion acceleration. This is done by letting the positively charged ions fall toward negatively charged plates. As the ions fall, the electric field does work on them, accelerating them to fusion-relevant energies.
Reneutralizing the accelerated ions by adding back the opposite charge. This gives a fast-moving beam with no net charge.
Injecting the fast-moving hot neutral beam in the machine.
It is critical to inject neutral material into plasma, because if it is charged, it can start harmful plasma instabilities. Most fusion devices inject isotopes of hydrogen, such as pure deuterium or a mix of deuterium and tritium. This material becomes part of the fusion plasma. It also transfers its energy into the existing plasma within the machine. This hot stream of material should raise the overall temperature. Although the beam has no electrostatic charge when it enters, as it passes through the plasma the atoms are ionized. This happens because the beam collides with ions already in the plasma.
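To give a feel for the numbers involved, the sketch below computes the kinetic energy and speed a singly charged deuterium ion gains falling through an accelerating potential; the 1 MV value is an illustrative, ITER-scale assumption, not a quoted machine parameter. After re-neutralisation the atom keeps this kinetic energy.

import math

# Illustrative only: energy and speed of a deuterium ion accelerated
# through V volts; E = qV for a singly charged ion.
e = 1.602e-19      # elementary charge, C
m_D = 3.344e-27    # deuterium mass, kg
V_acc = 1.0e6      # accelerating voltage, V (assumed, ITER-like scale)

E_joule = e * V_acc                 # kinetic energy (here 1 MeV)
v = math.sqrt(2 * E_joule / m_D)    # non-relativistic speed
print(f"1 MeV deuterium atom: v = {v:.2e} m/s (~{v / 2.998e8:.1%} of c)")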
Neutral-beam injectors installed in fusion experiments
At present, all main fusion experiments use NBIs. Traditional positive-ion-based injectors (P-NBI) are installed for instance in JET and in AUG. To allow power deposition in the center of the burning plasma in larger devices, a higher neutral-beam energy is required. High-energy (>100 keV) systems require the use of negative ion technology (N-NBI).
Coupling with fusion plasma
Because the magnetic field inside the torus is circular, these fast ions are confined to the background plasma. The confined fast ions mentioned above are slowed down by the background plasma, in a similar way to how air resistance slows down a baseball. The energy transfer from the fast ions to the plasma increases the overall plasma temperature.
It is very important that the fast ions are confined within the plasma long enough for them to deposit their energy. Magnetic fluctuations are a big problem for plasma confinement in this type of device (see plasma stability) by scrambling what were initially well-ordered magnetic fields. If the fast ions are susceptible to this type of behavior, they can escape very quickly. However, some evidence suggests that they are not susceptible.
The interaction of fast neutrals with the plasma consists of:
ionisation by collision with plasma electrons and ions,
drift of newly created fast ions in the magnetic field,
collisions of fast ions with plasma ions and electrons by Coulomb collisions (slow-down and scattering, thermalisation) or charge exchange collisions with background neutrals.
Design of neutral beam systems
Beam energy
The adsorption length for neutral-beam ionization in a plasma is roughly
λ ≈ 0.05 · E / (n · M)
with λ in m, particle density n in 10¹⁹ m⁻³, atomic mass M in amu, and particle energy E in keV. Depending on the plasma minor diameter and density, a minimum particle energy can be defined for the neutral beam, in order to deposit a sufficient power on the plasma core rather than at the plasma edge.
For a fusion-relevant plasma, the required fast-neutral energy is in the range of 1 MeV. With increasing energy, it is increasingly difficult to obtain fast hydrogen atoms starting from precursor beams composed of positive ions. For that reason, recent and future heating neutral beams will be based on negative-ion beams. In the interaction with background gas, it is much easier to detach the extra electron from a negative ion (H− has a binding energy of 0.75 eV and a very large cross-section for electron detachment in this energy range) rather than to attach one electron to a positive ion.
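A minimal sketch of this scaling follows; the 0.05 prefactor is taken from the reconstructed rule of thumb above and should be treated as approximate.

def adsorption_length_m(E_keV, n_1e19_m3, M_amu):
    """Rough beam-ionisation (adsorption) length in metres, using the
    rule of thumb above: lambda ~ 0.05 * E / (n * M)."""
    return 0.05 * E_keV / (n_1e19_m3 * M_amu)

# ITER-like heating beam: 1 MeV deuterium into an n = 1e20 m^-3 plasma
print(adsorption_length_m(E_keV=1000.0, n_1e19_m3=10.0, M_amu=2.0))  # ~2.5 m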
Charge state of the precursor ion beam
A neutral beam is obtained by neutralisation of a precursor ion beam, commonly accelerated in large electrostatic accelerators. The precursor beam could either be a positive-ion beam or a negative-ion beam: in order to obtain a sufficiently high current, it is produced by extracting charges from a plasma discharge.
However, few negative hydrogen ions are created in a hydrogen plasma discharge. In order to generate a sufficiently high negative-ion density and obtain a decent negative-ion beam current, caesium vapors are added to the plasma discharge (surface-plasma negative-ion sources). Caesium, deposited at the source walls, is an efficient electron donor; atoms and positive ions scattered at caesiated surface have a relatively high probability of being scattered as negatively charged ions. Operation of caesiated sources is complex and not so reliable. The development of alternative concepts for negative-ion beam sources is mandatory for the use of neutral beam systems in future fusion reactors.
Existing and future negative-ion-based neutral beam systems (N-NBI) are listed in the following table:
Ion beam neutralisation
Neutralisation of the precursor ion beam is commonly performed by passing the beam through a gas cell. For a precursor negative-ion beam at fusion-relevant energies, the key collisional processes are:
D− + D2 → D0 + e + D2 (single-electron detachment, with σ−1,0 = 1.13×10⁻²⁰ m² at 1 MeV)
D− + D2 → D+ + e + D2 (double-electron detachment, with σ−1,1 = 7.22×10⁻²² m² at 1 MeV)
D0 + D2 → D+ + e + D2 (reionization, with σ0,1 = 3.79×10⁻²¹ m² at 1 MeV)
D+ + D2 → D0 + D2+ (charge exchange, σ1,0 negligible at 1 MeV)
The fast particle is written first in each reaction, while the subscripts i, j of the cross-section σi,j indicate the charge state of the fast particle before and after the collision.
Cross-sections at 1 MeV are such that, once created, a fast positive ion cannot be converted into a fast neutral, and this is the cause of the limited achievable efficiency of gas neutralisers.
The fractions of negatively charged, positively charged, and neutral particles exiting the neutraliser gas cell depend on the integrated gas density, or target thickness, ∫n dl, with n the gas density along the beam path. In the case of D− beams, the maximum neutralisation yield occurs at a target thickness of the order of 10²⁰ m⁻² (roughly the inverse of the electron-detachment cross-section).
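The competition between detachment and reionisation can be made concrete by integrating the corresponding rate equations along the gas target, using the 1 MeV cross-sections quoted above; this is an illustrative sketch (simple Euler stepping, arbitrary grid), not a production neutraliser model.

# Charge-state fractions of a 1 MeV D- beam versus target thickness Pi,
# from dF-/dPi = -(s10 + s11) F-  and  dF0/dPi = s10 F- - s01 F0.
s10 = 1.13e-20   # D- -> D0, single detachment, m^2
s11 = 7.22e-22   # D- -> D+, double detachment, m^2
s01 = 3.79e-21   # D0 -> D+, reionisation, m^2

d_pi = 1e17                          # thickness step, m^-2
f_neg, f_0 = 1.0, 0.0                # beam enters fully negative
peak_f0, peak_pi = 0.0, 0.0
for step in range(5000):             # integrate up to 5e20 m^-2
    pi = step * d_pi
    if f_0 > peak_f0:
        peak_f0, peak_pi = f_0, pi
    f_neg, f_0 = (f_neg - (s10 + s11) * f_neg * d_pi,
                  f_0 + (s10 * f_neg - s01 * f_0) * d_pi)

print(f"peak neutral fraction ~{peak_f0:.2f} at ~{peak_pi:.1e} m^-2")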
Typically, the background gas density is minimised all along the beam path (i.e., within the accelerating electrodes and along the duct connecting to the fusion plasma) to minimise losses, except in the neutraliser cell. The required target thickness for neutralisation is therefore obtained by injecting gas into a cell with two open ends; a peaked density profile is realised along the cell when injection occurs at mid-length. For a given gas throughput Q [Pa·m³/s], the maximum gas pressure at the centre of the cell depends on the gas conductance C [m³/s] of the cell, roughly as p ≈ Q/C, and in the molecular-flow regime C can be calculated from the cell geometry (length and aperture), the gas molecule mass, and the gas temperature.
Very high gas throughput is commonly adopted, and neutral-beam systems have custom vacuum pumps among the largest ever built, with pumping speeds in the range of million liters per second. If there are no space constraints, a large gas cell length is adopted, but this solution is unlikely in future devices due to the limited volume inside the bioshield protecting from energetic neutron flux (for instance, in the case of JT-60U the N-NBI neutraliser cell is about 15 m long, while in the ITER HNB its length is limited to 3 m).
See also
ITER Neutral Beam Test Facility
References
External links
Thermonuclear Fusion Test Reactor with neutral beam injector at PPPL
Auxiliary heating in ITER
IPP website about NBI technology
Fusion power | Neutral-beam injection | ["Physics", "Chemistry"] | 1,896 | ["Nuclear fusion", "Fusion power", "Plasma physics"] |
16,312,085 | https://en.wikipedia.org/wiki/Hu%E2%80%93Washizu%20principle | In continuum mechanics, and in particular in finite element analysis, the Hu–Washizu principle is a variational principle which says that the action (written here in one common form)
Π(u, ε, σ) = ∫Ω [ ½ εᵀCε − σᵀ(ε − ∇ˢu) − b̄ᵀu ] dΩ − ∫Γt t̄ᵀu dΓ − ∫Γu σᵀn (u − ū) dΓ
is stationary with respect to independent variations of the displacement u, strain ε, and stress σ, where C is the elastic stiffness tensor, b̄ the prescribed body force, t̄ the prescribed traction on the boundary Γt, and ū the prescribed displacement on Γu. The Hu–Washizu principle is used to develop mixed finite element methods. The principle is named after Hu Haichang and Kyūichirō Washizu.
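Setting the first variation with respect to each field to zero recovers the governing equations of linear elastostatics; the following is a sketch (sign conventions vary between references):

% Stationarity conditions of the Hu--Washizu functional
\begin{aligned}
\delta\sigma:&\quad \varepsilon - \nabla^{s}u = 0 &&\text{(compatibility)}\\
\delta\varepsilon:&\quad \sigma - C\,\varepsilon = 0 &&\text{(constitutive law)}\\
\delta u:&\quad \nabla\cdot\sigma + \bar{b} = 0 &&\text{(equilibrium)}
\end{aligned}

Because the three fields are varied independently, a discretisation need not satisfy compatibility or the constitutive law a priori, which is what makes the principle a natural starting point for mixed finite elements.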
References
Further reading
K. Washizu: Variational Methods in Elasticity & Plasticity, Pergamon Press, New York, 3rd edition (1982)
O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu : The Finite Element Method: Its Basis and Fundamentals, Butterworth–Heinemann, (2005).
Calculus of variations
Finite element method
Structural analysis
Principles
Continuum mechanics | Hu–Washizu principle | ["Physics", "Mathematics", "Engineering"] | 160 | ["Structural engineering", "Continuum mechanics", "Applied mathematics", "Structural analysis", "Classical mechanics", "Applied mathematics stubs", "Mechanical engineering", "Aerospace engineering"] |
16,321,447 | https://en.wikipedia.org/wiki/Vortex%20lattice%20method | The Vortex lattice method, (VLM), is a numerical method used in computational fluid dynamics, mainly in the early stages of aircraft design and in aerodynamic education at university level. The VLM models the lifting surfaces, such as a wing, of an aircraft as an infinitely thin sheet of discrete vortices to compute lift and induced drag. The influence of the thickness and viscosity is neglected.
VLMs can compute the flow around a wing from a rudimentary geometrical definition. For a rectangular wing it is enough to know the span and chord. At the other end of the spectrum, they can describe the flow around a fairly complex aircraft geometry (with multiple lifting surfaces with taper, kinks, twist, camber, trailing edge control surfaces and many other geometric features).
By simulating the flow field, one can extract the pressure distribution or as in the case of the VLM, the force distribution, around the simulated body. This knowledge is then used to compute the aerodynamic coefficients and their derivatives that are important for assessing the aircraft's handling qualities in the conceptual design phase. With an initial estimate of the pressure distribution on the wing, the structural designers can start designing the load-bearing parts of the wings, fin and tailplane and other lifting surfaces. Additionally, while the VLM cannot compute the viscous drag, the induced drag stemming from the production of lift can be estimated. Hence as the drag must be balanced with the thrust in the cruise configuration, the propulsion group can also get important data from the VLM simulation.
Historical background
John DeYoung provides a background history of the VLM in the NASA Langley workshop documentation SP-405.
The VLM is the extension of Prandtl's lifting-line theory, where the wing of an aircraft is modeled as an infinite number of horseshoe vortices. The name was coined by V.M. Falkner in his Aeronautical Research Council paper of 1946. The method has since then been developed and refined further by W.P. Jones, H. Schlichting, G.N. Ward and others.
Although the computations needed can be carried out by hand, the VLM benefited from the advent of computers for the large amounts of computations that are required.
Instead of only one horseshoe vortex per wing, as in the Lifting-line theory, the VLM utilizes a lattice of horseshoe vortices, as described by Falkner in his first paper on this subject in 1943. The number of vortices used vary with the required pressure distribution resolution, and with required accuracy in the computed aerodynamic coefficients. A typical number of vortices would be around 100 for an entire aircraft wing; an Aeronautical Research Council report by Falkner published in 1949 mentions the use of an "84-vortex lattice before the standardisation of the 126-lattice" (p. 4).
The method is comprehensively described in all major aerodynamics textbooks, such as Katz & Plotkin, Anderson, Bertin & Smith, Houghton & Carpenter, or Drela.
Theory
The vortex lattice method is built on the theory of ideal flow, also known as potential flow. Ideal flow is a simplification of the real flow experienced in nature, however for many engineering applications this simplified representation has all of the properties that are important from the engineering point of view. This method neglects all viscous effects. Turbulence, dissipation and boundary layers are not resolved at all. However, lift-induced drag can be assessed and, taking special care, some stall phenomena can be modelled.
Assumptions
The following assumptions are made regarding the problem in the vortex lattice method:
The flow field is incompressible, inviscid and irrotational. However, small-disturbance subsonic compressible flow can be modeled if the general 3D Prandtl-Glauert transformation is incorporated into the method.
The lifting surfaces are thin. The influence of thickness on aerodynamic forces are neglected.
The angle of attack and the angle of sideslip are both small (small-angle approximation).
Method
By the above assumptions the flow field is a conservative vector field, which means that there exists a perturbation velocity potential φ such that the total velocity vector is given by
V = V∞ + ∇φ
and that φ satisfies Laplace's equation, ∇²φ = 0.
Laplace's equation is a second-order linear equation, and being so it is subject to the principle of superposition: if φ₁ and φ₂ are two solutions of the linear differential equation, then the linear combination c₁φ₁ + c₂φ₂ is also a solution for any values of the constants c₁ and c₂. As Anderson put it, "A complicated flow pattern for an irrotational, incompressible flow can be synthesized by adding together a number of elementary flows, which are also irrotational and incompressible." Such elementary flows are the point source or sink, the doublet and the vortex line, each being a solution of Laplace's equation. These may be superposed in many ways to create the formation of line sources, vortex sheets and so on. In the vortex lattice method, each such elementary flow is the velocity field of a horseshoe vortex with some strength Γ.
Aircraft model
All the lifting surfaces of an aircraft are divided into some number of quadrilateral panels, and a horseshoe vortex and a collocation point (or control point) are placed on each panel. The transverse segment of the vortex is at the 1/4-chord position of the panel, while the collocation point is at the 3/4-chord position. The vortex strength Γₙ is to be determined. A normal vector n̂ is also placed at each collocation point, set normal to the camber surface of the actual lifting surface.
For a problem with N panels, the perturbation velocity at collocation point m is given by summing the contributions of all the horseshoe vortices in terms of an Aerodynamic Influence Coefficient (AIC) matrix a:
u_m = Σ_n a_mn Γ_n
The freestream velocity vector is given in terms of the freestream speed V∞ and the angles of attack and sideslip:
V∞ = V∞ (cos α cos β, −sin β, sin α cos β)
A Neumann boundary condition is applied at each collocation point, which prescribes that the normal velocity across the camber surface is zero, (u_m + V∞) · n̂_m = 0. Alternate implementations may also use the Dirichlet boundary condition directly on the velocity potential.
This is also known as the flow tangency condition. By evaluating the dot products above, the following system of equations results, where the normalwash AIC matrix is A_mn = a_mn · n̂_m and the right-hand side is formed by the freestream speed and the two aerodynamic angles:
Σ_n A_mn Γ_n = −V∞ · n̂_m
This system of equations is solved for all the vortex strengths Γ_n. The total force vector F and total moment vector M about the origin are then computed by summing the contributions of all the forces on all the individual horseshoe vortices, with ρ being the fluid density:
F = ρ Σ_n Γ_n (V_n × l_n),   M = ρ Σ_n r_n × [Γ_n (V_n × l_n)]
Here, l_n is the vortex's transverse segment vector, r_n its position, and V_n is the local flow velocity (freestream plus perturbation) at this segment's center location (not at the collocation point).
The lift and induced drag are obtained from the components of the total force vector F. For the case of zero sideslip these are given by
L = F_z cos α − F_x sin α,   D_i = F_x cos α + F_z sin α
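To make the assembly and solve concrete, here is a minimal numerical sketch for a flat, uncambered rectangular wing with a single chordwise row of horseshoe vortices (which reduces the VLM to a Weissinger-type lifting line); the panel count, wing dimensions and far-wake cutoff are illustrative assumptions, not values from the text.

import numpy as np

def segment_velocity(p, a, b):
    """Velocity induced at point p by a unit-strength straight vortex
    segment from a to b (Biot-Savart, Katz & Plotkin style)."""
    r1, r2 = p - a, p - b
    cr = np.cross(r1, r2)
    cr2 = np.dot(cr, cr)
    if cr2 < 1e-12:                      # p lies on the segment axis
        return np.zeros(3)
    r0 = b - a
    k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return cr / cr2 * k / (4.0 * np.pi)

def horseshoe_velocity(p, left, right, x_wake):
    """Unit horseshoe: far wake -> left leg, bound leg left -> right, right leg -> far wake."""
    far = np.array([x_wake, 0.0, 0.0])
    return (segment_velocity(p, left + far, left)
            + segment_velocity(p, left, right)
            + segment_velocity(p, right, right + far))

# Geometry: wing in the z = 0 plane, N spanwise panels, one chordwise row
b_span, chord, N = 10.0, 1.0, 40
V_inf, alpha, rho = 1.0, np.radians(5.0), 1.225
y = np.linspace(-b_span / 2, b_span / 2, N + 1)
lefts = np.array([[0.25 * chord, yi, 0.0] for yi in y[:-1]])   # bound-vortex ends
rights = np.array([[0.25 * chord, yi, 0.0] for yi in y[1:]])
colloc = 0.5 * (lefts + rights) + np.array([0.5 * chord, 0.0, 0.0])  # 3/4-chord line
n_hat = np.array([0.0, 0.0, 1.0])                              # flat-plate normal
V_free = V_inf * np.array([np.cos(alpha), 0.0, np.sin(alpha)])

# Normalwash AIC matrix and flow-tangency condition  A @ Gamma = -V_free . n
A = np.array([[horseshoe_velocity(colloc[m], lefts[n], rights[n], 100 * b_span) @ n_hat
               for n in range(N)] for m in range(N)])
Gamma = np.linalg.solve(A, np.full(N, -(V_free @ n_hat)))

# Kutta-Joukowski lift (small-angle form) and comparison with lifting line
L = rho * V_inf * np.sum(Gamma * np.diff(y))
CL = L / (0.5 * rho * V_inf**2 * b_span * chord)
AR = b_span / chord
print(f"CL = {CL:.3f}  (lifting-line estimate: {2*np.pi*alpha*AR/(AR + 2):.3f})")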
Extension to the Dynamic Case
The preliminary design of airplanes requires unsteady aerodynamic models, usually written in the frequency domain for aeroelastic analyses. Commonly used is the Doublet Lattice Method, where the wing system is subdivided into panels. Each panel has a line of doublets of acceleration potential on the first-quarter line, similar to what is usually done in the Vortex Lattice Method. Each panel has a load point where the lifting force is assumed applied and a control point where the aeroelastic boundary condition is enforced. The Doublet Lattice Method evaluated at frequency zero is usually obtained with a Vortex Lattice formulation.
References
External links
http://web.mit.edu/drela/Public/web/avl/
https://github.com/OpenVOGEL
Sources
NASA, Vortex-lattice utilization. NASA SP-405, NASA-Langley, Washington, 1976.
Prandtl. L, Applications of modern hydrodynamics to aeronautics, NACA-TR-116, NASA, 1923.
Falkner. V.M., The Accuracy of Calculations Based on Vortex Lattice Theory, Rep. No. 9621, British A.R.C., 1946.
J. Katz, A. Plotkin, Low-Speed Aerodynamics, 2nd ed., Cambridge University Press, Cambridge, 2001.
J.D. Anderson Jr, Fundamentals of aerodynamics, 2nd ed., McGraw-Hill Inc, 1991.
J.J. Bertin, M.L. Smith, Aerodynamics for Engineers, 3rd ed., Prentice Hall, New Jersey, 1998.
E.L. Houghton, P.W. Carpenter, Aerodynamics for Engineering Students, 4th ed., Edward Arnold, London, 1993.
Lamar, J. E., Herbert, H. E., Production version of the extended NASA-Langley vortex lattice FORTRAN computer program. Volume 1: User's guide, NASA-TM-83303, NASA, 1982
Lamar, J. E., Herbert, H. E., Production version of the extended NASA-Langley vortex lattice FORTRAN computer program. Volume 2: Source code, NASA-TM-83304, NASA, 1982
Melin, Thomas, A Vortex Lattice MATLAB Implementation for Linear Aerodynamic Wing Applications, Royal Institute of Technology (KTH), Sweden, December, 2000
M. Drela, Flight Vehicle Aerodynamics, MIT Press, Cambridge, MA, 2014.
Fluid dynamics
Aerodynamics | Vortex lattice method | ["Chemistry", "Engineering"] | 1,929 | ["Chemical engineering", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics"] |
4,244,086 | https://en.wikipedia.org/wiki/Open%20Graphics%20Project | The Open Graphics Project (OGP) was founded with the goal to design an open-source hardware / open architecture and standard for graphics cards, primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and had aimed to eventually produce a full-featured and competitive end-user graphics card.
OGD1
The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card did not have the same level of performance or functionality as graphics cards on the market at the time, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would generate enough interest to gain some profit and attract investors for the next card, since it was expected to cost around US$2,000,000 to start the production of a specialized ASIC design. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, some six years after the project began and three years after the appearance of the first prototypes.
Full specifications were to be published along with open-source device drivers. Source code for the device drivers and BIOS was to be released under the MIT and BSD licenses, while the RTL (in Verilog) used for the FPGA and the ASIC was planned for release under the GNU General Public License (GPL).
It has 256 MiB of DDR RAM, is passively cooled, and follows the DDC, EDID, DPMS and VBE VESA standards. TV-out is also planned.
Versioning schema
Versioning schema for OGD1 will go like this:
{Root Number} – {Video Memory}{Video Output Interfaces}{Special Options e.g.: A1 OGA firmware installed}
OGD1 components
Main components of OGD1 graphics card (shown on the picture)
A) DVI transmitter pair A
B) DVI transmitter pair B
C) 330MHz triple 10-bit DAC (behind)
D) TV chip
E) 2x4 256 megabit DDR SDRAM (front, behind)
F) Xilinx 3S4000 FPGA (main chip)
G) Lattice XP10 FPGA (host interface)
H) SPI PROM 1 Mibit
J) SPI PROM 16 Mibit
K) 3x 500 MHz DACs (optional)
L) 64-bit PCI-X edge connector
M) DVI-I connector A and connector B
N) S-Video connector
O) 100-pin expansion bus connector
Divisions/terms related to OGP
Open Graphics Project (OGP) – The group of people developing OGA, its written documentation, and its products.
Open Graphics Architecture (OGA) – The trade name for open graphics architectures specified by the Open Graphics Project.
Open Graphics Development (OGD) – The initial FPGA-based experimentation board used as a test platform for TRV ASICs.
Traversal Technology (TRV) – The commercial name for the first ASIC products, based on the Open Graphics Architecture.
Open Graphics Card (OGC) – Graphics cards based on TRV chips.
Open Hardware Foundation (OHF) – A non-profit corporation whose charter is to promote the design and production of open-source and open-documentation hardware.
Current status
The OGP failed to gain the necessary funding to produce an ASIC version of its card, and the project appears to have been discontinued in 2011.
See also
Graphics hardware and FOSS
Open-source hardware
Open system (computing)
RISC-V
References
External links
The official Open Graphics wiki archived at the Wayback Machine June 9, 2010
Project VGA – another free graphics core project, aiming at cheaper hardware
Manticore – an older FPGA-based free graphics core implementation. As of 2009-05-04 no source is available.
The master thesis "An FPGA-based 3D Graphics System" illustrates very well the design decisions to make, while developing a FPGA-based 3D graphics core.
The master thesis "A performance-driven SoC architecture for video synthesis" gives a more complete and hands-on approach of some aspects.
Graphics hardware
Information technology projects
Open hardware electronic devices
Open-source hardware
Graphics cards | Open Graphics Project | ["Technology", "Engineering"] | 943 | ["Information technology", "Information technology projects"] |
4,244,144 | https://en.wikipedia.org/wiki/Handel-C | Handel-C is a high-level hardware description language aimed at low-level hardware and is most commonly used in programming FPGAs. Handel-C is to hardware design what the first high-level programming languages were to programming CPUs. It is a turing-complete rich subset of the C programming language, with an emphasis on parallel computing.
Unlike many other hardware description languages (HDLs) that target a specific computer architecture, Handel-C can be compiled to a number of HDLs and then synthesised to the corresponding hardware. This frees developers to concentrate on the programming task at hand rather than the idiosyncrasies of a specific design language and architecture.
Additional features
Handel-C's subset of C includes all common C language features necessary to describe complex algorithms. As with many embedded C compilers, floating-point data types were omitted; floating-point arithmetic is instead supported through efficient external libraries.
Parallel programs
To describe parallel behavior, Handel-C borrows some keywords from communicating sequential processes (CSP), along with the general file structure of the Occam programming language.
For example:
par {
    ++c;        // all three statements execute in parallel
    a = d + e;  // and complete in a single clock cycle
    b = d + e;
}
Channels
Channels provide a mechanism for message passing between parallel threads. Channels can be defined as asynchronous or synchronous (with or without an inferred storage element respectively). A thread writing to a synchronous channel will be immediately blocked until the corresponding listening thread is ready to receive the message. Likewise the receiving thread will block on a read statement until the sending thread executes the next send. Thus they may be used as a means of synchronizing threads.
par {
    chan int a;  // declare a synchronous channel
    int x;
    int i, j;    // loop counters

    // begin sending thread
    seq (i = 0; i < 10; i++) {
        a ! i;   // send the values 0 to 9 sequentially into the channel
    }

    // begin receiving thread
    seq (j = 0; j < 10; j++) {
        a ? x;   // perform a sequence of 10 reads from the channel into variable x
        delay;   // introduce a delay of 1 clock cycle between successive reads;
                 // this has the effect of blocking the sending thread between writes
    }
}
Asynchronous channels provide a specified amount of storage for data passing through them in the form of a FIFO. While this FIFO is neither full nor empty, both sending and receiving threads may proceed without being blocked. However, when the FIFO is empty, the receiving thread will block at the next read. When it is full, the sending thread will block at the next send. A channel with actors in differing clock domains is automatically asynchronous due to the need for at least one element of storage to mitigate metastability.
A thread may simultaneously wait on multiple channels, synchronous or asynchronous, acting upon the first one available given a specified order of priority or optionally executing an alternate path if none is ready.
Scope and variable sharing
The scope of a declaration is limited to the code block ({ ... }) in which it was declared. Scope is hierarchical in nature, as declarations are also in scope within any sub-blocks.
For example:
int a;
void main(void)
{
    int b;
    /* "a" and "b" are within scope */
    {
        int c;
        /* "a", "b" and "c" are within scope */
    }
    {
        int d;
        /* "a", "b" and "d" are within scope */
    }
}
Extensions to the C language
In addition to the effects the standard semantics of C have on the timing of the program, the following keywords are reserved for describing the practicalities of the FPGA environment or for the language elements sourced from Occam:
Scheduling
In Handel-C, assignment and the delay command take one cycle. All other operations are "free". This allows programmers to manually schedule tasks and create effective pipelines. By arranging loops in parallel with the correct delays, pipelines can massively increase data throughput, at the expense of increased hardware resource use.
History
The historical roots of Handel-C are in a series of Oxford University Computing Laboratory hardware description languages developed by the hardware compilation group. Handel HDL evolved into Handel-C around early 1996. The technology developed at Oxford was spun off to mature as a cornerstone product for Embedded Solutions Limited (ESL) in 1996. ESL was renamed Celoxica in September 2000.
Handel-C was adopted by many university hardware research groups after its release by ESL and, as a result, was able to establish itself as a hardware design tool of choice within the academic community, especially in the United Kingdom.
In early 2008, Celoxica's ESL business was acquired by Agility, which developed and sold, among other products, ESL tools supporting Handel-C.
In early 2009, Agility ceased operations after failing to obtain further capital investments or credit.
In January 2009, Mentor Graphics acquired Agility's C synthesis assets.
Other C-subset HDLs developed around the same time are Transmogrifier C in 1994 at the University of Toronto (now the FpgaC open source project) and Streams-C at Los Alamos National Laboratory (now licensed to Impulse Accelerated Technologies under the name Impulse C).
See also
High- and low-level
C to HDL
References
External links
Handel-C language resources at Mentor Graphics
Oxford Handel-C
C programming language family
Hardware description languages
Electronic design automation | Handel-C | ["Engineering"] | 1,151 | ["Electronic engineering", "Hardware description languages"] |
4,245,290 | https://en.wikipedia.org/wiki/Danubit | Danubit was an industrial plastic explosive produced by the Slovak company .
For many decades it was used primarily as a rock-blasting explosive for surface and underground mass mining of mineral raw materials. Underwater blasting applications were possible as well.
Istrochem, the producer of Danubit, is a chemical company founded in 1847 by Alfred Nobel in Bratislava, Slovakia. Production of explosives ceased in 2009, when part of the company was acquired by the Czech company Explosia a.s., the producer of Semtex.
See also
Dynamite
External links
Istrochem (producer)
Explosives | Danubit | ["Chemistry"] | 125 | ["Explosives", "Explosions"] |
4,245,349 | https://en.wikipedia.org/wiki/Overpotential | In electrochemistry, overpotential is the potential difference (voltage) between a half-reaction's thermodynamically determined reduction potential and the potential at which the redox event is experimentally observed. The term is directly related to a cell's voltage efficiency. In an electrolytic cell the existence of overpotential implies that the cell requires more energy than thermodynamically expected to drive a reaction. In a galvanic cell the existence of overpotential means less energy is recovered than thermodynamics predicts. In each case the extra/missing energy is lost as heat. The quantity of overpotential is specific to each cell design and varies across cells and operational conditions, even for the same reaction. Overpotential is experimentally determined by measuring the potential at which a given current density (typically small) is achieved.
Thermodynamics
The four possible polarities of overpotentials are listed below.
An electrolytic cell's anode is more positive, using more energy than thermodynamics require.
An electrolytic cell's cathode is more negative, using more energy than thermodynamics require.
A galvanic cell's anode is less negative, supplying less energy than thermodynamically possible.
A galvanic cell's cathode is less positive, supplying less energy than thermodynamically possible.
The overpotential increases with growing current density (or rate), as described by the Tafel equation. An electrochemical reaction is a combination of two half-cells and multiple elementary steps. Each step is associated with multiple forms of overpotential. The overall overpotential is the summation of many individual losses.
Voltage efficiency describes the fraction of energy lost through overpotential. For an electrolytic cell this is the ratio of a cell's thermodynamic potential divided by the cell's experimental potential, expressed as a percentage. For a galvanic cell it is the ratio of a cell's experimental potential divided by the cell's thermodynamic potential, expressed as a percentage. Voltage efficiency should not be confused with Faraday efficiency. Both terms refer to a mode through which electrochemical systems can lose energy. Energy can be expressed as the product of potential, current and time (joule = volt × ampere × second). Losses in the potential term through overpotentials are described by voltage efficiency. Losses in the current term through misdirected electrons are described by Faraday efficiency.
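As a numerical illustration of these definitions, the sketch below computes a Tafel-type activation overpotential and the resulting voltage efficiency for an electrolytic cell; all numbers (exchange current density, Tafel slope, added resistive losses) are hypothetical, chosen only to show the bookkeeping.

import math

def tafel_overpotential(i, i0, A_tafel):
    """Activation overpotential from the Tafel equation, eta = A ln(i/i0)."""
    return A_tafel * math.log(i / i0)

def voltage_efficiency(E_thermo, E_actual, cell="electrolytic"):
    """Voltage efficiency as defined above: thermodynamic over experimental
    potential for an electrolytic cell, the inverse for a galvanic cell."""
    return E_thermo / E_actual if cell == "electrolytic" else E_actual / E_thermo

# Hypothetical water-electrolysis numbers (illustrative only)
eta = tafel_overpotential(i=100.0, i0=1e-3, A_tafel=0.040)   # ~0.46 V
E_cell = 1.23 + eta + 0.30       # thermodynamic + activation + other losses
print(f"activation overpotential: {eta:.2f} V")
print(f"voltage efficiency: {voltage_efficiency(1.23, E_cell):.0%}")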
Varieties
Overpotential can be divided into many different subcategories that are not all well defined. For example, "polarization overpotential" can refer to the electrode polarization and the hysteresis found in forward and reverse peaks of cyclic voltammetry. A likely reason for the lack of strict definitions is that it is difficult to determine how much of a measured overpotential is derived from a specific source. Overpotentials can be grouped into three categories: activation, concentration, and resistance.
Activation overpotential
The activation overpotential is the potential difference above the equilibrium value required to produce a current that depends on the activation energy of the redox event. While ambiguous, "activation overpotential" often refers exclusively to the activation energy necessary to transfer an electron from an electrode to an analyte. This sort of overpotential can also be called "electron transfer overpotential" and is a component of "polarization overpotential", a phenomenon observed in cyclic voltammetry and partially described by the Cottrell equation.
Reaction overpotential
Reaction overpotential is an activation overpotential that specifically relates to chemical reactions that precede electron transfer. Reaction overpotential can be reduced or eliminated with the use of electrocatalysts. The electrochemical reaction rate and related current density is dictated by the kinetics of the electrocatalyst and substrate concentration.
The platinum electrode common to much of electrochemistry is electrocatalytically involved in many reactions. For example, hydrogen is oxidized and protons are reduced readily at the platinum surface of a standard hydrogen electrode in aqueous solution, in a Hydrogen Evolution Reaction. Substituting an electrocatalytically inert glassy carbon electrode for the platinum electrode produces irreversible reduction and oxidation peaks with large overpotentials.
Concentration overpotential
Concentration overpotential spans a variety of phenomena that involve the depletion of charge-carriers at the electrode surface. Bubble overpotential is a specific form of concentration overpotential in which the concentration of charge-carriers is depleted by the formation of a physical bubble. The "diffusion overpotential" can refer to a concentration overpotential created by slow diffusion rates as well as "polarization overpotential", whose overpotential is derived mostly from activation overpotential but whose peak current is limited by diffusion of analyte.
The potential difference is caused by differences in the concentration of charge-carriers between bulk solution and the electrode surface. It occurs when electrochemical reaction is sufficiently rapid to lower the surface concentration of the charge-carriers below that of bulk solution. The rate of reaction is then dependent on the ability of the charge-carriers to reach the electrode surface.
Bubble overpotential
Bubble overpotential is a specific form of concentration overpotential and is due to the evolution of gas at either the anode or cathode. This reduces the effective area for current and increases the local current density. An example is the electrolysis of an aqueous sodium chloride solution—although oxygen should be produced at the anode based on its potential, bubble overpotential causes chlorine to be produced instead, which allows the easy industrial production of chlorine and sodium hydroxide by electrolysis.
Resistance overpotential
Resistance overpotentials are those tied to a cell design. These include "junction overpotentials" that occur at electrode surfaces and interfaces like electrolyte membranes. They can also include aspects of electrolyte diffusion, surface polarization (capacitance) and other sources of counter electromotive forces.
See also
Electrolysis
Electrosynthesis
References
Electrochemical concepts
Electrochemical potentials | Overpotential | ["Chemistry"] | 1,285 | ["Electrochemistry", "Electrochemical concepts", "Electrochemical potentials"] |
4,245,815 | https://en.wikipedia.org/wiki/Mosaic%20gold | Mosaic gold or bronze powder refers to tin(IV) sulfide as used as a pigment in bronzing and gilding wood and metal work. It is obtained as a yellow scaly crystalline powder. The alchemists referred to it as aurum musivum, or aurum mosaicum. The term mosaic gold has also been used to refer to ormolu and to cut shapes of gold leaf, some darkened for contrast, arranged as a mosaic. The term bronze powder may also refer to powdered bronze alloy.
A recipe for mosaic gold is already provided in the 3rd-century A.D. treatise Baopuzi, composed by the Chinese alchemist Ge Hong. The earliest sources for its preparation in Europe, under the name porporina or purpurina, are the late 13th-century North Italian Liber colorum secundum Magistrum Bernardum and Cennino Cennini's Libro dell'arte from the 1420s. Instructions became more widespread and varied thereafter; the recipe collection Liber illuministarum (c. 1500) from Tegernsee Abbey in Bavaria alone offers six different methods for its preparation. Alchemists prepared it by combining mercury, tin, sal ammoniac, and sublimated sulfur (fleur de soufre), grinding and mixing them, then setting them for three hours in a sand heat. Once the dirty sublimate was taken off, aurum mosaicum was found at the bottom of the matrass.
In the past it was used for medical purposes in most chronic and nervous ailments, and particularly convulsions of children; however, it is no longer recommended for any medical uses.
See also
List of inorganic pigments
References
Inorganic pigments
Visual arts materials
Alchemical substances
Tin(IV) compounds
Powders
Sulfides | Mosaic gold | ["Physics", "Chemistry"] | 371 | ["Inorganic compounds", "Alchemical substances", "Inorganic pigments", "Materials", "Powders", "Matter"] |
4,246,442 | https://en.wikipedia.org/wiki/Roundabout%20PlayPump | The Roundabout PlayPump is a system that uses the energy created by children playing to operate a water pump. It is manufactured by the South African company Roundabout Outdoor. It operates in a similar way to a windmill-driven water pump.
The PlayPump received heavy publicity and funding when first introduced, but has since been criticized for being too expensive, too complex to maintain or repair in low-resource settings, too reliant on child labor, and overall less effective than traditional handpumps. WaterAid, one of the biggest water charities in the world, opposes the PlayPump for these reasons.
Design
The PlayPump water system is a playground merry-go-round attached to a water pump. The spinning motion pumps underground water into a 2,500-liter tank raised seven meters above ground. The water in the tank is easily dispensed by a tap valve. According to the manufacturer the pump can raise up to 1400 liters of water per hour from a depth of 40 meters. Excess water is diverted below ground again.
The storage tank has a four-sided advertising panel. Two sides are used to advertise products, thereby providing money for maintenance of the pump, and the other two sides are devoted to public health messages about topics like HIV/AIDS prevention.
History
The PlayPump was invented in South Africa by Ronnie Stuiver, a borehole driller and engineer, who exhibited it at an agricultural fair in 1989. Trevor Field, an agricultural executive, saw the device at the fair and licensed it from Stuiver. Field installed the first two systems in KwaZulu-Natal province in South Africa in 1994, and began receiving media attention in 1999, when Nelson Mandela attended the opening of a school which had a PlayPump. In 2000, PlayPump received the World Bank Development Marketplace Award, and it became internationally prominent following a 2005 PBS Frontline report. At a 2006 Clinton Global Initiative ceremony, donors pledged $16.4 million to install more PlayPumps.
By 2008, 1,000 PlayPumps had been installed, and Field set a goal of installing 4,000 by 2010. However, in 2009 PlayPumps International turned its inventory of uninstalled PlayPumps over to Water For People, and stopped installing new PlayPumps in order to focus on maintenance of existing ones.
Effectiveness
The Guardian calculated in 2009 that children would have to "play" for 27 hours every day to meet PlayPumps' stated targets of providing 2,500 people per pump with their daily water needs.
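The arithmetic behind that figure can be checked directly from the manufacturer's stated maximum pumping rate; the per-capita daily water requirement (15 litres) is an assumption commonly used in such estimates, not a number from the article.

pump_rate_l_per_h = 1400        # manufacturer's stated maximum
people = 2500                   # stated target per pump
daily_need_l = 15               # assumed litres per person per day
hours = people * daily_need_l / pump_rate_l_per_h
print(f"~{hours:.0f} hours of play per day")   # ~27 hours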
In June 2010, PBS's Frontline/World aired an update about the failure of PlayPumps, particularly in Mozambique. Many older women, who were not consulted prior to the installation of the PlayPumps, found operating them to be difficult, especially when there were few children around. PlayPumps were also breaking down, with no way for villagers to make the expensive necessary repairs. A comprehensive report about these failures was released by UNICEF in 2007.
See also
Empower Playgrounds
Blood: Water Mission
Water privatisation in South Africa
Water scarcity in Africa
References
External links
Roundabout Water Solutions
One Water — official One Water site
"Why pumping water is child's play" (2005-04-25) - BBC News article
"The Play Pump: Turning water into child's play" (2004-10-24) - article with streaming video
African Well Fund
Human power
Water supply
Pumps | Roundabout PlayPump | ["Physics", "Chemistry", "Engineering", "Environmental_science"] | 699 | ["Pumps", "Hydrology", "Physical quantities", "Turbomachinery", "Physical systems", "Power (physics)", "Hydraulics", "Human power", "Environmental engineering", "Water supply"] |
4,246,605 | https://en.wikipedia.org/wiki/Antenna%20rotator | An antenna rotator (or antenna rotor) is a device used to change the orientation, within the horizontal plane, of a directional antenna. Most antenna rotators have two parts, the rotator unit and the controller. The controller is normally placed near the equipment which the antenna is connected to, while the rotator is mounted on the antenna mast directly below the antenna.
Rotators are commonly used in amateur radio and military communications installations. They are also used with TV and FM antennas, where stations are available from multiple directions, as the cost of a rotator is often significantly less than that of installing a second antenna to receive stations from multiple directions.
Rotators are manufactured for different sizes of antennas and installations. For example, a consumer TV antenna rotator has enough torque to turn a TV/FM or small ham antenna. These units typically cost around US$70.
Heavy-duty ham rotators are designed to turn extremely large, heavy, high frequency (shortwave) beam antennas, and cost hundreds or possibly thousands of dollars.
At the center of the accompanying image is an AzEl rotator, so named because it controls both the azimuth and the elevation components of the direction of an antenna system or array. Such antenna configurations are used in, for example, amateur radio satellite or moon-bounce communications.
An open hardware AzEl rotator system is provided by the SatNOGS Groundstation project.
The Alliance Manufacturing Co. of Alliance, Ohio, and the Astatic Corporation of Conneaut, Ohio, manufactured popular radio and TV booster and rotary antenna systems. These products were heavily advertised for radio use in newspapers starting in the early 1940s, and for use with commercial television sets from 1949 into the 1960s. Cinécraft Productions, a pioneer in early TV advertising, produced six commercials for the Astatic Booster TV in 1949 and 112 for the Alliance Tenna-Rotor, Tenna-Scope, and Casca-Matic Booster between 1949 and 1955.
Manufacturers of consumer TV antenna rotators
Past
Before the era of cable TV and the rise of satellite TV, many homes had outdoor antennas designed to capture over-the-air television signals. The rotator market was served by a number of manufacturers including
Alinco Electronics, Inc.
Alliance Manufacturing Co., Inc., Alliance, Ohio
American Phenolic Corporation
Astatic Corporation. Conneaut, Ohio
Channel Master
Cornell-Dubilier Corporation, South Plainfield, New Jersey
Gemini Industries, Inc., Passaic, New Jersey
Hy-Gain
Kenpro
Lance Industries, Sylmar, California
Leader Electronics, Inc., Cleveland, Ohio
Nippon Antenna
Philco
Philips
Pro Brand International, Inc. (Eagle Aspen brand)
Radio Merchandise Sales, Inc.
Radio Shack
RCA
Sears, Roebuck and Co.
Stolle
The Radiart Corp., Cleveland, Ohio
Zenith
Current
Although the cord-cutting movement has increased interest in receiving free over-the-air television signals, as of December 2021 consumer options are limited.
VOXX Accessories Corporation of Carmel, Indiana, a wholly-owned subsidiary of VOXX International, using the RCA trademark, offers a remote-controlled antenna rotator, the model VH226E.
Channel Master's stock of model CM-9521HD is depleted, and it is unclear whether more units will be manufactured.
Yaesu Musen sells a range of rotators aimed at the Amateur and FM (VHF) broadcast listener markets. Amateur products include an Azimuth-Elevation product.
References
External links
Alliance Tenna-Rotor Series 1-17 television commercials produced by Cinécraft Productions, Inc.
Yaesu Rotators
Antennas
Rotation | Antenna rotator | ["Physics", "Engineering"] | 744 | ["Physical phenomena", "Telecommunications engineering", "Antennas", "Classical mechanics", "Rotation", "Motion (physics)"] |
4,248,265 | https://en.wikipedia.org/wiki/Zimm%E2%80%93Bragg%20model | In statistical mechanics, the Zimm–Bragg model is a helix-coil transition model that describes helix-coil transitions of macromolecules, usually polymer chains. Most models provide a reasonable approximation of the fractional helicity of a given polypeptide; the Zimm–Bragg model differs by incorporating the ease of propagation (self-replication) with respect to nucleation. It is named for co-discoverers Bruno H. Zimm and J. K. Bragg.
Helix-coil transition models
Helix-coil transition models assume that polypeptides are linear chains composed of interconnected segments. Further, models group these sections into two broad categories: coils, random conglomerations of disparate unbound pieces, are represented by the letter 'C', and helices, ordered states where the chain has assumed a structure stabilized by hydrogen bonding, are represented by the letter 'H'.
Thus, it is possible to loosely represent a macromolecule as a string such as CCCCHCCHCHHHHHCHCCC and so forth. The number of coils and helices factors into the calculation of fractional helicity, θ, defined as
θ = ⟨i⟩ / N
where ⟨i⟩ is the average helicity and N is the number of helix or coil units.
Zimm–Bragg
The Zimm–Bragg model takes the cooperativity of each segment into consideration when calculating fractional helicity. The probability of any given monomer being a helix or coil is affected by what the previous monomer is; that is, by whether the new site is a nucleation or a propagation.
By convention, a coil unit ('C') is always of statistical weight 1. Addition of a helix state ('H') to a previously coiled state (nucleation) is assigned a statistical weight σs, where σ is the nucleation parameter and s is the equilibrium constant for adding a helical unit.
Adding a helix state to a site that is already a helix (propagation) has a statistical weight of s. For most proteins,
σ ≪ 1
which makes the propagation of a helix more favorable than nucleation of a helix from the coil state.
From these parameters, it is possible to compute the fractional helicity θ. The average helicity ⟨i⟩ is given by
⟨i⟩ = (s/Z) (dZ/ds)
where Z is the partition function given by the sum of the probabilities of each site on the polypeptide. The fractional helicity is thus given by the equation
θ = ⟨i⟩/N = (s/(N·Z)) (dZ/ds)
Statistical mechanics
The Zimm–Bragg model is equivalent to a one-dimensional Ising model and has no long-range interactions, i.e., interactions between residues well separated along the backbone; therefore, by the famous argument of Rudolf Peierls, it cannot undergo a phase transition.
The statistical mechanics of the Zimm–Bragg model may be solved exactly using the transfer-matrix method. The two parameters of the Zimm–Bragg model are σ, the statistical weight for nucleating a helix, and s, the statistical weight for propagating a helix. These parameters may depend on the residue j; for example, a proline residue may easily nucleate a helix but not propagate one; a leucine residue may nucleate and propagate a helix easily; whereas glycine may disfavor both the nucleation and propagation of a helix. Since only nearest-neighbour interactions are considered in the Zimm–Bragg model, the full partition function for a chain of N residues can be written as
Z = (0 1) [ ∏_{j=1}^{N} W_j ] (1 1)ᵀ
where the 2×2 transfer matrix W_j of the jth residue equals the matrix of statistical weights for the state transitions
W_j = | s    1 |
      | σs   1 |
The row–column entry in the transfer matrix equals the statistical weight for making a transition from state row in residue j − 1 to state column in residue j. The two states here are helix (the first) and coil (the second). Thus, the upper-left entry s is the statistical weight for transitioning from helix to helix, whereas the lower-left entry σs is that for transitioning from coil to helix.
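A minimal numerical sketch of this transfer-matrix solution (residue-independent σ and s assumed; parameter values illustrative) computes the partition function and the fractional helicity θ = (s/N) d ln Z/ds via a centred finite difference:

import numpy as np

def partition_function(s, sigma, N):
    """Z = (0 1) W^N (1 1)^T with W = [[s, 1], [sigma*s, 1]]."""
    W = np.array([[s, 1.0], [sigma * s, 1.0]])
    v = np.array([0.0, 1.0])          # chain starts in the coil state
    for _ in range(N):
        v = v @ W
    return v.sum()

def fractional_helicity(s, sigma, N, ds=1e-6):
    """theta = (s / N) d ln Z / d s, by centred finite difference."""
    dlnZ = (np.log(partition_function(s + ds, sigma, N))
            - np.log(partition_function(s - ds, sigma, N))) / (2 * ds)
    return s * dlnZ / N

for s in (0.8, 1.0, 1.2):             # sweep through the transition region
    print(s, round(fractional_helicity(s, sigma=1e-4, N=100), 3))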
See also
Alpha helix
Lifson–Roig model
Random coil
Statistical mechanics
References
Polymer physics
Protein structure
Statistical mechanics
Thermodynamic models | Zimm–Bragg model | ["Physics", "Chemistry", "Materials_science"] | 846 | ["Polymer physics", "Thermodynamic models", "Thermodynamics", "Polymer chemistry", "Structural biology", "Statistical mechanics", "Protein structure"] |