id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
65,345 | https://en.wikipedia.org/wiki/Leat | A leat (also lete or leet, or millstream) is the name, common in the south and west of England and in Wales, for an artificial watercourse or aqueduct dug into the ground, especially one supplying water to a watermill or its mill pond. Other common uses for leats include delivery of water for hydraulic mining and mineral concentration, for irrigation, to serve a dye works or other industrial plant, and provision of drinking water to a farm or household, or as a catchment cut-off to improve the yield of a reservoir.
According to the Oxford English Dictionary, leat is cognate with let in the sense of "allow to pass through". Other names for the same thing include fleam (probably a leat supplying water to a mill that did not have a millpool). In parts of northern England, for example around Sheffield, the equivalent word is goit. In southern England, a leat used to supply water for water-meadow irrigation is often called a carrier, top carrier, or main.
Design and functions
Water mills
Leats generally start some distance (a few hundred metres/yards, or perhaps several miles/kilometres) above the mill or other destination, where an offtake or sluice gate diverts a proportion of the water from a river or stream. A weir in the source stream often serves to provide a reservoir of water adequate for diversion. The leat then runs along the edge or side of the valley, at a shallower slope than the main stream. The gradient, together with the quality of the wetted surface of the leat, determines the flow rate. The flow rate may be calculated using the Manning formula. By the time it arrives at the water mill the difference in levels between the leat and the main stream is great enough to provide a useful head of water – several metres (perhaps 5 to 15 feet) for a watermill, or a metre or less (perhaps one to four feet) for the controlled irrigation of a water-meadow.
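As a rough illustration of how gradient and channel surface determine flow, the following minimal Python sketch applies the Manning formula to a hypothetical earth-cut leat; the channel dimensions and the roughness coefficient n are illustrative assumptions, not measurements from any real leat.

```python
# Manning formula estimate of flow in an open channel (SI form):
# v = (1/n) * R^(2/3) * S^(1/2). All channel values are illustrative.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity in m/s."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical earth-cut leat: 1.0 m wide, 0.3 m deep flow,
# falling 1 m per km (slope 0.001), roughness n ~ 0.025 for an earth channel.
width, depth, slope, n = 1.0, 0.3, 0.001, 0.025
area = width * depth                  # flow cross-section, m^2
wetted_perimeter = width + 2 * depth  # m
radius = area / wetted_perimeter      # hydraulic radius, m

v = manning_velocity(n, radius, slope)
print(f"velocity ~ {v:.2f} m/s, discharge ~ {area * v:.3f} m^3/s")
```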
Water supply
Leats are used to increase the yield of a reservoir by trapping streams in nearby catchments by means of a contour leat. This captures part or all of the stream flow and transports it along the contour to the reservoir. Such leats are common around reservoirs in the uplands of Wales.
Mining
Leats were built to work lead, tin and silver ores in mining areas of Wales, Cornwall, Devon, the Pennines and the Leadhills/Wanlockhead area of Southern Scotland from the 17th century onwards. They were used to supply water for hushing mineral deposits, washing ore and powering mills.
Use in Roman times
Leats were also used extensively by the Romans, and can still be seen at many sites, such as the Dolaucothi goldmines. They used the aqueducts to prospect for ores by sluicing away the overburden of soil to reveal the bedrock, in a method known as hushing. They could then attack the ore veins by fire-setting, quenching the heated rock with water from a tank above the workings, and removing the debris with waves of water, a method still used in hydraulic mining. The water supply could then be used for washing the ore after crushing by simple machines, also driven by water.
The Romans also used them for supplying water to the bath-houses or thermae and to drive vertical water-wheels.
Dartmoor
There are many leats on Dartmoor, mostly constructed to provide power for mining activities, although some were also sources of drinking water. The courses of many Dartmoor leats may still be followed. Many such leats on the moor are marked on the 1:50000 and 1:25000 Ordnance Survey maps, such as that serving the now-defunct Vitifer mine near the Warren House Inn. Notable leats include:
Drake's Leat, constructed in 1591 under the management of Sir Francis Drake, as an agent of the Corporation of Plymouth, to carry water from Dartmoor to Plymouth.
Devonport Leat constructed in the late 18th century to carry water to the expanding naval dockyard at Devonport (now a part of Plymouth).
See also
Acequia
Aqueduct (watercourse)
Flume
Kunstgraben
Mill race
Penstock
Roman aqueduct
Roman engineering
Roman mining
References
External links
Aqueducts in the United Kingdom
History of mining
Hydraulic engineering
Water management in mining | Leat | [
"Physics",
"Engineering",
"Environmental_science"
] | 901 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
65,424 | https://en.wikipedia.org/wiki/Hydraulics | Hydraulics is a technology and applied science using engineering, chemistry, and other sciences involving the mechanical properties and use of liquids. At a very basic level, hydraulics is the liquid counterpart of pneumatics, which concerns gases. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on applied engineering using the properties of fluids. In its fluid power applications, hydraulics is used for the generation, control, and transmission of power by the use of pressurized liquids. Hydraulic topics span parts of science and most branches of engineering, covering concepts such as pipe flow, dam design, fluidics, and fluid control circuitry. The principles of hydraulics also operate naturally in the human body, in the vascular system and erectile tissue.
Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurring in rivers, canals, lakes, estuaries, and seas. Its sub-field open-channel flow studies the flow in open channels.
History
Ancient and medieval eras
Early uses of water power date back to Mesopotamia and ancient Egypt, where irrigation has been used since the 6th millennium BC and water clocks had been used since the early 2nd millennium BC. Other early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient Central Asia.
Persian Empire and Urartu
In the Persian Empire or previous entities in Persia, the Persians constructed an intricate system of water mills, canals and dams known as the Shushtar Historical Hydraulic System. The project, commenced by Achaemenid king Darius the Great and finished by a group of Roman engineers captured by Sassanian king Shapur I, has been referred to by UNESCO as "a masterpiece of creative genius". They were also the inventors of the Qanat, an underground aqueduct, around the 9th century BC. Several of Iran's large, ancient gardens were irrigated thanks to Qanats.
The Qanat spread to neighboring areas, including the Armenian highlands. There, starting in the early 8th century BC, the Kingdom of Urartu undertook significant hydraulic works, such as the Menua canal.
The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BC, in the regions of Iraq, Iran, and Egypt.
China
In ancient China there was Sunshu Ao (6th century BC), Ximen Bao (5th century BC), Du Shi (circa 31 AD), Zhang Heng (78 – 139 AD), and Ma Jun (200 – 265 AD), while medieval China had Su Song (1020 – 1101 AD) and Shen Kuo (1031–1095). Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron. Zhang Heng was the first to employ hydraulics to provide motive power in rotating an armillary sphere for astronomical observation.
Sri Lanka
In ancient Sri Lanka, hydraulics were widely used in the ancient kingdoms of Anuradhapura and Polonnaruwa. The discovery of the principle of the valve tower, or valve pit (Bisokotuwa in Sinhalese), for regulating the escape of water reflects engineering ingenuity dating back more than 2,000 years. By the first century AD, several large-scale irrigation works had been completed. Macro- and micro-hydraulics providing for domestic, horticultural and agricultural needs, surface drainage and erosion control, ornamental and recreational watercourses, retaining structures, and cooling systems were in place at Sigiriya, Sri Lanka. The massive rock at the site includes cisterns for collecting water. Large ancient reservoirs of Sri Lanka include Kalawewa (King Dhatusena), Parakrama Samudra (King Parakrama Bahu), Tisa Wewa (King Dutugamunu), and Minneriya (King Mahasen).
Greco-Roman world
In Ancient Greece, the Greeks constructed sophisticated water and hydraulic power systems. An example is the construction by Eupalinos, under a public contract, of a water channel for Samos, the Tunnel of Eupalinos. An early example of the use of a hydraulic wheel, probably the earliest in Europe, is the Perachora wheel (3rd century BC).
In Greco-Roman Egypt, the construction of the first hydraulic machine automata by Ctesibius (flourished c. 270 BC) and Hero of Alexandria (c. 10 – 80 AD) is notable. Hero describes several working machines using hydraulic power, such as the force pump, which is known from many Roman sites as having been used for raising water and in fire engines.
In the Roman Empire, different hydraulic applications were developed, including public water supplies, innumerable aqueducts, power using watermills and hydraulic mining. They were among the first to make use of the siphon to carry water across valleys, and used hushing on a large scale to prospect for and then extract metal ores. They used lead widely in plumbing systems for domestic and public supply, such as feeding thermae.
Hydraulic mining was used in the gold-fields of northern Spain, which was conquered by Augustus in 25 BC. The alluvial gold-mine of Las Medulas was one of the largest of their mines. At least seven long aqueducts worked it, and the water streams were used to erode the soft deposits, and then wash the tailings for the valuable gold content.
Arabic-Islamic world
In the Muslim world during the Islamic Golden Age and Arab Agricultural Revolution (8th–13th centuries), engineers made wide use of hydropower as well as early uses of tidal power, and large hydraulic factory complexes. A variety of water-powered industrial mills were used in the Islamic world, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines, employed gears in watermills and water-raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines.
Al-Jazari (1136–1206) described designs for 50 devices, many of them water-powered, in his book, The Book of Knowledge of Ingenious Mechanical Devices, including water clocks, a device to serve wine, and five devices to lift water from rivers or pools. These include an endless belt with jugs attached and a reciprocating device with hinged valves.
The earliest programmable machines were water-powered devices developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated water-powered flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented water-powered programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, which could be made to play different rhythms and different drum patterns.
Modern era (c. 1600–1870)
Benedetto Castelli and Italian Hydraulics
In 1619 Benedetto Castelli, a student of Galileo Galilei, published the book Della Misura dell'Acque Correnti or "On the Measurement of Running Waters," one of the foundations of modern hydrodynamics. He served as a chief consultant to the Pope on hydraulic projects, i.e., management of rivers in the Papal States, beginning in 1626.
An illustrated catalog published in 2022 presents the science and engineering of water in Italy from 1500 to 1800, as documented in books and manuscripts.
Blaise Pascal
Blaise Pascal (1623–1662) studied fluid hydrodynamics and hydrostatics, centered on the principles of hydraulic fluids. His work on the theory behind hydraulics led to his invention of the hydraulic press, which multiplies a smaller force acting on a smaller area into a larger force acting over a larger area, both transmitted through the same pressure (or the same change of pressure) at the two locations. Pascal's law or principle states that for an incompressible fluid at rest, the difference in pressure is proportional to the difference in height, and this difference remains the same whether or not the overall pressure of the fluid is changed by applying an external force. This implies that increasing the pressure at any point in a confined fluid produces an equal increase at every other point in the container; that is, any change in pressure applied at any point of the liquid is transmitted undiminished throughout the fluid.
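As a minimal numerical sketch of Pascal's principle at work in a hydraulic press (the piston areas and input force below are illustrative assumptions):

```python
# Pascal's principle in a hydraulic press: pressure is transmitted
# undiminished, so F1 / A1 = F2 / A2. All values are illustrative.

input_force = 100.0   # N, applied to the small piston
small_area = 0.001    # m^2 (10 cm^2)
large_area = 0.1      # m^2 (1000 cm^2)

pressure = input_force / small_area   # 100,000 Pa, the same at both pistons
output_force = pressure * large_area  # 10,000 N: a hundredfold multiplication
print(f"output force = {output_force:.0f} N")
```

The trade-off is distance: the large piston moves only a hundredth as far as the small one, so no work is gained.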
Jean Léonard Marie Poiseuille
A French physician, Poiseuille (1797–1869) researched the flow of blood through the body and discovered an important law relating the rate of flow to the diameter of the tube in which the flow occurs.
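The law in question is now known as the Hagen–Poiseuille equation, Q = πΔP r⁴ / (8μL). The short sketch below shows its hallmark fourth-power dependence on tube radius; the pressure drop, viscosity, and dimensions are illustrative values only.

```python
import math

# Hagen-Poiseuille flow through a cylindrical tube:
# Q = pi * dP * r^4 / (8 * mu * L), SI units throughout.

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Volumetric flow rate in m^3/s."""
    return math.pi * delta_p * radius ** 4 / (8 * viscosity * length)

# Halving the radius cuts the flow 16-fold at the same pressure drop:
q_wide = poiseuille_flow(delta_p=100.0, radius=2e-3, viscosity=3.5e-3, length=0.1)
q_narrow = poiseuille_flow(delta_p=100.0, radius=1e-3, viscosity=3.5e-3, length=0.1)
print(q_wide / q_narrow)  # -> 16.0
```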
In the UK
Several cities developed citywide hydraulic power networks in the 19th century to operate machinery such as lifts, cranes, capstans and the like. Joseph Bramah (1748–1814) was an early innovator, and William Armstrong (1810–1900) perfected the apparatus for power delivery on an industrial scale. In London, the London Hydraulic Power Company was a major supplier, its pipes serving large parts of the West End of London, the City and the Docks, but there were also schemes restricted to single enterprises such as docks and railway goods yards.
Hydraulic models
After students understand the basic principles of hydraulics, some teachers use a hydraulic analogy to help students learn other things.
For example:
The MONIAC Computer uses water flowing through hydraulic components to help students learn about economics.
The thermal-hydraulic analogy uses hydraulic principles to help students learn about thermal circuits.
The electronic–hydraulic analogy uses hydraulic principles to help students learn about electronics.
The conservation of mass requirement combined with fluid compressibility yields a fundamental relationship between pressure, fluid flow, and volumetric expansion:

$$\frac{dP}{dt} = \frac{\beta}{V}\left(Q_{\text{net,in}} - \frac{dV}{dt}\right)$$

where $\beta$ is the bulk modulus of the fluid and $V$ the contained volume. Assuming an incompressible fluid, or a "very large" ratio of bulk modulus to contained fluid volume, a finite rate of pressure rise requires that any net flow into the contained fluid volume create a volumetric change.
See also
Affinity laws
Bernoulli's principle
Fluid power
Hydraulic brake
Hydraulic cylinder
Hydraulic engineering
Hydraulic machinery
Hydraulic mining
Hydrology
International Association for Hydro-Environment Engineering and Research
Miniature hydraulics
Open-channel flow
Pneumatics
Notes
References
External links
Pascal's Principle and Hydraulics (archived copy)
The principle of hydraulics
IAHR media library Web resource of photos, animation & video
MIT hydraulics course notes
Ancient inventions
Hellenistic engineering
Hydraulic engineering
Mechanical engineering | Hydraulics | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,228 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Mechanical engineering",
"Hydraulic engineering",
"Fluid dynamics"
] |
65,465 | https://en.wikipedia.org/wiki/Petroleum%20engineering | Petroleum engineering is a field of engineering concerned with the activities related to the production of hydrocarbons, which can be either crude oil or natural gas. Exploration and production are deemed to fall within the upstream sector of the oil and gas industry. Exploration, by earth scientists, and petroleum engineering are the oil and gas industry's two main subsurface disciplines, which focus on maximizing economic recovery of hydrocarbons from subsurface reservoirs. Petroleum geology and geophysics focus on provision of a static description of the hydrocarbon reservoir rock, while petroleum engineering focuses on estimation of the recoverable volume of this resource using a detailed understanding of the physical behavior of oil, water and gas within porous rock at very high pressure.
The combined efforts of geologists and petroleum engineers throughout the life of a hydrocarbon accumulation determine the way in which a reservoir is developed and depleted, and usually they have the highest impact on field economics. Petroleum engineering requires a good knowledge of many other related disciplines, such as geophysics, petroleum geology, formation evaluation (well logging), drilling, economics, reservoir simulation, reservoir engineering, well engineering, artificial lift systems, completions and petroleum production engineering.
Recruitment to the industry has historically been from the disciplines of physics, mechanical engineering, chemical engineering and mining engineering. Subsequent development training has usually been done within oil companies.
Overview
The profession got its start in 1914 within the American Institute of Mining, Metallurgical and Petroleum Engineers (AIME). The first Petroleum Engineering degree was conferred in 1915 by the University of Pittsburgh. Since then, the profession has evolved to solve increasingly difficult situations. Improvements in computer modeling, materials and the application of statistics, probability analysis, and new technologies like horizontal drilling and enhanced oil recovery, have drastically improved the toolbox of the petroleum engineer in recent decades. Automation, sensors, and robots are being used to propel the industry to more efficiency and safety.
Petroleum engineers must usually contend with deep-water, arctic and desert conditions. High-temperature and high-pressure (HTHP) environments have become increasingly commonplace in operations and require the petroleum engineer to be savvy in topics as wide-ranging as thermo-hydraulics, geomechanics, and intelligent systems.
The Society of Petroleum Engineers (SPE) is the largest professional society for petroleum engineers and publishes much technical information and other resources to support the oil and gas industry. It provides free online education (webinars), mentoring, and access to SPE Connect, an exclusive platform for members to discuss technical issues, best practices, and other topics. SPE members also are able to access the SPE Competency Management Tool to find knowledge and skill strengths and opportunities for growth. SPE publishes peer-reviewed journals, books, and magazines. SPE members receive a complimentary subscription to the Journal of Petroleum Technology and discounts on SPE's other publications. SPE members also receive discounts on registration fees for SPE organized events and training courses. SPE provides scholarships and fellowships to undergraduate and graduate students.
According to the United States Department of Labor's Bureau of Labor Statistics, petroleum engineers are required to have a bachelor's degree in engineering, generally a degree focused on petroleum engineering is preferred, but degrees in mechanical, chemical, and civil engineering are satisfactory as well. Petroleum engineering education is available at many universities in the United States and throughout the world - primarily in oil producing regions. U.S. News & World Report maintains a list of the Best Undergraduate Petroleum Engineering Programs. SPE and some private companies offer training courses. Some oil companies have considerable in-house petroleum engineering training classes.
Petroleum engineering salaries
Petroleum engineering has historically been one of the highest-paid engineering disciplines, although there is a tendency for mass layoffs when oil prices decline and waves of hiring as prices rise. In 2020, the United States Department of Labor's Bureau of Labor Statistics reported the median pay for petroleum engineers was US$137,330, or roughly $66.02 per hour. The same summary projects there will be 3% job growth in this field from 2019 to 2029.
SPE annually conducts a salary survey. In 2017, SPE reported that the average SPE professional member earned US$194,649 (including salary and bonus). The average base pay reported in 2016 was US$143,006. Base pay and other compensation were on average highest in the United States, where the base pay averaged US$174,283. Drilling and production engineers tended to have the best base pay: US$160,026 for drilling engineers and US$158,964 for production engineers. Average base pay ranged from US$96,382 to US$174,283. There are still significant gender pay gaps, within about 5% of the US average pay gap, which was an 18% difference in 2017.
Also in 2016, U.S. News & World Report named petroleum engineering the top college major in terms of highest median annual wages of college-educated workers (age 25–59). The 2010 National Association of Colleges and Employers survey showed petroleum engineers as the highest paid 2010 graduates, at an average annual salary of $125,220. For individuals with experience, salaries can range from $170,000 to $260,000. They make an average of $112,000 a year and about $53.75 per hour. In a 2007 article, Forbes.com reported that petroleum engineering was the 24th best paying job in the United States.
Sub-disciplines
Petroleum engineers divide themselves into several types:
Reservoir engineers work to optimize production of oil and gas via proper placement, production rates, and enhanced oil recovery techniques.
Drilling engineers manage the technical aspects of drilling exploratory, production and injection wells.
Drilling fluid engineers (mud engineers, most often referred to as the "mud man") work on an oil or gas well drilling rig and are responsible for ensuring that the properties of the drilling fluid, also known as drilling mud, are within designed specifications.
Completion engineers (also known as subsurface engineers) work to design and oversee the implementation of techniques aimed at ensuring wells are drilled stably and with the maximum opportunity for oil and gas production.
Production engineers manage the interface between the reservoir and the well, including perforations, sand control, downhole flow control, and downhole monitoring equipment; evaluate artificial lift methods; and select surface equipment that separates the produced fluids (oil, natural gas, and water).
Petrophysicists gather information about subsurface properties to build wellbore stability models and study rock properties.
Education
Petroleum Engineering, like most forms of engineering, requires a strong foundation in physics, chemistry, and mathematics. Other fields pertinent to petroleum engineering include geology, formation evaluation, fluid flow in porous media, well drilling technology, economics, geostatistics, etc.
Petroleum Geostatistics
Geostatistics as applied to petroleum engineering uses statistical analysis to characterize reservoirs and create flow simulations that quantify uncertainties of the location of oil and gas.
Petroleum Geology
Petroleum geology is an interdisciplinary field composed of geophysics, geochemistry, and paleontology. The main focus of petroleum geology is the exploration and appraisal of reservoirs containing hydrocarbons via technical forms of analysis.
Well Drilling Technology
Well drilling technology is primarily the focus for drilling engineers. The two forms of well drilling are percussion and rotary drilling, rotary being the most common of the two. An important aspect of drilling is the drill bit, which creates a borehole of approximately three and a half to thirty inches in diameter. The three classes of drill bits, roller cone, fixed cutter, and hybrid, each use teeth to break up the rock. To optimize drilling efficiency and cost, drilling engineers make use of drilling simulators that allow them to identify drilling conditions. Drilling technologies including horizontal drilling and directional drilling have been developed to obtain hydrocarbons profitably from impermeable and coal-bed methane accumulations.
Professional associations
Society of Petroleum Engineers
American Institute of Mining, Metallurgical and Petroleum Engineers
See also
Petroleum industry
Petroleum geology
Seismic to simulation
Society of Petroleum Engineers
SPE Certified Petroleum Professional
References
Bibliography
External links
The Society of Petroleum Engineers
Schlumberger Oilfield Glossary: An Online Glossary of Oilfield Terms
Society of Petroleum Evaluation Engineers
Petroleum Engineering Schools
What is Forensic Petroleum Engineering?
Petroleum Engineering - Best Petroleum Engineering Schools & Colleges, Jobs in USA
About Petroleum Engineering
Career Opportunities in Petroleum Engineering
oil and gas online certification courses
Engineering disciplines | Petroleum engineering | [
"Engineering"
] | 1,729 | [
"Petroleum engineering",
"Energy engineering",
"nan"
] |
65,622 | https://en.wikipedia.org/wiki/Clinical%20chemistry | Clinical chemistry (also known as chemical pathology, clinical biochemistry or medical biochemistry) is a division in medical laboratory sciences focusing on qualitative tests of important compounds, referred to as analytes or markers, in bodily fluids and tissues using analytical techniques and specialized instruments. This interdisciplinary field includes knowledge from medicine, biology, chemistry, biomedical engineering, informatics, and an applied form of biochemistry (not to be confused with medicinal chemistry, which involves basic research for drug development).
The discipline originated in the late 19th century with the use of simple chemical reaction tests for various components of blood and urine. Many decades later, clinical chemists use automated analyzers in many clinical laboratories. These instruments perform experimental techniques ranging from pipetting specimens and specimen labelling to advanced measurement techniques such as spectrometry, chromatography, photometry, potentiometry, etc. These instruments provide different results that help identify uncommon analytes, changes in light and electronic voltage properties of naturally-occurring analytes such as enzymes, ions, electrolytes, and their concentrations, all of which are important for diagnosing diseases.
Blood and urine are the most common test specimens clinical chemists or medical laboratory scientists collect for routine clinical tests, with a main focus on serum and plasma in blood. There are now many blood tests and clinical urine tests with extensive diagnostic capabilities. Some clinical tests require clinical chemists to process the specimen before testing. Clinical chemists and medical laboratory scientists serve as the interface between the laboratory and clinical practice, providing suggestions to physicians on which test panel to order and interpreting any irregularities in test results that reflect on the patient's health status and organ system functionality. This allows healthcare providers to make a more accurate evaluation of a patient's health, to diagnose disease, to predict the progression of a disease (prognosis), and to screen for disease and monitor a treatment's efficiency in a timely manner. The type of test required dictates what type of sample is used.
Common Analytes
Some common analytes that clinical chemistry tests analyze include:
Electrolytes
Sodium
Potassium
Chloride
Bicarbonate
Renal (kidney) function tests
Creatinine
Blood urea nitrogen
Liver function tests
Total protein (serum)
Albumin
Globulins
A/G ratio (albumin-globulin)
Protein electrophoresis
Urine protein
Bilirubin; direct; indirect; total
Aspartate transaminase (AST)
Alanine transaminase (ALT)
Gamma-glutamyl transpeptidase (GGT)
Alkaline phosphatase (ALP)
Cardiac markers
H-FABP
Troponin
Myoglobin
CK-MB
B-type natriuretic peptide (BNP)
Minerals
Calcium
Magnesium
Phosphate
Potassium
Blood disorders
Iron
Transferrin
TIBC
Vitamin B12
Vitamin D
Folic acid
Miscellaneous
Glucose
C-reactive protein
Glycated hemoglobin (HbA1c)
Uric acid
Arterial blood gases ([H+], PCO2, PO2)
Adrenocorticotropic hormone (ACTH)
Toxicological screening and forensic toxicology (drugs and toxins)
Neuron-specific enolase (NSE)
fecal occult blood test (FOBT)
Panel tests
A physician may order many laboratory tests on one specimen, referred to as a test panel, when a single test cannot provide sufficient information to make a swift and accurate diagnosis and treatment plan. A test panel is a group of tests a clinical chemist performs on one sample to look for changes in many analytes that may be indicative of specific medical concerns or the health status of an organ system. Panel tests thus provide a more extensive evaluation of a patient's health, have higher predictive value for confirming or ruling out a disease, and are quick and cost-effective.
Metabolic Panel
A Metabolic Panel (MP) is a routine group of blood tests commonly used for health screenings, disease detection, and monitoring vital signs of hospitalized patients with specific medical conditions. An MP analyzes common analytes in the blood to assess the functions of the kidneys and liver, as well as electrolyte and acid-base balance. There are two types of MP: the Basic Metabolic Panel (BMP) and the Comprehensive Metabolic Panel (CMP).
Basic Metabolic Panel
BMP is a panel of tests that measures eight analytes in the blood's fluid portion (plasma). The results of the BMP provide valuable information about a patient's kidney function, blood sugar level, electrolyte levels, and the acid-base balance. Abnormal changes in one or more of these analytes can be a sign of serious health issues:
Sodium, Potassium, Chloride, and Carbon Dioxide: These electrolytes carry electrical charges and help regulate the body's water balance, the acid-base balance of the blood, and kidney function.
Calcium: This charged electrolyte is essential for proper nerve and muscle function, blood clotting, and bone health. Changes in the calcium level can be signs of bone disease, muscle cramps or spasms, thyroid disease, or other conditions.
Glucose: This measures the blood sugar level; glucose is a crucial energy source for the body and brain. High glucose levels can be a sign of diabetes or insulin resistance.
Urea and Creatinine: These are waste products that the kidneys filter out of the blood. Urea measurements help detect and treat kidney failure and related metabolic disorders, whereas creatinine measurements give information on kidney health, help track renal dialysis treatment, and are used to monitor hospitalized patients on diuretics.
Comprehensive Metabolic Panel
The Comprehensive Metabolic Panel (CMP) comprises 14 tests: those in the BMP plus total protein, albumin, alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and bilirubin.
Specimen Processing
For blood tests, clinical chemists must process the specimen to obtain plasma and serum before testing for targeted analytes. This is most easily done by centrifugation, which packs the denser blood cells and platelets to the bottom of the centrifuge tube, leaving the liquid serum fraction resting above the packed cells. This initial step before analysis has recently been included in instruments that operate on the "integrated system" principle. Plasma is obtained by centrifugation before clotting occurs.
Instruments
Most current medical laboratories now have highly automated analyzers to accommodate the high workload typical of a hospital laboratory, and accept samples for up to about 700 different kinds of tests. Even the largest of laboratories rarely do all these tests themselves, and some must be referred to other labs. Tests performed are closely monitored and quality controlled.
Specialties
The large array of tests can be categorised into sub-specialities of:
General or routine chemistry – commonly ordered blood chemistries (e.g., liver and kidney function tests).
Special chemistry – elaborate techniques such as electrophoresis, and manual testing methods.
Clinical endocrinology – the study of hormones, and diagnosis of endocrine disorders.
Toxicology – the study of drugs of abuse and other chemicals.
Therapeutic Drug Monitoring – measurement of therapeutic medication levels to optimize dosage.
Urinalysis – chemical analysis of urine for a wide array of diseases, along with other fluids such as CSF and effusions
Fecal analysis – mostly for detection of gastrointestinal disorders.
See also
Reference ranges for common blood tests
Medical technologist
Clinical Biochemistry (journal)
Notes and references
Bibliography
External links
American Association of Clinical Chemistry
Association for Mass Spectrometry: Applications to the Clinical Lab (MSACL)
Clinical pathology
Pathology
Laboratory healthcare occupations | Clinical chemistry | [
"Chemistry",
"Biology"
] | 1,584 | [
"Biochemistry",
"Chemical pathology",
"Pathology"
] |
65,623 | https://en.wikipedia.org/wiki/Creatinine | Creatinine is a breakdown product of creatine phosphate from muscle and protein metabolism. It is released at a constant rate by the body (depending on muscle mass).
Biological relevance
Serum creatinine (a blood measurement) is an important indicator of kidney function, because it is an easily measured byproduct of muscle metabolism that is excreted unchanged by the kidneys. Creatinine itself is produced via a biological system involving creatine, phosphocreatine (also known as creatine phosphate), and adenosine triphosphate (ATP, the body's immediate energy supply).
Creatine is synthesized primarily in the liver by methylation of glycocyamine (guanidino acetate, synthesized in the kidney from the amino acids arginine and glycine) by S-adenosyl methionine. It is then transported in the blood to other organs, such as muscle and brain, where it is phosphorylated to phosphocreatine, a high-energy compound. Creatine conversion to phosphocreatine is catalysed by creatine kinase; spontaneous formation of creatinine occurs during the reaction.
Creatinine is removed from the blood chiefly by the kidneys, primarily by glomerular filtration, but also by proximal tubular secretion. Little or no tubular reabsorption of creatinine occurs. If filtration in the kidney is deficient, blood creatinine concentrations rise. Therefore, creatinine concentrations in blood and urine may be used to calculate the creatinine clearance (CrCl), which correlates approximately with the glomerular filtration rate (GFR). Blood creatinine concentrations may also be used alone to calculate the estimated GFR (eGFR).
The GFR is clinically important as a measurement of kidney function. However, in cases of severe kidney dysfunction the CrCl rate will overestimate the GFR, because hypersecretion of creatinine by the proximal renal tubules will account for a larger fraction of the total creatinine cleared. Ketoacids, cimetidine, and trimethoprim reduce creatinine tubular secretion and therefore increase the accuracy of the GFR estimate, in particular in severe kidney dysfunction. (In the absence of secretion, creatinine behaves like inulin).
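The clearance calculation itself follows directly from the definition: clearance is the urinary excretion rate of creatinine divided by its plasma concentration. Below is a minimal Python sketch of this standard formula; the patient values are illustrative only, not clinical guidance.

```python
# Creatinine clearance from a timed urine collection:
# CrCl = (urine creatinine x urine volume) / (plasma creatinine x time).

def creatinine_clearance(urine_cr_mg_dl, urine_volume_ml, plasma_cr_mg_dl, minutes):
    """Clearance in mL/min; both concentrations must use the same unit."""
    return (urine_cr_mg_dl * urine_volume_ml) / (plasma_cr_mg_dl * minutes)

# 24-hour collection (1440 min): 1.5 L of urine at 100 mg/dL creatinine,
# serum creatinine 1.0 mg/dL -> roughly 104 mL/min
print(round(creatinine_clearance(100.0, 1500.0, 1.0, 1440.0), 1))
```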
An alternative estimation of kidney function can be made when interpreting the blood plasma concentration of creatinine along with that of urea. BUN-to-creatinine ratio (the ratio of blood urea nitrogen to creatinine) can indicate other problems besides those intrinsic to the kidney; for example, a urea concentration raised out of proportion to the creatinine may indicate a prerenal problem, such as volume depletion.
Counterintuitively, women have higher muscle protein synthesis and higher muscle protein turnover across their life span, supporting the observation of higher creatinine production in women than in men and putting into question GFR algorithms that do not distinguish for sex. As HDL supports muscle anabolism, higher muscle protein turnover links increased creatine to the generally higher serum HDL in women compared with men, and to HDL-associated benefits such as reduced incidence of cardiovascular complications and reduced COVID-19 severity.
Antibacterial and potential immunosuppressive properties
Studies suggest that creatinine can be effective in killing bacteria of many species, both Gram positive and Gram negative, as well as diverse antibiotic-resistant bacterial strains. Creatinine appears not to affect the growth of fungi and yeasts; this can be used to isolate slower growing fungi free from the normal bacterial populations found in most environmental samples. The mechanism by which creatinine kills bacteria is not currently known. Some reports also suggest that creatinine may have immunosuppressive properties.
Diagnostic use
Serum creatinine is the most commonly used indicator (although not a direct measure) of renal function. A raised creatinine is not always representative of a true reduction in GFR. A high reading may be due to increased production of creatinine not due to reduced kidney function, to interference with the assay, or to reduced tubular secretion of creatinine. An increase in serum creatinine can be due to increased ingestion of cooked meat (which contains creatinine converted from creatine by the heat from cooking) or excessive intake of protein and creatine supplements, taken to enhance athletic performance. Intense exercise can increase creatinine by increasing muscle breakdown.
Dehydration secondary to an inflammatory process with fever may cause a false increase in creatinine concentrations not related to actual kidney impairment, as in some cases associated with cholecystitis. Several medications and chromogens can interfere with the chemical assay. Creatinine secretion by the renal tubules can be blocked by some medications, again increasing measured creatinine.
Serum creatinine
Diagnostic serum creatinine studies are used to determine renal function. The reference interval is 0.6–1.3 mg/dL (53–115 μmol/L). It is simple to measure serum creatinine, and it is the most commonly used indicator of renal function.
A rise in blood creatinine concentration is a late marker, observed only with marked damage to functioning nephrons. The test is therefore unsuitable for detecting early-stage kidney disease. A better estimate of kidney function is given by calculating the estimated glomerular filtration rate (eGFR). eGFR can be calculated without a 24-hour urine collection, using serum creatinine concentration and some or all of the following variables: sex, age, and weight, as suggested by the American Diabetes Association. Many laboratories will automatically calculate eGFR when a creatinine test is requested. Algorithms to estimate GFR from creatinine concentration and other parameters are discussed in the renal function article. Unfortunately, the MDRD Study equation was developed in people with chronic kidney disease, and its major limitations are imprecision and systematic underestimation of measured GFR (bias) at higher/normal values.
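For illustration, here is a sketch of the 4-variable MDRD Study equation mentioned above; the coefficients are the commonly published IDMS-traceable values, and the example inputs are hypothetical.

```python
def mdrd_egfr(serum_cr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD Study eGFR in mL/min/1.73 m^2.

    Uses the commonly published IDMS-traceable coefficients;
    for illustration only, not clinical guidance.
    """
    egfr = 175.0 * serum_cr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# A 50-year-old woman with serum creatinine 1.0 mg/dL -> ~59 mL/min/1.73 m^2
print(round(mdrd_egfr(1.0, 50, female=True), 1))
```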
A concern as of late 2010 relates to the adoption of a new analytical method, and the possible effect this may have in clinical medicine. Most clinical laboratories now align their creatinine measurements against a new standardized isotope dilution mass spectrometry (IDMS) method to measure serum creatinine. IDMS appears to give lower values than older methods when the serum creatinine values are relatively low, for example 0.7 mg/dL. The IDMS method would result in comparative overestimation of the corresponding calculated GFR in some patients with normal renal function. A few medicines are dosed even in normal renal function using that derived value of GFR. The dose, unless further modified, could then be higher than desired, potentially causing increased drug-related toxicity. To counter the effect of changing to IDMS, new FDA guidelines have suggested limiting doses of carboplatin, a chemotherapy drug, to specified maxima.
A 2009 Japanese study found a lower serum creatinine concentration to be associated with an increased risk for the development of type 2 diabetes in Japanese men.
Urine creatinine
Males produce approximately 150 μmol to 200 μmol of creatinine per kilogram of body weight per 24 h, while females produce approximately 100 μmol/kg/24 h to 150 μmol/kg/24 h. In normal circumstances, all the creatinine produced is excreted in the urine.
Creatinine concentration is checked during standard urine drug tests. An expected creatinine concentration indicates that the test sample is undiluted, whereas low amounts of creatinine in the urine indicate either a manipulated test or low initial baseline creatinine concentrations. Test samples considered manipulated due to low creatinine are not tested, and the test is sometimes considered failed.
Interpretation
In the United States and in most European countries creatinine is usually reported in mg/dL, whereas in Canada, Australia, and a few European countries, such as the UK, μmol/L is the usual unit. One mg/dL of creatinine equals 88.4 μmol/L.
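Since the two conventions coexist, unit conversion is routine; a minimal sketch using the factor stated above:

```python
# Creatinine unit conversion: 1 mg/dL = 88.4 umol/L.
MG_DL_TO_UMOL_L = 88.4

def mg_dl_to_umol_l(value_mg_dl):
    return value_mg_dl * MG_DL_TO_UMOL_L

def umol_l_to_mg_dl(value_umol_l):
    return value_umol_l / MG_DL_TO_UMOL_L

print(mg_dl_to_umol_l(1.2))            # -> 106.08 umol/L
print(round(umol_l_to_mg_dl(90.0), 2)) # -> 1.02 mg/dL
```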
The typical human reference ranges for serum creatinine are 0.5 mg/dL to 1.0 mg/dL (about 45 μmol/L to 90 μmol/L) for women and 0.7 mg/dL to 1.2 mg/dL (60 μmol/L to 110 μmol/L) for men. The significance of a single creatinine value must be interpreted in light of the patient's muscle mass. Patients with greater muscle mass have higher creatinine concentrations.
The trend of serum creatinine concentrations over time is more important than the absolute creatinine concentration.
Serum creatinine concentrations may increase when an ACE inhibitor (ACEI) is taken for heart failure and chronic kidney disease. ACE inhibitors provide survival benefits for patients with heart failure and slow disease progression in patients with chronic kidney disease. An increase not exceeding 30% is to be expected with use of an ACE inhibitor. Therefore, an ACE inhibitor should not be withdrawn when the serum creatinine increases, unless the increase exceeds 30% or hyperkalemia develops.
Chemistry
In chemical terms, creatinine is a lactam and an imidazolidinone, a spontaneously formed cyclic derivative of creatine.
Several tautomers of creatinine exist; ordered by contribution, they are:
2-Amino-1-methyl-1H-imidazol-4-ol (or 2-amino-1-methylimidazol-4-ol)
2-Amino-1-methyl-4,5-dihydro-1H-imidazol-4-one
2-Imino-1-methyl-2,3-dihydro-1H-imidazol-4-ol (or 2-imino-1-methyl-3H-imidazol-4-ol)
2-Imino-1-methylimidazolidin-4-one
2-Imino-1-methyl-2,5-dihydro-1H-imidazol-4-ol (or 2-imino-1-methyl-5H-imidazol-4-ol)
Creatinine starts to decompose at around 300 °C.
See also
Cystatin C, a novel marker of kidney function
Jaffe reaction, an example of a method of assaying creatinine
Rhabdomyolysis, which may be diagnosed using serum creatinine concentrations
Nephrotic syndrome
References
External links
Guanidines
Metabolism
Nephrology
Renal physiology
Imidazolidines | Creatinine | [
"Chemistry",
"Biology"
] | 2,326 | [
"Guanidines",
"Functional groups",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
65,888 | https://en.wikipedia.org/wiki/Electromagnetic%20induction | Electromagnetic or magnetic induction is the production of an electromotive force (emf) across an electrical conductor in a changing magnetic field.
Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell–Faraday equation, one of the four Maxwell equations in his theory of electromagnetism.
Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators.
History
Electromagnetic induction was discovered by Michael Faraday, published in 1831. It was discovered independently by Joseph Henry in 1832.
In Faraday's first experimental demonstration (August 29, 1831), he wrapped two wires around opposite sides of an iron ring or "torus" (an arrangement similar to a modern toroidal transformer). Based on his understanding of electromagnets, he expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. He saw a transient current, which he called a "wave of electricity", when he connected the wire to the battery and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's model, the time varying aspect of electromagnetic induction is expressed as a differential equation, which Oliver Heaviside referred to as Faraday's law even though it is slightly different from Faraday's original formulation and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
In 1834 Heinrich Lenz formulated the law named after him to describe the "flux through the circuit". Lenz's law gives the direction of the induced emf and current resulting from electromagnetic induction.
Theory
Faraday's law of induction and Lenz's law
Faraday's law of induction makes use of the magnetic flux ΦB through a region of space enclosed by a wire loop. The magnetic flux is defined by a surface integral:

$$\Phi_B = \iint_{\Sigma} \mathbf{B} \cdot d\mathbf{A}$$

where dA is an element of the surface Σ enclosed by the wire loop and B is the magnetic field. The dot product B·dA corresponds to an infinitesimal amount of magnetic flux. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
When the flux through the surface changes, Faraday's law of induction says that the wire loop acquires an electromotive force (emf). The most widespread version of this law states that the induced electromotive force in any closed circuit is equal to the rate of change of the magnetic flux enclosed by the circuit:

$$\mathcal{E} = -\frac{d\Phi_B}{dt}$$

where $\mathcal{E}$ is the emf and ΦB is the magnetic flux. The direction of the electromotive force is given by Lenz's law, which states that an induced current will flow in the direction that will oppose the change which produced it. This is due to the negative sign in the previous equation. To increase the generated emf, a common approach is to exploit flux linkage by creating a tightly wound coil of wire, composed of N identical turns, each with the same magnetic flux going through them. The resulting emf is then N times that of one single wire.
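As a minimal numerical sketch of the law for an N-turn coil (emf = −N dΦB/dt), with the flux change approximated by a finite difference and illustrative values:

```python
# Average emf of an N-turn coil from Faraday's law, emf = -N * dPhi/dt,
# using a finite difference over the interval dt. Values are illustrative.

def induced_emf(n_turns, flux_start_wb, flux_end_wb, dt_s):
    """Average induced emf in volts over the interval dt_s."""
    return -n_turns * (flux_end_wb - flux_start_wb) / dt_s

# 100-turn coil; the flux through each turn rises from 0 to 2 mWb in 10 ms:
print(induced_emf(100, 0.0, 2e-3, 10e-3))  # -> -20.0 V
```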
Generating an emf through a variation of the magnetic flux through the surface of a wire loop can be achieved in several ways:
the magnetic field B changes (e.g. an alternating magnetic field, or moving a wire loop towards a bar magnet where the B field is stronger),
the wire loop is deformed and the surface Σ changes,
the orientation of the surface dA changes (e.g. spinning a wire loop into a fixed magnetic field),
any combination of the above
Maxwell–Faraday equation
In general, the relation between the emf $\mathcal{E}$ in a wire loop encircling a surface Σ, and the electric field E in the wire, is given by

$$\mathcal{E} = \oint_{\partial \Sigma} \mathbf{E} \cdot d\boldsymbol{\ell}$$

where dℓ is an element of the contour ∂Σ of the surface Σ. Combining this with the definition of flux

$$\Phi_B = \iint_{\Sigma} \mathbf{B} \cdot d\mathbf{A},$$

we can write the integral form of the Maxwell–Faraday equation

$$\oint_{\partial \Sigma} \mathbf{E} \cdot d\boldsymbol{\ell} = -\frac{d}{dt} \iint_{\Sigma} \mathbf{B} \cdot d\mathbf{A}$$
It is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism.
Faraday's law and relativity
Faraday's law describes two different phenomena: the motional emf generated by a magnetic force on a moving wire (see Lorentz force), and the transformer emf that is generated by an electric force due to a changing magnetic field (due to the differential form of the Maxwell–Faraday equation). James Clerk Maxwell drew attention to the separate physical phenomena in 1861. This is believed to be a unique example in physics of where such a fundamental law is invoked to explain two such different phenomena.
Albert Einstein noticed that the two situations both corresponded to a relative movement between a conductor and a magnet, and the outcome was unaffected by which one was moving. This was one of the principal paths that led him to develop special relativity.
Applications
The principles of electromagnetic induction are applied in many devices and systems, including:
Electrical generator
The emf generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the drum generator is based upon the figure to the bottom-right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right.
In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. Mechanical work is necessary to drive this current. When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labelled "induced B" in the figure). The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to decrease the flux through that side of the circuit, opposing the increase in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field increases the flux on this side of the circuit, opposing the decrease in flux due to the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy.
Electrical transformer
When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, $\Delta \Phi_B$. Therefore, an electromotive force is set up in the second loop, called the induced emf or transformer emf. If the two ends of this loop are connected through an electrical load, current will flow.
Current clamp
A current clamp is a type of transformer with a split core which can be spread apart and clipped onto a wire or coil to either measure the current in it or, in reverse, to induce a voltage. Unlike conventional instruments the clamp does not make electrical contact with the conductor or require it to be disconnected during attachment of the clamp.
Magnetic flow meter
Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ε generated in the magnetic field B due to a conductive liquid moving at velocity v is given by:

$$\varepsilon = B \ell v$$

where ℓ is the distance between electrodes in the magnetic flow meter.
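Inverting the relation gives the flow velocity from the measured voltage; a minimal sketch with illustrative instrument values:

```python
# Magnetic flow meter: epsilon = B * l * v, so v = epsilon / (B * l).
# Field strength, electrode gap, and voltage below are illustrative.

def flow_velocity(voltage_v, field_tesla, electrode_gap_m):
    """Mean flow velocity in m/s from the measured electrode voltage."""
    return voltage_v / (field_tesla * electrode_gap_m)

# 1.5 mV across a 0.1 m electrode gap in a 0.02 T field -> 0.75 m/s
print(flow_velocity(1.5e-3, 0.02, 0.1))
```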
Eddy currents
Electrical conductors moving through a steady magnetic field, or stationary conductors within a changing magnetic field, will have circular currents induced within them by induction, called eddy currents. Eddy currents flow in closed loops in planes perpendicular to the magnetic field. They have useful applications in eddy current brakes and induction heating systems. However eddy currents induced in the metal magnetic cores of transformers and AC motors and generators are undesirable since they dissipate energy (called core losses) as heat in the resistance of the metal. Cores for these devices use a number of methods to reduce eddy currents:
Cores of low frequency alternating current electromagnets and transformers, instead of being solid metal, are often made of stacks of metal sheets, called laminations, separated by nonconductive coatings. These thin plates reduce the undesirable parasitic eddy currents, as described below.
Inductors and transformers used at higher frequencies often have magnetic cores made of nonconductive magnetic materials such as ferrite or iron powder held together with a resin binder.
Electromagnet laminations
Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more magnetic lines of force than the inner portion; hence the induced electromotive force is not uniform; this tends to cause electric currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature.
Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to 66 per inch (16 to 26 per centimetre), and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations.
This is a rotor, approximately 20 mm in diameter, from a DC motor. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses.
Parasitic induction within conductors
In this illustration, a solid copper bar conductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the copper bar. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar.
High current power-frequency devices, such as electric motors, generators and transformers, use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers.
See also
Inductance
Moving magnet and conductor problem
References
Notes
References
Further reading
Maxwell, James Clerk (1881), A treatise on electricity and magnetism, Vol. II, Chapter III, §530, p. 178. Oxford, UK: Clarendon Press. .
External links
The Laws of Induction - The Feynman Lectures on Physics
A free java simulation on motional EMF
Electrodynamics
Physical phenomena
Michael Faraday
Maxwell's equations | Electromagnetic induction | [
"Physics",
"Mathematics"
] | 2,596 | [
"Physical phenomena",
"Equations of physics",
"Electrodynamics",
"Maxwell's equations",
"Dynamical systems"
] |
65,905 | https://en.wikipedia.org/wiki/Ideal%20gas | An ideal gas is a theoretical gas composed of many randomly moving point particles that are not subject to interparticle interactions. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. The requirement of zero interaction can often be relaxed if, for example, the interaction is perfectly elastic or regarded as point-like collisions.
Under various conditions of temperature and pressure, many real gases behave qualitatively like an ideal gas where the gas molecules (or atoms for monatomic gas) play the role of the ideal particles. Many gases such as nitrogen, oxygen, hydrogen, noble gases, some heavier gases like carbon dioxide and mixtures such as air, can be treated as ideal gases within reasonable tolerances over a considerable parameter range around standard temperature and pressure. Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure, as the potential energy due to intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them. One mole of an ideal gas has a volume of 22.710954641... litres (exact value based on the 2019 revision of the SI) at standard temperature and pressure (a temperature of 273.15 K and an absolute pressure of exactly 10⁵ Pa).
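The molar volume quoted above follows directly from the ideal gas law with the exactly defined 2019 SI constants; a minimal sketch:

```python
# Molar volume of an ideal gas at a temperature of 273.15 K and an
# absolute pressure of 1e5 Pa, from the exact 2019 SI defining constants.
N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact)
k_B = 1.380649e-23    # Boltzmann constant, J/K (exact)
R = N_A * k_B         # gas constant, ~8.31446 J/(mol K)

T, p = 273.15, 1e5
V_m = R * T / p                        # m^3/mol
print(f"V_m = {V_m * 1e3:.9f} L/mol")  # ~22.710954641 L/mol
```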
The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size become important. It also fails for most heavy gases, such as many refrigerants, and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably larger than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from the ideal gas behavior can be described by a dimensionless quantity, the compressibility factor, Z.
The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics.
If the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not change. (If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)
Types of ideal gas
There are three basic classes of ideal gas:
the classical or Maxwell–Boltzmann ideal gas,
the ideal quantum Bose gas, composed of bosons, and
the ideal quantum Fermi gas, composed of fermions.
The classical ideal gas can be separated into two types: The classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.
Classical thermodynamic ideal gas
The classical thermodynamic properties of an ideal gas can be described by two equations of state:
Ideal gas law
The ideal gas law is the equation of state for an ideal gas, given by:
pV = nRT
where
p is the pressure
V is the volume
n is the amount of substance of the gas (in moles)
T is the absolute temperature
R is the gas constant, which must be expressed in units consistent with those chosen for pressure, volume and temperature. For example, in SI units R = 8.3145 J⋅K⁻¹⋅mol⁻¹ when pressure is expressed in pascals, volume in cubic meters, and absolute temperature in kelvin.
The ideal gas law is an extension of experimentally discovered gas laws. It can also be derived from microscopic considerations.
Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.
This equation is derived from
Boyle's law: V ∝ 1/p (at constant T and n);
Charles's law: V ∝ T (at constant p and n);
Avogadro's law: V ∝ n (at constant T and p).
After combining the three laws we get
V ∝ nT/p
That is:
pV = nRT.
Internal energy
The other equation of state of an ideal gas must express Joule's second law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature, with U = U(T). For the present purposes it is convenient to postulate an exemplary version of this law by writing:
U = ĉV nRT
where
U is the internal energy
ĉV is the dimensionless specific heat capacity at constant volume, approximately 3/2 for a monatomic gas, 5/2 for a diatomic gas, and 3 for non-linear molecules if we treat translations and rotations classically and ignore the quantum vibrational contribution and electronic excitation. These formulas arise from application of the classical equipartition theorem to the translational and rotational degrees of freedom.
That U for an ideal gas depends only on temperature is a consequence of the ideal gas law, although in the general case ĉV depends on temperature and an integral is needed to compute U.
Microscopic model
In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use
nR = N kB
where
N is the number of gas particles
kB is the Boltzmann constant (1.380649×10⁻²³ J⋅K⁻¹).
The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution.
The ideal gas model depends on the following assumptions:
The molecules of the gas are indistinguishable, small, hard spheres
All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
Newton's laws apply
The average distance between molecules is much larger than the size of the molecules
The molecules are constantly moving in random directions with a distribution of speeds
There are no attractive or repulsive forces between the molecules apart from those that determine their point-like collisions
The only forces between the gas molecules and the surroundings are those that determine the point-like collisions of the molecules with the walls
In the simplest case, there are no long-range forces between the molecules of the gas and the surroundings.
The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.
Heat capacity
The dimensionless heat capacity at constant volume is generally defined by
ĉV = (1/nR) T (∂S/∂T)V = (1/nR) (∂U/∂T)V
where S is the entropy. This quantity is generally a function of temperature due to intermolecular and intramolecular forces, but for moderate temperatures it is approximately constant. Specifically, the Equipartition Theorem predicts that the constant for a monatomic gas is ĉV = 3/2 while for a diatomic gas it is ĉV = 5/2 if vibrations are neglected (which is often an excellent approximation). Since the heat capacity depends on the atomic or molecular nature of the gas, macroscopic measurements on heat capacity provide useful information on the microscopic structure of the molecules.
The dimensionless heat capacity at constant pressure of an ideal gas is:
ĉp = (1/nR) T (∂S/∂T)p = (1/nR) (∂H/∂T)p = ĉV + 1
where H = U + pV is the enthalpy of the gas.
Sometimes, a distinction is made between an ideal gas, where ĉV and ĉp could vary with temperature, and a perfect gas, for which this is not the case.
The ratio of the constant volume and constant pressure heat capacity is the adiabatic index
γ = ĉp / ĉV
For air, which is a mixture of gases that are mainly diatomic (nitrogen and oxygen), this ratio is often assumed to be 7/5, the value predicted by the classical Equipartition Theorem for diatomic gases.
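A minimal sketch of these equipartition values, using the ideal-gas relation ĉp = ĉV + 1 from the previous paragraphs:

```python
# Adiabatic index gamma = c_p / c_v for the equipartition heat capacities,
# using the ideal-gas relation c_p = c_v + 1 (dimensionless, per nR).
def gamma(c_v):
    return (c_v + 1) / c_v

for name, c_v in [("monatomic", 3 / 2),
                  ("diatomic, rigid", 5 / 2),
                  ("non-linear polyatomic", 3.0)]:
    print(f"{name:22s} c_v = {c_v:.1f}  gamma = {gamma(c_v):.3f}")
# monatomic -> 5/3 ~ 1.667; diatomic -> 7/5 = 1.4, the value quoted for air
```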
Entropy
Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.
Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as ΔS = S − S₀, where:
ΔS = ∫ (∂S/∂T)V dT + ∫ (∂S/∂V)T dV
where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have:
ΔS = ∫ (CV/T) dT + ∫ (∂p/∂T)V dV
Expressing CV in terms of ĉV as developed in the above section, differentiating the ideal gas equation of state, and integrating yields:
ΔS = ĉV Nk ln(T/T₀) + Nk ln(V/V₀)
which implies that the entropy may be expressed as:
S = Nk ln(V T^ĉV / f(N))
where all constants have been incorporated into the logarithm as f(N), which is some function of the particle number N having the same dimensions as V T^ĉV in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:
S(T, aV, aN) = a S(T, V, N)
From this we find an equation for the function f(N):
a f(N) = f(aN)
Differentiating this with respect to a, setting a equal to 1, and then solving the differential equation yields f(N):
f(N) = ΦN
where Φ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of V T^ĉV / N. Substituting into the equation for the entropy:
S = Nk ln(V T^ĉV / (ΦN))
and using the expression for the internal energy of an ideal gas, the entropy may be written:
S = Nk ln[ (V/N) · (U/(ĉV k N))^ĉV · (1/Φ) ]
Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived.
This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed – as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity – the concept of an ideal gas breaks down at low values of V T^ĉV/(NΦ). Nevertheless, there will be a "best" value of the constant Φ in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation which expresses the entropy of a monatomic (ĉV = 3/2) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.
An alternative way of expressing the change in entropy, in terms of temperature and pressure rather than temperature and volume, is:
ΔS = ĉp Nk ln(T/T₀) − Nk ln(p/p₀)
Thermodynamic potentials
Expressing the entropy as a function of T, V, and N:
S / (Nk) = ln(V T^ĉV / (NΦ))
The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):
μ = (∂G/∂N)T,p
where G is the Gibbs free energy and is equal to U + pV − TS, so that for a single-species gas μ = G/N.
The chemical potential is usually referenced to the potential at some standard pressure Po so that, with μo(T) = μ(T, Po):
μ(T, P) = μo(T) + kT ln(P/Po)
For a mixture (j=1,2,...) of ideal gases, each at partial pressure Pj, it can be shown that the chemical potential μj will be given by the above expression with the pressure P replaced by Pj.
The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N as:
U = ĉV NkT
A = U − TS
H = U + pV = ĉp NkT
G = A + pV = H − TS = μN
where, as before, ĉp = ĉV + 1.
The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are:
In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.
Speed of sound
The speed of sound in an ideal gas is given by the Newton–Laplace formula:
c = √(Ks / ρ)
where the isentropic bulk modulus Ks = ρ (∂p/∂ρ)s.
For an isentropic process of an ideal gas, pV^γ = constant, therefore Ks = γp, and
c = √(γp / ρ) = √(γRT / M)
Here,
γ is the adiabatic index (ĉp/ĉV)
s is the entropy per particle of the gas
ρ is the mass density of the gas
p is the pressure of the gas
R is the universal gas constant
T is the temperature
M is the molar mass of the gas.
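A minimal numerical sketch of the final formula; the molar masses below are assumed reference values, not taken from this article:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def speed_of_sound(gamma, molar_mass, temperature):
    """c = sqrt(gamma * R * T / M) for an ideal gas."""
    return math.sqrt(gamma * R * temperature / molar_mass)

# Air: mainly diatomic, gamma ~ 7/5, M ~ 0.0289647 kg/mol (assumed)
print(f"air at 293.15 K:    {speed_of_sound(1.4, 0.0289647, 293.15):.0f} m/s")    # ~343
# Helium: monatomic, gamma = 5/3, M ~ 0.0040026 kg/mol (assumed)
print(f"helium at 293.15 K: {speed_of_sound(5 / 3, 0.0040026, 293.15):.0f} m/s")  # ~1007
```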
Table of ideal gas equations
Ideal quantum gases
In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes unity is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.)
Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.
Ideal Boltzmann gas
The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant Φ:
Φ = T^(3/2) Λ³ / g
where Λ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states.
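The onset of quantum behavior can be illustrated by comparing the thermal de Broglie wavelength, Λ = h/√(2πmkT), with the mean interparticle spacing n^(−1/3); a sketch with assumed helium-4 values:

```python
import math

h = 6.62607015e-34   # Planck constant, J s (exact)
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact)

def thermal_wavelength(mass, temperature):
    """Thermal de Broglie wavelength: Lambda = h / sqrt(2 pi m k T)."""
    return h / math.sqrt(2 * math.pi * mass * k_B * temperature)

m_He = 4.0026 * 1.66053907e-27   # helium-4 mass in kg (assumed value)
for T in (300.0, 4.0, 0.1):
    lam = thermal_wavelength(m_He, T)
    n = 1e5 / (k_B * T)          # ideal-gas number density at 1e5 Pa
    print(f"T = {T:6.1f} K   Lambda = {lam:.2e} m   spacing = {n ** (-1 / 3):.2e} m")
# Classical ideality requires spacing >> Lambda; as T drops the two become
# comparable and quantum (Bose or Fermi) statistics take over.
```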
Ideal Bose and Fermi gases
An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution.
See also
Dynamical billiards – billiard balls as a model of an ideal gas
References
Notes
References | Ideal gas | [
"Physics"
] | 3,244 | [
"Thermodynamic systems",
"Ideal gas",
"Physical systems"
] |
65,907 | https://en.wikipedia.org/wiki/Elastic%20collision | In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net loss of kinetic energy into other forms such as heat, noise, or potential energy.
During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute).
Collisions of atoms are elastic, for example Rutherford backscattering.
A useful special case of elastic collision is when the two bodies have equal mass, in which case they will simply exchange their momenta.
The molecules—as distinct from atoms—of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules’ translational motion and their internal degrees of freedom with each collision. At any instant, half the collisions are, to a varying extent, inelastic collisions (the pair possesses less kinetic energy in their translational motions after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across the entire sample, molecular collisions can be regarded as essentially elastic as long as Planck's law forbids energy from being carried away by black-body photons.
In the case of macroscopic bodies, perfectly elastic collisions are an ideal never fully realized, but approximated by the interactions of objects such as billiard balls.
When considering energies, possible rotational energy before and/or after a collision may also play a role.
Equations
One-dimensional Newtonian
In any collision without an external force, momentum is conserved; but in an elastic collision, kinetic energy is also conserved. Consider particles A and B with masses mA, mB, and velocities vA1, vB1 before collision, vA2, vB2 after collision. The conservation of momentum before and after the collision is expressed by:
mA vA1 + mB vB1 = mA vA2 + mB vB2
Likewise, the conservation of the total kinetic energy is expressed by:
½ mA vA1² + ½ mB vB1² = ½ mA vA2² + ½ mB vB2²
These equations may be solved directly to find vA2, vB2 when vA1, vB1 are known:
vA2 = ((mA − mB) vA1 + 2 mB vB1) / (mA + mB)
vB2 = ((mB − mA) vB1 + 2 mA vA1) / (mA + mB)
Alternatively the final velocity of a particle, v2 (vA2 or vB2), is expressed by:
v2 = (1 + e) vCoM − e v1
Where:
e is the coefficient of restitution (e = 1 for a perfectly elastic collision)
vCoM is the velocity of the center of mass of the system of two particles:
vCoM = (mA vA1 + mB vB1) / (mA + mB)
v1 (vA1 or vB1) is the initial velocity of the particle.
If both masses are the same, we have a trivial solution:
vA2 = vB1
vB2 = vA1
This simply corresponds to the bodies exchanging their initial velocities with each other.
As can be expected, the solution is invariant under adding a constant to all velocities (Galilean relativity), which is like using a frame of reference with constant translational velocity. Indeed, to derive the equations, one may first change the frame of reference so that one of the known velocities is zero, determine the unknown velocities in the new frame of reference, and convert back to the original frame of reference.
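A minimal sketch of the closed-form solutions above; it reproduces the worked example in the next subsection and illustrates the Galilean invariance just described:

```python
def elastic_1d(m_a, v_a, m_b, v_b):
    """Final velocities for a one-dimensional perfectly elastic collision."""
    m = m_a + m_b
    v_a2 = (m_a - m_b) / m * v_a + 2 * m_b / m * v_b
    v_b2 = 2 * m_a / m * v_a + (m_b - m_a) / m * v_b
    return v_a2, v_b2

# 3 kg at 4 m/s strikes 5 kg at rest (the example below): (-1.0, 3.0)
print(elastic_1d(3, 4, 5, 0))

# Galilean invariance: adding u = 10 m/s to every input velocity
# simply adds 10 m/s to both answers: (9.0, 13.0)
print(elastic_1d(3, 14, 5, 10))
```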
Examples
Before collision
Ball A: mass = 3 kg, velocity = 4 m/s
Ball B: mass = 5 kg, velocity = 0 m/s
After collision
Ball A: velocity = −1 m/s
Ball B: velocity = 3 m/s
Another situation: the case of equal mass, mA = mB, in which the bodies simply exchange velocities, as derived above.
In the limiting case where mA is much larger than mB, such as a ping-pong paddle hitting a ping-pong ball or an SUV hitting a trash can, the heavier mass hardly changes velocity, while the lighter mass bounces off, reversing its velocity plus approximately twice that of the heavy one.
In the case of a large vA1, the value of vA2 is small if the masses are approximately the same: hitting a much lighter particle does not change the velocity much, while hitting a much heavier particle causes the fast particle to bounce back with high speed. This is why a neutron moderator (a medium which slows down fast neutrons, thereby turning them into thermal neutrons capable of sustaining a chain reaction) is a material full of atoms with light nuclei which do not easily absorb neutrons: the lightest nuclei have about the same mass as a neutron.
Derivation of solution
To derive the above equations for vA2, vB2, rearrange the kinetic energy and momentum equations:
mA (vA1² − vA2²) = mB (vB2² − vB1²)
mA (vA1 − vA2) = mB (vB2 − vB1)
Dividing each side of the top equation by each side of the bottom equation, and using a² − b² = (a + b)(a − b), gives:
vA1 + vA2 = vB1 + vB2
That is, the relative velocity of one particle with respect to the other is reversed by the collision.
Now the above formulas follow from solving a system of linear equations for vA2, vB2, regarding mA, mB, vA1, vB1 as constants:
Once vA2 is determined, vB2 can be found by symmetry.
Center of mass frame
With respect to the center of mass, both velocities are reversed by the collision: a heavy particle moves slowly toward the center of mass, and bounces back with the same low speed, and a light particle moves fast toward the center of mass, and bounces back with the same high speed.
The velocity of the center of mass does not change by the collision. To see this, consider the center of mass at a time t1 before collision and a time t2 after collision:
x̄(t1) = (mA xA(t1) + mB xB(t1)) / (mA + mB)
x̄(t2) = (mA xA(t2) + mB xB(t2)) / (mA + mB)
Hence, the velocities of the center of mass before and after collision are:
v̄1 = (mA vA1 + mB vB1) / (mA + mB)
v̄2 = (mA vA2 + mB vB2) / (mA + mB)
The numerators of v̄1 and v̄2 are the total momenta before and after collision. Since momentum is conserved, we have v̄1 = v̄2.
One-dimensional relativistic
According to special relativity,
p = mv / √(1 − v²/c²)
where p denotes the momentum of any particle with mass, v denotes velocity, and c is the speed of light.
In the center of momentum frame where the total momentum equals zero,
p1 = −p2
Here m1 and m2 represent the rest masses of the two colliding bodies, u1 and u2 represent their velocities before collision, v1 and v2 their velocities after collision, p1 and p2 their momenta, c is the speed of light in vacuum, and E denotes the total energy, the sum of rest masses and kinetic energies of the two bodies.
Since the total energy and momentum of the system are conserved and their rest masses do not change, it is shown that the momentum of the colliding body is decided by the rest masses of the colliding bodies, total energy and the total momentum. Relative to the center of momentum frame, the momentum of each colliding body does not change magnitude after collision, but reverses its direction of movement.
Comparing with classical mechanics, which gives accurate results dealing with macroscopic objects moving much slower than the speed of light, total momentum of the two colliding bodies is frame-dependent. In the center of momentum frame, according to classical mechanics,
This agrees with the relativistic calculation despite other differences.
One of the postulates in Special Relativity states that the laws of physics, such as conservation of momentum, should be invariant in all inertial frames of reference. In a general inertial frame where the total momentum could be arbitrary,
We can look at the two moving bodies as one system of which the total momentum is p1 + p2, the total energy is E1 + E2, and its velocity v̄ is the velocity of its center of mass. Relative to the center of momentum frame the total momentum equals zero. It can be shown that v̄ is given by:
v̄ = (p1 + p2) c² / (E1 + E2)
Now the velocities before the collision in the center of momentum frame, u1′ and u2′, are:
u1′ = (u1 − v̄) / (1 − u1 v̄ / c²)
u2′ = (u2 − v̄) / (1 − u2 v̄ / c²)
When u1 ≪ c and u2 ≪ c,
v̄ ≈ (m1 u1 + m2 u2) / (m1 + m2)
Therefore, the classical calculation holds true when the speed of both colliding bodies is much lower than the speed of light (about 300,000 km per second).
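A minimal numerical sketch (function names are illustrative) comparing the relativistic centre-of-momentum velocity v̄ = (p1 + p2)c²/(E1 + E2) with its classical counterpart:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def v_com_relativistic(m1, u1, m2, u2):
    """Centre-of-momentum velocity: (p1 + p2) * c^2 / (E1 + E2)."""
    g1 = 1.0 / math.sqrt(1.0 - (u1 / c) ** 2)
    g2 = 1.0 / math.sqrt(1.0 - (u2 / c) ** 2)
    p = g1 * m1 * u1 + g2 * m2 * u2
    e = g1 * m1 * c ** 2 + g2 * m2 * c ** 2
    return p * c ** 2 / e

def v_com_classical(m1, u1, m2, u2):
    return (m1 * u1 + m2 * u2) / (m1 + m2)

# At everyday speeds the two agree to many digits ...
print(v_com_relativistic(3.0, 4.0, 5.0, 0.0), v_com_classical(3.0, 4.0, 5.0, 0.0))
# ... but at 0.9c and 0.1c they differ noticeably (in units of c):
print(v_com_relativistic(3.0, 0.9 * c, 5.0, 0.1 * c) / c,
      v_com_classical(3.0, 0.9 * c, 5.0, 0.1 * c) / c)
```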
Relativistic derivation using hyperbolic functions
Using the so-called parameter of velocity s (usually called the rapidity), defined by
v/c = tanh s
we get
√(1 − v²/c²) = 1/cosh s
Relativistic energy and momentum are expressed as follows:
E = mc² cosh s,  p = mc sinh s
Equating the sums of energy and momentum of the colliding masses m1 and m2 (with velocities corresponding to the velocity parameters s1, s2 before and s3, s4 after the collision), after dividing by adequate powers of c, gives:
m1 cosh s1 + m2 cosh s2 = m1 cosh s3 + m2 cosh s4
m1 sinh s1 + m2 sinh s2 = m1 sinh s3 + m2 sinh s4
and the dependent equation, the sum of the above equations:
m1 e^s1 + m2 e^s2 = m1 e^s3 + m2 e^s4
Subtracting the squares of both sides of the "momentum" equation from those of the "energy" equation and using the identity cosh²s − sinh²s = 1, after simplifying we get:
2 m1 m2 (cosh(s1 − s2) − cosh(s3 − s4)) = 0
For non-zero mass we get:
cosh(s1 − s2) = cosh(s3 − s4)
As cosh is an even function, we get two solutions:
s1 − s2 = s3 − s4  or  s1 − s2 = −(s3 − s4)
From the last equation, leading to a non-trivial solution, we solve for s2 and substitute into the dependent equation; we then have:
This is a solution to the problem, but expressed in terms of the velocity parameters. Substituting back to get the solution for the velocities gives:
Substitute the previous solutions and replace:
and after a long transformation, with the substitution:
we get:
Two-dimensional
For the case of two non-spinning colliding bodies in two dimensions, the motion of the bodies is determined by the three conservation laws of momentum, kinetic energy and angular momentum. The overall velocity of each body must be split into two perpendicular velocities: one tangent to the common normal surfaces of the colliding bodies at the point of contact, the other along the line of collision. Since the collision only imparts force along the line of collision, the velocities that are tangent to the point of collision do not change. The velocities along the line of collision can then be used in the same equations as a one-dimensional collision. The final velocities can then be calculated from the two new component velocities and will depend on the point of collision. Studies of two-dimensional collisions are conducted for many bodies in the framework of a two-dimensional gas.
In a center of momentum frame at any time the velocities of the two bodies are in opposite directions, with magnitudes inversely proportional to the masses. In an elastic collision these magnitudes do not change. The directions may change depending on the shapes of the bodies and the point of impact. For example, in the case of spheres the angle depends on the distance between the (parallel) paths of the centers of the two bodies. Any non-zero change of direction is possible: if this distance is zero the velocities are reversed in the collision; if it is close to the sum of the radii of the spheres the two bodies are only slightly deflected.
Assuming that the second particle is at rest before the collision, the angles of deflection of the two particles, θ1 and θ2, are related to the angle of deflection Θ in the system of the center of mass by
tan θ1 = (m2 sin Θ) / (m1 + m2 cos Θ),   θ2 = (π − Θ)/2
The magnitudes of the velocities of the particles after the collision are:
v1′ = v1 √(m1² + m2² + 2 m1 m2 cos Θ) / (m1 + m2)
v2′ = v1 (2 m1 / (m1 + m2)) sin(Θ/2)
Two-dimensional collision with two moving objects
The final x and y velocity components of the first ball can be calculated as:
v1x′ = ((v1 cos(θ1 − φ)(m1 − m2) + 2 m2 v2 cos(θ2 − φ)) / (m1 + m2)) cos φ + v1 sin(θ1 − φ) cos(φ + π/2)
v1y′ = ((v1 cos(θ1 − φ)(m1 − m2) + 2 m2 v2 cos(θ2 − φ)) / (m1 + m2)) sin φ + v1 sin(θ1 − φ) sin(φ + π/2)
where v1 and v2 are the scalar sizes of the two original speeds of the objects, m1 and m2 are their masses, θ1 and θ2 are their movement angles, that is, v1x = v1 cos θ1 and v1y = v1 sin θ1 (meaning moving directly down to the right is either a −45° angle, or a 315° angle), and lowercase phi (φ) is the contact angle. (To get the x and y velocities of the second ball, one needs to swap all the '1' subscripts with '2' subscripts.)
This equation is derived from the fact that the interaction between the two bodies is easily calculated along the contact angle, meaning the velocities of the objects can be calculated in one dimension by rotating the x and y axis to be parallel with the contact angle of the objects, and then rotated back to the original orientation to get the true x and y components of the velocities.
In an angle-free representation, the changed velocities are computed using the centers x1 and x2 at the time of contact as
v1′ = v1 − (2 m2 / (m1 + m2)) (⟨v1 − v2, x1 − x2⟩ / ‖x1 − x2‖²) (x1 − x2)
v2′ = v2 − (2 m1 / (m1 + m2)) (⟨v2 − v1, x2 − x1⟩ / ‖x2 − x1‖²) (x2 − x1)
where the angle brackets indicate the inner product (or dot product) of two vectors.
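A minimal sketch of the angle-free formula, with momentum and kinetic energy conserved by construction:

```python
import numpy as np

def elastic_2d(m1, m2, v1, v2, x1, x2):
    """Angle-free two-dimensional elastic collision of two smooth balls;
    x1 and x2 are the centers at the time of contact."""
    d = x1 - x2
    k = np.dot(v1 - v2, d) / np.dot(d, d)
    v1p = v1 - 2 * m2 / (m1 + m2) * k * d
    v2p = v2 + 2 * m1 / (m1 + m2) * k * d
    return v1p, v2p

# Head-on collision of equal masses: the velocities are exchanged.
v1p, v2p = elastic_2d(1.0, 1.0,
                      np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                      np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(v1p, v2p)   # [0. 0.] [1. 0.]
```

Because the derivation below is in vector form, the same function works unchanged for three-dimensional spheres if three-component arrays are passed in.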
Other conserved quantities
In the particular case of particles having equal masses, it can be verified by direct computation from the result above that the scalar product of the velocities before and after the collision are the same, that is ⟨v1′, v2′⟩ = ⟨v1, v2⟩. Although this product is not an additive invariant in the same way that momentum and kinetic energy are for elastic collisions, it seems that preservation of this quantity can nonetheless be used to derive higher-order conservation laws.
Derivation of two dimensional solution
The impulse during the collision for each particle is:
J1 = m1 (v1′ − v1),  J2 = m2 (v2′ − v2)
Conservation of momentum implies
J1 + J2 = 0.
Since the force during the collision is perpendicular to both particles' surfaces at the contact point, the impulse is along the line parallel to n = x1 − x2, the relative vector between the particles' centers at collision time:
J1 = a n = −J2
for some a to be determined, so that
v1′ = v1 + (a/m1) n,  v2′ = v2 − (a/m2) n
From the above equations, conservation of kinetic energy now requires:
a² ⟨n, n⟩ (1/m1 + 1/m2) + 2a ⟨n, v1 − v2⟩ = 0
with solutions
a = 0  and  a = −2 ⟨n, v1 − v2⟩ / (⟨n, n⟩ (1/m1 + 1/m2))
where a = 0 corresponds to the trivial case of no collision. Substituting the non-trivial value of a into the expressions for v1′ and v2′ gives the desired angle-free result above.
Since all equations are in vector form, this derivation is also valid for three dimensions with spheres.
See also
Collision
Inelastic collision
Coefficient of restitution
References
General references
External links
Rigid Body Collision Resolution in three dimensions including a derivation using the conservation laws
Classical mechanics
Collision
Particle physics
Scattering
Articles containing video clips | Elastic collision | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,676 | [
"Classical mechanics",
"Scattering",
"Mechanics",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Collision"
] |
65,914 | https://en.wikipedia.org/wiki/Kinematics | Kinematics is a subfield of physics and mathematics, developed in classical mechanics, that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of both applied and pure mathematics since it can be studied without considering the mass of a body or the forces acting upon it. A kinematics problem begins by describing the geometry of the system and declaring the initial conditions of any known values of position, velocity and/or acceleration of points within the system. Then, using arguments from geometry, the position, velocity and acceleration of any unknown parts of the system can be determined. The study of how forces act on bodies falls within kinetics, not kinematics. For further details, see analytical dynamics.
Kinematics is used in astrophysics to describe the motion of celestial bodies and collections of such bodies. In mechanical engineering, robotics, and biomechanics, kinematics is used to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the human skeleton.
Geometric transformations, also called rigid transformations, are used to describe the movement of components in a mechanical system, simplifying the derivation of the equations of motion. They are also central to dynamic analysis.
Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism and, working in reverse, using kinematic synthesis to design a mechanism for a desired range of motion. In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system or mechanism.
Etymology
The term kinematic is the English version of A.M. Ampère's cinématique, which he constructed from the Greek kinema ("movement, motion"), itself derived from kinein ("to move").
Kinematic and cinématique are related to the French word cinéma, but neither are directly derived from it. However, they do share a root word in common, as cinéma came from the shortened form of cinématographe, "motion picture projector and camera", once again from the Greek word for movement and from the Greek grapho ("to write").
Kinematics of a particle trajectory in a non-rotating frame of reference
Particle kinematics is the study of the trajectory of particles. The position of a particle is defined as the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south from your home, where the coordinate frame is centered at your home, such that east is in the direction of the x-axis and north is in the direction of the y-axis, then the coordinate vector to the base of the tower is r = (0 m, −50 m, 0 m). If the tower is 50 m high, and this height is measured along the z-axis, then the coordinate vector to the top of the tower is r = (0 m, −50 m, 50 m).
In the most general case, a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to move within a plane, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without being described with respect to a reference frame.
The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position vector can be expressed as
r = x x̂ + y ŷ + z ẑ
where x, y, and z are the Cartesian coordinates and x̂, ŷ and ẑ are the unit vectors along the x, y, and z coordinate axes, respectively. The magnitude of the position vector gives the distance between the point and the origin:
|r| = √(x² + y² + z²)
The direction cosines of the position vector provide a quantitative measure of direction. In general, an object's position vector will depend on the frame of reference; different frames will lead to different values for the position vector.
The trajectory of a particle is a vector function of time, r(t), which defines the curve traced by the moving particle, given by
r(t) = x(t) x̂ + y(t) ŷ + z(t) ẑ
where x(t), y(t), and z(t) describe each coordinate of the particle's position as a function of time.
Velocity and speed
The velocity of a particle is a vector quantity that describes the direction as well as the magnitude of motion of the particle. More mathematically, the rate of change of the position vector of a point with respect to time is the velocity of the point. Consider the ratio formed by dividing the difference of two positions of a particle (displacement) by the time interval. This ratio is called the average velocity over that time interval and is defined as
v̄ = Δr/Δt
where Δr is the displacement vector during the time interval Δt. In the limit that the time interval approaches zero, the average velocity approaches the instantaneous velocity, defined as the time derivative of the position vector,
v = dr/dt
Thus, a particle's velocity is the time rate of change of its position. Furthermore, this velocity is tangent to the particle's trajectory at every position along its path. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants.
The speed of an object is the magnitude of its velocity. It is a scalar quantity:
v = |v| = ds/dt
where s is the arc-length measured along the trajectory of the particle. This arc-length must always increase as the particle moves. Hence, ds/dt is non-negative, which implies that speed is also non-negative.
Acceleration
The velocity vector can change in magnitude and in direction or both at once. Hence, the acceleration accounts for both the rate of change of the magnitude of the velocity vector and the rate of change of direction of that vector. The same reasoning used with respect to the position of a particle to define velocity can be applied to the velocity to define acceleration. The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio
ā = Δv/Δt
where Δv is the change in velocity and Δt is the time interval.
The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative,
a = dv/dt
Alternatively,
a = d²r/dt²
Thus, acceleration is the first derivative of the velocity vector and the second derivative of the position vector of that particle. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants.
The magnitude of the acceleration of an object is the magnitude |a| of its acceleration vector. It is a scalar quantity:
|a| = |dv/dt|
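A minimal symbolic sketch of these definitions (assuming the sympy library is available; the trajectory is an arbitrary illustrative choice, uniform circular motion plus a drift along z):

```python
import sympy as sp

t = sp.symbols('t')
# Assumed example trajectory: radius-2 circle traversed at 3 rad/s,
# drifting along z at 5 units per second.
r = sp.Matrix([2 * sp.cos(3 * t), 2 * sp.sin(3 * t), 5 * t])

v = r.diff(t)   # velocity: first time derivative of position
a = v.diff(t)   # acceleration: second time derivative of position

print(sp.simplify(sp.sqrt(v.dot(v))))  # speed: sqrt(61), constant here
print(sp.simplify(a.dot(v)))           # 0: at constant speed, acceleration
                                       # is perpendicular to velocity
```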
Relative position vector
A relative position vector is a vector that defines the position of one point relative to another. It is the difference in position of the two points.
The position of one point A relative to another point B is simply the difference between their positions
rA/B = rA − rB
which is the difference between the components of their position vectors.
If point A has position components rA = (xA, yA, zA)
and point B has position components rB = (xB, yB, zB)
then the position of point A relative to point B is the difference between their components:
rA/B = (xA − xB, yA − yB, zA − zB)
Relative velocity
The velocity of one point relative to another is simply the difference between their velocities
vA/B = vA − vB
which is the difference between the components of their velocities.
If point A has velocity components vA = (vAx, vAy, vAz) and point B has velocity components vB = (vBx, vBy, vBz), then the velocity of point A relative to point B is the difference between their components:
vA/B = (vAx − vBx, vAy − vBy, vAz − vBz)
Alternatively, this same result could be obtained by computing the time derivative of the relative position vector rA/B.
Relative acceleration
The acceleration of one point C relative to another point B is simply the difference between their accelerations
aC/B = aC − aB
which is the difference between the components of their accelerations.
If point C has acceleration components aC = (aCx, aCy, aCz)
and point B has acceleration components aB = (aBx, aBy, aBz)
then the acceleration of point C relative to point B is the difference between their components:
aC/B = (aCx − aBx, aCy − aBy, aCz − aBz)
Alternatively, this same result could be obtained by computing the second time derivative of the relative position vector rC/B.
Assuming that the initial conditions of the position r₀ and velocity v₀ at time t = 0 are known, the first integration yields the velocity of the particle as a function of time.
v(t) = v₀ + a t
A second integration yields its path (trajectory),
r(t) = r₀ + v₀ t + ½ a t²
Additional relations between displacement, velocity, acceleration, and time can be derived. Since the acceleration is constant,
a = Δv/Δt = (v − v₀)/t
can be substituted into the above equation to give:
r(t) = r₀ + ((v + v₀)/2) t
A relationship between velocity, position and acceleration without explicit time dependence can be had by solving the average acceleration for time, t = (v − v₀)/a, and substituting and simplifying
v · v = v₀ · v₀ + 2 a · (r − r₀)
where · denotes the dot product, which is appropriate as the products are scalars rather than vectors.
The dot product can be replaced by the cosine of the angle α between the vectors (see Geometric interpretation of the dot product for more details) and the vectors by their magnitudes, in which case:
|v|² = |v₀|² + 2 |a| |r − r₀| cos α
In the case of acceleration always in the direction of the motion, the angle between the vectors is 0, so cos α = 1, and
|v|² = |v₀|² + 2 |a| |r − r₀|
This can be simplified using the notation for the magnitudes of the vectors, v = |v|, v₀ = |v₀|, a = |a| and Δs = |r − r₀|, where Δs can be any curvaceous path taken, as the constant tangential acceleration is applied along that path, so
v² = v₀² + 2 a Δs
This reduces the parametric equations of motion of the particle to a Cartesian relationship of speed versus position. This relation is useful when time is unknown. We also know that Δx = ∫ v dt, i.e. the displacement Δx is the area under a velocity–time graph. We can compute Δx by adding the top area (a triangle) and the bottom area (a rectangle). The area of a rectangle is w × h, where w is the width and h is the height; here w = t and h = v₀, so the bottom area is v₀ t. The area of a triangle is ½ b h, where b is the base and h is the height; here b = t and h = a t, so the top area is ½ a t². Adding v₀ t and ½ a t² results in the equation Δx = v₀ t + ½ a t². This equation is applicable when the final velocity v is unknown.
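A quick numerical check of the time-free relation derived above, v² = v₀² + 2aΔs, with arbitrary illustrative inputs:

```python
# Constant acceleration in one dimension: verify v^2 = v0^2 + 2*a*dx.
v0, a, t = 3.0, 2.0, 4.0

v = v0 + a * t                   # velocity after time t
dx = v0 * t + 0.5 * a * t ** 2   # displacement after time t

print(v ** 2, v0 ** 2 + 2 * a * dx)  # both print 121.0
```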
Particle trajectories in cylindrical-polar coordinates
It is often convenient to formulate the trajectory of a particle r(t) = (x(t), y(t), z(t)) using polar coordinates in the X–Y plane. In this case, its velocity and acceleration take a convenient form.
Recall that the trajectory of a particle P is defined by its coordinate vector r measured in a fixed reference frame F. As the particle moves, its coordinate vector r(t) traces its trajectory, which is a curve in space, given by:
where x̂, ŷ, and ẑ are the unit vectors along the x, y and z axes of the reference frame F, respectively.
Consider a particle P that moves only on the surface of a circular cylinder, r(t) = constant; it is possible to align the z axis of the fixed frame F with the axis of the cylinder. Then, the angle θ around this axis in the x–y plane can be used to define the trajectory as,
r(t) = r cos θ(t) x̂ + r sin θ(t) ŷ + z(t) ẑ
where the constant distance from the center is denoted as r, and θ(t) is a function of time.
The cylindrical coordinates for r(t) can be simplified by introducing the radial and tangential unit vectors,
r̂ = cos θ(t) x̂ + sin θ(t) ŷ,  θ̂ = −sin θ(t) x̂ + cos θ(t) ŷ
and their time derivatives from elementary calculus:
dr̂/dt = θ̇ θ̂,  dθ̂/dt = −θ̇ r̂
Using this notation, r(t) takes the form,
r(t) = r r̂ + z(t) ẑ
In general, the trajectory r(t) is not constrained to lie on a circular cylinder, so the radius r varies with time and the trajectory of the particle in cylindrical-polar coordinates becomes:
r(t) = r(t) r̂ + z(t) ẑ
where r, θ, and z might be continuously differentiable functions of time and the function notation is dropped for simplicity. The velocity vector vP is the time derivative of the trajectory r(t), which yields:
vP = ṙ r̂ + r θ̇ θ̂ + ż ẑ
Similarly, the acceleration aP, which is the time derivative of the velocity vP, is given by:
aP = (r̈ − r θ̇²) r̂ + (r θ̈ + 2 ṙ θ̇) θ̂ + z̈ ẑ
The term −r θ̇² r̂ acts toward the center of curvature of the path at that point on the path, and is commonly called the centripetal acceleration. The term 2 ṙ θ̇ θ̂ is called the Coriolis acceleration.
Constant radius
If the trajectory of the particle is constrained to lie on a cylinder, then the radius r is constant and the velocity and acceleration vectors simplify. The velocity vP is the time derivative of the trajectory r(t),
vP = r θ̇ θ̂ + ż ẑ
Planar circular trajectories
A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the z axis:
r(t) = r cos θ(t) x̂ + r sin θ(t) ŷ + z₀ ẑ
where r and z₀ are constants. In this case, the velocity vP is given by:
vP = r θ̇ θ̂ = r ω θ̂
where ω = θ̇ is the angular velocity of the unit vector θ̂ around the z axis of the cylinder.
The acceleration aP of the particle P is now given by:
aP = −r θ̇² r̂ + r θ̈ θ̂
The components
ar = −r θ̇²  and  at = r θ̈
are called, respectively, the radial and tangential components of acceleration.
The notation for angular velocity and angular acceleration is often defined as
ω = θ̇,  α = θ̈
so the radial and tangential acceleration components for circular trajectories are also written as
ar = −r ω²,  at = r α
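A minimal sketch of the radial and tangential components for a circular trajectory, with assumed illustrative inputs:

```python
import math

def circular_acceleration(r, omega, alpha):
    """Acceleration components on a circle of radius r:
    a_r = -r * omega**2 (centripetal, toward the center),
    a_t =  r * alpha    (tangential, along the motion)."""
    return -r * omega ** 2, r * alpha

# Assumed example: 0.5 m radius, 10 rad/s, spinning up at 2 rad/s^2.
a_r, a_t = circular_acceleration(0.5, 10.0, 2.0)
print(a_r, a_t)              # -50.0 m/s^2 and 1.0 m/s^2
print(math.hypot(a_r, a_t))  # magnitude of the total acceleration
```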
Point trajectories in a body moving in the plane
The movement of components of a mechanical system are analyzed by attaching a reference frame to each part and determining how the various reference frames move relative to each other. If the structural stiffness of the parts are sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This reduces the description of the motion of the various parts of a complicated mechanical system to a problem of describing the geometry of each part and geometric association of each part relative to other parts.
Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways—more technically, it is the study of invariants under a set of transformations. These transformations can cause the displacement of a triangle in the plane, while leaving the vertex angles and the distances between vertices unchanged. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry.
The coordinates of points in a plane are two-dimensional vectors in R2 (two dimensional space). Rigid transformations are those that preserve the distance between any two points. The set of rigid transformations in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n).
Displacements and motion
The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other. The rigid transformation, or displacement, of M relative to F defines the relative position of the two components. A displacement consists of the combination of a rotation and a translation.
The set of all displacements of M relative to F is called the configuration space of M. A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of M relative to F. The motion of a body consists of a continuous set of rotations and translations.
Matrix representation
The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3×3 matrix known as a homogeneous transform. The 3×3 homogeneous transform is constructed from a 2×2 rotation matrix A(φ) and the 2×1 translation vector d = (dx, dy), as:
[T(φ, d)] = [ cos φ  −sin φ  dx ; sin φ  cos φ  dy ; 0  0  1 ]
These homogeneous transforms perform rigid transformations on the points in the plane z = 1, that is, on points with coordinates r = (x, y, 1).
In particular, let r define the coordinates of points in a reference frame M coincident with a fixed frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M are given by:
P = [T(φ, d)] r = [A(φ)] r + d
Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered a subset of R3, translations become affine linear transformations.
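A minimal sketch of such a homogeneous transform acting on a point, assuming numpy for the matrix algebra:

```python
import numpy as np

def homogeneous_transform(phi, dx, dy):
    """3x3 planar rigid transform: rotation by phi, then translation (dx, dy)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

T = homogeneous_transform(np.pi / 2, 1.0, 2.0)
p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous coordinates
print(T @ p)                    # [1. 3. 1.]: rotated to (0, 1), then shifted

# Displacements compose by matrix multiplication: T2 @ T1 applies T1 first.
```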
Pure translation
If a rigid body moves so that its reference frame M does not rotate (θ = 0) relative to the fixed frame F, the motion is called pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d(t) of the origin of M, that is:
r(t) = [T(t)] p = d(t) + p
Thus, for bodies in pure translation, the velocity and acceleration of every point P in the body are given by:
vP = ḋ(t) = vO,  aP = d̈(t) = aO
where the dot denotes the derivative with respect to time and vO and aO are the velocity and acceleration, respectively, of the origin of the moving frame M. Recall the coordinate vector p in M is constant, so its derivative is zero.
Rotation of a body around a fixed axis
Rotational or angular kinematics is the description of the rotation of an object. In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience.
Position
This allows the description of a rotation as the angular position of a planar reference frame M relative to a fixed F about this shared z-axis. Coordinates p = (x, y) in M are related to coordinates P = (X, Y) in F by the matrix equation:
P(t) = [A(t)] p
where
[A(t)] = [ cos θ(t)  −sin θ(t) ; sin θ(t)  cos θ(t) ]
is the rotation matrix that defines the angular position of M relative to F as a function of time.
Velocity
If the point p does not move in M, its velocity in F is given by
vP = Ṗ(t) = [Ȧ(t)] p
It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t),
vP = [Ȧ(t)][A(t)]⁻¹ P(t) = [Ω] P(t)
where the matrix
[Ω] = [ 0  −ω ; ω  0 ]
is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative of the angle θ, that is:
ω = dθ/dt
Acceleration
The acceleration of P(t) in F is obtained as the time derivative of the velocity,
aP = P̈(t) = [Ω̇] P(t) + [Ω] Ṗ(t)
which becomes
aP = [Ω̇] P(t) + [Ω][Ω] P(t)
where
[Ω̇] = [ 0  −α ; α  0 ]
is the angular acceleration matrix of M on F, and
α = dω/dt = d²θ/dt²
Angular position: the oriented distance from a selected origin on the rotational axis to a point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or, equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule).
Angular velocity: the angular velocity ω is the rate at which the angular position θ changes with respect to time t: ω = dθ/dt. The angular velocity can be represented by a vector Ω pointing along the axis of rotation with magnitude ω and sense determined by the direction of rotation as given by the right-hand rule.
Angular acceleration: the magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t: α = dω/dt
The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges:
ωf = ωi + α t
θf = θi + ωi t + ½ α t²
θf = θi + ½ (ωf + ωi) t
ωf² = ωi² + 2 α (θf − θi)
Here θi and θf are, respectively, the initial and final angular positions, ωi and ωf are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector.
Point trajectories in body moving in three dimensions
Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three-dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations.
Position
In order to define these formulas, the movement of a component B of a mechanical system is defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous transformation [T(t)] = [A(t), d(t)]. If p is the coordinates of a point P in B measured in the moving reference frame M, then the trajectory of this point traced in F is given by:
P(t) = [T(t)] p = [A(t)] p + d(t)
This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context.
This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as:
p = [T(t)]⁻¹ P(t) = [A(t)]ᵀ (P(t) − d(t))
This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is:
[A(t)]ᵀ [A(t)] = [I]
Velocity
The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this position vector,
vP = Ṗ(t) = [Ȧ(t)] p + ḋ(t)
The dot denotes the derivative with respect to time; because p is constant, its derivative is zero.
This formula can be modified to obtain the velocity of P by operating on its trajectory P(t) measured in the fixed frame F. Substituting the inverse transform for p into the velocity equation yields:
vP = [Ṫ(t)][T(t)]⁻¹ P(t) = [S] P(t)
The matrix [S] is given by:
[S] = [Ṫ][T]⁻¹
where
[Ω] = [Ȧ][A]ᵀ
is the angular velocity matrix.
Multiplying by the operator [S], the formula for the velocity vP takes the form:
vP = ω × RP/O + vO
where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω]; the vector
RP/O = P − d
is the position of P relative to the origin O of the moving frame M; and
vO = ḋ
is the velocity of the origin O.
Acceleration
The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity vector:
AP = d(vP)/dt = d/dt (ω × RP/O + vO)
This equation can be expanded firstly by computing
d/dt (ω × RP/O) = α × RP/O + ω × (vP − vO)
and
d(vO)/dt = AO
The formula for the acceleration AP can now be obtained as:
AP = α × RP/O + ω × (ω × RP/O) + AO
or
AP = α × RP/O + ω × vP/O + AO
where α is the angular acceleration vector obtained from the derivative of the angular velocity vector;
RP/O = P − d
is the relative position vector (the position of P relative to the origin O of the moving frame M); and
AO = d̈
is the acceleration of the origin of the moving frame M.
Kinematic constraints
Kinematic constraints are constraints on the movement of components of a mechanical system. Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. The following are some common examples.
Kinematic coupling
A kinematic coupling exactly constrains all 6 degrees of freedom.
Rolling without slipping
An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass:
vG = ω × rG/P
For the case of an object that does not tip or turn, this reduces to v = r ω.
Inextensible cord
This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero. A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord. An equilibrium problem (i.e. not kinematic) of this type is the catenary.
Kinematic pairs
Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs which were said to have line contact between the two links and lower pairs that have area contact between the links. J. Phillips shows that there are many ways to construct pairs that do not fit this simple classification.
Lower pair
A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body and a corresponding point, line or plane in the fixed solid body. There are the following cases:
A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom, which is pure rotation about the axis of the hinge.
A prismatic joint, or slider, requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. This degree of freedom is the distance of the slide along the line.
A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. The position of the moving body is defined by both the rotation about and slide along the axis.
A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom.
A planar joint requires that a plane in the moving body maintain contact with a plane in fixed body. This joint has three degrees of freedom.
Higher pairs
Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints.
Kinematic chains
Rigid bodies ("links") connected by kinematic pairs ("joints") are known as kinematic chains. Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as type synthesis in machine design.
Examples
The planar one degree-of-freedom linkages assembled from N links and j hinges or sliding joints are:
N = 2, j = 1 : a two-bar linkage that is the lever;
N = 4, j = 4 : the four-bar linkage;
N = 6, j = 7 : a six-bar linkage. This must have two links ("ternary links") that support three joints. There are two distinct topologies that depend on how the two ternary linkages are connected. In the Watt topology, the two ternary links have a common joint; in the Stephenson topology, the two ternary links do not have a common joint and are connected by binary links.
N = 8, j = 10 : eight-bar linkage with 16 different topologies;
N = 10, j = 13 : ten-bar linkage with 230 different topologies;
N = 12, j = 16 : twelve-bar linkage with 6,856 topologies.
For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm", Mechanism and Machine Theory #41, pp. 1021–1030 (2006).
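The one degree of freedom of each linkage listed above can be checked with the planar Chebychev–Grübler–Kutzbach mobility criterion, M = 3(N − 1) − 2j, for chains whose hinges and sliders each permit one degree of freedom; a minimal sketch:

```python
def planar_mobility(n_links, n_joints):
    """Planar Chebychev-Grubler-Kutzbach criterion for one-degree-of-freedom
    joints (hinges and sliders): M = 3*(N - 1) - 2*j."""
    return 3 * (n_links - 1) - 2 * n_joints

# The chains listed above all come out with mobility 1:
for n, j in [(2, 1), (4, 4), (6, 7), (8, 10), (10, 13), (12, 16)]:
    print(f"N = {n:2d}, j = {j:2d} -> M = {planar_mobility(n, j)}")
```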
See also
Absement
Acceleration
Analytical mechanics
Applied mechanics
Celestial mechanics
Centripetal force
Classical mechanics
Distance
Dynamics (physics)
Fictitious force
Forward kinematics
Four-bar linkage
Inverse kinematics
Jerk (physics)
Kepler's laws
Kinematic coupling
Kinematic diagram
Kinematic synthesis
Kinetics (physics)
Motion (physics)
Orbital mechanics
Statics
Velocity
Integral kinematics
Chebychev–Grübler–Kutzbach criterion
References
Further reading
Eduard Study (1913) D.H. Delphenich translator, "Foundations and goals of analytical kinematics".
External links
Java applet of 1D kinematics
Physclips: Mechanics with animations and video clips from the University of New South Wales.
Kinematic Models for Design Digital Library (KMODDL), featuring movies and photos of hundreds of working models of mechanical systems at Cornell University and an e-book library of classic texts on mechanical design and engineering.
Micro-Inch Positioning with Kinematic Components
Classical mechanics
Mechanisms (engineering) | Kinematics | [
"Physics",
"Technology",
"Engineering"
] | 5,880 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
66,111 | https://en.wikipedia.org/wiki/Saros%20%28astronomy%29 | The saros is a period of exactly 223 synodic months, approximately 6585.321 days (18.04 years), or 18 years plus 10, 11, or 12 days (depending on the number of leap years), and 8 hours, that can be used to predict eclipses of the Sun and Moon. One saros period after an eclipse, the Sun, Earth, and Moon return to approximately the same relative geometry, a near straight line, and a nearly identical eclipse will occur, in what is referred to as an eclipse cycle. A sar is one half of a saros.
A series of eclipses that are separated by one saros is called a saros series. It corresponds to the quantities below (checked numerically in the sketch after this list):
6,585.321347 solar days
18.029 years
223 synodic months
241.999 draconic months
18.999 eclipse years (38 eclipse seasons of 173.31 days)
238.992 anomalistic months
241.029 sidereal months
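These near-coincidences can be checked directly from the mean month lengths. A minimal sketch in Python, using rounded mean values in days rather than authoritative ephemeris constants:

```python
SYNODIC = 29.530589      # new moon to new moon
DRACONIC = 27.212221     # node to node
ANOMALISTIC = 27.554550  # perigee to perigee

saros_days = 223 * SYNODIC
print(f"223 synodic months = {saros_days:.3f} days")           # ~6585.321
print(f"draconic months    = {saros_days / DRACONIC:.3f}")     # ~241.999
print(f"anomalistic months = {saros_days / ANOMALISTIC:.3f}")  # ~238.992
```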
The near-integer span of 19 eclipse years means that if there is a solar eclipse (or lunar eclipse), then after one saros a new moon will take place at the same node of the orbit of the Moon, and under these circumstances another solar eclipse can occur.
History
The earliest discovered historical record of what is known as the saros is by Chaldean (neo-Babylonian) astronomers in the last several centuries BCE. It was later known to Hipparchus, Pliny and Ptolemy.
The name "saros" was applied to the eclipse cycle by Edmond Halley in 1686, who took it from the Suda, a Byzantine lexicon of the 11th century. The Suda says, "[The saros is] a measure and a number among Chaldeans. For 120 saroi make 2220 years (years of 12 lunar months) according to the Chaldeans' reckoning, if indeed the saros makes 222 lunar months, which are 18 years and 6 months (i.e. years of 12 lunar months)." The information in the Suda in turn was derived directly or otherwise from the Chronicle of Eusebius of Caesarea, which quoted Berossus. (Guillaume Le Gentil claimed that Halley's usage was incorrect in 1756, but the name continues to be used.) The Greek word apparently either comes from the Babylonian word sāru meaning the number 3600 or the Greek verb saro (σαρῶ) that means "sweep (the sky with the series of eclipses)".
The Saros period of 223 lunar months (in Greek numerals, ΣΚΓ′) is in the Antikythera Mechanism user manual on this instrument, made around 150 to 100 BCE in Greece, as seen in the picture. This number is one of a few inscriptions of the mechanism that are visible with the unaided eye. Above it, the period of the Metonic cycle and the Callippic cycle are also visible.
Description
The saros, a period of 6585.3211 days (15 common years + 3 leap years + 12.321 days, 14 common years + 4 leap years + 11.321 days, or 13 common years + 5 leap years + 10.321 days), is useful for predicting the times at which nearly identical eclipses will occur. Three periodicities related to lunar orbit, the synodic month, the draconic month, and the anomalistic month coincide almost perfectly each saros cycle. For an eclipse to occur, either the Moon must be located between the Earth and Sun (for a solar eclipse) or the Earth must be located between the Sun and Moon (for a lunar eclipse). This can happen only when the Moon is new or full, respectively, and repeat occurrences of these lunar phases result from solar and lunar orbits producing the Moon's synodic period of 29.53059 days. During most full and new moons, however, the shadow of the Earth or Moon falls to the north or south of the other body. Eclipses occur when the three bodies form a nearly straight line. Because the plane of the lunar orbit is inclined to that of the Earth, this condition occurs only when a full or new Moon is near or in the ecliptic plane, that is when the Moon is at one of the two nodes (the ascending or descending node). The period of time for two successive lunar passes through the ecliptic plane (returning to the same node) is termed the draconic month, a 27.21222 day period. The three-dimensional geometry of an eclipse, when the new or full moon is near one of the nodes, occurs every five or six months when the Sun is in conjunction or opposition to the Moon and coincidentally also near a node of the Moon's orbit at that time, or twice per eclipse year. Two eclipses separated by one saros have very similar appearance and duration due to the distance between the Earth and Moon being nearly the same for each event: this is because the saros is also an integer multiple of the anomalistic month of 27.5545 days, the period of the moon with respect to the lines of apsides in its orbit.
After one saros, the Moon will have completed roughly an integer number of synodic, draconic, and anomalistic periods (223, 242, and 239) and the Earth-Sun-Moon geometry will be nearly identical: the Moon will have the same phase and be at the same node and the same distance from the Earth. In addition, because the saros is close to 18 years in length (about 11 days longer), the Earth will be nearly the same distance from the Sun, and tilted to it in nearly the same orientation (same season). Given the date of an eclipse, one saros later a nearly identical eclipse can be predicted. During this 18-year period, about 40 other solar and lunar eclipses take place, but with a somewhat different geometry. One saros, equaling 18.03 years, is not equal to a perfect integer number of lunar orbits (revolutions of the Moon with respect to the fixed stars, the sidereal month of 27.32166 days); therefore, even though the relative geometry of the Earth–Sun–Moon system will be nearly identical after a saros, the Moon will be in a slightly different position with respect to the stars for each eclipse in a saros series. The axis of rotation of the Earth–Moon system exhibits a precession period of 18.59992 years.
The saros is not an integer number of days, but contains a fraction of about one third of a day. Thus each successive eclipse in a saros series occurs about eight hours later in the day. In the case of an eclipse of the Sun, this means that the region of visibility will shift westward about 120°, or about one third of the way around the globe, and the two eclipses will thus not be visible from the same place on Earth. In the case of an eclipse of the Moon, the next eclipse might still be visible from the same location as long as the Moon is above the horizon. Given three saros eclipse intervals, the local time of day of an eclipse will be nearly the same. This three saros interval (19,755.96 days) is known as a triple saros or exeligmos (Greek: "turn of the wheel") cycle.
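The leftover fraction of a day and its consequences can be illustrated with the same mean values used above; a small sketch:

```python
saros_days = 223 * 29.530589
frac = saros_days % 1                # ~0.321 day left over per saros
print(f"shift per saros: {frac * 24:.1f} hours, "
      f"{frac * 360:.0f} degrees of longitude westward")   # ~7.7 h, ~116 deg
print(f"triple saros (exeligmos): {3 * saros_days:.2f} days")  # ~19755.96
```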
Saros series
Each saros series starts with a partial eclipse (Sun first enters the end of the node), and each successive saros the path of the Moon is shifted either northward (when near the descending node) or southward (when near the ascending node) due to the fact that the saros is not an exact integer of draconic months (about one hour short). At some point, eclipses are no longer possible and the series terminates (Sun leaves the beginning of the node). An arbitrary solar saros series was designated as solar saros series 1 by compilers of eclipse statistics. This series has finished, but the eclipse of November 16, 1990 BC (Julian calendar) for example is in solar saros series 1. There are different saros series for solar and lunar eclipses. For lunar saros series, the lunar eclipse occurring 58.5 synodic months earlier (February 23, 1994 BC) was assigned the number 1. If there is an eclipse one inex (29 years minus about 20 days) after an eclipse of a particular saros series then it is a member of the next series. For example, the eclipse of October 26, 1961 BC is in solar saros series 2. Saros series, of course, went on before these dates, and it is necessary to extend the saros series numbers backwards to negative numbers even just to accommodate eclipses occurring in the years following 2000 BC (up till the last eclipse with a negative saros number in 1367 BC). For solar eclipses the statistics for the complete saros series within the era between 2000 BC and AD 3000 are given in this article's references. It takes between 1226 and 1550 years for the members of a saros series to traverse the Earth's surface from north to south (or vice versa). These extremes allow from 69 to 87 eclipses in each series (most series have 71 or 72 eclipses). From 39 to 59 (mostly about 43) eclipses in a given series will be central (that is, total, annular, or hybrid annular-total). At any given time, approximately 40 different saros series will be in progress.
Saros series, as mentioned, are numbered according to the type of eclipse (lunar or solar). In odd numbered series (for solar eclipses) the Sun is near the ascending node, whereas in even numbered series it is near the descending node (this is reversed for lunar eclipse saros series). Generally, the ordering of these series determines the time at which each series peaks, which corresponds to when an eclipse is closest to one of the lunar nodes. For solar eclipses, the 40 series numbered between 117 and 156 are active (series 117 will end in 2054), whereas for lunar eclipses, there are now 41 active saros series (these numbers can be derived by counting the number of eclipses listed over an 18-year (saros) period from the eclipse catalog sites).
Example
As an example of a single saros series, this table gives the dates of some of the 72 lunar eclipses for saros series 131. This eclipse series began in AD 1427 with a partial eclipse at the southern edge of the Earth's shadow when the Moon was close to its descending node. In each successive saros, the Moon's orbital path is shifted northward with respect to the Earth's shadow, with the first total eclipse occurring in 1950. For the following 252 years, total eclipses occur, with the central eclipse in 2078. The first partial eclipse after this will occur in the year 2220, and the final partial eclipse of the series will occur in 2707. The total lifetime of lunar saros series 131 is 1280 years. Solar saros 138 interleaves with this lunar saros with an event occurring every 9 years 5 days alternating between each saros series.
Because of the fraction of days in a saros, the visibility of each eclipse will differ for an observer at a given locale. For the lunar saros series 131, the first total eclipse of 1950 had its best visibility for viewers in Eastern Europe and the Middle East because mid-eclipse was at 20:44 UT. The following eclipse in the series occurred about 8 hours later in the day with mid-eclipse at 4:47 UT, and was best seen from North America and South America. The third total eclipse occurred about 8 hours later in the day than the second eclipse with mid-eclipse at 12:43 UT, and had its best visibility for viewers in the Western Pacific, East Asia, Australia and New Zealand. This cycle of visibility repeats from the start to the end of the series, with minor variations.
For a similar example for solar saros see solar saros 136.
Relationship between lunar and solar saros (sar)
After a given lunar or solar eclipse, after 9 years and 5.5 days (a half saros, or sar) an eclipse will occur that is lunar instead of solar, or vice versa, with similar properties.
For example, if the Moon's penumbra partially covers the southern limb of the Earth during a solar eclipse, 9 years and 5.5 days later a lunar eclipse will occur in which the Moon is partially covered by the southern limb of the Earth's penumbra. Likewise, 9 years and 5.5 days after a total solar eclipse or an annular solar eclipse occurs, a total lunar eclipse will also occur. This 9-year period is referred to as a sar. It includes 111.5 synodic months, or 111 synodic months plus one fortnight. The fortnight accounts for the alternation between solar and lunar eclipse. For a visual example see this chart (each row is one sar apart).
See also
List of saros series for solar eclipses
List of saros series for lunar eclipses
Eclipse cycle
Exeligmos
Solar eclipse
Lunar eclipse
Metonic cycle
References
Bibliography
Jean Meeus and Hermann Mucke (1983) Canon of Lunar Eclipses. Astronomisches Büro, Vienna
Theodor von Oppolzer (1887). Canon der Finsternisse. Vienna
Jean Meeus, Mathematical Astronomy Morsels, Willmann-Bell, Inc., 1997 (Chapter 9, p. 51, Table 9.A: Some eclipse periodicities)
External links
List of all active saros cycles
NASA – Eclipses and the Saros
Solar and Lunar Eclipses – Xabier Jubier – Interactive eclipse search
Eclipse Search – Search 5,000 years of eclipse data by various attributes
Eclipses, Cosmic Clockwork of the Ancients – Fundamental astronomy of eclipses
1st-millennium BC introductions
Eclipses
Time in astronomy
Technical factors of astrology
Neo-Babylonian Empire
Chaldea
Units of time | Saros (astronomy) | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,899 | [
"Time in astronomy",
"Physical quantities",
"Time",
"Units of time",
"Astronomical events",
"Quantity",
"Spacetime",
"Units of measurement",
"Eclipses"
] |
66,315 | https://en.wikipedia.org/wiki/Solid-state%20chemistry | Solid-state chemistry, also sometimes referred to as materials chemistry, is the study of the synthesis, structure, and properties of solid phase materials. It therefore has a strong overlap with solid-state physics, mineralogy, crystallography, ceramics, metallurgy, thermodynamics, materials science and electronics with a focus on the synthesis of novel materials and their characterization. A diverse range of synthetic techniques, such as the ceramic method and chemical vapour deposition, make solid-state materials. Solids can be classified as crystalline or amorphous on the basis of the nature of order present in the arrangement of their constituent particles. Their elemental compositions, microstructures, and physical properties can be characterized through a variety of analytical methods.
History
Because of its direct relevance to products of commerce, solid state inorganic chemistry has been strongly driven by technology. Progress in the field has often been fueled by the demands of industry, sometimes in collaboration with academia. Applications discovered in the 20th century include zeolite and platinum-based catalysts for petroleum processing in the 1950s, high-purity silicon as a core component of microelectronic devices in the 1960s, and “high temperature” superconductivity in the 1980s. The invention of X-ray crystallography in the early 1900s by William Lawrence Bragg was an enabling innovation. Our understanding of how reactions proceed at the atomic level in the solid state was advanced considerably by Carl Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry. Because of his contributions, he has sometimes been referred to as the father of solid state chemistry.
Synthetic methods
Given the diversity of solid-state compounds, an equally diverse array of methods are used for their preparation. Synthesis can range from high-temperature methods, like the ceramic method, to gas methods, like chemical vapour deposition. Often, the methods prevent defect formation or produce high-purity products.
High-temperature methods
Ceramic method
The ceramic method is one of the most common synthesis techniques. The synthesis occurs entirely in the solid state. The reactants are ground together, formed into a pellet using a pellet press and hydraulic press, and heated at high temperatures. When the temperature of the reactants is sufficient, the ions at the grain boundaries react to form desired phases. Generally, ceramic methods give polycrystalline powders, but not single crystals.
Using a mortar and pestle, ResonantAcoustic mixer, or ball mill, the reactants are ground together, which decreases particle size and increases the surface area of the reactants. If mixing alone is not sufficient, techniques such as co-precipitation and sol-gel can be used. A chemist forms pellets from the ground reactants and places the pellets into containers for heating. The choice of container depends on the precursors, the reaction temperature and the expected product. For example, metal oxides are typically synthesized in silica or alumina containers. A tube furnace heats the pellet. Tube furnaces are available up to maximum temperatures of 2800 °C.
Molten flux synthesis
Molten flux synthesis can be an efficient method for obtaining single crystals. In this method, the starting reagents are combined with a flux, an inert material with a melting point lower than that of the starting materials. The flux serves as a solvent. After the reaction, the excess flux can be washed away using an appropriate solvent, or, if it is a volatile compound, it can be heated again to remove the flux by sublimation.
Crucible materials play a major role in molten flux synthesis. The crucible should not react with the flux or the starting reagents. If any of the materials are volatile, it is recommended to conduct the reaction in a sealed ampule. If the target phase is sensitive to oxygen, a carbon-coated fused silica tube or a carbon crucible inside a fused silica tube is often used, which prevents direct contact between the tube wall and the reagents.
Chemical vapour transport
Chemical vapour transport results in very pure materials. The reaction typically occurs in a sealed ampoule. A transporting agent, added to the sealed ampoule, produces a volatile intermediate species from the solid reactant. For metal oxides, the transporting agent is usually Cl2 or HCl. The ampoule has a temperature gradient, and, as the gaseous reactant travels along the gradient, it eventually deposits as a crystal. An example of an industrially-used chemical vapor transport reaction is the Mond process. The Mond process involves heating impure nickel in a stream of carbon monoxide to produce pure nickel.
Low-temperature methods
Intercalation method
Intercalation synthesis is the insertion of molecules or ions between layers of a solid. The layered solid has weak intermolecular bonds holding its layers together. The process occurs via diffusion. Intercalation is further driven by ion exchange, acid-base reactions or electrochemical reactions. The intercalation method was first used in China with the discovery of porcelain. Also, graphene is produced by the intercalation method, and this method is the principle behind lithium-ion batteries.
Solution methods
It is possible to use solvents to prepare solids by precipitation or by evaporation. At times, the solvent is used hydrothermally, that is, under pressure at temperatures higher than the normal boiling point. A variation on this theme is the use of flux methods, which use a salt with a relatively low melting point as the solvent.
Gas methods
Many solids react vigorously with gas species like chlorine, iodine, and oxygen. Other solids form adducts with gases such as CO or ethylene. Such reactions are conducted in open-ended tubes through which the gases are passed. Also, these reactions can take place inside a measuring device such as a TGA. In that case, stoichiometric information can be obtained during the reaction, which helps identify the products.
Chemical vapour deposition
Chemical vapour deposition is a method widely used for the preparation of coatings and semiconductors from molecular precursors. A carrier gas transports the gaseous precursors to the material for coating.
Characterization
This is the process in which a material’s chemical composition, structure, and physical properties are determined using a variety of analytical techniques.
New phases
Synthetic methodology and characterization often go hand in hand in the sense that not one but a series of reaction mixtures are prepared and subjected to heat treatment. Stoichiometry, a numerical relationship between the quantities of reactant and product, is typically varied systematically. It is important to find which stoichiometries will lead to new solid compounds or solid solutions between known ones. A prime method to characterize the reaction products is powder diffraction because many solid-state reactions will produce polycrystalline solids or powders. Powder diffraction aids in the identification of known phases in the mixture. If a pattern is found that is not known in the diffraction data libraries, an attempt can be made to index the pattern. The characterization of a material's properties is typically easier for a product with crystalline structures.
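For a known or guessed structure, the expected peak positions in a powder pattern follow from Bragg's law; for a cubic cell the interplanar spacing is d = a/√(h² + k² + l²). A minimal sketch in Python, assuming Cu Kα radiation and an illustrative rock-salt-like cell edge (the numbers are examples, not values from the text):

```python
import math

WAVELENGTH = 1.5406  # angstroms, Cu K-alpha (a common lab X-ray source)

def cubic_two_theta(a: float, hkl: tuple) -> float:
    # Bragg's law: lambda = 2 d sin(theta), with d = a / sqrt(h^2 + k^2 + l^2)
    d = a / math.sqrt(sum(i * i for i in hkl))
    return 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))

# Illustrative cubic cell, a ~ 5.64 angstroms (roughly rock salt):
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    print(hkl, f"2-theta = {cubic_two_theta(5.64, hkl):.2f} deg")
```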
Compositions and structures
Once the unit cell of a new phase is known, the next step is to establish the stoichiometry of the phase. This can be done in several ways. Sometimes the composition of the original mixture will give a clue, under the circumstances that only a product with a single powder pattern is found or a phase of a certain composition is made by analogy to known material, but this is rare.
Often, considerable effort in refining the synthetic procedures is required to obtain a pure sample of the new material. If it is possible to separate the product from the rest of the reaction mixture, elemental analysis methods such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can be used. The detection of scattered and transmitted electrons from the surface of the sample provides information about the surface topography and composition of the material. Energy dispersive X-ray spectroscopy (EDX) is a technique that uses electron beam excitation. Exciting the inner shell of an atom with incident electrons emits characteristic X-rays with a specific energy for each element. The peak energy can identify the chemical composition of a sample, including the distribution and concentration. Similar to EDX, X-ray diffraction analysis (XRD) involves the generation of characteristic X-rays upon interaction with the sample. The intensity of diffracted rays scattered at different angles is used to analyze the physical properties of a material such as phase composition and crystallographic structure. These techniques can also be coupled to achieve a better effect. For example, SEM is a useful complement to EDX: because of its focused electron beam, it produces a high-magnification image that provides information on the surface topography. Once the area of interest has been identified, EDX can be used to determine the elements present in that specific spot. Selected area electron diffraction can be coupled with TEM or SEM to investigate the level of crystallinity and the lattice parameters of a sample.
More information
X-ray diffraction is also used due to its imaging capabilities and speed of data generation. This often requires revisiting and refining the preparative procedures, which is linked to the question of which phases are stable at what composition and what stoichiometry; in other words, what the phase diagram looks like. Important tools in establishing this are thermal analysis techniques like DSC or DTA and, increasingly, due to the advent of synchrotrons, temperature-dependent powder diffraction. Increased knowledge of the phase relations often leads to further refinement in synthetic procedures in an iterative way. New phases are thus characterized by their melting points and their stoichiometric domains. The latter is important for the many solids that are non-stoichiometric compounds. The cell parameters obtained from XRD are particularly helpful to characterize the homogeneity ranges of the latter.
Local structure
In contrast to the large structures of crystals, the local structure describes the interaction of the nearest neighbouring atoms. Methods of nuclear spectroscopy use specific nuclei to probe the electric and magnetic fields around the nucleus. E.g. electric field gradients are very sensitive to small changes caused by lattice expansion/compression (thermal or pressure), phase changes, or local defects. Common methods are Mössbauer spectroscopy and perturbed angular correlation.
Optical properties
For metallic materials, their optical properties arise from the collective excitation of conduction electrons. The coherent oscillations of electrons under electromagnetic radiation along with associated oscillations of the electromagnetic field are called surface plasmon resonances. The excitation wavelength and frequency of the plasmon resonances provide information on the particle's size, shape, composition, and local optical environment.
For non-metallic materials or semiconductors, they can be characterized by their band structure. It contains a band gap that represents the minimum energy difference between the top of the valence band and the bottom of the conduction band. The band gap can be determined using Ultraviolet-visible spectroscopy to predict the photochemical properties of the semiconductors.
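As a back-of-the-envelope example of how an absorption edge maps to a band gap, the photon energy in electronvolts is E = hc/λ ≈ 1240/λ(nm). A small sketch; the edge wavelength below is hypothetical:

```python
def band_gap_ev(edge_nm: float) -> float:
    # E = h * c / lambda, with h * c ~ 1239.84 eV.nm
    return 1239.84 / edge_nm

print(f"{band_gap_ev(560):.2f} eV")  # a 560 nm absorption edge -> ~2.21 eV gap
```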
Further characterization
In many cases, new solid compounds are further characterized by a variety of techniques that straddle the fine line that separates solid-state chemistry from solid-state physics. See Characterisation in material science for additional information.
References
External links
, Sadoway, Donald. 3.091SC; Introduction to Solid State Chemistry, Fall 2010. (Massachusetts Institute of Technology: MIT OpenCourseWare)
Materials science | Solid-state chemistry | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,358 | [
"Applied and interdisciplinary physics",
"Materials science",
"Condensed matter physics",
"nan",
"Solid-state chemistry"
] |
66,570 | https://en.wikipedia.org/wiki/Dalton%27s%20law | Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total pressure exerted is equal to the sum of the partial pressures of the individual gases. This empirical law was observed by John Dalton in 1801 and published in 1802. Dalton's law is related to the ideal gas laws.
Formula
Mathematically, the pressure of a mixture of non-reactive gases can be defined as the summation
$$p_\text{total} = \sum_{i=1}^{n} p_i = p_1 + p_2 + \cdots + p_n$$
where $p_1$, $p_2$, ..., $p_n$ represent the partial pressures of each component. The partial pressure of each component can in turn be written
$$p_i = p_\text{total} \, x_i$$
where $x_i$ is the mole fraction of the ith component in the total mixture of n components.
Volume-based concentration
The relationship below provides a way to determine the volume-based concentration of any individual gaseous component
$$p_i = p_\text{total} \, c_i$$
where $c_i$ is the concentration of component i.
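A minimal sketch in Python of the mole-fraction form of the law (the air composition below is approximate and for illustration only):

```python
def partial_pressures(p_total: float, mole_fractions: list) -> list:
    # Dalton's law for an ideal mixture: p_i = x_i * p_total
    assert abs(sum(mole_fractions) - 1.0) < 1e-6, "fractions must sum to 1"
    return [x * p_total for x in mole_fractions]

# Dry air at 101.325 kPa: roughly N2, O2, Ar, CO2
print(partial_pressures(101.325, [0.7808, 0.2095, 0.0093, 0.0004]))
```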
Dalton's law is not strictly followed by real gases, with the deviation increasing with pressure. Under such conditions the volume occupied by the molecules becomes significant compared to the free space between them. In particular, the short average distances between molecules increases intermolecular forces between gas molecules enough to substantially change the pressure exerted by them, an effect not included in the ideal gas model.
See also
References
Gas laws
Physical chemistry
Engineering thermodynamics | Dalton's law | [
"Physics",
"Chemistry",
"Engineering"
] | 268 | [
"Applied and interdisciplinary physics",
"Separation processes",
"Engineering thermodynamics",
"Distillation",
"Thermodynamics",
"nan",
"Gas laws",
"Mechanical engineering",
"Physical chemistry"
] |
66,575 | https://en.wikipedia.org/wiki/Nutrient | A nutrient is a substance used by an organism to survive, grow and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted into smaller molecules in the process of releasing energy such as for carbohydrates, lipids, proteins and fermentation products (ethanol or vinegar) leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential to humans and some animal species but most other animals and many plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include protein, fats, sugars and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiological roles in cellular processes, like vascular functions or nerve conduction. Inadequate amounts of essential nutrients, or diseases that interfere with absorption, result in a deficiency state that compromises growth, survival and reproduction. Consumer advisories for dietary nutrient intakes, such as the United States Dietary Reference Intake, are based on the amount required to prevent deficiency and provide macronutrient and micronutrient guides for both lower and upper limits of intake. In many countries, regulations require that food product labels display information about the amount of any macronutrients and micronutrients present in the food in significant quantities. Nutrients in larger quantities than the body needs may have harmful effects. Edible plants also contain thousands of compounds generally called phytochemicals which have unknown effects on disease or health, including a diverse class with non-nutrient status called polyphenols, which remain poorly understood as of 2024.
Types
Macronutrients
Macronutrients are defined in several ways.
The chemical elements humans consume in the largest quantities are carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulphur, summarized as CHNOPS.
The chemical compounds that humans consume in the largest quantities and provide bulk energy are classified as carbohydrates, proteins, and fats. Water must be also consumed in large quantities but does not provide caloric value.
Calcium, sodium, potassium, magnesium, and chloride ions, along with phosphorus and sulfur, are listed with macronutrients because they are required in large quantities compared to micronutrients, i.e., vitamins and other minerals, the latter often described as trace or ultratrace minerals.
Macronutrients provide energy:
Carbohydrates are compounds made up of types of sugar. Carbohydrates are classified according to their number of sugar units: monosaccharides (such as glucose and fructose), disaccharides (such as sucrose and lactose), oligosaccharides, and polysaccharides (such as starch, glycogen, and cellulose).
Proteins are organic compounds that consist of amino acids joined by peptide bonds. Since the body cannot manufacture some of the amino acids (termed essential amino acids), the diet must supply them. Through digestion, proteins are broken down by proteases back into free amino acids.
Fats consist of a glycerin molecule with three fatty acids attached. Fatty acid molecules contain a -COOH group attached to unbranched hydrocarbon chains connected by single bonds alone (saturated fatty acids) or by both double and single bonds (unsaturated fatty acids). Fats are needed for construction and maintenance of cell membranes, to maintain a stable body temperature, and to sustain the health of skin and hair. Because the body does not manufacture certain fatty acids (termed essential fatty acids), they must be obtained through one's diet.
Ethanol is not an essential nutrient, but it does provide calories. The United States Department of Agriculture uses a figure of about 7 kilocalories (29 kJ) per gram of alcohol (about 5.5 kcal per ml) for calculating food energy. For distilled spirits, a standard serving in the U.S. is 1.5 US fluid ounces (44 ml), which at 40% ethanol (80 proof) would be 14 grams and 98 calories.
Micronutrients
Micronutrients are essential dietary elements required in varying quantities throughout life to serve metabolic and physiological functions.
Dietary minerals, such as potassium, sodium, and iron, are elements native to Earth, and cannot be synthesized. They are required in the diet in microgram or milligram amounts. As plants obtain minerals from the soil, dietary minerals derive directly from plants consumed or indirectly from edible animal sources.
Vitamins are organic compounds required in microgram or milligram amounts. The importance of each dietary vitamin was first established when it was determined that a disease would develop if that vitamin was absent from the diet.
Essentiality
Essential nutrients
An essential nutrient is a nutrient required for normal physiological function that cannot be synthesized in the body – either at all or in sufficient quantities – and thus must be obtained from a dietary source. Apart from water, which is universally required for the maintenance of homeostasis in mammals, essential nutrients are indispensable for various cellular metabolic processes and for the maintenance and function of tissues and organs. The nutrients considered essential for humans comprise nine amino acids, two fatty acids, thirteen vitamins, fifteen minerals and choline. In addition, there are several molecules that are considered conditionally essential nutrients since they are indispensable in certain developmental and pathological states.
Amino acids
An essential amino acid is an amino acid that is required by an organism but cannot be synthesized de novo by it, and therefore must be supplied in its diet. Out of the twenty standard protein-producing amino acids, nine cannot be endogenously synthesized by humans: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine.
Fatty acids
Essential fatty acids (EFAs) are fatty acids that humans and other animals must ingest because the body requires them for good health but cannot synthesize them. Only two fatty acids are known to be essential for humans: alpha-linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid).
Vitamins and vitamers
Vitamins occur in a variety of related forms known as vitamers. The vitamers of a given vitamin perform the functions of that vitamin and prevent symptoms of deficiency of that vitamin. Vitamins are those essential organic molecules that are not classified as amino acids or fatty acids. They commonly function as enzymatic cofactors, metabolic regulators or antioxidants. Humans require thirteen vitamins in their diet, most of which are actually groups of related molecules (e.g. vitamin E includes tocopherols and tocotrienols): vitamins A, C, D, E, K, thiamine (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12). The requirement for vitamin D is conditional, as people who get sufficient exposure to ultraviolet light, either from the sun or an artificial source, synthesize vitamin D in the skin.
Minerals
Minerals are the exogenous chemical elements indispensable for life. Although the four elements carbon, hydrogen, oxygen, and nitrogen (CHON) are essential for life, they are so plentiful in food and drink that these are not considered nutrients and there are no recommended intakes for these as minerals. The need for nitrogen is addressed by requirements set for protein, which is composed of nitrogen-containing amino acids. Sulfur is essential, but again does not have a recommended intake. Instead, recommended intakes are identified for the sulfur-containing amino acids methionine and cysteine.
The essential nutrient trace elements for humans, listed in order of Recommended Dietary Allowance (expressed as a mass), are potassium, chloride, sodium, calcium, phosphorus, magnesium, iron, zinc, manganese, copper, iodine, chromium, molybdenum, and selenium. Additionally, cobalt is a component of Vitamin B12 which is essential. There are other minerals which are essential for some plants and animals, but may or may not be essential for humans, such as boron and silicon.
Choline
Choline is an essential nutrient. The cholines are a family of water-soluble quaternary ammonium compounds. Choline is the parent compound of the cholines class, consisting of ethanolamine having three methyl substituents attached to the amino function. Healthy humans fed artificially composed diets that are deficient in choline develop fatty liver, liver damage, and muscle damage. Choline was not initially classified as essential because the human body can produce choline in small amounts through phosphatidylcholine metabolism.
Conditionally essential
Conditionally essential nutrients are certain organic molecules that can normally be synthesized by an organism, but under certain conditions in insufficient quantities. In humans, such conditions include premature birth, limited nutrient intake, rapid growth, and certain disease states. Inositol, taurine, arginine, glutamine and nucleotides are classified as conditionally essential and are particularly important in neonatal diet and metabolism.
Non-essential
Non-essential nutrients are substances within foods that can have a significant impact on health. Dietary fiber is not absorbed in the human digestive tract. Soluble fiber is metabolized to butyrate and other short-chain fatty acids by bacteria residing in the large intestine. Soluble fiber is marketed as serving a prebiotic function with claims for promoting "healthy" intestinal bacteria.
Non-nutrients
Ethanol (C2H5OH) is not an essential nutrient, but it does supply approximately 7 kcal (29 kJ) of food energy per gram. For spirits (vodka, gin, rum, etc.) a standard serving in the United States is 1.5 US fluid ounces (44 ml), which at 40% ethanol (80 proof) would be 14 grams and 98 kcal. At 50% alcohol, this would be 17.5 g and about 122 kcal. Wine and beer contain a similar amount of ethanol in servings of 5 and 12 US fluid ounces, respectively, but these beverages also contribute to food energy intake from components other than ethanol. A serving of wine contains roughly 120 kcal. A serving of beer contains roughly 150 kcal.
According to the U.S. Department of Agriculture, based on NHANES 2013–2014 surveys, women ages 20 and up consume on average 6.8 grams of alcohol per day and men consume on average 15.5 grams per day. Ignoring the non-alcohol contribution of those beverages, the average ethanol contributions to daily food energy intake are about 48 and 109 kilocalories, respectively. Alcoholic beverages are considered empty calorie foods because, while providing energy, they contribute no essential nutrients.
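The arithmetic behind these figures is straightforward; a minimal sketch, assuming an ethanol density of about 0.789 g/ml and the approximate 7 kcal/g energy value used above:

```python
ETHANOL_KCAL_PER_G = 7.0   # approximate food energy of ethanol
ETHANOL_DENSITY = 0.789    # g/ml

def ethanol_kcal(volume_ml: float, abv: float) -> float:
    # Food energy (kcal) from the ethanol in a drink of a given ABV.
    return volume_ml * abv * ETHANOL_DENSITY * ETHANOL_KCAL_PER_G

# 1.5 US fl oz (~44 ml) of 80-proof spirits: ~14 g ethanol, ~98 kcal
print(f"{ethanol_kcal(44.4, 0.40):.0f} kcal")
```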
By definition, phytochemicals include all nutritional and non-nutritional components of edible plants. Included as nutritional constituents are provitamin A carotenoids, whereas those without nutrient status are diverse polyphenols, flavonoids, resveratrol, and lignans that are present in numerous plant foods. Some phytochemical compounds are under preliminary research for their potential effects on human diseases and health. However, the qualification for nutrient status of compounds with poorly defined properties in vivo is that they must first be defined with a Dietary Reference Intake level to enable accurate food labeling, a condition not established for most phytochemicals that are claimed to provide antioxidant benefits.
Deficiencies and toxicity
See Vitamin, Mineral (nutrient), Protein (nutrient)
An inadequate amount of a nutrient is a deficiency. Deficiencies can be due to several causes, including an inadequacy in nutrient intake, called a dietary deficiency, or any of several conditions that interfere with the utilization of a nutrient within an organism. Some of the conditions that can interfere with nutrient utilization include problems with nutrient absorption, substances that cause a greater-than-normal need for a nutrient, conditions that cause nutrient destruction, and conditions that cause greater nutrient excretion. Nutrient toxicity occurs when excess consumption of a nutrient does harm to an organism.
In the United States and Canada, recommended dietary intake levels of essential nutrients are based on the minimum level that "will maintain a defined level of nutriture in an individual", a definition somewhat different from that used by the World Health Organization and Food and Agriculture Organization of a "basal requirement to indicate the level of intake needed to prevent pathologically relevant and clinically detectable signs of a dietary inadequacy".
In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity. For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher-than-average needs. Adequate Intakes (AIs) are set when there is insufficient information to establish EARs and RDAs. Countries establish tolerable upper intake levels, also referred to as upper limits (ULs), based on amounts that cause adverse effects. Governments are slow to revise information of this nature. For the U.S. values, except calcium and vitamin D, all data date from 1997 to 2004.
* The daily recommended amounts of niacin and magnesium are higher than the tolerable upper limit because, for both nutrients, the ULs identify the amounts which will not increase risk of adverse effects when the nutrients are consumed as a serving of a dietary supplement. Magnesium supplementation above the UL may cause diarrhea. Supplementation with niacin above the UL may cause flushing of the face and a sensation of body warmth. Each country or regional regulatory agency decides on a safety margin below the intake at which symptoms may occur, so the ULs may differ based on source.
EAR U.S. Estimated Average Requirements.
RDA U.S. Recommended Dietary Allowances; higher for adults than for children, and may be even higher for women who are pregnant or lactating.
AI U.S. Adequate Intake; AIs established when there is not sufficient information to set EARs and RDAs.
PRI Population Reference Intake is European Union equivalent of RDA; higher for adults than for children, and may be even higher for women who are pregnant or lactating. For Thiamin and Niacin, the PRIs are expressed as amounts per megajoule (239 kilocalories) of food energy consumed.
Upper Limit Tolerable upper intake levels.
ND ULs have not been determined.
NE EARs, PRIs or AIs have not yet been established or will not be (EU does not consider chromium an essential nutrient).
Plant
Plants absorb carbon, hydrogen, and oxygen from air and soil as carbon dioxide and water. Other nutrients are absorbed from soil (exceptions include some parasitic or carnivorous plants). Counting these, there are 17 important nutrients for plants: the macronutrients nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg), carbon (C), oxygen (O) and hydrogen (H), and the micronutrients iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo) and nickel (Ni). In addition to carbon, hydrogen, and oxygen, nitrogen, phosphorus, and sulfur are also needed in relatively large quantities. Together, these six are the elemental macronutrients for all organisms.
They are sourced from inorganic matter (for example, carbon dioxide, water, nitrates, phosphates, sulfates, and diatomic molecules of nitrogen and, especially, oxygen) and organic compounds such as carbohydrates, lipids, proteins.
See also
References
External links
USDA. Dietary Reference Intakes
Chemical oceanography
Ecology
Edaphology
Biology and pharmacology of chemical elements
Nutrition
Essential nutrients | Nutrient | [
"Chemistry",
"Biology"
] | 3,583 | [
"Pharmacology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Ecology",
"Chemical oceanography",
"Biochemistry"
] |
66,675 | https://en.wikipedia.org/wiki/Light%20curve | In astronomy, a light curve is a graph of the light intensity of a celestial object or region as a function of time, typically with the magnitude of light received on the y-axis and with time on the x-axis. The light is usually in a particular frequency interval or band.
Light curves can be periodic, as in the case of eclipsing binaries, Cepheid variables, other periodic variables, and transiting extrasolar planets; or aperiodic, like the light curve of a nova, cataclysmic variable star, supernova, microlensing event, or binary as observed during occultation events. The study of the light curve, together with other observations, can yield considerable information about the physical process that produces it or constrain the physical theories about it.
Variable stars
Graphs of the apparent magnitude of a variable star over time are commonly used to visualise and analyse their behaviour. Although the categorisation of variable star types is increasingly done from their spectral properties, the amplitudes, periods, and regularity of their brightness changes are still important factors. Some types such as Cepheids have extremely regular light curves with exactly the same period, amplitude, and shape in each cycle. Others such as Mira variables have somewhat less regular light curves with large amplitudes of several magnitudes, while the semiregular variables are less regular still and have smaller amplitudes.
The shapes of variable star light curves give valuable information about the underlying physical processes producing the brightness changes. For eclipsing variables, the shape of the light curve indicates the degree of totality, the relative sizes of the stars, and their relative surface brightnesses. It may also show the eccentricity of the orbit and distortions in the shape of the two stars. For pulsating stars, the amplitude or period of the pulsations can be related to the luminosity of the star, and the light curve shape can be an indicator of the pulsation mode.
Supernovae
Light curves from supernovae can be indicative of the type of supernova. Although supernova types are defined on the basis of their spectra, each has typical light curve shapes. Type I supernovae have light curves with a sharp maximum and gradually decline, while Type II supernovae have less sharp maxima. Light curves are helpful for classification of faint supernovae and for the determination of sub-types. For example, type II-P (plateau) supernovae have spectra similar to those of type II-L (linear) supernovae but are distinguished by a light curve where the decline flattens out for several weeks or months before resuming its fade.
Planetary astronomy
In planetary science, a light curve can be used to derive the rotation period of a minor planet, moon, or comet nucleus. From the Earth there is often no way to resolve a small object in the Solar System, even in the most powerful of telescopes, since the apparent angular size of the object is smaller than one pixel in the detector. Thus, astronomers measure the amount of light produced by an object as a function of time (the light curve). The time separation of peaks in the light curve gives an estimate of the rotational period of the object. The difference between the maximum and minimum brightnesses (the amplitude of the light curve) can be due to the shape of the object, or to bright and dark areas on its surface. For example, an asymmetrical asteroid's light curve generally has more pronounced peaks, while a more spherical object's light curve will be flatter. This allows astronomers to infer information about the shape and spin (but not size) of asteroids.
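A common way to extract such a period from brightness measurements is a Lomb–Scargle periodogram, which handles unevenly sampled data. A minimal sketch with synthetic data, using astropy; note that for an asteroid's typically double-peaked light curve the rotation period is about twice the best single-sinusoid period:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))   # irregular observation times (days)
true_period = 0.3                      # simulated signal period (days)
mag = (0.1 * np.sin(2 * np.pi * t / true_period)
       + 0.02 * rng.standard_normal(t.size))   # signal plus noise

freq, power = LombScargle(t, mag).autopower()
print(f"best-fit period: {1 / freq[np.argmax(power)]:.3f} days")
```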
Asteroid lightcurve database
Light curve quality code
The Asteroid Lightcurve Database (LCDB) of the Collaborative Asteroid Lightcurve Link (CALL) uses a numeric code to assess the quality of a period solution for minor planet light curves (it does not necessarily assess the actual underlying data). Its quality code parameter U ranges from 0 (incorrect) to 3 (well-defined), as listed below; a small illustrative helper follows the list:
U = 0 → Result later proven incorrect
U = 1 → Result based on fragmentary light curve(s), may be completely wrong.
U = 2 → Result based on less than full coverage. Period may be wrong by 30 percent or ambiguous.
U = 3 → Secure result within the precision given. No ambiguity.
U = n.a. → Not available. Incomplete or inconclusive result.
A trailing plus sign (+) or minus sign (−) is also used to indicate a slightly better or worse quality than the unsigned value.
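A hypothetical helper showing how these codes might be handled programmatically (the names here are illustrative and not part of the LCDB itself):

```python
LCDB_U_MEANING = {
    "0": "result later proven incorrect",
    "1": "fragmentary light curve(s); may be completely wrong",
    "2": "less than full coverage; period may be wrong by ~30% or ambiguous",
    "3": "secure result within the precision given",
}

def describe_quality(code: str) -> str:
    # A trailing '+' or '-' marks slightly better or worse quality.
    base = code.rstrip("+-")
    return LCDB_U_MEANING.get(base, "not available / inconclusive")

print(describe_quality("2+"))
```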
Occultation light curves
The occultation light curve is often characterised as binary, where the light from the star is terminated instantaneously, remains constant for the duration, and is reinstated instantaneously. The duration is equivalent to the length of a chord across the occulting body.
Circumstances where the transitions are not instantaneous are:
when either the occulting or occulted body are double, e.g. a double star or double asteroid, then a step light curve is observed.
when the occulted body is large, e.g. a star like Antares, then the transitions are gradual.
when the occulting body has an atmosphere, e.g. the moon Titan
The observations are typically recorded using video equipment and the disappearance and reappearance timed using a GPS disciplined Video Time Inserter (VTI).
Occultation light curves are archived at the VizieR service.
Exoplanet discovery
Periodic dips in a star's light curve graph could be due to an exoplanet passing in front of the star that it is orbiting. When an exoplanet passes in front of its star, light from that star is temporarily blocked, resulting in a dip in the star's light curve. These dips are periodic, as planets periodically orbit a star. Many exoplanets have been discovered via this method, which is known as the astronomical transit method.
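The depth of such a dip is set, to first order, by the ratio of the planet's and star's projected areas: depth ≈ (Rp/Rs)². A minimal sketch with illustrative radii:

```python
def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    # Fraction of starlight blocked during transit: (Rp / Rs)^2
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet (~71,500 km) crossing a Sun-sized star (~696,000 km)
print(f"{transit_depth(71_500, 696_000):.4f}")  # ~0.01, i.e. a ~1% dip
```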
Light curve inversion
Light curve inversion is a mathematical technique used to model the surfaces of rotating objects from their brightness variations. This can be used to effectively image starspots or asteroid surface albedos.
Microlensing
Microlensing is a process where relatively small and low-mass astronomical objects cause a brief small increase in the brightness of a more distant object. This is caused by the same relativistic light-bending effect that produces larger gravitational lenses, but it allows the detection and analysis of otherwise-invisible stellar and planetary mass objects. The properties of these objects can be inferred from the shape of the lensing light curve. For example, PA-99-N2 is a microlensing event that may have been due to a star in the Andromeda Galaxy that has an exoplanet.
References
External links
The AAVSO online light curve generator can plot light curves for thousands of variable stars
The Open Astronomy Catalogs have light curves for several transient types, including supernovae
Lightcurves: An Introduction by NASA's Imagine the Universe
DAMIT Database of Asteroid Models from Inversion Techniques
Variable stars
Concepts in stellar astronomy
Planetary science | Light curve | [
"Physics",
"Astronomy"
] | 1,449 | [
"Concepts in stellar astronomy",
"Planetary science",
"Concepts in astrophysics",
"Astronomical sub-disciplines"
] |
66,723 | https://en.wikipedia.org/wiki/Respiratory%20system | The respiratory system (also respiratory apparatus, ventilatory system) is a biological system consisting of specific organs and structures used for gas exchange in animals and plants. The anatomy and physiology that make this happen varies greatly, depending on the size of the organism, the environment in which it lives and its evolutionary history. In land animals, the respiratory surface is internalized as linings of the lungs. Gas exchange in the lungs occurs in millions of small air sacs; in mammals and reptiles, these are called alveoli, and in birds, they are known as atria. These microscopic air sacs have a very rich blood supply, thus bringing the air into close contact with the blood. These air sacs communicate with the external environment via a system of airways, or hollow tubes, of which the largest is the trachea, which branches in the middle of the chest into the two main bronchi. These enter the lungs where they branch into progressively narrower secondary and tertiary bronchi that branch into numerous smaller tubes, the bronchioles. In birds, the bronchioles are termed parabronchi. It is the bronchioles, or parabronchi that generally open into the microscopic alveoli in mammals and atria in birds. Air has to be pumped from the environment into the alveoli or atria by the process of breathing which involves the muscles of respiration.
In most fish, and a number of other aquatic animals (both vertebrates and invertebrates), the respiratory system consists of gills, which are either partially or completely external organs, bathed in the watery environment. This water flows over the gills by a variety of active or passive means. Gas exchange takes place in the gills, which consist of thin or very flat filaments and lamellae which expose a very large surface area of highly vascularized tissue to the water.
Other animals, such as insects, have respiratory systems with very simple anatomical features, and in amphibians, even the skin plays a vital role in gas exchange. Plants also have respiratory systems but the directionality of gas exchange can be opposite to that in animals. The respiratory system in plants includes anatomical features such as stomata, that are found in various parts of the plant.
Mammals
Anatomy
In humans and other mammals, the anatomy of a typical respiratory system is the respiratory tract. The tract is divided into an upper and a lower respiratory tract. The upper tract includes the nose, nasal cavities, sinuses, pharynx and the part of the larynx above the vocal folds. The lower tract (Fig. 2.) includes the lower part of the larynx, the trachea, bronchi, bronchioles and the alveoli.
The branching airways of the lower tract are often described as the respiratory tree or tracheobronchial tree (Fig. 2). The intervals between successive branch points along the various branches of the "tree" are often referred to as branching "generations", of which there are, in the adult human, about 23. The earlier generations (approximately generations 0–16), consisting of the trachea and the bronchi, as well as the larger bronchioles, simply act as air conduits, bringing air to the respiratory bronchioles, alveolar ducts and alveoli (approximately generations 17–23), where gas exchange takes place. Bronchioles are defined as the small airways lacking any cartilaginous support.
The first bronchi to branch from the trachea are the right and left main bronchi. Second only in diameter to the trachea (1.8 cm), these bronchi (1–1.4 cm in diameter) enter the lungs at each hilum, where they branch into narrower secondary bronchi known as lobar bronchi, and these branch into narrower tertiary bronchi known as segmental bronchi. Further divisions of the segmental bronchi (1 to 6 mm in diameter) are known as 4th order, 5th order, and 6th order segmental bronchi, or grouped together as subsegmental bronchi.
Compared to the average of 23 branchings of the respiratory tree in the adult human, the mouse has only about 13 such branchings.
The alveoli are the dead end terminals of the "tree", meaning that any air that enters them has to exit via the same route. A system such as this creates dead space, a volume of air (about 150 ml in the adult human) that fills the airways after exhalation and is breathed back into the alveoli before environmental air reaches them. At the end of inhalation, the airways are filled with environmental air, which is exhaled without coming in contact with the gas exchanger.
Ventilatory volumes
The lungs expand and contract during the breathing cycle, drawing air in and out of the lungs. The volume of air moved in or out of the lungs under normal resting circumstances (the resting tidal volume of about 500 ml), and volumes moved during maximally forced inhalation and maximally forced exhalation are measured in humans by spirometry. A typical adult human spirogram with the names given to the various excursions in volume the lungs can undergo is illustrated below (Fig. 3):
Not all the air in the lungs can be expelled during maximally forced exhalation. This is the residual volume (volume of air remaining even after a forced exhalation) of about 1.0–1.5 liters which cannot be measured by spirometry. Volumes that include the residual volume (i.e. functional residual capacity of about 2.5–3.0 liters, and total lung capacity of about 6 liters) can therefore also not be measured by spirometry. Their measurement requires special techniques.
The rates at which air is breathed in or out, either through the mouth or nose or into or out of the alveoli are tabulated below, together with how they are calculated. The number of breath cycles per minute is known as the respiratory rate. An average healthy human breathes 12–16 times a minute.
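As a rough illustration of how these rates are derived, the sketch below computes minute, alveolar and dead space ventilation from representative resting values quoted in this article; the 14 breaths-per-minute rate is an assumed mid-range value, not a fixed constant.

```python
# Representative resting values quoted in this article (adult human).
tidal_volume_ml = 500       # air moved in or out per breath
dead_space_ml = 150         # air that never reaches the gas exchanger
respiratory_rate = 14       # breaths per minute (12-16 is typical)

minute_ventilation = tidal_volume_ml * respiratory_rate              # at the nose/mouth
alveolar_ventilation = (tidal_volume_ml - dead_space_ml) * respiratory_rate
dead_space_ventilation = dead_space_ml * respiratory_rate

print(f"Minute ventilation:     {minute_ventilation} ml/min")     # 7000
print(f"Alveolar ventilation:   {alveolar_ventilation} ml/min")   # 4900
print(f"Dead space ventilation: {dead_space_ventilation} ml/min") # 2100
```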
Mechanics of breathing
In mammals, inhalation at rest is primarily due to the contraction of the diaphragm. This is an upwardly domed sheet of muscle that separates the thoracic cavity from the abdominal cavity. When it contracts, the sheet flattens (i.e. moves downwards as shown in Fig. 7), increasing the volume of the thoracic cavity in the vertical axis. The contracting diaphragm pushes the abdominal organs downwards. But because the pelvic floor prevents the lowermost abdominal organs from moving in that direction, the pliable abdominal contents cause the belly to bulge outwards to the front and sides, because the relaxed abdominal muscles do not resist this movement (Fig. 7). This entirely passive bulging (and shrinking during exhalation) of the abdomen during normal breathing is sometimes referred to as "abdominal breathing", although it is, in fact, "diaphragmatic breathing", which is not visible on the outside of the body. Mammals only use their abdominal muscles during forceful exhalation (see Fig. 8, and discussion below), never during any form of inhalation.
As the diaphragm contracts, the rib cage is simultaneously enlarged by the ribs being pulled upwards by the intercostal muscles as shown in Fig. 4. All the ribs slant downwards from the rear to the front (as shown in Fig. 4); but the lowermost ribs also slant downwards from the midline outwards (Fig. 5). Thus the rib cage's transverse diameter can be increased in the same way as the antero-posterior diameter is increased by the so-called pump handle movement shown in Fig. 4.
The enlargement of the thoracic cavity's vertical dimension by the contraction of the diaphragm, and its two horizontal dimensions by the lifting of the front and sides of the ribs, causes the intrathoracic pressure to fall. The lungs' interiors are open to the outside air and, being elastic, therefore expand to fill the increased space; the pleural fluid between the two layers of the pleura covering the lungs reduces friction as the lungs expand and contract. The inflow of air into the lungs occurs via the respiratory airways (Fig. 2). In a healthy person, these airways begin with the nose. (It is possible to begin with the mouth, which is the backup breathing system. However, chronic mouth breathing leads to, or is a sign of, illness.) They end in the microscopic dead-end sacs called alveoli, which are always open, though the diameters of the various sections can be changed by the sympathetic and parasympathetic nervous systems. The alveolar air pressure is therefore always close to atmospheric air pressure (about 100 kPa at sea level) at rest, with the pressure gradients that cause air to move in and out of the lungs during breathing rarely exceeding 2–3 kPa.
During exhalation, the diaphragm and intercostal muscles relax. This returns the chest and abdomen to a position determined by their anatomical elasticity. This is the "resting mid-position" of the thorax and abdomen (Fig. 7) when the lungs contain their functional residual capacity of air (the light blue area in the right hand illustration of Fig. 7), which in the adult human has a volume of about 2.5–3.0 liters (Fig. 3). Resting exhalation lasts about twice as long as inhalation because the diaphragm relaxes passively more gently than it contracts actively during inhalation.
The volume of air that moves in or out (at the nose or mouth) during a single breathing cycle is called the tidal volume. In a resting adult human, it is about 500 ml per breath. At the end of exhalation, the airways contain about 150 ml of alveolar air, which is the first air that is breathed back into the alveoli during inhalation. This volume of air that is breathed out of the alveoli and back in again is known as dead space ventilation, which has the consequence that of the 500 ml breathed into the alveoli with each breath only 350 ml (500 ml – 150 ml = 350 ml) is fresh, warm, moistened air. Since this 350 ml of fresh air is thoroughly mixed and diluted by the air that remains in the alveoli after a normal exhalation (i.e. the functional residual capacity of about 2.5–3.0 liters), it is clear that the composition of the alveolar air changes very little during the breathing cycle (see Fig. 9). The oxygen tension (or partial pressure) remains close to 13–14 kPa (about 100 mm Hg), and that of carbon dioxide very close to 5.3 kPa (or 40 mm Hg). This contrasts with the composition of the dry outside air at sea level, where the partial pressure of oxygen is 21 kPa (or 160 mm Hg) and that of carbon dioxide 0.04 kPa (or 0.3 mmHg).
During heavy breathing (hyperpnea), as, for instance, during exercise, inhalation is brought about by a more powerful and greater excursion of the contracting diaphragm than at rest (Fig. 8). In addition, the "accessory muscles of inhalation" exaggerate the actions of the intercostal muscles (Fig. 8). These accessory muscles of inhalation are muscles that extend from the cervical vertebrae and base of the skull to the upper ribs and sternum, sometimes through an intermediary attachment to the clavicles. When they contract, the rib cage's internal volume is increased to a far greater extent than can be achieved by contraction of the intercostal muscles alone. Seen from outside the body, the lifting of the clavicles during strenuous or labored inhalation is sometimes called clavicular breathing, seen especially during asthma attacks and in people with chronic obstructive pulmonary disease.
During heavy breathing, exhalation is caused by relaxation of all the muscles of inhalation. But now, the abdominal muscles, instead of remaining relaxed (as they do at rest), contract forcibly, pulling the lower edges of the rib cage downwards (front and sides) (Fig. 8). This not only drastically decreases the size of the rib cage, but also pushes the abdominal organs upwards against the diaphragm which consequently bulges deeply into the thorax (Fig. 8). The end-exhalatory lung volume is now well below the resting mid-position and contains far less air than the resting "functional residual capacity". However, in a normal mammal, the lungs cannot be emptied completely. In an adult human, there is always still at least 1 liter of residual air left in the lungs after maximum exhalation.
The automatic rhythmical breathing in and out, can be interrupted by coughing, sneezing (forms of very forceful exhalation), by the expression of a wide range of emotions (laughing, sighing, crying out in pain, exasperated intakes of breath) and by such voluntary acts as speech, singing, whistling and the playing of wind instruments. All of these actions rely on the muscles described above, and their effects on the movement of air in and out of the lungs.
Although not a form of breathing, the Valsalva maneuver involves the respiratory muscles. It is, in fact, a very forceful exhalatory effort against a tightly closed glottis, so that no air can escape from the lungs. Instead, abdominal contents are evacuated in the opposite direction, through orifices in the pelvic floor. The abdominal muscles contract very powerfully, causing the pressure inside the abdomen and thorax to rise to extremely high levels. The Valsalva maneuver can be carried out voluntarily but is more generally a reflex elicited when attempting to empty the abdomen during, for instance, difficult defecation, or during childbirth. Breathing ceases during this maneuver.
Gas exchange
The primary purpose of the respiratory system is the equalizing of the partial pressures of the respiratory gases in the alveolar air with those in the pulmonary capillary blood (Fig. 11). This process occurs by simple diffusion, across a very thin membrane (known as the blood–air barrier), which forms the walls of the pulmonary alveoli (Fig. 10). It consists of the alveolar epithelial cells, their basement membranes and the endothelial cells of the alveolar capillaries (Fig. 10). This blood gas barrier is extremely thin (in humans, on average, 2.2 μm thick). It is folded into about 300 million small air sacs called alveoli (each between 75 and 300 μm in diameter) branching off from the respiratory bronchioles in the lungs, thus providing an extremely large surface area (approximately 145 m2) for gas exchange to occur.
The air contained within the alveoli has a semi-permanent volume of about 2.5–3.0 liters which completely surrounds the alveolar capillary blood (Fig. 12). This ensures that equilibration of the partial pressures of the gases in the two compartments is very efficient and occurs very quickly. The blood leaving the alveolar capillaries, which is eventually distributed throughout the body, therefore has a partial pressure of oxygen of 13–14 kPa (100 mmHg), and a partial pressure of carbon dioxide of 5.3 kPa (40 mmHg) (i.e. the same oxygen and carbon dioxide gas tensions as in the alveoli). As mentioned in the section above, the corresponding partial pressures of oxygen and carbon dioxide in the ambient (dry) air at sea level are 21 kPa (160 mmHg) and 0.04 kPa (0.3 mmHg) respectively.
This marked difference between the composition of the alveolar air and that of the ambient air can be maintained because the functional residual capacity is contained in dead-end sacs connected to the outside air by fairly narrow and relatively long tubes (the airways: nose, pharynx, larynx, trachea, bronchi and their branches down to the bronchioles), through which the air has to be breathed both in and out (i.e. there is no unidirectional through-flow as there is in the bird lung). This typical mammalian anatomy combined with the fact that the lungs are not emptied and re-inflated with each breath (leaving a substantial volume of air, of about 2.5–3.0 liters, in the alveoli after exhalation), ensures that the composition of the alveolar air is only minimally disturbed when the 350 ml of fresh air is mixed into it with each inhalation. Thus the animal is provided with a very special "portable atmosphere", whose composition differs significantly from the present-day ambient air. It is this portable atmosphere (the functional residual capacity) to which the blood and therefore the body tissues are exposed – not to the outside air.
The resulting arterial partial pressures of oxygen and carbon dioxide are homeostatically controlled. A rise in the arterial partial pressure of CO2 and, to a lesser extent, a fall in the arterial partial pressure of O2, will reflexly cause deeper and faster breathing until the blood gas tensions in the lungs, and therefore the arterial blood, return to normal. The converse happens when the carbon dioxide tension falls, or, again to a lesser extent, the oxygen tension rises: the rate and depth of breathing are reduced until blood gas normality is restored.
Since the blood arriving in the alveolar capillaries has a partial pressure of O2 of, on average, 6 kPa (45 mmHg), while the pressure in the alveolar air is 13–14 kPa (100 mmHg), there will be a net diffusion of oxygen into the capillary blood, changing the composition of the 3 liters of alveolar air slightly. Similarly, since the blood arriving in the alveolar capillaries has a partial pressure of CO2 of also about 6 kPa (45 mmHg), whereas that of the alveolar air is 5.3 kPa (40 mmHg), there is a net movement of carbon dioxide out of the capillaries into the alveoli. The changes brought about by these net flows of individual gases into and out of the alveolar air necessitate the replacement of about 15% of the alveolar air with ambient air every 5 seconds or so. This is very tightly controlled by the monitoring of the arterial blood gases (which accurately reflect composition of the alveolar air) by the aortic and carotid bodies, as well as by the blood gas and pH sensor on the anterior surface of the medulla oblongata in the brain. There are also oxygen and carbon dioxide sensors in the lungs, but they primarily determine the diameters of the bronchioles and pulmonary capillaries, and are therefore responsible for directing the flow of air and blood to different parts of the lungs.
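The turnover figure can be reproduced with simple arithmetic. The sketch below uses the representative values quoted in this article (350 ml of fresh air per breath, a functional residual capacity of about 2.75 liters) together with an assumed mid-range rate of 14 breaths per minute, and lands close to the stated ~15% every 5 seconds:

```python
fresh_air_per_breath_ml = 350   # 500 ml tidal volume - 150 ml dead space
frc_ml = 2750                   # functional residual capacity (2.5-3.0 L)
breaths_per_min = 14            # assumed mid-range resting rate

fraction_replaced = fresh_air_per_breath_ml / frc_ml
seconds_per_breath = 60 / breaths_per_min
print(f"~{fraction_replaced:.0%} of the alveolar air is refreshed "
      f"every {seconds_per_breath:.1f} s")   # ~13% every 4.3 s
```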
It is only as a result of accurately maintaining the composition of the 3 liters of alveolar air that with each breath some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air. If more carbon dioxide than usual has been lost by a short period of hyperventilation, respiration will be slowed down or halted until the alveolar partial pressure of carbon dioxide has returned to 5.3 kPa (40 mmHg). It is therefore, strictly speaking, untrue that the primary function of the respiratory system is to rid the body of carbon dioxide "waste". The carbon dioxide that is breathed out with each breath could probably more correctly be seen as a byproduct of the body's extracellular fluid carbon dioxide and pH homeostats.
If these homeostats are compromised, then a respiratory acidosis, or a respiratory alkalosis will occur. In the long run these can be compensated by renal adjustments to the H+ and HCO3− concentrations in the plasma; but since this takes time, the hyperventilation syndrome can, for instance, occur when agitation or anxiety cause a person to breathe fast and deeply thus causing a distressing respiratory alkalosis through the blowing off of too much CO2 from the blood into the outside air.
Oxygen has a very low solubility in water, and is therefore carried in the blood loosely combined with hemoglobin. The oxygen is held on the hemoglobin by four ferrous iron-containing heme groups per hemoglobin molecule. When all the heme groups carry one O2 molecule each, the blood is said to be "saturated" with oxygen, and no further increase in the partial pressure of oxygen will meaningfully increase the oxygen concentration of the blood. Most of the carbon dioxide in the blood is carried as bicarbonate ions (HCO3−) in the plasma. However, the conversion of dissolved CO2 into HCO3− (through the addition of water) is too slow for the rate at which the blood circulates through the tissues on the one hand, and through the alveolar capillaries on the other. The reaction is therefore catalyzed by carbonic anhydrase, an enzyme inside the red blood cells. The reaction can go in both directions depending on the prevailing partial pressure of CO2. A small amount of carbon dioxide is carried on the protein portion of the hemoglobin molecules as carbamino groups. The total concentration of carbon dioxide (in the form of bicarbonate ions, dissolved CO2, and carbamino groups) in arterial blood (i.e. after it has equilibrated with the alveolar air) is about 26 mM (or 58 ml/100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml/100 ml blood).
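The mM and ml/100 ml figures are related through the molar volume of an ideal gas. A quick conversion sketch, assuming a round molar volume of 22.4 liters (22,400 ml) per mole:

```python
MOLAR_VOL_ML = 22_400  # ml of gas per mol at standard temperature and pressure

def mmol_per_l_to_ml_per_100ml(conc_mmol_per_l: float) -> float:
    # mmol/L -> mol/L -> ml of gas per liter of blood -> ml per 100 ml of blood
    return conc_mmol_per_l / 1000 * MOLAR_VOL_ML / 10

print(f"CO2: {mmol_per_l_to_ml_per_100ml(26):.0f} ml/100 ml")  # ~58
print(f"O2:  {mmol_per_l_to_ml_per_100ml(9):.0f} ml/100 ml")   # ~20
```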
Control of ventilation
Ventilation of the lungs in mammals occurs via the respiratory centers in the medulla oblongata and the pons of the brainstem. These areas form a series of neural pathways which receive information about the partial pressures of oxygen and carbon dioxide in the arterial blood. This information determines the average rate of ventilation of the alveoli of the lungs, to keep these pressures constant. The respiratory center does so via motor nerves which activate the diaphragm and other muscles of respiration.
The breathing rate increases when the partial pressure of carbon dioxide in the blood increases. This is detected by central blood gas chemoreceptors on the anterior surface of the medulla oblongata. The aortic and carotid bodies, are the peripheral blood gas chemoreceptors which are particularly sensitive to the arterial partial pressure of O2 though they also respond, but less strongly, to the partial pressure of CO2. At sea level, under normal circumstances, the breathing rate and depth, is determined primarily by the arterial partial pressure of carbon dioxide rather than by the arterial partial pressure of oxygen, which is allowed to vary within a fairly wide range before the respiratory centers in the medulla oblongata and pons respond to it to change the rate and depth of breathing.
Exercise increases the breathing rate due to the extra carbon dioxide produced by the enhanced metabolism of the exercising muscles. In addition, passive movements of the limbs also reflexively produce an increase in the breathing rate.
Information received from stretch receptors in the lungs limits tidal volume (the depth of inhalation and exhalation).
Responses to low atmospheric pressures
The alveoli are open (via the airways) to the atmosphere, with the result that alveolar air pressure is exactly the same as the ambient air pressure at sea level, at altitude, or in any artificial atmosphere (e.g. a diving chamber, or decompression chamber) in which the individual is breathing freely. With expansion of the lungs the alveolar air occupies a larger volume, and its pressure falls proportionally, causing air to flow in through the airways, until the pressure in the alveoli is again at the ambient air pressure. The reverse happens during exhalation. This process (of inhalation and exhalation) is exactly the same at sea level, as on top of Mt. Everest, or in a diving chamber or decompression chamber.
However, as one rises above sea level the density of the air decreases exponentially (see Fig. 14), halving approximately with every 5500 m rise in altitude. Since the composition of the atmospheric air is almost constant below 80 km, as a result of the continuous mixing effect of the weather, the concentration of oxygen in the air (mmols O2 per liter of ambient air) decreases at the same rate as the fall in air pressure with altitude. Therefore, in order to breathe in the same amount of oxygen per minute, the person has to inhale a proportionately greater volume of air per minute at altitude than at sea level. This is achieved by breathing deeper and faster (i.e. hyperpnea) than at sea level (see below).
There is, however, a complication that increases the volume of air that needs to be inhaled per minute (respiratory minute volume) to provide the same amount of oxygen to the lungs at altitude as at sea level. During inhalation, the air is warmed and saturated with water vapor during its passage through the nose passages and pharynx. Saturated water vapor pressure is dependent only on temperature. At a body core temperature of 37 °C it is 6.3 kPa (47.0 mmHg), irrespective of any other influences, including altitude. Thus at sea level, where the ambient atmospheric pressure is about 100 kPa, the moistened air that flows into the lungs from the trachea consists of water vapor (6.3 kPa), nitrogen (74.0 kPa), oxygen (19.7 kPa) and trace amounts of carbon dioxide and other gases (a total of 100 kPa). In dry air the partial pressure of O2 at sea level is 21.0 kPa (i.e. 21% of 100 kPa), compared to the 19.7 kPa of oxygen entering the alveolar air. (The tracheal partial pressure of oxygen is 21% of [100 kPa – 6.3 kPa] = 19.7 kPa). At the summit of Mt. Everest (at an altitude of 8,848 m or 29,029 ft), the total atmospheric pressure is 33.7 kPa, of which 7.1 kPa (or 21%) is oxygen. The air entering the lungs also has a total pressure of 33.7 kPa, of which 6.3 kPa is, unavoidably, water vapor (as it is at sea level). This reduces the partial pressure of oxygen entering the alveoli to 5.8 kPa (or 21% of [33.7 kPa – 6.3 kPa] = 5.8 kPa). The reduction in the partial pressure of oxygen in the inhaled air is therefore substantially greater than the reduction of the total atmospheric pressure at altitude would suggest (on Mt Everest: 5.8 kPa vs. 7.1 kPa).
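The tracheal (inspired) partial pressure of oxygen follows directly from the water-vapor correction described above. A minimal sketch, hard-coding the two worked examples from the text:

```python
WATER_VAPOR_KPA = 6.3  # saturated water vapor pressure at 37 deg C body temperature
O2_FRACTION = 0.21     # oxygen fraction of dry air (nearly constant below 80 km)

def inspired_po2(atmospheric_kpa: float) -> float:
    # Humidification dilutes inhaled air before it reaches the gas exchanger.
    return O2_FRACTION * (atmospheric_kpa - WATER_VAPOR_KPA)

print(f"sea level (100 kPa): {inspired_po2(100.0):.1f} kPa")  # 19.7
print(f"Everest (33.7 kPa):  {inspired_po2(33.7):.1f} kPa")   # 5.8
```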
A further minor complication exists at altitude. If the volume of the lungs were to be instantaneously doubled at the beginning of inhalation, the air pressure inside the lungs would be halved. This happens regardless of altitude. Thus, halving of the sea level air pressure (100 kPa) results in an intrapulmonary air pressure of 50 kPa. Doing the same at 5500 m, where the atmospheric pressure is only 50 kPa, the intrapulmonary air pressure falls to 25 kPa. Therefore, the same change in lung volume at sea level results in a 50 kPa difference in pressure between the ambient air and the intrapulmonary air, whereas it results in a difference of only 25 kPa at 5500 m. The driving pressure forcing air into the lungs during inhalation is therefore halved at this altitude. The rate of inflow of air into the lungs during inhalation at sea level is therefore twice that which occurs at 5500 m. However, in reality, inhalation and exhalation occur far more gently and less abruptly than in the example given. The differences between the atmospheric and intrapulmonary pressures, driving air in and out of the lungs during the breathing cycle, are in the region of only 2–3 kPa. A doubling or more of these small pressure differences could be achieved only by very major changes in the breathing effort at high altitudes.
All of the above influences of low atmospheric pressures on breathing are accommodated primarily by breathing deeper and faster (hyperpnea). The exact degree of hyperpnea is determined by the blood gas homeostat, which regulates the partial pressures of oxygen and carbon dioxide in the arterial blood. This homeostat prioritizes the regulation of the arterial partial pressure of carbon dioxide over that of oxygen at sea level. That is to say, at sea level the arterial partial pressure of CO2 is maintained at very close to 5.3 kPa (or 40 mmHg) under a wide range of circumstances, at the expense of the arterial partial pressure of O2, which is allowed to vary within a very wide range of values, before eliciting a corrective ventilatory response. However, when the atmospheric pressure (and therefore the partial pressure of O2 in the ambient air) falls to below 50–75% of its value at sea level, oxygen homeostasis is given priority over carbon dioxide homeostasis. This switch-over occurs at an elevation of about 2500 m (or about 8000 ft). If this switch occurs relatively abruptly, the hyperpnea at high altitude will cause a severe fall in the arterial partial pressure of carbon dioxide, with a consequent rise in the pH of the arterial plasma. This is one contributor to high altitude sickness. On the other hand, if the switch to oxygen homeostasis is incomplete, then hypoxia may complicate the clinical picture with potentially fatal results.
There are oxygen sensors in the smaller bronchi and bronchioles. In response to low partial pressures of oxygen in the inhaled air these sensors reflexively cause the pulmonary arterioles to constrict. (This is the exact opposite of the corresponding reflex in the tissues, where low arterial partial pressures of O2 cause arteriolar vasodilation.) At altitude this causes the pulmonary arterial pressure to rise resulting in a much more even distribution of blood flow to the lungs than occurs at sea level. At sea level, the pulmonary arterial pressure is very low, with the result that the tops of the lungs receive far less blood than the bases, which are relatively over-perfused with blood. It is only in the middle of the lungs that the blood and air flow to the alveoli are ideally matched. At altitude, this variation in the ventilation/perfusion ratio of alveoli from the tops of the lungs to the bottoms is eliminated, with all the alveoli perfused and ventilated in more or less the physiologically ideal manner. This is a further important contributor to the acclimatization to high altitudes and low oxygen pressures.
The kidneys measure the oxygen content (mmol O2/liter blood, rather than the partial pressure of O2) of the arterial blood. When the oxygen content of the blood is chronically low, as at high altitude, the oxygen-sensitive kidney cells secrete erythropoietin (EPO) into the blood. This hormone stimulates the red bone marrow to increase its rate of red cell production, which leads to an increase in the hematocrit of the blood, and a consequent increase in its oxygen carrying capacity (due to the now high hemoglobin content of the blood). In other words, at the same arterial partial pressure of O2, a person with a high hematocrit carries more oxygen per liter of blood than a person with a lower hematocrit does. High altitude dwellers therefore have higher hematocrits than sea-level residents.
Other functions of the lungs
Local defenses
Irritation of nerve endings within the nasal passages or airways can induce a cough reflex and sneezing. These responses cause air to be expelled forcefully from the trachea or nose, respectively. In this manner, irritants caught in the mucus which lines the respiratory tract are expelled or moved to the mouth where they can be swallowed. During coughing, contraction of the smooth muscle in the airway walls narrows the trachea by pulling the ends of the cartilage plates together and by pushing soft tissue into the lumen. This increases the expired airflow rate to dislodge and remove any irritant particle or mucus.
Respiratory epithelium can secrete a variety of molecules that aid in the defense of the lungs. These include secretory immunoglobulins (IgA), collectins, defensins and other peptides and proteases, reactive oxygen species, and reactive nitrogen species. These secretions can act directly as antimicrobials to help keep the airway free of infection. A variety of chemokines and cytokines are also secreted that recruit the traditional immune cells and others to the site of infections.
Surfactant immune function is primarily attributed to two proteins: SP-A and SP-D. These proteins can bind to sugars on the surface of pathogens and thereby opsonize them for uptake by phagocytes. Surfactant also regulates inflammatory responses and interacts with the adaptive immune response. Surfactant degradation or inactivation may contribute to enhanced susceptibility to lung inflammation and infection.
Most of the respiratory system is lined with mucous membranes that contain mucosa-associated lymphoid tissue, which produces white blood cells such as lymphocytes.
Prevention of alveolar collapse
The lungs make a surfactant, a surface-active lipoprotein complex (phospholipoprotein) formed by type II alveolar cells. It floats on the surface of the thin watery layer which lines the insides of the alveoli, reducing the water's surface tension.
The surface tension of a watery surface (the water-air interface) tends to make that surface shrink. When that surface is curved as it is in the alveoli of the lungs, the shrinkage of the surface decreases the diameter of the alveoli. The more acute the curvature of the water-air interface the greater the tendency for the alveolus to collapse. This has three effects. Firstly, the surface tension inside the alveoli resists expansion of the alveoli during inhalation (i.e. it makes the lung stiff, or non-compliant). Surfactant reduces the surface tension and therefore makes the lungs more compliant, or less stiff, than if it were not there. Secondly, the diameters of the alveoli increase and decrease during the breathing cycle. This means that the alveoli have a greater tendency to collapse (i.e. cause atelectasis) at the end of exhalation than at the end of inhalation. Since surfactant floats on the watery surface, its molecules are more tightly packed together when the alveoli shrink during exhalation. This causes them to have a greater surface tension-lowering effect when the alveoli are small than when they are large (as at the end of inhalation, when the surfactant molecules are more widely spaced). The tendency for the alveoli to collapse is therefore almost the same at the end of exhalation as at the end of inhalation. Thirdly, the surface tension of the curved watery layer lining the alveoli tends to draw water from the lung tissues into the alveoli. Surfactant reduces this danger to negligible levels, and keeps the alveoli dry.
Pre-term babies who are unable to manufacture surfactant have lungs that tend to collapse each time they breathe out. Unless treated, this condition, called respiratory distress syndrome, is fatal. Basic scientific experiments, carried out using cells from chicken lungs, support the potential for using steroids as a means of furthering the development of type II alveolar cells. In fact, once a premature birth is threatened, every effort is made to delay the birth, and a series of steroid injections is frequently administered to the mother during this delay in an effort to promote lung maturation.
Contributions to whole body functions
The lung vessels contain a fibrinolytic system that dissolves clots that may have arrived in the pulmonary circulation by embolism, often from the deep veins in the legs. They also release a variety of substances that enter the systemic arterial blood, and they remove other substances from the systemic venous blood that reach them via the pulmonary artery. Some prostaglandins are removed from the circulation, while others are synthesized in the lungs and released into the blood when lung tissue is stretched.
The lungs activate one hormone. The physiologically inactive decapeptide angiotensin I is converted to the aldosterone-releasing octapeptide, angiotensin II, in the pulmonary circulation. The reaction occurs in other tissues as well, but it is particularly prominent in the lungs. Angiotensin II also has a direct effect on arteriolar walls, causing arteriolar vasoconstriction, and consequently a rise in arterial blood pressure. Large amounts of the angiotensin-converting enzyme responsible for this activation are located on the surfaces of the endothelial cells of the alveolar capillaries. The converting enzyme also inactivates bradykinin. Circulation time through the alveolar capillaries is less than one second, yet 70% of the angiotensin I reaching the lungs is converted to angiotensin II in a single trip through the capillaries. Four other peptidases have been identified on the surface of the pulmonary endothelial cells.
Vocalization
The movement of gas through the larynx, pharynx and mouth allows humans to speak, or phonate. Vocalization, or singing, in birds occurs via the syrinx, an organ located at the base of the trachea. The vibration of air flowing across the larynx (vocal cords), in humans, and the syrinx, in birds, results in sound. Because of this, gas movement is vital for communication purposes.
Temperature control
Panting in dogs, cats, birds and some other animals provides a means of reducing body temperature, by evaporating saliva in the mouth (instead of evaporating sweat on the skin).
Clinical significance
Disorders of the respiratory system can be classified into several general groups:
Airway obstructive conditions (e.g., emphysema, bronchitis, asthma)
Pulmonary restrictive conditions (e.g., fibrosis, sarcoidosis, alveolar damage, pleural effusion)
Vascular diseases (e.g., pulmonary edema, pulmonary embolism, pulmonary hypertension)
Infectious, environmental and other "diseases" (e.g., pneumonia, tuberculosis, asbestosis, particulate pollutants)
Primary cancers (e.g. bronchial carcinoma, mesothelioma)
Secondary cancers (e.g. cancers that originated elsewhere in the body, but have seeded themselves in the lungs)
Insufficient surfactant (e.g. respiratory distress syndrome in pre-term babies) .
Disorders of the respiratory system are usually treated by a pulmonologist and respiratory therapist.
Where there is an inability to breathe or insufficiency in breathing, a medical ventilator may be used.
Exceptional mammals
Cetaceans
Horses
Horses are obligate nasal breathers, which means that they are different from many other mammals because they do not have the option of breathing through their mouths and must take in air through their noses. A flap of tissue called the soft palate blocks off the pharynx from the mouth (oral cavity) of the horse, except when swallowing. This helps to prevent the horse from inhaling food, but does not allow use of the mouth to breathe; even when in respiratory distress, a horse can only breathe through its nostrils.
Elephants
The elephant is the only mammal known to have no pleural space. Instead, the parietal and visceral pleura are both composed of dense connective tissue and joined to each other via loose connective tissue. This lack of a pleural space, along with an unusually thick diaphragm, are thought to be evolutionary adaptations allowing the elephant to remain underwater for long periods while breathing through its trunk which emerges as a snorkel.
In the elephant the lungs are attached to the diaphragm and breathing relies mainly on the diaphragm rather than the expansion of the ribcage.
Birds
The respiratory system of birds differs significantly from that found in mammals. Firstly, they have rigid lungs which do not expand and contract during the breathing cycle. Instead, an extensive system of air sacs (Fig. 15) distributed throughout their bodies acts as the bellows, drawing environmental air into the sacs and expelling the spent air after it has passed through the lungs (Fig. 18). Birds also do not have diaphragms or pleural cavities.
Bird lungs are smaller than those in mammals of comparable size, but the air sacs account for 15% of the total body volume, compared to the 7% devoted to the alveoli which act as the bellows in mammals.
Inhalation and exhalation are brought about by alternately increasing and decreasing the volume of the entire thoraco-abdominal cavity (or coelom) using both their abdominal and costal muscles. During inhalation the muscles attached to the vertebral ribs (Fig. 17) contract, angling them forwards and outwards. This pushes the sternal ribs, to which they are attached at almost right angles, downwards and forwards, taking the sternum (with its prominent keel) in the same direction (Fig. 17). This increases both the vertical and transverse diameters of the thoracic portion of the trunk. The forward and downward movement of, particularly, the posterior end of the sternum pulls the abdominal wall downwards, increasing the volume of that region of the trunk as well. The increase in volume of the entire trunk cavity reduces the air pressure in all the thoraco-abdominal air sacs, causing them to fill with air as described below.
During exhalation the external oblique muscle which is attached to the sternum and vertebral ribs anteriorly, and to the pelvis (pubis and ilium in Fig. 17) posteriorly (forming part of the abdominal wall) reverses the inhalatory movement, while compressing the abdominal contents, thus increasing the pressure in all the air sacs. Air is therefore expelled from the respiratory system in the act of exhalation.
During inhalation air enters the trachea via the nostrils and mouth, and continues to just beyond the syrinx at which point the trachea branches into two primary bronchi, going to the two lungs (Fig. 16). The primary bronchi enter the lungs to become the intrapulmonary bronchi, which give off a set of parallel branches called ventrobronchi and, a little further on, an equivalent set of dorsobronchi (Fig. 16). The ends of the intrapulmonary bronchi discharge air into the posterior air sacs at the caudal end of the bird. Each pair of dorso-ventrobronchi is connected by a large number of parallel microscopic air capillaries (or parabronchi) where gas exchange occurs (Fig. 16). As the bird inhales, tracheal air flows through the intrapulmonary bronchi into the posterior air sacs, as well as into the dorsobronchi, but not into the ventrobronchi (Fig. 18). This is due to the bronchial architecture which directs the inhaled air away from the openings of the ventrobronchi, into the continuation of the intrapulmonary bronchus towards the dorsobronchi and posterior air sacs. From the dorsobronchi the inhaled air flows through the parabronchi (and therefore the gas exchanger) to the ventrobronchi from where the air can only escape into the expanding anterior air sacs. So, during inhalation, both the posterior and anterior air sacs expand, the posterior air sacs filling with fresh inhaled air, while the anterior air sacs fill with "spent" (oxygen-poor) air that has just passed through the lungs.
During exhalation the pressure in the posterior air sacs (which were filled with fresh air during inhalation) increases due to the contraction of the oblique muscle described above. The aerodynamics of the interconnecting openings from the posterior air sacs to the dorsobronchi and intrapulmonary bronchi ensures that the air leaves these sacs in the direction of the lungs (via the dorsobronchi), rather than returning down the intrapulmonary bronchi (Fig. 18). From the dorsobronchi the fresh air from the posterior air sacs flows through the parabronchi (in the same direction as occurred during inhalation) into ventrobronchi. The air passages connecting the ventrobronchi and anterior air sacs to the intrapulmonary bronchi direct the "spent", oxygen poor air from these two organs to the trachea from where it escapes to the exterior. Oxygenated air therefore flows constantly (during the entire breathing cycle) in a single direction through the parabronchi.
The blood flow through the bird lung is at right angles to the flow of air through the parabronchi, forming a cross-current flow exchange system (Fig. 19). The partial pressure of oxygen in the parabronchi declines along their lengths as O2 diffuses into the blood. The blood capillaries leaving the exchanger near the entrance of airflow take up more O2 than do the capillaries leaving near the exit end of the parabronchi. When the contents of all capillaries mix, the final partial pressure of oxygen of the mixed pulmonary venous blood is higher than that of the exhaled air, but is nevertheless less than half that of the inhaled air, thus achieving roughly the same systemic arterial blood partial pressure of oxygen as mammals do with their bellows-type lungs.
The trachea is an area of dead space: the oxygen-poor air it contains at the end of exhalation is the first air to re-enter the posterior air sacs and lungs. The dead space volume in a bird is, on average, 4.5 times greater than in mammals of the same size. Birds with long necks inevitably have long tracheae, and must therefore take deeper breaths than mammals do to make allowances for their greater dead space volumes. In some birds (e.g. the whooper swan, Cygnus cygnus, the white spoonbill, Platalea leucorodia, the whooping crane, Grus americana, and the helmeted curassow, Pauxi pauxi) the trachea, which in some cranes can be 1.5 m long, is coiled back and forth within the body, drastically increasing the dead space ventilation. The purpose of this extraordinary feature is unknown.
Reptiles
The anatomical structure of the lungs is less complex in reptiles than in mammals, with reptiles lacking the very extensive airway tree structure found in mammalian lungs. Gas exchange in reptiles still occurs in alveoli, however. Reptiles do not possess a diaphragm. Thus, breathing occurs via a change in the volume of the body cavity which is controlled by contraction of intercostal muscles in all reptiles except turtles. In turtles, contraction of specific pairs of flank muscles governs inhalation and exhalation.
Amphibians
Both the lungs and the skin serve as respiratory organs in amphibians. The ventilation of the lungs in amphibians relies on positive pressure ventilation. Muscles lower the floor of the oral cavity, enlarging it and drawing in air through the nostrils into the oral cavity. With the nostrils and mouth closed, the floor of the oral cavity is then pushed up, which forces air down the trachea into the lungs. The skin of these animals is highly vascularized and moist, with moisture maintained via secretion of mucus from specialised cells, and is involved in cutaneous respiration. While the lungs are the primary organs for gas exchange between the blood and the environmental air (when out of the water), the skin's unique properties aid rapid gas exchange when amphibians are submerged in oxygen-rich water.
Some amphibians have gills in the early stages of their development (e.g. tadpoles of frogs), while others retain them into adulthood (e.g. some salamanders).
Fish
Oxygen is poorly soluble in water. Fully aerated fresh water therefore contains only 8–10 ml O2/liter compared to the O2 concentration of 210 ml/liter in the air at sea level. Furthermore, the coefficient of diffusion (i.e. the rate at which a substance diffuses from a region of high concentration to one of low concentration, under standard conditions) of the respiratory gases is typically 10,000 times faster in air than in water. Thus oxygen, for instance, has a diffusion coefficient of 17.6 mm2/s in air, but only 0.0021 mm2/s in water. The corresponding values for carbon dioxide are 16 mm2/s in air and 0.0016 mm2/s in water. This means that when oxygen is taken up from the water in contact with a gas exchanger, it is replaced considerably more slowly by the oxygen from the oxygen-rich regions small distances away from the exchanger than would have occurred in air. Fish have developed gills to deal with these problems. Gills are specialized organs containing filaments, which further divide into lamellae. The lamellae contain a dense thin-walled capillary network that exposes a large gas exchange surface area to the very large volumes of water passing over them.
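The stated diffusion coefficients make the air–water contrast easy to quantify; dividing them reproduces the roughly 10,000-fold figure:

```python
# Diffusion coefficients quoted in the text, in mm^2/s.
d_o2_air, d_o2_water = 17.6, 0.0021
d_co2_air, d_co2_water = 16.0, 0.0016

print(f"O2 diffuses  ~{d_o2_air / d_o2_water:,.0f}x faster in air")   # ~8,381x
print(f"CO2 diffuses ~{d_co2_air / d_co2_water:,.0f}x faster in air") # ~10,000x
```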
Gills use a countercurrent exchange system that increases the efficiency of oxygen-uptake from the water. Fresh oxygenated water taken in through the mouth is uninterruptedly "pumped" through the gills in one direction, while the blood in the lamellae flows in the opposite direction, creating the countercurrent blood and water flow (Fig. 22), on which the fish's survival depends.
Water is drawn in through the mouth by closing the operculum (gill cover), and enlarging the mouth cavity (Fig. 23). Simultaneously the gill chambers enlarge, producing a lower pressure there than in the mouth causing water to flow over the gills. The mouth cavity then contracts, inducing the closure of the passive oral valves, thereby preventing the back-flow of water from the mouth (Fig. 23). The water in the mouth is, instead, forced over the gills, while the gill chambers contract emptying the water they contain through the opercular openings (Fig. 23). Back-flow into the gill chamber during the inhalatory phase is prevented by a membrane along the ventroposterior border of the operculum (diagram on the left in Fig. 23). Thus the mouth cavity and gill chambers act alternately as suction pump and pressure pump to maintain a steady flow of water over the gills in one direction. Since the blood in the lamellar capillaries flows in the opposite direction to that of the water, the consequent countercurrent flow of blood and water maintains steep concentration gradients for oxygen and carbon dioxide along the entire length of each capillary (lower diagram in Fig. 22). Oxygen is, therefore, able to continually diffuse down its gradient into the blood, and the carbon dioxide down its gradient into the water. Although countercurrent exchange systems theoretically allow an almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood.
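The advantage of countercurrent over co-current flow can be illustrated with a deliberately simplified toy model (not a physiological simulation: the segment count, exchange constant and step count are arbitrary choices). Two streams exchange gas locally and are advected in either the same or opposite directions; the countercurrent arrangement lets the exiting blood approach the oxygen level of the incoming water, while the co-current one cannot exceed the 50% equilibration point:

```python
import numpy as np

def exit_o2(countercurrent: bool, n_seg: int = 50, k: float = 0.2,
            steps: int = 5000) -> float:
    """Toy 1-D exchanger: water enters fully oxygenated (1.0), blood
    deoxygenated (0.0). Each step transfers gas locally down the
    gradient, then both streams advect one segment. Returns the O2
    level of the blood leaving the exchanger at steady state."""
    water = np.zeros(n_seg)
    blood = np.zeros(n_seg)
    for _ in range(steps):
        transfer = k * (water - blood)    # local diffusion down the gradient
        water -= transfer
        blood += transfer
        water = np.concatenate(([1.0], water[:-1]))    # water: left -> right
        if countercurrent:
            blood = np.concatenate((blood[1:], [0.0])) # blood: right -> left
        else:
            blood = np.concatenate(([0.0], blood[:-1]))  # same direction
    return blood[0] if countercurrent else blood[-1]

print(f"co-current blood O2:     {exit_o2(False):.2f}")  # ~0.50 at best
print(f"countercurrent blood O2: {exit_o2(True):.2f}")   # well above 0.5
```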
In certain active pelagic sharks, water passes through the mouth and over the gills while they are moving, in a process known as "ram ventilation". While at rest, most sharks pump water over their gills, as most bony fish do, to ensure that oxygenated water continues to flow over their gills. But a small number of species have lost the ability to pump water through their gills and must swim without rest. These species are obligate ram ventilators and would presumably asphyxiate if unable to move. Obligate ram ventilation is also true of some pelagic bony fish species.
There are a few fish that can obtain oxygen for brief periods of time from air swallowed from above the surface of the water. Thus lungfish possess one or two lungs, and the labyrinth fish have developed a special "labyrinth organ", which characterizes this suborder of fish. The labyrinth organ is a much-folded suprabranchial accessory breathing organ. It is formed by a vascularized expansion of the epibranchial bone of the first gill arch, and is used for respiration in air. This organ allows labyrinth fish to take in oxygen directly from the air, instead of taking it from the water in which they reside through the use of gills. The labyrinth organ helps the oxygen in the inhaled air to be absorbed into the bloodstream. As a result, labyrinth fish can survive for a short period of time out of water, as they can inhale the air around them, provided they stay moist. Labyrinth fish are not born with functional labyrinth organs. The development of the organ is gradual and most juvenile labyrinth fish breathe entirely with their gills and develop the labyrinth organs when they grow older.
Invertebrates
Arthropods
Some species of crab use a respiratory organ called a branchiostegal lung. Its gill-like structure increases the surface area for gas exchange which is more suited to taking oxygen from the air than from water. Some of the smallest spiders and mites can breathe simply by exchanging gas through the surface of the body. Larger spiders, scorpions and other arthropods use a primitive book lung.
Insects
Most insects breathe passively through their spiracles (special openings in the exoskeleton), and the air reaches every part of the body by means of a series of smaller and smaller tubes called 'tracheae' when their diameters are relatively large, and 'tracheoles' when their diameters are very small. The tracheoles make contact with individual cells throughout the body. They are partially filled with fluid, which can be withdrawn from the individual tracheoles when the tissues, such as muscles, are active and have a high demand for oxygen, bringing the air closer to the active cells. This is probably brought about by the buildup of lactic acid in the active muscles causing an osmotic gradient, moving the water out of the tracheoles and into the active cells. Diffusion of gases is effective over small distances but not over larger ones; this is one of the reasons insects are all relatively small. Insects which do not have spiracles and tracheae, such as some Collembola, breathe directly through their skins, also by diffusion of gases.
The number of spiracles an insect has varies between species; however, they always come in pairs, one on each side of the body, and usually one pair per segment. Some of the Diplura have eleven, with four pairs on the thorax, but in most of the ancient forms of insects, such as dragonflies and grasshoppers, there are two thoracic and eight abdominal spiracles. However, in most of the remaining insects, there are fewer. It is at the level of the tracheoles that oxygen is delivered to the cells for respiration.
Insects were once believed to exchange gases with the environment continuously by the simple diffusion of gases into the tracheal system. More recently, however, large variation in insect ventilatory patterns has been documented and insect respiration appears to be highly variable. Some small insects do not demonstrate continuous respiratory movements and may lack muscular control of the spiracles. Others, however, utilize muscular contraction of the abdomen along with coordinated spiracle contraction and relaxation to generate cyclical gas exchange patterns and to reduce water loss into the atmosphere. The most extreme form of these patterns is termed discontinuous gas exchange cycles.
Molluscs
Molluscs generally possess gills that allow gas exchange between the aqueous environment and their circulatory systems. These animals also possess a heart that pumps blood containing hemocyanin as its oxygen-capturing molecule. Hence, this respiratory system is similar to that of vertebrate fish. The respiratory system of gastropods can include either gills or a lung.
Plants
Plants use carbon dioxide gas in the process of photosynthesis, and exhale oxygen gas as waste. The chemical equation of photosynthesis is 6 CO2 (carbon dioxide) and 6 H2O (water), which in the presence of sunlight makes C6H12O6 (glucose) and 6 O2 (oxygen). Photosynthesis uses electrons on the carbon atoms as the repository for the energy obtained from sunlight. Respiration is the opposite of photosynthesis. It reclaims the energy to power chemical reactions in cells. In so doing the carbon atoms and their electrons are combined with oxygen forming CO2 which is easily removed from both the cells and the organism. Plants use both processes, photosynthesis to capture the energy and oxidative metabolism to use it.
Plant respiration is limited by the process of diffusion. Plants take in carbon dioxide through holes, known as stomata, that can open and close on the undersides of their leaves and sometimes other parts of their anatomy. Most plants require some oxygen for catabolic processes (break-down reactions that release energy). But the quantity of O2 used per hour is small, as they are not involved in activities that require high rates of aerobic metabolism. Their requirement for air, however, is very high, as they need CO2 for photosynthesis, which constitutes only 0.04% of the environmental air. Thus, to make 1 g of glucose requires the removal of all the CO2 from at least 1,870 liters of air at sea level. But inefficiencies in the photosynthetic process cause considerably greater volumes of air to be used.
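The air-volume figure can be checked from first principles. The sketch below assumes a molar volume of 22.4 L/mol and a CO2 fraction of 0.04% by volume; both are round-number approximations.

```python
M_GLUCOSE = 180.16     # g/mol for C6H12O6
MOLAR_VOLUME = 22.4    # liters of ideal gas per mole (standard conditions)
CO2_FRACTION = 0.0004  # CO2 is ~0.04% of air by volume

mol_glucose = 1.0 / M_GLUCOSE           # moles of glucose in 1 g
mol_co2 = 6 * mol_glucose               # photosynthesis fixes 6 CO2 per glucose
liters_co2 = mol_co2 * MOLAR_VOLUME     # volume of pure CO2 required
liters_air = liters_co2 / CO2_FRACTION  # air volume containing that much CO2
print(f"{liters_air:,.0f} liters of air")  # 1,865 -- roughly the 1,870 L above
```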
See also
Pulmonary function testing (PFT)
References
External links
A high school level description of the respiratory system
Introduction to Respiratory System
Science aid: Respiratory System A simple guide for high school students
The Respiratory System University level (Microsoft Word document)
Lectures in respiratory physiology by noted respiratory physiologist John B. West (also at YouTube)
Articles containing video clips | Respiratory system | [
"Biology"
] | 12,343 | [
"Organ systems",
"Respiratory system"
] |
66,819 | https://en.wikipedia.org/wiki/Root%20mean%20square | In mathematics, the root mean square (abbrev. RMS, or rms) of a set of numbers is the square root of the set's mean square.
Given a set $x$, its RMS is denoted as either $x_\mathrm{RMS}$ or $\mathrm{RMS}_x$. The RMS is also known as the quadratic mean (denoted $M_2$), a special case of the generalized mean. The RMS of a continuous function $f(t)$ is denoted $f_\mathrm{RMS}$ and can be defined in terms of an integral of the square of the function.
In estimation theory, the root-mean-square deviation of an estimator measures how far the estimator strays from the data.
Definition
The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values, or the square of the function that defines the continuous waveform.
In the case of a set of n values $\{x_1, x_2, \dots, x_n\}$, the RMS is
$x_\mathrm{RMS} = \sqrt{\frac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}.$
The corresponding formula for a continuous function (or waveform) f(t) defined over the interval $T_1 \le t \le T_2$ is
$f_\mathrm{RMS} = \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} [f(t)]^2 \, dt},$
and the RMS for a function over all time is
$f_\mathrm{RMS} = \lim_{T \to \infty} \sqrt{\frac{1}{2T} \int_{-T}^{T} [f(t)]^2 \, dt}.$
The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a sample consisting of equally spaced observations. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright.
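The sampled-approximation claim is easy to check numerically. The following sketch (using NumPy; the amplitude and sample count are arbitrary choices) compares the RMS of an evenly sampled sine wave with the analytic value, peak amplitude divided by √2, derived later in this article:

```python
import numpy as np

amplitude = 2.0
t = np.linspace(0, 1, 10_000, endpoint=False)  # one full period, evenly spaced
x = amplitude * np.sin(2 * np.pi * t)

rms_sampled = np.sqrt(np.mean(x**2))
print(rms_sampled)               # ~1.4142
print(amplitude / np.sqrt(2))    # exact value for a sine: peak / sqrt(2)
```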
In the case of the RMS statistic of a random process, the expected value is used instead of the mean.
In common waveforms
If the waveform is a pure sine wave, the relationships between amplitudes (peak-to-peak, peak) and RMS are fixed and known, as they are for any continuous periodic wave. However, this is not true for an arbitrary waveform, which may not be periodic or continuous. For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is:
Peak-to-peak $= 2\sqrt{2} \times \mathrm{RMS} \approx 2.8 \times \mathrm{RMS}.$
For other waveforms, the relationships are not the same as they are for sine waves. For example, for either a triangular or sawtooth wave:
Peak-to-peak $= 2\sqrt{3} \times \mathrm{RMS} \approx 3.5 \times \mathrm{RMS}.$
In waveform combinations
Waveforms made by summing known simple waveforms have an RMS value that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself).
Alternatively, for waveforms that are perfectly positively correlated, or "in phase" with each other, their RMS values sum directly.
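Both rules are easy to verify numerically. In the sketch below the amplitudes and frequencies are arbitrary; integer-frequency sines over a one-second window guarantee orthogonality:

```python
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)
rms = lambda x: np.sqrt(np.mean(x**2))

# Orthogonal components: a DC offset plus sines at different frequencies.
dc = np.full_like(t, 0.5)             # RMS = 0.5
s1 = 1.0 * np.sin(2 * np.pi * 3 * t)  # RMS = 1/sqrt(2)
s2 = 2.0 * np.sin(2 * np.pi * 7 * t)  # RMS = 2/sqrt(2)
print(rms(dc + s1 + s2))                               # ~1.658
print(np.sqrt(rms(dc)**2 + rms(s1)**2 + rms(s2)**2))   # ~1.658 (root-sum-square)

# Perfectly correlated ("in phase") components: RMS values sum directly.
print(rms(s1 + 2 * s1))         # ~2.121
print(rms(s1) + rms(2 * s1))    # ~2.121
```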
Uses
In electrical engineering
Current
The RMS of an alternating electric current equals the value of constant direct current that would dissipate the same power in a resistive load.
Voltage
A special case of RMS of waveform combinations is
$\mathrm{RMS}_\mathrm{Total} = \sqrt{\mathrm{RMS}_\mathrm{DC}^2 + \mathrm{RMS}_\mathrm{AC}^2},$
where $\mathrm{RMS}_\mathrm{DC}$ refers to the direct current (or average) component of the signal, and $\mathrm{RMS}_\mathrm{AC}$ is the RMS of the alternating current component of the signal.
Average electrical power
Electrical engineers often need to know the power, P, dissipated by an electrical resistance, R. It is easy to do the calculation when there is a constant current, I, through the resistance. For a load of R ohms, power is given by:
$P = I^2 R.$
However, if the current is a time-varying function, I(t), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) is varying over time. If the function is periodic (such as household AC power), it is still meaningful to discuss the average power dissipated over time, which is calculated by taking the average power dissipation:
$P_\mathrm{Avg} = \left(I(t)^2 R\right)_\mathrm{Avg} = I_\mathrm{RMS}^2\, R.$
So, the RMS value, IRMS, of the function I(t) is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current I(t).
Average power can also be found using the same method in the case of a time-varying voltage, V(t), with RMS value VRMS:
$P_\mathrm{Avg} = \frac{V_\mathrm{RMS}^2}{R}.$
This equation can be used for any periodic waveform, such as a sinusoidal or sawtooth waveform, allowing us to calculate the mean power delivered into a specified load.
By taking the square root of both these equations and multiplying them together, the power is found to be:
$P_\mathrm{Avg} = V_\mathrm{RMS} \times I_\mathrm{RMS}.$
Both derivations depend on voltage and current being proportional (that is, the load, R, is purely resistive). Reactive loads (that is, loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power.
In the common case of alternating current when I(t) is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous case equation above. If Ip is defined to be the peak current, then:
$I_\mathrm{RMS} = \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} \left[I_\mathrm{p} \sin(\omega t)\right]^2 dt},$
where t is time and ω is the angular frequency (ω = 2π/T, where T is the period of the wave).
Since Ip is a positive constant that is squared within the integral, it can be factored out:
$I_\mathrm{RMS} = I_\mathrm{p} \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} \sin^2(\omega t)\, dt}.$
Using the trigonometric identity $\sin^2(\omega t) = \tfrac{1}{2}\left(1 - \cos(2\omega t)\right)$ to eliminate the squared trig function:
$I_\mathrm{RMS} = I_\mathrm{p} \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} \frac{1 - \cos(2\omega t)}{2}\, dt},$
but since the interval is a whole number of complete cycles (per definition of RMS), the sine terms will cancel out, leaving:
$I_\mathrm{RMS} = I_\mathrm{p} \sqrt{\frac{1}{T_2 - T_1} \cdot \frac{T_2 - T_1}{2}} = \frac{I_\mathrm{p}}{\sqrt{2}}.$
A similar analysis leads to the analogous equation for sinusoidal voltage:
$V_\mathrm{RMS} = \frac{V_\mathrm{p}}{\sqrt{2}},$
where IP represents the peak current and VP represents the peak voltage.
Because of their usefulness in carrying out power calculations, listed voltages for power outlets (for example, 120V in the US, or 230V in Europe) are almost always quoted in RMS values, and not peak values. Peak values can be calculated from RMS values from the above formula, which implies $V_\mathrm{p} = V_\mathrm{RMS} \times \sqrt{2}$, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being double this, is about 340 volts. A similar calculation indicates that the peak mains voltage in Europe is about 325 volts, and the peak-to-peak mains voltage, about 650 volts.
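A quick sketch of the conversion for the two mains voltages quoted above:

```python
import math

for v_rms in (120, 230):   # US and European mains voltages (RMS volts)
    v_peak = v_rms * math.sqrt(2)
    print(f"{v_rms} V RMS -> peak {v_peak:.0f} V, "
          f"peak-to-peak {2 * v_peak:.0f} V")
# 120 V RMS -> peak 170 V, peak-to-peak 339 V
# 230 V RMS -> peak 325 V, peak-to-peak 651 V
```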
RMS quantities such as electric current are usually calculated over one cycle. However, for some purposes the RMS current over a longer period is required when calculating transmission power losses. The same principle applies, and (for example) a current of 10 amps used for 12 hours each 24-hour day represents an average current of 5 amps, but an RMS current of 7.07 amps, in the long term.
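The day-long example can be reproduced directly; the sketch also shows why the RMS rather than the average current predicts resistive transmission loss (the 0.5 Ω line resistance is a made-up figure):

```python
import numpy as np

# 24-hour current profile: 10 A for 12 hours, 0 A for the other 12.
hours = np.arange(24)
current = np.where(hours < 12, 10.0, 0.0)

average = current.mean()             # 5.0 A
rms = np.sqrt(np.mean(current**2))   # ~7.07 A
print(average, rms)

# Resistive loss goes as I^2 * R, so the RMS current, not the average,
# predicts the energy dissipated in the line over the whole day.
R = 0.5                                  # ohms, hypothetical line resistance
loss_true = np.sum(current**2 * R)       # watt-hours over the 24 hours
loss_from_rms = rms**2 * R * 24
print(loss_true, loss_from_rms)          # equal: 600.0 each
```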
The term RMS power is sometimes erroneously used (e.g., in the audio industry) as a synonym for mean power or average power (it is proportional to the square of the RMS voltage or RMS current in a resistive load). For a discussion of audio power measurements and their shortcomings, see Audio power.
Speed
In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared speed. The RMS speed of an ideal gas is calculated using the following equation:

$v_\mathrm{RMS} = \sqrt{\frac{3RT}{M}},$

where R represents the gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. In physics, speed is defined as the scalar magnitude of velocity. For a stationary gas, the average speed of its molecules can be on the order of thousands of km/h, even though the average velocity of its molecules is zero.
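A minimal sketch of the formula, using nitrogen at room temperature as an assumed example case:

```python
import math

# RMS speed of an ideal gas, v_rms = sqrt(3RT/M).
R = 8.314    # gas constant, J/(mol*K)
T = 298.0    # temperature, K (room temperature, illustrative)
M = 0.028    # molar mass of N2, kg/mol

v_rms = math.sqrt(3 * R * T / M)
print(f"{v_rms:.0f} m/s")   # ~515 m/s, i.e. roughly 1850 km/h
```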
Error
When two data sets — one set from theoretical prediction and the other from actual measurement of some physical variable, for instance — are compared, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the absolute values of the pairwise differences could be a useful measure of the variability of the differences. However, the RMS of the differences is usually the preferred measure, probably due to mathematical convention and compatibility with other formulae.
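A minimal sketch comparing the two measures on made-up data; the arrays are illustrative, not from any referenced dataset:

```python
import numpy as np

# RMS of pairwise differences (often called RMSE) between a theoretical
# prediction and a measurement, compared with the mean absolute difference.
predicted = np.array([2.0, 3.1, 4.2, 5.0])
measured = np.array([1.8, 3.3, 4.0, 5.4])

rms_error = np.sqrt(np.mean((predicted - measured) ** 2))   # ~0.265
mean_abs = np.mean(np.abs(predicted - measured))            # 0.25
print(rms_error, mean_abs)
```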
In frequency domain
The RMS can be computed in the frequency domain, using Parseval's theorem. For a sampled signal $x[n] = x(nT)$, where $T$ is the sampling period,

$\sum_{n=1}^{N} x^2[n] = \frac{1}{N} \sum_{m=1}^{N} \left| X[m] \right|^2,$

where $X[m] = \mathrm{DFT}\{x[n]\}$ and N is the sample size, that is, the number of observations in the sample and DFT coefficients.

In this case, the RMS computed in the time domain is the same as in the frequency domain:

$\mathrm{RMS}\{x[n]\} = \sqrt{\frac{1}{N}\sum_{n} x^2[n]} = \sqrt{\frac{1}{N^2}\sum_{m} \left| X[m] \right|^2} = \frac{1}{N}\sqrt{\sum_{m} \left| X[m] \right|^2}.$
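A short numerical check of this identity, assuming NumPy's unnormalized FFT convention:

```python
import numpy as np

# Parseval check: the RMS from the DFT coefficients matches the RMS
# computed directly from the samples.
x = np.random.default_rng(0).normal(size=1024)   # an arbitrary sampled signal
X = np.fft.fft(x)

rms_time = np.sqrt(np.mean(x**2))
rms_freq = np.sqrt(np.sum(np.abs(X)**2)) / len(x)
print(rms_time, rms_freq)                        # equal up to rounding error
```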
Relationship to other statistics
The standard deviation $\sigma_x$ of a population or a waveform $x$ is the RMS deviation of $x$ from its arithmetic mean $\bar{x}$. They are related to the RMS value of $x$ by

$\mathrm{RMS}(x)^2 = \bar{x}^2 + \sigma_x^2.$

From this it is clear that the RMS value is always greater than or equal to the absolute value of the mean, in that the RMS includes the squared deviation (error) as well.
Physical scientists often use the term root mean square as a synonym for standard deviation when it can be assumed the input signal has zero mean, that is, referring to the square root of the mean squared deviation of a signal from a given baseline or fit. This is useful for electrical engineers in calculating the "AC only" RMS of a signal. Standard deviation being the RMS of a signal's variation about the mean, rather than about 0, the DC component is removed (that is, RMS(signal) = stdev(signal) if the mean signal is 0).
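A one-line check of this relationship on a small sample; the data are arbitrary, and the population standard deviation (ddof = 0) is assumed:

```python
import numpy as np

# Verify RMS(x)^2 = mean(x)^2 + stdev(x)^2.
x = np.array([1.0, 2.0, 4.0, 7.0])

rms = np.sqrt(np.mean(x**2))
print(rms**2, np.mean(x)**2 + np.std(x)**2)   # both 17.5
```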
See also
Average rectified value (ARV)
Central moment
Geometric mean
Glossary of mathematical symbols
L2 norm
Least squares
Mean squared displacement
Pythagorean addition
True RMS converter
Notes
References
External links
A case for why RMS is a misnomer when applied to audio power
A Java applet on learning RMS
Means
Statistical deviation and dispersion | Root mean square | [
"Physics",
"Mathematics"
] | 1,971 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
67,042 | https://en.wikipedia.org/wiki/Weatherization | Weatherization (American English) or weatherproofing (British English) is the practice of protecting a building and its interior from the elements, particularly from sunlight, precipitation, and wind, and of modifying a building to reduce energy consumption and optimize energy efficiency.
Weatherization is distinct from building insulation, although building insulation requires weatherization for proper functioning. Many types of insulation can be thought of as weatherization, because they block drafts or protect from cold winds. Whereas insulation primarily reduces conductive heat flow, weatherization primarily reduces convective heat flow.
In the United States, buildings use one third of all energy consumed and two thirds of all electricity. Due to the high energy usage, they are a major source of the pollution that causes urban air quality problems and pollutants that contribute to climate change. Building energy usage accounts for 49 percent of sulfur dioxide emissions, 25 percent of nitrous oxide emissions, and 10 percent of particulate emissions.
Procedures
Typical weatherization procedures include:
Sealing bypasses (cracks, gaps, holes), especially around doors, windows, pipes and wiring that penetrate the ceiling and floor, and other areas with high potential for heat loss, using caulk, foam sealant, weather-stripping, window film, door sweeps, electrical receptacle gaskets, and so on to reduce infiltration.
Sealing recessed lighting fixtures ('can lights' or 'high-hats'), which leak large amounts of air into unconditioned attic space.
Sealing air ducts, which can account for 20% of heat loss, using fiber-reinforced mastic (not duct tape, which despite its name is not suitable for this purpose).
Installing/replacing dampers in exhaust ducts, to prevent outside air from entering the house when the exhaust fan or clothes dryer is not in use.
Protecting pipes from corrosion and freezing.
Installing footing drains, foundation waterproofing membranes, interior perimeter drains, sump pump, gutters, downspout extensions, downward-sloping grading, French drains, swales, and other techniques to protect a building from both surface water and ground water.
Providing proper ventilation to unconditioned spaces to protect a building from the effects of condensation. See Ventilation issues in houses
Installing roofing, building wrap, siding, flashing, skylights or solar tubes and making sure they are in good condition on an existing building.
Installing insulation in walls, floors, and ceilings, around ducts and pipes, around water heaters, and near the foundation and sill.
Installing storm doors and storm windows.
Replacing old drafty doors with tightly sealing, foam-core doors.
Retrofitting older windows with a stop or parting bead across the sill where it meets the sash.
Replacing older windows with low-energy, double-glazed windows.
The phrase "whole-house weatherization" extends the traditional definition of weatherization to include installation of modern, energy-saving heating and cooling equipment, or repair of old, inefficient equipment (furnaces, boilers, water heaters, programmable thermostats, air conditioners, and so on). The "Whole-House" approach also looks at how the house performs as a system.
Air quality
Weatherization generally does not cause indoor air quality problems by adding new pollutants to the air. (There are a few exceptions, such as caulking, that can sometimes emit pollutants.) However, measures such as installing storm windows, weather stripping, caulking, and blown-in wall insulation can reduce the amount of outdoor air infiltrating into a home. Consequently, after weatherization, concentrations of indoor air pollutants from sources inside the home can increase.
Weatherization may have a negative impact on indoor air quality if done improperly, exacerbating respiratory conditions especially among occupants with pre-existing respiratory illnesses. This may occur because of a drastic decrease in the air exchange rate in the home, introduction of new chemicals, and poor management of indoor moisture due to poorly performed weatherization work. Low air exchange rates may lead to higher concentrations of pollutants in the air when ventilation is not sufficiently addressed during weatherization work. However, the situation may be different in the case of a house situated in an area with high outdoor air pollution levels, such as in close proximity (<200 m) to a busy major road. In such a scenario, a more airtight building envelope can actually offer protection against infiltration of outdoor air pollution. The same is true for the protection offered by tighter building envelopes during wildfire events that cause elevated levels of outdoor air pollution.
US Weatherization Assistance Program
Weatherization is a set of measures and practices aimed at improving the energy efficiency of a building or home, primarily to reduce energy consumption and lower utility bills. The main goal of weatherization is to make a structure more comfortable and cost-effective to live in, especially during extreme weather conditions. It involves making various improvements to a building's insulation, air sealing, and overall energy systems.
The American Council for an Energy-Efficient Economy estimates that over 7 million homes have been weatherized, giving yearly savings of 2.6 TWh of electricity, of fossil gas and of reduced carbon dioxide emissions. The US Department of Energy estimates weatherization returns $2.69 for each dollar spent on the program, realized in energy and non-energy benefits. Families whose homes are weatherized are expected to save $358 on their first year's utility bills.
Low Income Home Energy Assistance Programs in many states work side by side with the Weatherization Assistance Program (WAP) to provide both immediate and long-term solutions to energy poverty.
See also
Building envelope
Building indoor environment
Building performance
Central heating
Heating, ventilation and air conditioning (HVAC)
Low-energy house
Vapor barrier
WikiBooks How-to guide to Weatherization
References
External links
Houston Advanced Research Center
The Weatherization Assistance Program (WAP) Technical Assistance Center (WAPTAC)
The WAP System for Identifying and Reviewing New Technologies and Techniques
Weatherization Information Portal
Home Energy Weatherization Articles
Thermodynamics
Heating, ventilation, and air conditioning
United States Department of Energy | Weatherization | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,274 | [
"Thermodynamics",
"Dynamical systems"
] |
67,043 | https://en.wikipedia.org/wiki/Thermal%20insulation | Thermal insulation is the reduction of heat transfer (i.e., the transfer of thermal energy between objects of differing temperature) between objects in thermal contact or in range of radiative influence. Thermal insulation can be achieved with specially engineered methods or processes, as well as with suitable object shapes and materials.
Heat flow is an inevitable consequence of contact between objects of different temperature. Thermal insulation provides a region of insulation in which thermal conduction is reduced, creating a thermal break or thermal barrier, or thermal radiation is reflected rather than absorbed by the lower-temperature body.
The insulating capability of a material is measured as the inverse of thermal conductivity (k). Low thermal conductivity is equivalent to high insulating capability (resistance value). In thermal engineering, other important properties of insulating materials are product density (ρ) and specific heat capacity (c).
Definition
Thermal conductivity k is measured in watts per meter per kelvin (W·m⁻¹·K⁻¹ or W/(m·K)). This is because heat transfer, measured as power, has been found to be (approximately) proportional to
difference of temperature
the surface area of thermal contact
the inverse of the thickness of the material
From this, it follows that the power of heat loss P is given by

$P = \frac{k A \, \Delta T}{d},$

where A is the area of thermal contact, ΔT is the temperature difference across the material, and d is its thickness.
Thermal conductivity depends on the material and for fluids, its temperature and pressure. For comparison purposes, conductivity under standard conditions (20 °C at 1 atm) is commonly used. For some materials, thermal conductivity may also depend upon the direction of heat transfer.
The act of insulation is accomplished by encasing an object in material with low thermal conductivity in high thickness. Decreasing the exposed surface area could also lower heat transfer, but this quantity is usually fixed by the geometry of the object to be insulated.
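A minimal sketch of the flat-layer heat-loss formula above; the conductivity, area, temperature difference, and thickness are illustrative assumptions:

```python
# Conductive heat loss through a flat layer, P = k * A * dT / d.
k = 0.04    # thermal conductivity, W/(m*K) (mineral-wool-like, illustrative)
A = 10.0    # contact area, m^2
dT = 20.0   # temperature difference, K
d = 0.10    # layer thickness, m

P = k * A * dT / d
print(f"{P:.0f} W")   # 80 W; doubling the thickness would halve the loss
```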
Multi-layer insulation is used where radiative loss dominates, or when the user is restricted in volume and weight of the insulation (e.g. emergency blanket, radiant barrier)
Insulation of cylinders
For insulated cylinders, a critical radius of insulation must be reached. Before the critical radius is reached, any added insulation increases heat transfer. The convective thermal resistance is inversely proportional to the surface area and therefore the radius of the cylinder, while the thermal resistance of a cylindrical shell (the insulation layer) depends on the ratio between outside and inside radius, not on the radius itself. If the outside radius of a cylinder is increased by applying insulation, a fixed amount of conductive resistance (equal to ln(Rout/Rin)/(2πkL)) is added. However, at the same time, the convective resistance is reduced. This implies that adding insulation below a certain critical radius actually increases the heat transfer. For insulated cylinders, the critical radius is given by the equation

$r_\mathrm{critical} = \frac{k}{h},$

where k is the thermal conductivity of the insulation and h is the outside convective heat transfer coefficient.

This equation shows that the critical radius depends only on the heat transfer coefficient and the thermal conductivity of the insulation. If the radius of the insulated cylinder is smaller than the critical radius for insulation, the addition of any amount of insulation will increase heat transfer.
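A sketch of this behaviour; the conductivity, convective coefficient, radii, and temperatures are assumed example values:

```python
import math

# Critical radius of a cylindrical insulation layer, r_crit = k / h.
k = 0.17    # insulation conductivity, W/(m*K)
h = 5.0     # outside convective coefficient, W/(m^2*K)
print("critical radius:", k / h, "m")            # 0.034 m

def heat_loss_per_metre(r_in, r_out, t_in, t_out):
    r_cond = math.log(r_out / r_in) / (2 * math.pi * k)   # shell conduction
    r_conv = 1.0 / (2 * math.pi * r_out * h)              # outside convection
    return (t_in - t_out) / (r_cond + r_conv)

for r_out in (0.010, 0.034, 0.080):              # below, at, above r_crit
    print(r_out, round(heat_loss_per_metre(0.008, r_out, 100.0, 20.0), 1))
# Loss rises from 23.6 W/m to a maximum of 34.9 W/m at the critical
# radius, then falls again (31.3 W/m), as described above.
```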
Applications
Clothing and natural animal insulation in birds and mammals
Gases possess poor thermal conduction properties compared to liquids and solids and thus make good insulation material if they can be trapped. In order to further augment the effectiveness of a gas (such as air), it may be disrupted into small cells, which cannot effectively transfer heat by natural convection. Convection involves a larger bulk flow of gas driven by buoyancy and temperature differences, and it does not work well in small cells where there is little density difference to drive it, and the high surface-to-volume ratios of the small cells retards gas flow in them by means of viscous drag.
In order to accomplish small gas cell formation in man-made thermal insulation, glass and polymer materials can be used to trap air in a foam-like structure. This principle is used industrially in building and piping insulation such as (glass wool), cellulose, rock wool, polystyrene foam (styrofoam), urethane foam, vermiculite, perlite, and cork. Trapping air is also the principle in all highly insulating clothing materials such as wool, down feathers and fleece.
The air-trapping property is also the insulation principle employed by homeothermic animals to stay warm, for example down feathers, and insulating hair such as natural sheep's wool. In both cases the primary insulating material is air, and the polymer used for trapping the air is natural keratin protein.
Buildings
Maintaining acceptable temperatures in buildings (by heating and cooling) uses a large proportion of global energy consumption. Building insulations also commonly use the principle of small trapped air-cells as explained above, e.g. fiberglass (specifically glass wool), cellulose, rock wool, polystyrene foam, urethane foam, vermiculite, perlite, cork, etc. For a period of time, asbestos was also used, however, it caused health problems.
Window insulation film can be applied in weatherization applications to reduce incoming thermal radiation in summer and loss in winter.
When well insulated, a building is:
energy efficient and cheaper to keep warm in the winter, or cool in the summer. Energy efficiency will lead to a reduced carbon footprint.
more comfortable because temperatures are more uniform throughout the space. There is less temperature gradient both vertically (between ankle height and head height) and horizontally from exterior walls, ceilings and windows to the interior walls, thus producing a more comfortable occupant environment when outside temperatures are extremely cold or hot.
In industry, energy has to be expended to raise, lower, or maintain the temperature of objects or process fluids. If these are not insulated, this increases the energy requirements of a process, and therefore the cost and environmental impact.
Mechanical systems
Space heating and cooling systems distribute heat throughout buildings by means of pipes or ductwork. Insulating these pipes using pipe insulation reduces energy into unoccupied rooms and prevents condensation from occurring on cold and chilled pipework.
Pipe insulation is also used on water supply pipework to help delay pipe freezing for an acceptable length of time.
Mechanical insulation is commonly installed in industrial and commercial facilities.
Passive radiative cooling surfaces
Thermal insulation has been found to improve the thermal emittance of passive radiative cooling surfaces by increasing the surface's ability to lower temperatures below ambient under direct solar intensity. Different materials may be used for thermal insulation, including polyethylene aerogels that reduce solar absorption and parasitic heat gain which may improve the emitter's performance by over 20%. Other aerogels also exhibited strong thermal insulation performance for radiative cooling surfaces, including a silica-alumina nanofibrous aerogel.
Refrigeration
A refrigerator consists of a heat pump and a thermally insulated compartment.
Spacecraft
Launch and re-entry place severe mechanical stresses on spacecraft, so the strength of an insulator is critically important; the failure of insulating tiles on the Space Shuttle Columbia caused the shuttle airframe to overheat and break apart during reentry, killing the astronauts on board. Re-entry through the atmosphere generates very high temperatures due to compression of the air at high speeds. Insulators must meet demanding physical properties beyond their thermal transfer retardant properties. Examples of insulation used on spacecraft include reinforced carbon-carbon composite nose cone and silica fiber tiles of the Space Shuttle. See also Insulative paint.
Automotive
Internal combustion engines produce a lot of heat during their combustion cycle. This can have a negative effect when it reaches various heat-sensitive components such as sensors, batteries, and starter motors. As a result, thermal insulation is necessary to prevent the heat from the exhaust from reaching these components.
High performance cars often use thermal insulation as a means to increase engine performance.
Greenhouse
Thermal insulation is integral to greenhouse design, enabling controlled environments for plant growth and energy efficiency. Common insulation techniques include the use of double-layer films and multi-wall polycarbonate panels, which trap air between layers to reduce heat transfer while maintaining sufficient light transmission for photosynthesis. These measures improve energy efficiency and support year-round crop production, significantly reducing heating costs.
Factors influencing performance
Insulation performance is influenced by many factors, the most prominent of which include:
Thermal conductivity ("k" or "λ" value)
Surface emissivity ("ε" value)
Insulation thickness
Density
Specific heat capacity
Thermal bridging
It is important to note that the factors influencing performance may vary over time as material ages or environmental conditions change.
Calculating requirements
Industry standards are often rules of thumb, developed over many years, that offset many conflicting goals: what people will pay for, manufacturing cost, local climate, traditional building practices, and varying standards of comfort. Both heat transfer and layer analysis may be performed in large industrial applications, but in household situations (appliances and building insulation), airtightness is the key in reducing heat transfer due to air leakage (forced or natural convection). Once airtightness is achieved, it has often been sufficient to choose the thickness of the insulating layer based on rules of thumb. Diminishing returns are achieved with each successive doubling of the insulating layer.
It can be shown that for some systems, there is a minimum insulation thickness required for an improvement to be realized.
See also
Down feather
Aerogel
References
Further reading
US DOE publication, Residential Insulation
US DOE publication, Energy Efficient Windows
US EPA publication on home sealing
DOE/CE 2002
Heat transfer
Thermal protection
Insulators | Thermal insulation | [
"Physics",
"Chemistry"
] | 1,951 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
67,046 | https://en.wikipedia.org/wiki/Thermal%20mass | In building design, thermal mass is a property of the matter of a building that requires a flow of heat in order for it to change temperature.
Not all writers agree on what physical property of matter "thermal mass" describes. Most writers use it as a synonym for heat capacity, the ability of a body to store thermal energy. It is typically referred to by the symbol Cth, and its SI unit is J/K or J/°C (which are equivalent).
However:
Christoph Reinhart at MIT describes thermal mass as its volume times its volumetric heat capacity.
Randa Ghattas, Franz-Joseph Ulm and Alison Ledwith, also at MIT, write that "It [thermal mass] is dependent on the relationship between the specific heat capacity, density, thickness and conductivity of a material" although they don't provide a unit, describing materials only as "low" or "high" thermal mass.
Chris Reardon equates thermal mass with volumetric heat capacity.
The lack of a consistent definition of what property of matter thermal mass describes has led some writers to dismiss its use in building design as pseudoscience.
Background
The equation relating thermal energy to thermal mass is:

$Q = C_\mathrm{th} \, \Delta T,$

where Q is the thermal energy transferred, $C_\mathrm{th}$ is the thermal mass of the body, and ΔT is the change in temperature.
For example, if 250 J of heat energy is added to a copper gear with a thermal mass of 38.46 J/°C, its temperature will rise by 6.50 °C.
If the body consists of a homogeneous material with sufficiently known physical properties, the thermal mass is simply the mass of material present times the specific heat capacity of that material. For bodies made of many materials, the sum of heat capacities for their pure components may be used in the calculation, or in some cases (as for a whole animal, for example) the number may simply be measured for the entire body in question, directly.
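A minimal sketch of the gear example and the homogeneous-body approximation; the 0.1 kg mass and copper's specific heat are illustrative assumptions:

```python
# The copper gear example: temperature rise dT = Q / C_th.
Q = 250.0        # heat added, J
C_th = 38.46     # thermal mass of the gear, J/degC
print(Q / C_th)  # ~6.50 degC

# For a homogeneous body, C_th = m * c:
m = 0.1          # mass, kg (assumed)
c = 385.0        # specific heat capacity of copper, J/(kg*K)
print(m * c)     # 38.5 J/K, about the gear's thermal mass above
```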
As an extensive property, heat capacity is characteristic of an object; its corresponding intensive property is specific heat capacity, expressed in terms of a measure of the amount of material such as mass or number of moles, which must be multiplied by similar units to give the heat capacity of the entire body of material. Thus the heat capacity can be equivalently calculated as the product of the mass m of the body and the specific heat capacity c for the material, or the product of the number of moles of molecules present n and the molar specific heat capacity . For discussion of why the thermal energy storage abilities of pure substances vary, see factors that affect specific heat capacity.
For a body of uniform composition, $C_\mathrm{th}$ can be approximated by

$C_\mathrm{th} = m \, c_\mathrm{p},$

where $m$ is the mass of the body and $c_\mathrm{p}$ is the isobaric specific heat capacity of the material averaged over the temperature range in question. For bodies composed of numerous different materials, the thermal masses for the different components can just be added together.
Heat capacity in buildings
Christoph Reinhart describes the impact of heat capacity this way:
If the outside diurnal temperature swing frequently oscillates around a desired (balance point) temperature, adding thermal mass may increase the hours of comfort in a given time interval.
Thermal mass may act as a liability to keep a space comfortable, e.g. when it is only used intermittently.
Thermal mass has really no effect if the direction of heat flow through the building envelope stays constant for extended periods of time.
Heat capacity is not normally calculated in the engineering of buildings. In the United States and Canada, national building codes and most state and local jurisdictions require that heating and cooling equipment be sized in accordance with Manual J of the Air Conditioning Contractors of America Association.
The Manual J process uses detailed measurements of a building's dimensions, construction, insulation, air-tightness, features and occupant loads, but it does not take heat capacity into account. Some heat capacity is presumed in the Manual J process: equipment sized according to Manual J is sized to maintain comfort at the first percentile of temperature for heating and the 99th percentile of temperature for cooling. The process presumes that the building has sufficient heat capacity to maintain comfort during brief excursions outside of those extremes.
See also
Earthship
Rammed earth wall
Specific heat capacity
Thermal energy storage
Trombe wall
References
Heating, ventilation, and air conditioning
Heat transfer
Mass
Thermodynamics | Thermal mass | [
"Physics",
"Chemistry",
"Mathematics"
] | 877 | [
"Transport phenomena",
"Scalar physical quantities",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Matter",
"Dynamical systems"
] |
67,211 | https://en.wikipedia.org/wiki/Electron%20configuration | In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is , meaning that the 1s, 2s, and 2p subshells are occupied by two, two, and six electrons, respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by the nuclei and all the other electrons. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, a level of energy is associated with each electron configuration. In certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements, for describing the chemical bonds that hold atoms together, and in understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n, that electrons may occupy. In each term of an electron configuration, n is the positive integer that precedes each orbital letter (helium's electron configuration is 1s2, therefore n = 1, and the orbital contains two electrons). An atom's nth electron shell can accommodate 2n2 electrons. For example, the first shell can accommodate two electrons, the second shell eight electrons, the third shell eighteen, and so on. The factor of two arises because the number of allowed states doubles with each successive shell due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin of +1/2 (usually denoted by an up-arrow) and one with a spin of −1/2 (with a down-arrow).
A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The value of ℓ is in the range from 0 to n − 1. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. For example, the 3d subshell has n = 3 and ℓ = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell.
The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.
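A short sketch deriving these capacities from the 2(2ℓ + 1) and 2n2 rules stated above:

```python
# Shell and subshell capacities implied by the quantum numbers: a subshell
# with azimuthal number l holds 2(2l+1) electrons; shell n holds 2n^2.
LABELS = "spdf"

for n in range(1, 5):
    parts = [f"{n}{LABELS[l]}:{2 * (2 * l + 1)}" for l in range(n)]
    print(f"n={n}  " + "  ".join(parts) + f"  total={2 * n * n}")
# n=1: 1s:2 (total 2); n=2: 2s:2 2p:6 (total 8); n=3 total 18; n=4 total 32
```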
Exhaustive technical details about the complete quantum mechanical theory of atomic spectra and structure can be found in the classic book by Robert D. Cowan.
Notation
Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic subshell labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each subshell placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). Phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3.
For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used. The electron configuration can be visualized as the core electrons, equivalent to the noble gas of the preceding period, and the valence electrons: each element in a period differs only by the last few subshells. Phosphorus, for instance, is in the third period. It differs from the second-period neon, whose configuration is 1s2 2s2 2p6, only by the presence of a third shell. The portion of its configuration that is equivalent to neon is abbreviated as [Ne], allowing the configuration of phosphorus to be written as [Ne] 3s2 3p3 rather than writing out the details of the configuration of neon explicitly. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element.
For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies that is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti.
The superscript 1 for a singly occupied subshell is not compulsory; for example aluminium may be written as either [Ne] 3s2 3p1 or [Ne] 3s2 3p. In atoms where a subshell is unoccupied despite higher subshells being occupied (as is the case in some ions, as well as certain neutral atoms shown to deviate from the Madelung rule), the empty subshell is either denoted with a superscript 0 or left out altogether. For example, neutral palladium may be written as either [Kr] 4d10 5s0 or simply [Kr] 4d10, and the lanthanum(III) ion may be written as either [Xe] 4f0 or simply [Xe].
It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, ℓ, of 0, 1, 2 or 3 respectively. After f, the sequence continues alphabetically g, h, i... (ℓ = 4, 5, 6...), skipping j, although orbitals of these types are rarely required.
The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).
Energy of ground state and excited states
The energy associated to an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state.
As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p subshell, to obtain the
1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm.
Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to X-ray photons. This would be the case for example to excite a 2p electron of sodium to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration.
The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule.
History
Irving Langmuir was the first to propose a systematic electron arrangement, in his 1919 article "The Arrangement of Electrons in Atoms and Molecules", in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his "concentric theory of atomic structure". Langmuir had developed his work on electron atomic structure from other chemists, as is shown in the development of the History of the periodic table and the Octet rule.
Niels Bohr (1923) incorporated Langmuir's model that the periodicity in the properties of the elements might be explained by the electronic structure of the atom. His proposals were based on the then-current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6). Bohr used 4 and 6 following Alfred Werner's 1893 paper. In fact, the chemists accepted the concept of atoms long before the physicists. Langmuir began his paper referenced above by saying: «The problem of the structure of atoms has been attacked mainly by physicists who have given little consideration to the chemical properties which must ultimately be explained by a theory of atomic structure. The vast store of knowledge of chemical properties and relationships, such as is summarized by the Periodic Table, should serve as a better foundation for a theory of atomic structure than the relatively meager experimental data along purely physical lines... These electrons arrange themselves in a series of concentric shells, the first shell containing two electrons, while all other shells tend to hold eight.» The valence electrons in the atom were described by Richard Abegg in 1904.
In 1924, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6. However neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).
Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli in 1923 to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli hypothesized successfully that the Zeeman effect can be explained as depending only on the response of the outermost (i.e., valence) electrons of the atom. Pauli was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925):
It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [ℓ], j [mℓ] and m [ms].
The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936), see below) for the order in which atomic orbitals are filled with electrons.
Atoms: Aufbau principle and Madelung rule
The aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as: a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy subshells are filled before electrons are placed in higher-energy orbitals. The principle works very well (for the ground states of the atoms) for the known 118 elements, although it is sometimes slightly wrong. The modern form of the aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936, and later given a theoretical justification by V. M. Klechkowski:
Subshells are filled in the order of increasing n + ℓ.
Where two subshells have the same value of n + ℓ, they are filled in order of increasing n.
This gives the following order for filling the orbitals:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s)
In this list the subshells in parentheses are not occupied in the ground state of the heaviest atom now known (Og, Z = 118).
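A minimal sketch that reproduces this filling order from the two rules above; labels beyond ℓ = 6 (i) are omitted as a simplifying assumption:

```python
# Generate the Madelung filling order: sort subshells by n + l,
# breaking ties by n.
LABELS = "spdfghi"

def madelung_order(max_n):
    subshells = sorted(
        ((n, l) for n in range(1, max_n + 1) for l in range(min(n, len(LABELS)))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    return [f"{n}{LABELS[l]}" for n, l in subshells]

print(", ".join(madelung_order(8)[:20]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s
```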
The aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry.
Periodic table
The form of the periodic table is closely related to the atomic electron configuration for each element. For example, all the elements of group 2 (the table's second column) have an electron configuration of [E] ns2 (where [E] is a noble gas configuration), and have notable similarities in their chemical properties. The periodicity of the periodic table in terms of periodic table blocks is due to the number of electrons (2, 6, 10, and 14) needed to fill s, p, d, and f subshells. These blocks appear as the rectangular sections of the periodic table. The single exception is helium, which despite being an s-block atom is conventionally placed with the other noble gases in the p-block due to its chemical inertness, a consequence of its full outer shell (though there is discussion in the contemporary literature on whether this exception should be retained).
The electrons in the valence (outermost) shell largely determine each element's chemical properties. The similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.
Shortcomings of the aufbau principle
The aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly (although there are mathematical approximations available, such as the Hartree–Fock method).
The fact that the aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all, that, within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift.)
Ionization of the transition metals
The naïve application of the aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n + ℓ = 4 (n = 4, ℓ = 0) while the 3d-orbital has n + ℓ = 5 (n = 3, ℓ = 2). After calcium, most neutral atoms in the first series of transition metals (scandium through zinc) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". However, this is not supported by the facts, as tungsten (W) has a Madelung-following d4 s2 configuration and not d5 s1, and niobium (Nb) has an anomalous d4 s1 configuration that does not give it a half-filled or completely filled subshell.
The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals. The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ...
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly does not. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree–Fock method of atomic structure calculation. More recently Scerri has argued that contrary to what is stated in the vast majority of sources including the title of his previous article on the subject, 3d orbitals rather than 4s are in fact preferentially occupied.
In chemical environments, configurations can change even more: Th3+ as a bare ion has a configuration of [Rn] 5f1, yet in most ThIII compounds the thorium atom has a 6d1 configuration instead. Mostly, what is present is rather a superposition of various configurations. For instance, copper metal is poorly described by either an [Ar] 3d10 4s1 or an [Ar] 3d9 4s2 configuration, but is rather well described as a 90% contribution of the first and a 10% contribution of the second. Indeed, visible light is already enough to excite electrons in most transition metals, and they often continuously "flow" through different configurations when that happens (copper and its group are an exception).
Similar ion-like 3dx 4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6, with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom.
Other exceptions to Madelung's rule
There are several more exceptions to Madelung's rule among the heavier elements, and as atomic number increases it becomes more and more difficult to find simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, which are an approximate method for taking account of the effect of the other electrons on orbital energies. Qualitatively, for example, the 4d elements have the greatest concentration of Madelung anomalies, because the 4d–5s gap is larger than the 3d–4s and 5d–6s gaps.
For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. This is the reason why the 6d elements are predicted to have no Madelung anomalies apart from lawrencium (for which relativistic effects stabilise the p1/2 orbital as well and cause its occupancy in the ground state), as relativity intervenes to make the 7s orbitals lower in energy than the 6d ones.
The table below shows the configurations of the f-block (green) and d-block (blue) atoms. It shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page). However this also depends on the charge: a calcium atom has 4s lower in energy than 3d, but a Ca2+ cation has 3d lower in energy than 4s. In practice the configurations predicted by the Madelung rule are at least close to the ground state even in these anomalous cases. The empty f orbitals in lanthanum, actinium, and thorium contribute to chemical bonding, as do the empty p orbitals in transition metals.
Vacant s, d, and f orbitals have been shown explicitly, as is occasionally done, to emphasise the filling order and to clarify that even orbitals unoccupied in the ground state (e.g. lanthanum 4f or palladium 5s) may be occupied and bonding in chemical compounds. (The same is also true for the p-orbitals, which are not explicitly shown because they are only actually occupied for lawrencium in gas-phase ground states.)
The various anomalies describe the free atoms and do not necessarily predict chemical behavior. Thus for example neodymium typically forms the +3 oxidation state, despite its [Xe] 4f4 6s2 configuration that if interpreted naïvely would suggest a more stable +2 oxidation state corresponding to losing only the 6s electrons. Contrariwise, uranium as [Rn] 5f3 6d1 7s2 is not very stable in the +3 oxidation state either, preferring +4 and +6.
The electron-shell configuration of elements beyond hassium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120. Element 121 should have the anomalous configuration [Og] 8s2 8p1, having a p rather than a g electron. Electron configurations beyond this are tentative and predictions differ between models, but Madelung's rule is expected to break down due to the closeness in energy of the 5g, 6f, 7d, and 8p1/2 orbitals. That said, the filling sequence 8s, 5g, 6f, 7d, 8p is predicted to hold approximately, with perturbations due to the huge spin-orbit splitting of the 8p and 9p shells, and the huge relativistic stabilisation of the 9s shell.
Open and closed shells
In the context of atomic orbitals, an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction. Conversely a closed shell is obtained with a completely filled valence shell. This configuration is very stable.
For molecules, "open shell" signifies that there are unpaired electrons. In molecular orbital theory, this leads to molecular orbitals that are singly occupied. In computational chemistry implementations of molecular orbital theory, open-shell molecules have to be handled by either the restricted open-shell Hartree–Fock method or the unrestricted Hartree–Fock method. Conversely a closed-shell configuration corresponds to a state where all molecular orbitals are either doubly occupied or empty (a singlet state). Open shell molecules are more difficult to study computationally.
Noble gas configuration
Noble gas configuration is the electron configuration of noble gases. The basis of all chemical reactions is the tendency of chemical elements to acquire stability. Main-group atoms generally obey the octet rule, while transition metals generally obey the 18-electron rule. The noble gases (He, Ne, Ar, Kr, Xe, Rn) are less reactive than other elements because they already have a noble gas configuration. Oganesson is predicted to be more reactive due to relativistic effects for heavy atoms.
Period | Element | Configuration
1 | He | 1s2
2 | Ne | 1s2 2s2 2p6
3 | Ar | 1s2 2s2 2p6 3s2 3p6
4 | Kr | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6
5 | Xe | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6
6 | Rn | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6
7 | Og | 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6
Every system has the tendency to acquire the state of stability or a state of minimum energy, and so chemical elements take part in chemical reactions to acquire a stable electronic configuration similar to that of its nearest noble gas. An example of this tendency is two hydrogen (H) atoms reacting with one oxygen (O) atom to form water (H2O). Neutral atomic hydrogen has one electron in its valence shell, and on formation of water it acquires a share of a second electron coming from oxygen, so that its configuration is similar to that of its nearest noble gas helium (He) with two electrons in its valence shell. Similarly, neutral atomic oxygen has six electrons in its valence shell, and acquires a share of two electrons from the two hydrogen atoms, so that its configuration is similar to that of its nearest noble gas neon with eight electrons in its valence shell.
Electron configuration in molecules
Electron configuration in molecules is more complex than the electron configuration of atoms, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry, rather than the atomic orbital labels used for atoms and monatomic ions; hence, the electron configuration of the dioxygen molecule, O2, is written 1σg2 1σu2 2σg2 2σu2 3σg2 1πu4 1πg2, or equivalently 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2. The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.
The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings.
Electron configuration in solids
In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.
Applications
The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified forms of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.
This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever-larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method that discards the model.
For atoms or molecules with more than one electron, the motion of electrons are correlated and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations and therefore the notion of electronic configuration remains essential for multi-electron systems.
A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.
See also
Born–Oppenheimer approximation
d electron count
Electron configurations of the elements (data page)
Extended periodic table – discusses the limits of the periodic table
Group (periodic table)
HOMO/LUMO
Molecular term symbol
Octet rule
Periodic table (electron configurations)
Spherical harmonics
Unpaired electron
Valence shell
Notes
References
External links
What does an atom look like? Configuration in 3D
Atomic physics
Chemical properties
Electron states
Molecular physics
Quantum chemistry
Theoretical chemistry | Electron configuration | [
"Physics",
"Chemistry"
] | 6,947 | [
"Electron",
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic physics",
" molecular",
"nan",
"Atomic",
"Electron states",
" and optical physics"
] |
67,405 | https://en.wikipedia.org/wiki/Great%20Red%20Spot | The Great Red Spot is a persistent high-pressure region in the atmosphere of Jupiter, producing an anticyclonic storm that is the largest in the Solar System. It is the most recognizable feature on Jupiter, owing to its red-orange color whose origin is still unknown. Located 22 degrees south of Jupiter's equator, it produces wind-speeds up to 432 km/h (268 mph). It was first observed in September 1831, with 60 recorded observations between then and 1878, when continuous observations began. A similar spot was observed from 1665 to 1713; if this is the same storm, it has existed for at least years, but a study from 2024 suggests this is not the case.
Observation history
First observations
The Great Red Spot may have existed before 1665, but it could be that the present spot was first seen only in 1830, and was well studied only after a prominent appearance in 1879. The storm that was seen in the 17th century may have been different from the storm that exists today. A long gap separates its period of current study after 1830 from its 17th century discovery. Whether the original spot dissipated and reformed, whether it faded, or if the observational record was simply poor is unknown.
The first sighting of the Great Red Spot is often credited to Robert Hooke, who described a spot on the planet in May 1664. However, it is likely that Hooke's spot was not only in another belt altogether (the North Equatorial Belt, as opposed to the current Great Red Spot's location in the South Equatorial Belt), but also that it was in the shadow of a transiting moon, most likely Callisto. Far more convincing is Giovanni Cassini's description of a "permanent spot" the following year. With fluctuations in visibility, Cassini's spot was observed from 1665 to 1713, but the 118-year observational gap between 1713 and 1831 makes the identity of the two spots inconclusive. The older spot's shorter observational history and slower motion than the modern spot make it difficult to conclude that they are the same.
A minor mystery concerns a Jovian spot depicted in a 1711 canvas by Donato Creti, which is exhibited in the Vatican. Part of a series of panels in which different (magnified) heavenly bodies serve as backdrops for various Italian scenes, and all overseen by the astronomer Eustachio Manfredi for accuracy, Creti's painting is the first known depiction of the Great Red Spot as red (albeit raised to the Jovian northern hemisphere due to an optical inversion inherent to the era's telescopes). No Jovian feature was explicitly described in writing as red before the late 19th century.
The Great Red Spot has been observed since 5 September 1831. By 1879, over 60 observations had been recorded. Since it came into prominence in 1879, it has been under continuous observation.
A 2024 study of historical observations suggests that the "permanent spot" observed from 1665 to 1713 may not be the same as the modern Great Red Spot observed since 1831. It is suggested that the original spot disappeared, and later another spot formed, which is the one seen today.
Late 20th and 21st centuries
On 25 February 1979, as the Voyager 1 spacecraft flew past Jupiter, it transmitted the first detailed image of the Great Red Spot, resolving fine cloud details. The colorful, wavy cloud pattern seen to the left (west) of the Red Spot is a region of extraordinarily complex and variable wave motion.
In the 21st century, the major diameter of the Great Red Spot has been observed to be shrinking. At the start of 2004, its length was about half that of a century earlier, when it measured about three times the diameter of Earth. At the present rate of reduction, it will become circular by 2040. It is not known how long the spot will last, or whether the change is a result of normal fluctuations. In 2019, the Great Red Spot began "flaking" at its edge, with fragments of the storm breaking off and dissipating. The shrinking and "flaking" fueled speculation from some astronomers that the Great Red Spot could dissipate within 20 years. However, other astronomers believe that the apparent size of the Great Red Spot reflects its cloud coverage and not the size of the actual, underlying vortex, and that the flaking events can be explained by interactions with other cyclones or anticyclones, including incomplete absorptions of smaller systems; if so, the Great Red Spot is not in danger of dissipating.
A smaller spot, designated Oval BA, which formed in March 2000 from the merging of three white ovals, has turned reddish in color. Astronomers have named it the Little Red Spot or Red Jr. As of 5 June 2006, the Great Red Spot and Oval BA appeared to be approaching convergence. The storms pass each other about every two years, but the passings of 2002 and 2004 were of little significance. Amy Simon-Miller, of the Goddard Space Flight Center, predicted the storms would have their closest passing on 4 July 2006. She worked with Imke de Pater and Phil Marcus of UC Berkeley as well as a team of professional astronomers beginning in April 2006 to study the storms using the Hubble Space Telescope; on 20 July 2006, the two storms were photographed passing each other by the Gemini Observatory without converging. In May 2008, a third storm turned red.
The Juno spacecraft, which entered into a polar orbit around Jupiter in 2016, flew over the Great Red Spot upon its close approach to Jupiter on 11 July 2017, taking several images of the storm during a low pass over the cloud tops. Over the duration of the Juno mission, the spacecraft continued to study the composition and evolution of Jupiter's atmosphere, especially its Great Red Spot.
The Great Red Spot should not be confused with the Great Dark Spot, a feature observed near the northern pole of Jupiter in 2000 with the Cassini–Huygens spacecraft. There is also a feature in the atmosphere of Neptune called the Great Dark Spot. The latter feature was imaged by Voyager 2 in 1989 and may have been an atmospheric hole rather than a storm. It was no longer present as of 1994, although a similar spot had appeared farther to the north.
Mechanical dynamics
Jupiter's Great Red Spot rotates counterclockwise, with a period of about 4.5 Earth days, or 11 Jovian days, as of 2008. As of 3 April 2017, the Great Red Spot measured 1.3 times the diameter of Earth. The cloud-tops of this storm rise above the surrounding cloud-tops. The storm has continued to exist for centuries because there is no planetary surface (only a mantle of hydrogen) to provide friction; circulating gas eddies persist for a very long time in the atmosphere because there is nothing to oppose their angular momentum.
Infrared data has long indicated that the Great Red Spot is colder (and thus higher in altitude) than most of the other clouds on the planet. The upper atmosphere above the storm, however, has substantially higher temperatures than the rest of the planet. Acoustic (sound) waves rising from the turbulence of the storm below have been proposed as an explanation for the heating of this region. The acoustic waves travel vertically to great heights above the storm, where they break in the upper atmosphere, converting wave energy into heat. This creates a region of upper atmosphere that is several hundred kelvins warmer than the rest of the planet at this altitude. The effect is described as like "crashing [...] ocean waves on a beach".
Careful tracking of atmospheric features revealed the Great Red Spot's counterclockwise circulation as far back as 1966, observations dramatically confirmed by the first time-lapse movies from the Voyager fly-bys. The spot is confined by a modest eastward jet stream to its south and a very strong westward one to its north. Though winds around the edge of the spot peak at about 432 km/h (268 mph), currents inside it seem stagnant, with little inflow or outflow. The rotation period of the spot has decreased with time, perhaps as a direct result of its steady reduction in size.
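These figures form a rough consistency check. Assuming, as a simplification, a circular spot of 1.3 Earth diameters (about 16,600 km) whose edge material moves at the peak wind speed quoted above, the distance travelled in one 4.5-day rotation is close to the spot's circumference:

```latex
% Back-of-the-envelope check using figures quoted in this article;
% circular-spot approximation only.
d \approx 432\,\tfrac{\mathrm{km}}{\mathrm{h}} \times 4.5 \times 24\,\mathrm{h}
  \approx 4.7\times 10^{4}\,\mathrm{km},
\qquad
C \approx \pi \times 16\,600\,\mathrm{km} \approx 5.2\times 10^{4}\,\mathrm{km}
```

Agreement to within about ten percent is reasonable, since the real spot is elliptical (a smaller perimeter than the circumscribing circle) and only winds at the very edge reach peak speed.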
The Great Red Spot's latitude has been stable for the duration of good observational records, typically varying by about a degree. Its longitude, however, is subject to constant variation, including a 90-day longitudinal oscillation with an amplitude of ~1°. Because Jupiter does not rotate uniformly at all latitudes, astronomers have defined three different systems for defining longitude. System II is used for latitudes of more than 10 degrees and was originally based on the average rotational period of the Great Red Spot of 9h 55m 42s. Despite this, however, the spot has "lapped" the planet in System II at least 10 times since the early 19th century. Its drift rate has changed dramatically over the years and has been linked to the brightness of the South Equatorial Belt and the presence or absence of a South Tropical Disturbance.
Internal depth and structure
Jupiter's Great Red Spot (GRS) is an elliptically shaped anticyclone at 22 degrees south of the equator, in Jupiter's southern hemisphere. Although it is the largest anticyclonic storm (~16,000 km across) in the Solar System, little is known about its internal depth and structure. Visible imaging and cloud-tracking from in-situ observation determined the velocity and vorticity of the GRS, which is concentrated in a thin anticyclonic ring at 70–85% of the radius and lies along Jupiter's fastest westward-moving jet stream. During NASA's Juno mission, which arrived in 2016, gravity-signature and thermal infrared data were obtained that offered insight into the structural dynamics and depth of the GRS. During July 2017, the Juno spacecraft conducted a second pass of the GRS to collect Microwave Radiometer (MWR) scans, in order to determine how far the GRS extended toward the condensed H2O layer. These MWR scans suggested that the GRS extends to about 240 km below cloud level, where the atmospheric pressure is estimated to reach 100 bar. Two methods of analysis that further constrain the data were the mascon approach, which found a depth of ~290 km, and the Slepian approach, which showed wind extending to ~310 km. These methods, along with the gravity-signature and MWR data, suggest that the GRS zonal winds at depth still retain about 50% of their velocity at the visible cloud level before wind decay sets in at lower levels. This rate of wind decay, together with the gravity data, suggests the depth of the GRS is between 200 and 500 km.
Thermal infrared imaging and spectroscopy of the GRS by Galileo and Cassini were conducted during 1995–2008 in order to find evidence of thermal inhomogeneities within the internal vortex structure of the GRS. Earlier thermal infrared temperature maps from the Voyager, Galileo, and Cassini missions had suggested the GRS is an anticyclonic vortex with a cold core inside an upwelling, warmer annulus; these data show a gradient in the temperature of the GRS. Thermal-IR imaging provided a better understanding of Jupiter's atmospheric temperature, aerosol particle opacity, and ammonia gas composition, allowing the behaviour of the visible cloud layers, the thermal gradients, and the compositional maps to be correlated directly with observational data collected over decades. During December 2000, high-spatial-resolution images from Galileo of an atmospherically turbulent area to the northwest of the GRS showed a thermal contrast between the warmest region of the anticyclone and regions to the east and west of the GRS.
The vertical temperature structure of the GRS is constrained within the 100–600 mbar range; the GRS core, at approximately 400 mbar of pressure, is 1.0–1.5 K warmer than regions of the GRS to the east and west, and 3.0–3.5 K warmer than regions to the north and south of the structure's edge. This structure is consistent with the data collected by VISIR (the VLT Mid-Infrared Imager Spectrometer on the ESO Very Large Telescope) in 2006; that imaging revealed that the GRS is physically present over a wide range of altitudes, within the atmospheric pressure range of 80–600 mbar, and confirmed the thermal infrared mapping result. To develop a model of the internal structure of the GRS, the Cassini instrument Composite Infrared Spectrometer (CIRS) and ground-based spatial imaging mapped the composition of phosphine and ammonia (PH3, NH3) and the para-hydrogen fraction within the anticyclonic circulation of the GRS. The images collected from CIRS and ground-based imaging trace vertical motion in the Jovian atmosphere through the PH3 and NH3 spectra.
The highest concentrations of PH3 and NH3 were found to the north of the GRS's peripheral rotation. They aided in determining the southward jet movement and showed evidence of an increase in the altitude of the aerosol column at pressures ranging from 200–500 mbar. However, the NH3 composition data show a major depletion of NH3 below the visible cloud layer at the southern peripheral ring of the GRS; this lower opacity corresponds to a narrow band of atmospheric subsidence. The low mid-IR aerosol opacity, along with the temperature gradients, the altitude differences, and the vertical movement of the zonal winds, are all involved in the development and sustainability of the vorticity. The stronger atmospheric subsidence and the compositional asymmetries of the GRS suggest that the structure exhibits a degree of tilt from its northern edge to its southern edge. The depth and internal structure of the GRS have been changing constantly over the decades, yet there is still no accepted explanation for why it is 200–500 km deep while the jet streams that supply the force powering the GRS vortex lie well below the base of the structure.
Color and composition
It is not known what causes the Great Red Spot's reddish color. Hypotheses supported by laboratory experiments suppose that it may be caused by chemical products created from the solar ultraviolet irradiation of ammonium hydrosulfide and the organic compound acetylene, which produces a reddish material—likely complex organic compounds called tholins. The high altitude of the compounds may also contribute to the coloring.
The Great Red Spot varies greatly in hue, from almost brick-red to pale salmon or even white. The spot occasionally disappears, becoming evident only through the Red Spot Hollow, which is its location in the South Equatorial Belt (SEB). Its visibility is apparently coupled to the SEB; when the belt is bright white, the spot tends to be dark, and when it is dark, the spot is usually light. These periods when the spot is dark or light occur at irregular intervals; from 1947 to 1997, the spot was darkest in the periods 1961–1966, 1968–1975, 1989–1990, and 1992–1993.
See also
Extraterrestrial vortex
Great Dark Spot
Great White Spot, a similar storm on Saturn
Hypercane
WISEP J190648.47+401106.8
References
Further reading
External links
Video based on Juno's Perijove 7 overflight by Seán Doran (see album for more)
Jupiter
Planetary spots
Anticyclones
Vortices
Storms
1830 in science | Great Red Spot | [
"Chemistry",
"Mathematics"
] | 3,165 | [
"Dynamical systems",
"Vortices",
"Fluid dynamics"
] |
67,488 | https://en.wikipedia.org/wiki/Well%20drilling | Well drilling is the process of drilling a hole in the ground for the extraction of a natural resource such as ground water, brine, natural gas, or petroleum, for the injection of a fluid from surface to a subsurface reservoir or for subsurface formations evaluation or monitoring. Drilling for the exploration of the nature of the material underground (for instance in search of metallic ore) is best described as borehole drilling.
The earliest wells were water wells, shallow pits dug by hand in regions where the water table approached the surface, usually with masonry or wooden walls lining the interior to prevent collapse. Modern drilling techniques utilize long drill shafts, producing holes much narrower and deeper than could be produced by digging.
Well drilling can be done either manually or mechanically and the nature of required equipment varies from extremely simple and cheap to very sophisticated.
Managed Pressure Drilling (MPD) is defined by the International Association of Drilling Contractors (IADC) as "an adaptive drilling process used to more precisely control the annular pressure profile throughout the wellbore." The objectives of MPD are "to ascertain the downhole pressure environment limits and to manage the annular hydraulic pressure profile accordingly."
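The quantity being managed can be illustrated with a simple static model. The sketch below is a minimal illustration, not an engineering tool: it assumes a vertical well, an incompressible mud column, and a single applied surface backpressure, and the function name and example values are this sketch's own.

```python
# Minimal static sketch of the pressure MPD manages: bottomhole pressure is
# hydrostatic mud pressure plus applied surface backpressure. Illustrative
# assumptions only -- real MPD also accounts for friction, circulation, etc.

G = 9.81  # gravitational acceleration, m/s^2

def bottomhole_pressure_pa(mud_density_kg_m3: float, depth_m: float,
                           surface_backpressure_pa: float) -> float:
    """P = rho * g * h + backpressure, for a static vertical mud column."""
    return mud_density_kg_m3 * G * depth_m + surface_backpressure_pa

# Example: 1,200 kg/m3 mud at 3,000 m with 1.5 MPa of surface backpressure.
p = bottomhole_pressure_pa(1200.0, 3000.0, 1.5e6)
print(f"{p / 1e6:.1f} MPa")  # ~36.8 MPa
```

Adjusting the surface backpressure shifts the whole annular profile up or down, which is the lever MPD uses to keep the wellbore within the formation's pressure limits.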
History
Earliest Record
The earliest record of well drilling dates from 347 AD in China. Petroleum was used in ancient China for "lighting, as a lubricant for cart axles and the bearings of water-powered drop hammers, as a source of carbon for inksticks, and as a medical remedy for sores on humans and mange in animals." In ancient China, deep well drilling machines were in the forefront of brine well production by the 1st century BC. The ancient Chinese developed advanced sinking wells and were the first civilization to use a well-drilling machine and to use bamboo well casings to keep the holes open.
Modern Era
In the modern era, the first roller cone patent was for the rotary rock bit and was issued to American businessman and inventor Howard Hughes Sr. in 1909. It consisted of two interlocking cones. American businessman Walter Benona Sharp worked very closely with Hughes in developing the rock bit. The success of this bit led to the founding of the Sharp-Hughes Tool Company. In 1933 two Hughes engineers, one of whom was Ralph Neuhaus, invented the tricone bit, which has three cones. The Hughes patent for the tricone bit lasted until 1951, after which other companies made similar bits. However, Hughes still held 40% of the world's drill bit market in 2000. The superior wear performance of polycrystalline diamond compact (PDC) bits gradually eroded the dominance of roller cone bits and early in this century PDC drill bit revenues overtook those of roller cone bits. The technology of both bit types has advanced significantly to provide improved durability and rate of penetration of the rock. This has been driven by the economics of the industry, and by the change from the empirical approach of Hughes in the 1930s, to modern day domain Finite Element codes for both the hydraulic and cutter placement software.
Drill bits in mechanical drilling
The factors affecting drill bit selection include the type of geology and the capabilities of the rig. Due to the high number of wells that have been drilled, information from an adjacent well is most often used to make the appropriate selection. Two different types of drill bits exist: fixed cutter and roller cone. A fixed cutter bit is one where there are no moving parts, but drilling occurs due to shearing, scraping or abrasion of the rock. Fixed cutter bits can be either polycrystalline diamond compact (PDC) or grit hot-pressed inserts (GHI) or natural diamond. Roller cone bits can be either tungsten carbide inserts (TCI) for harder formations or milled tooth (MT) for softer rock. The manufacturing process and composites used in each type of drill bit make them ideal for specific drilling situations. Additional enhancements can be made to any bit to increase the effectiveness for almost any drilling situation.
A major factor in drill bit selection is the type of formation to be drilled. The effectiveness of a drill bit varies by formation type. There are three types of formations: soft, medium and hard. A soft formation includes unconsolidated sands, clays, soft limestones, red beds and shale. Medium formations include dolomites, limestones, and hard shale. Hard formations include hard shale, calcites, mudstones, cherty lime stones and hard and abrasive formations.
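The formation-to-bit pairing described above can be expressed as a simple lookup. The snippet below is a toy illustration only: the table entries paraphrase this section, the function name and structure are this sketch's assumptions, and real selection also weighs rig capability and offset-well records.

```python
# Toy mapping of formation class to candidate bit types, paraphrasing the
# pairings described in this section; not a substitute for offset-well data.

BIT_BY_FORMATION = {
    "soft": "milled tooth (MT) roller cone, or PDC fixed cutter",
    "medium": "tungsten carbide insert (TCI) roller cone, or PDC fixed cutter",
    "hard": "TCI roller cone, or diamond fixed cutter for abrasive formations",
}

def suggest_bit(formation: str) -> str:
    """Return candidate bit types for a soft/medium/hard formation class."""
    try:
        return BIT_BY_FORMATION[formation.lower()]
    except KeyError:
        raise ValueError(f"unknown formation class: {formation!r}") from None

print(suggest_bit("medium"))
```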
Until 2006, market share was divided primarily among Hughes Christensen, Security-DBS (Halliburton Drill Bits and Services), Smith Bits (a subsidiary of Schlumberger), and ReedHycalog (acquired by National Oilwell Varco in 2008).
By 2014, Ulterra (then a subsidiary of ESCO Corp.) and Varel International (a subsidiary of Swedish engineering group Sandvik) had together gained nearly 30% of the U.S. bit market and eroded the historical dominance of Smith, Halliburton, and Baker Hughes. By 2018, Schlumberger, which acquired Smith in 2010, became dominant in international markets thanks to packaging drill bits with its other tools and services, while Ulterra (owned by private equity firms Blackstone Energy Partners and American Securities) continued a stark growth trend, becoming the market share leader in drill bits in the US according to Spears Research and Kimberlite Research.
Dull bits are graded using a uniform system promoted by the International Association of Drilling Contractors (IADC); see Society of Petroleum Engineers/IADC papers SPE 23938 and SPE 23940, as well as the literature on PDC bits.
See also
Blowout (well drilling)
Borehole
Deep well drilling
Driller (oil)
Drilling mud
Drilling rig
Slickline
Underbalanced drilling
Water well
Manual well drilling methods
Baptist well drilling
Sludging
References
Bibliography
External links
Drilling Equipment
Drilling a Well by Automobile, Popular Science monthly, February 1919, Page 115-116, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PT33
Schlumberger Oilfield Glossary
Oil and gas well drilling, US Department of Labor
Water Well Drilling Ireland
"Mechanical Mole Bores Crooked Wells." Popular Science, June 1942, pp. 94–95
Engineering a new approach to fixed cutter bits
American inventions
Chinese inventions
Drilling technology
Drilling | Well drilling | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,310 | [
"Hydrology",
"Water wells",
"Environmental engineering"
] |
67,514 | https://en.wikipedia.org/wiki/Moscovium | Moscovium is a synthetic chemical element; it has symbol Mc and atomic number 115. It was first synthesized in 2003 by a joint team of Russian and American scientists at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. In December 2015, it was recognized as one of four new elements by the Joint Working Party of international scientific bodies IUPAC and IUPAP. On 28 November 2016, it was officially named after the Moscow Oblast, in which the JINR is situated.
Moscovium is an extremely radioactive element: its most stable known isotope, moscovium-290, has a half-life of only 0.65 seconds. In the periodic table, it is a p-block transactinide element. It is a member of the 7th period and is placed in group 15 as the heaviest pnictogen. Moscovium is calculated to have some properties similar to its lighter homologues, nitrogen, phosphorus, arsenic, antimony, and bismuth, and to be a post-transition metal, although it should also show several major differences from them. In particular, moscovium should also have significant similarities to thallium, as both have one rather loosely bound electron outside a quasi-closed shell. Chemical experimentation on single atoms has confirmed theoretical expectations that moscovium is less reactive than its lighter homologue bismuth. Over a hundred atoms of moscovium have been observed to date, all of which have been shown to have mass numbers from 286 to 290.
Introduction
History
Discovery
The first successful synthesis of moscovium was by a joint team of Russian and American scientists in August 2003 at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. Headed by Russian nuclear physicist Yuri Oganessian, the team included American scientists of the Lawrence Livermore National Laboratory. On February 2, 2004, the researchers stated in Physical Review C that they had bombarded americium-243 with calcium-48 ions to produce four atoms of moscovium. These atoms decayed by emission of alpha particles to nihonium in about 100 milliseconds.
The Dubna–Livermore collaboration strengthened their claim to the discoveries of moscovium and nihonium by conducting chemical experiments on the final decay product 268Db. None of the nuclides in this decay chain were previously known, so existing experimental data was not available to support their claim. In June 2004 and December 2005, the presence of a dubnium isotope was confirmed by extracting the final decay products, measuring spontaneous fission (SF) activities and using chemical identification techniques to confirm that they behave like a group 5 element (as dubnium is known to be in group 5 of the periodic table). Both the half-life and the decay mode were confirmed for the proposed 268Db, lending support to the assignment of the parent nucleus to moscovium. However, in 2011, the IUPAC/IUPAP Joint Working Party (JWP) did not recognize the two elements as having been discovered, because current theory could not distinguish the chemical properties of group 4 and group 5 elements with sufficient confidence. Furthermore, the decay properties of all the nuclei in the decay chain of moscovium had not been previously characterized before the Dubna experiments, a situation which the JWP generally considers "troublesome, but not necessarily exclusive".
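The chain from 288Mc down to 268Db is pure nucleon bookkeeping: each alpha decay removes two protons and four nucleons. The short sketch below, written here purely for illustration (the function and symbol table are this sketch's own), reproduces it:

```python
# Alpha-decay bookkeeping: each decay removes 2 protons and 4 nucleons.
# Illustration only; element symbols hard-coded for this odd-Z chain.

SYMBOLS = {115: "Mc", 113: "Nh", 111: "Rg", 109: "Mt", 107: "Bh", 105: "Db"}

def alpha_chain(z: int, a: int, steps: int) -> list[str]:
    """List the nuclides produced by `steps` successive alpha decays."""
    chain = [f"{a}{SYMBOLS[z]}"]
    for _ in range(steps):
        z, a = z - 2, a - 4  # the alpha particle carries off 2p + 2n
        chain.append(f"{a}{SYMBOLS[z]}")
    return chain

print(" -> ".join(alpha_chain(115, 288, 5)))
# 288Mc -> 284Nh -> 280Rg -> 276Mt -> 272Bh -> 268Db
```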
Road to confirmation
Two heavier isotopes of moscovium, 289Mc and 290Mc, were discovered in 2009–2010 as daughters of the tennessine isotopes 293Ts and 294Ts; the isotope 289Mc was later also synthesized directly and confirmed to have the same properties as found in the tennessine experiments.
In 2011, the Joint Working Party of international scientific bodies International Union of Pure and Applied Chemistry (IUPAC) and International Union of Pure and Applied Physics (IUPAP) evaluated the 2004 and 2007 Dubna experiments, and concluded that they did not meet the criteria for discovery. Another evaluation of more recent experiments took place within the next few years, and a claim to the discovery of moscovium was again put forward by Dubna. In August 2013, a team of researchers at Lund University and at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany announced they had repeated the 2004 experiment, confirming Dubna's findings. Simultaneously, the 2004 experiment had been repeated at Dubna, now additionally also creating the isotope 289Mc that could serve as a cross-bombardment for confirming the discovery of the tennessine isotope 293Ts in 2010. Further confirmation was published by the team at the Lawrence Berkeley National Laboratory in 2015.
In December 2015, the IUPAC/IUPAP Joint Working Party recognized the element's discovery and assigned the priority to the Dubna-Livermore collaboration of 2009–2010, giving them the right to suggest a permanent name for it. While they did not recognise the experiments synthesising 287Mc and 288Mc as persuasive due to the lack of a convincing identification of atomic number via cross-reactions, they recognised the 293Ts experiments as persuasive because its daughter 289Mc had been produced independently and found to exhibit the same properties.
In May 2016, Lund University (Lund, Scania, Sweden) and GSI cast some doubt on the syntheses of moscovium and tennessine. The decay chains assigned to 289Mc, the isotope instrumental in the confirmation of the syntheses of moscovium and tennessine, were found based on a new statistical method to be too different to belong to the same nuclide with a reasonably high probability. The reported 293Ts decay chains approved as such by the JWP were found to require splitting into individual data sets assigned to different tennessine isotopes. It was also found that the claimed link between the decay chains reported as from 293Ts and 289Mc probably did not exist. (On the other hand, the chains from the non-approved isotope 294Ts were found to be congruent.) The multiplicity of states found when nuclides that are not even–even undergo alpha decay is not unexpected and contributes to the lack of clarity in the cross-reactions. This study criticized the JWP report for overlooking subtleties associated with this issue, and considered it "problematic" that the only argument for the acceptance of the discoveries of moscovium and tennessine was a link they considered to be doubtful.
On June 8, 2017, two members of the Dubna team published a journal article answering these criticisms. Analysing their data on the nuclides 293Ts and 289Mc with widely accepted statistical methods, they noted that the 2016 studies indicating non-congruence produced problematic results when applied to radioactive decay: they excluded from the 90% confidence interval both average and extreme decay times, and the decay chains that would be excluded from the 90% confidence interval they chose were more probable to be observed than those that would be included. The 2017 reanalysis concluded that the observed decay chains of 293Ts and 289Mc were consistent with the assumption that only one nuclide was present at each step of the chain, although it would be desirable to be able to directly measure the mass number of the originating nucleus of each chain as well as the excitation function of the 243Am+48Ca reaction.
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, moscovium is sometimes known as eka-bismuth. In 1979, IUPAC recommended that the placeholder systematic element name ununpentium (with the corresponding symbol of Uup) be used until the discovery of the element is confirmed and a permanent name is decided. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 115", with the symbol of E115, (115) or even simply 115.
On 30 December 2015, discovery of the element was recognized by the International Union of Pure and Applied Chemistry (IUPAC). According to IUPAC recommendations, the discoverer(s) of a new element has the right to suggest a name. A suggested name was langevinium, after Paul Langevin. Later, the Dubna team mentioned the name moscovium several times as one among many possibilities, referring to the Moscow Oblast where Dubna is located.
In June 2016, IUPAC endorsed the latter proposal to be formally accepted by the end of the year, which it was on 28 November 2016. The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow.
Other routes of synthesis
In 2024, the team at JINR reported the observation of one decay chain of 289Mc while studying the reaction between 242Pu and 50Ti, aimed at producing more neutron-deficient livermorium isotopes in preparation for synthesis attempts of elements 119 and 120. This was the first successful report of a charged-particle exit channel – the evaporation of a proton and two neutrons, rather than only neutrons, as the compound nucleus de-excites to the ground state – in a hot fusion reaction between an actinide target and a projectile with atomic number greater than or equal to 20. Such reactions have been proposed as a novel synthesis route for yet-undiscovered isotopes of superheavy elements with several neutrons more than the known ones, which may be closer to the theorized island of stability and have longer half-lives. In particular, the isotopes 291Mc–293Mc may be reachable in these types of reactions within current detection limits.
Predicted properties
Other than nuclear properties, no properties of moscovium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Properties of moscovium remain unknown and only predictions are available.
Nuclear stability and isotopes
Moscovium is expected to be within an island of stability centered on copernicium (element 112) and flerovium (element 114). Due to the expected high fission barriers, any nucleus within this island of stability decays exclusively by alpha decay and perhaps some electron capture and beta decay. Although the known isotopes of moscovium do not actually have enough neutrons to be on the island of stability, they can be seen to approach the island, as in general the heavier isotopes are the longer-lived ones.
The hypothetical isotope 291Mc is an especially interesting case as it has only one neutron more than the heaviest known moscovium isotope, 290Mc. It could plausibly be synthesized as the daughter of 295Ts, which in turn could be made from the reaction . Calculations show that it may have a significant electron capture or positron emission decay mode in addition to alpha decay and also have a relatively long half-life of several seconds. This would produce 291Fl, 291Nh, and finally 291Cn which is expected to be in the middle of the island of stability and have a half-life of about 1200 years, affording the most likely hope of reaching the middle of the island using current technology. Possible drawbacks are that the cross section of the production reaction of 295Ts is expected to be low and the decay properties of superheavy nuclei this close to the line of beta stability are largely unexplored. The heavy isotopes from 291Mc to 294Mc might also be produced using charged-particle evaporation, in the 245Cm(48Ca,pxn) and 248Cm(48Ca,pxn) reactions.
The light isotopes 284Mc, 285Mc, and 286Mc could be made from the 241Am+48Ca reaction. They would undergo a chain of alpha decays, ending at transactinide isotopes too light to be made by hot fusion and too heavy to be made by cold fusion. The isotope 286Mc was found in 2021 at Dubna, in the reaction: it decays into the already-known 282Nh and its daughters. The yet lighter 282Mc and 283Mc could be made from 243Am+44Ca, but the cross-section would likely be lower.
Other possibilities to synthesize nuclei on the island of stability include quasifission (partial fusion followed by fission) of a massive nucleus. Such nuclei tend to fission, expelling doubly magic or nearly doubly magic fragments such as calcium-40, tin-132, lead-208, or bismuth-209. It has been shown that the multi-nucleon transfer reactions in collisions of actinide nuclei (such as uranium and curium) might be used to synthesize the neutron-rich superheavy nuclei located at the island of stability, although formation of the lighter elements nobelium or seaborgium is more favored. One last possibility to synthesize isotopes near the island is to use controlled nuclear explosions to create a neutron flux high enough to bypass the gaps of instability at 258–260Fm and at mass number 275 (atomic numbers 104 to 108), mimicking the r-process in which the actinides were first produced in nature and the gap of instability around radon bypassed. Some such isotopes (especially 291Cn and 293Cn) may even have been synthesized in nature, but would have decayed away far too quickly (with half-lives of only thousands of years) and be produced in far too small quantities (about 10−12 the abundance of lead) to be detectable as primordial nuclides today outside cosmic rays.
Physical and atomic
In the periodic table, moscovium is a member of group 15, the pnictogens. It appears below nitrogen, phosphorus, arsenic, antimony, and bismuth. Every previous pnictogen has five electrons in its valence shell, forming a valence electron configuration of ns2np3. In moscovium's case, the trend should be continued and the valence electron configuration is predicted to be 7s27p3; therefore, moscovium will behave similarly to its lighter congeners in many respects. However, notable differences are likely to arise; a largely contributing effect is the spin–orbit (SO) interaction—the mutual interaction between the electrons' motion and spin. It is especially strong for the superheavy elements, because their electrons move much faster than in lighter atoms, at velocities comparable to the speed of light. In relation to moscovium atoms, it lowers the 7s and the 7p electron energy levels (stabilizing the corresponding electrons), but two of the 7p electron energy levels are stabilized more than the other four. The stabilization of the 7s electrons is called the inert-pair effect, and the effect "tearing" the 7p subshell into the more stabilized and the less stabilized parts is called subshell splitting. Computational chemists see the split as a change of the second (azimuthal) quantum number l from 1 to 1/2 and 3/2 for the more stabilized and less stabilized parts of the 7p subshell, respectively. For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s2(7p1/2)2(7p3/2)1. These effects cause moscovium's chemistry to be somewhat different from that of its lighter congeners.
The valence electrons of moscovium fall into three subshells: 7s (two electrons), 7p1/2 (two electrons), and 7p3/2 (one electron). The first two of these are relativistically stabilized and hence behave as inert pairs, while the last is relativistically destabilized and can easily participate in chemistry. (The 6d electrons are not destabilized enough to participate chemically.) Thus, the +1 oxidation state should be favored, like Tl+, and consistent with this the first ionization potential of moscovium should be around 5.58 eV, continuing the trend towards lower ionization potentials down the pnictogens. Moscovium and nihonium both have one electron outside a quasi-closed shell configuration that can be delocalized in the metallic state: thus they should have similar melting and boiling points (both melting around 400 °C and boiling around 1100 °C) due to the strength of their metallic bonds being similar. Additionally, the predicted ionization potential, ionic radius (1.5 Å for Mc+; 1.0 Å for Mc3+), and polarizability of Mc+ are expected to be more similar to Tl+ than its true congener Bi3+. Moscovium should be a dense metal due to its high atomic weight, with a density around 13.5 g/cm3. The electron of the hydrogen-like moscovium atom (oxidized so that it only has one electron, Mc114+) is expected to move so fast that it has a mass 1.82 times that of a stationary electron, due to relativistic effects. For comparison, the figures for hydrogen-like bismuth and antimony are expected to be 1.25 and 1.077 respectively.
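The quoted mass ratios follow from the simple hydrogen-like Dirac estimate, used here only as a back-of-the-envelope check (the figures above come from fuller calculations):

```latex
% Hydrogen-like Dirac estimate of the 1s electron's relativistic mass
% increase, with fine-structure constant \alpha \approx 1/137.036:
\frac{m}{m_0} \;\approx\; \frac{1}{\sqrt{1 - (Z\alpha)^2}}
\qquad\Rightarrow\qquad
Z = 115:\ 1.84,\quad Z = 83:\ 1.26,\quad Z = 51:\ 1.08
```

This reproduces the quoted values of 1.82, 1.25, and 1.077 to within about one percent.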
Chemical
Moscovium is predicted to be the third member of the 7p series of chemical elements and the heaviest member of group 15 in the periodic table, below bismuth. Unlike the two previous 7p elements, moscovium is expected to be a good homologue of its lighter congener, in this case bismuth. In this group, each member is known to portray the group oxidation state of +5 but with differing stability. For nitrogen, the +5 state is mostly a formal explanation of molecules like N2O5: it is very difficult to have five covalent bonds to nitrogen due to the inability of the small nitrogen atom to accommodate five ligands. The +5 state is well represented for the essentially non-relativistic typical pnictogens phosphorus, arsenic, and antimony. However, for bismuth it becomes rare due to the relativistic stabilization of the 6s orbitals known as the inert-pair effect, so that the 6s electrons are reluctant to bond chemically. It is expected that moscovium will have an inert-pair effect for both the 7s and the 7p1/2 electrons, as the binding energy of the lone 7p3/2 electron is noticeably lower than that of the 7p1/2 electrons. Nitrogen(I) and bismuth(I) are known but rare and moscovium(I) is likely to show some unique properties, probably behaving more like thallium(I) than bismuth(I). Because of spin-orbit coupling, flerovium may display closed-shell or noble gas-like properties; if this is the case, moscovium will likely be typically monovalent as a result, since the cation Mc+ will have the same electron configuration as flerovium, perhaps giving moscovium some alkali metal character. Calculations predict that moscovium(I) fluoride and chloride would be ionic compounds, with an ionic radius of about 109–114 pm for Mc+, although the 7p1/2 lone pair on the Mc+ ion should be highly polarisable. The Mc3+ cation should behave like its true lighter homolog Bi3+. The 7s electrons are too stabilized to be able to contribute chemically and hence the +5 state should be impossible and moscovium may be considered to have only three valence electrons. Moscovium would be quite a reactive metal, with a standard reduction potential of −1.5 V for the Mc+/Mc couple.
The chemistry of moscovium in aqueous solution should essentially be that of the Mc+ and Mc3+ ions. The former should be easily hydrolyzed and not be easily complexed with halides, cyanide, and ammonia. Moscovium(I) hydroxide (McOH), carbonate (Mc2CO3), oxalate (Mc2C2O4), and fluoride (McF) should be soluble in water; the sulfide (Mc2S) should be insoluble; and the chloride (McCl), bromide (McBr), iodide (McI), and thiocyanate (McSCN) should be only slightly soluble, so that adding excess hydrochloric acid would not noticeably affect the solubility of moscovium(I) chloride. Mc3+ should be about as stable as Tl3+ and hence should also be an important part of moscovium chemistry, although its closest homolog among the elements should be its lighter congener Bi3+. Moscovium(III) fluoride (McF3) and thiozonide (McS3) should be insoluble in water, similar to the corresponding bismuth compounds, while moscovium(III) chloride (McCl3), bromide (McBr3), and iodide (McI3) should be readily soluble and easily hydrolyzed to form oxyhalides such as McOCl and McOBr, again analogous to bismuth. Both moscovium(I) and moscovium(III) should be common oxidation states and their relative stability should depend greatly on what they are complexed with and the likelihood of hydrolysis.
Like its lighter homologues ammonia, phosphine, arsine, stibine, and bismuthine, moscovine (McH3) is expected to have a trigonal pyramidal molecular geometry, with an Mc–H bond length of 195.4 pm and a H–Mc–H bond angle of 91.8° (bismuthine has bond length 181.7 pm and bond angle 91.9°; stibine has bond length 172.3 pm and bond angle 92.0°). In the predicted aromatic pentagonal planar cluster, analogous to pentazolate (), the Mc–Mc bond length is expected to be expanded from the extrapolated value of 312–316 pm to 329 pm due to spin–orbit coupling effects.
Experimental chemistry
The isotopes 288Mc, 289Mc, and 290Mc have half-lives long enough for chemical investigation. A 2024 experiment at the GSI, producing 288Mc via the 243Am+48Ca reaction, studied the adsorption of nihonium and moscovium on SiO2 and gold surfaces. The adsorption enthalpy of moscovium on SiO2 was determined experimentally to within a 68% confidence interval. Moscovium was determined to be less reactive with the SiO2 surface than its lighter congener bismuth, but more reactive than closed-shell copernicium and flerovium. This arises because of the relativistic stabilisation of the 7p1/2 shell.
See also
Notes
References
Bibliography
External links
Uut and Uup Add Their Atomic Mass to Periodic Table
Superheavy elements
History and etymology
Moscovium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Pnictogens
Substances discovered in the 2000s
Synthetic elements | Moscovium | [
"Physics",
"Chemistry"
] | 4,858 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
1,070,600 | https://en.wikipedia.org/wiki/Metabolism%20%28architecture%29 | Metabolism was a post-war Japanese biomimetic architectural movement that fused ideas about architectural megastructures with those of organic biological growth. It had its first international exposure during CIAM's 1959 meeting and its ideas were tentatively tested by students from Kenzo Tange's MIT studio.
During the preparation for the 1960 Tokyo World Design Conference a group of young architects and designers, including Kiyonori Kikutake, Kisho Kurokawa and Fumihiko Maki prepared the publication of the Metabolism manifesto. They were influenced by a wide variety of sources including Marxist theories and biological processes. Their manifesto was a series of four essays entitled: Ocean City, Space City, Towards Group Form, and Material and Man, and it also included designs for vast cities that floated on the oceans and plug-in capsule towers that could incorporate organic growth. Although the World Design Conference gave the Metabolists exposure on the international stage, their ideas remained largely theoretical.
Some smaller, individual buildings that employed the principles of Metabolism were built and these included Tange's Yamanashi Press and Broadcaster Centre and Kurokawa's Nakagin Capsule Tower. The greatest concentration of their work was to be found at the 1970 World Exposition in Osaka where Tange was responsible for master planning the whole site whilst Kikutake and Kurokawa designed pavilions. After the 1973 oil crisis, the Metabolists turned their attention away from Japan and toward Africa and the Middle East.
Origins of Metabolism
The Congrès Internationaux d'Architecture Moderne (CIAM) was founded in Switzerland in 1928 as an association of architects who wanted to advance modernism into an international setting. During the early 1930s they promoted the idea (based upon new urban patterns in the United States) that urban development should be guided by CIAM's four functional categories: dwelling, work, transportation, and recreation. By the mid-1930s Le Corbusier and other architects had moulded CIAM into a pseudo-political party with the goal of promoting modern architecture to all. This view gained some traction in the immediate post-war period when Le Corbusier and his colleagues began to design buildings in Chandigarh. By the early 1950s it was felt that CIAM was losing its avant-garde edge so in 1954 a group of younger members called "Team 10" was formed. This included the inner circle Dutch architects Jacob Bakema and Aldo van Eyck, Italian Giancarlo De Carlo, Greek Georges Candilis, the British architects Peter and Alison Smithson and the American Shadrach Woods. The Team 10 architects introduced concepts like "human association", "cluster" and "mobility", with Bakema encouraging the combination of architecture and planning in urban design. This was a rejection of CIAM's older four function mechanical approach, and it would ultimately lead to the break-up and end of CIAM.
Kenzo Tange was invited to the CIAM '59 meeting of the association in Otterlo, Netherlands. In what was to be the last meeting of CIAM, he presented two theoretical projects by the architect Kiyonori Kikutake: the Tower-shaped City and Kikutake's own home, the Sky House. This presentation exposed the fledgling Metabolist movement to its first international audience. Like Team 10's "human association" concepts, Metabolism too was exploring new concepts in urban design.
Tower-shaped City was a 300 metre tall tower that housed the infrastructure for an entire city. It included transportation, services and a manufacturing plant for prefabricated houses. The tower was vertical "artificial land" onto which steel, pre-fabricated dwelling capsules could be attached. Kikutake proposed that these capsules would undergo self-renewal every fifty years, and the city would grow organically like branches of a tree.
Constructed on a hillside, the Sky House is a platform supported on four concrete panels with a hyperbolic paraboloid shell roof. It is a single space divided by storage units with the kitchen and bathroom on the outer edge. These latter two were designed so that they could be moved to suit the use of the house—and, indeed, they have been moved and/or adjusted about seven times over the course of fifty years. At one point a small children's room was attached to the bottom of the main floor with a small child-sized access door between the two rooms.
After the meeting, Tange left for the Massachusetts Institute of Technology to begin a four-month period as a visiting professor. It is possible that, based upon the reception of Kikutake's projects in Otterlo, he decided to set the fifth-year project as a design for a residential community of 25,000 inhabitants to be constructed on the water of Boston Bay. Tange felt a natural desire to produce urban designs based upon a new prototype of design, one that could give a more human connection to super-scale cities. He considered the idea of "major" and "minor" city structure and how this could grow in cycles like the trunk and leaves of a tree.
One of the seven projects produced by the students was a perfect example of his vision. The project consisted of two primary residential structures each of which was triangular in section. Lateral movement was provided by motorways and monorail, whilst vertical movement from the parking areas was via elevators. There were open spaces within for community centres, and at every third level there were walkways along which were rows of family houses. The project appeared to be based upon Tange's unrealised competition entry for the World Health Organization headquarters in Geneva and both projects paved the way for his later project, "Plan for Tokyo – 1960". Tange went on to present both the Boston Bay Project and the Tokyo Plan at the Tokyo World Design Conference.
Tokyo World Design Conference, 1960
The conference had its roots with Isamu Konmochi and Sori Yanagi who were representatives of the Japanese Committee on the 1956 International Design Conference in Aspen, Colorado. They suggested that rather than a four yearly conference in Aspen there should be a roving conference with Tokyo as its first setting in 1960. Three Japanese institutional members were responsible for organising the conference, although after the Japan Industrial Design Association pulled out only the Japan Institute of Architects and the Japan Association of Advertising Arts were left. In 1958 they formed a preparation committee led by Junzo Sakakura, Kunio Maekawa and Kenzo Tange. As Tange had just accepted an invitation to be a visiting professor at Massachusetts Institute of Technology he recommended his junior colleague Takashi Asada to replace him in the organisation of the conference programmes.
The young Asada invited two friends to help him: the architectural critic and former editor of the magazine Shinkenchiku, Noboru Kawazoe, and Kisho Kurokawa who was one of Tange's students. In turn these two men scouted for more talented designers to help, including: the architects Masato Otaka and Kiyonori Kikutake and the designers Kenji Ekuan and Kiyoshi Awazu. Kurokawa was selected because he had recently returned from an international student conference in the Soviet Union and was a student of the Marxist architectural theorist Uzō Nishiyama. Ekuan was asked because of his recent participation in a seminar given by Konrad Wachsmann (he arrived at the lecture on a YA-1 motorbike that he had newly designed for Yamaha) and Otaka was a junior associate of Kunio Maekawa and had just completed the Harumi Apartment Building in Tokyo Bay. Fumihiko Maki, a former undergraduate student of Tange also joined the group whilst in Tokyo on a travelling fellowship from the Graham Foundation.
By day Asada canvassed politicians, business leaders and journalists for ideas, by night he met with his young friends to cultivate ideas. Asada was staying at the Ryugetsu Ryokan in Asakusa, Tokyo and he used it as a meeting place for progressive scholars, architects and artists. He often invited people from other professions to give talks and one of these was the atomic physicist, Mitsuo Taketani. Taketani was a scholar who was also interested in Marxist theory and he brought this along with his scientific theories to the group. Taketani's three stage methodology for scientific research influenced Kikutake's own three stage theory: ka (the general system), kata (the abstract image) and katachi (the solution as built), which he used to summarise his own design process from a broad vision to a concrete architectural form.
The group also searched for architectural solutions to Japan's phenomenal urban expansion brought about by its economic growth and how this could be reconciled with its shortage of usable land. They were inspired by examples of circular growth and renewal found in traditional Japanese architecture like the Ise Shrine and Katsura Detached Palace. They worked in coffee shops and Tokyo's International House to produce a compilation of their works that they could publish as a manifesto for the conference.
The conference ran from 11–16 May 1960 and had 227 guests, 84 of whom were international, including the architects Louis Kahn, Ralph Erskine, B. V. Doshi, Jean Prouvé, Paul Rudolph and Peter and Alison Smithson. Japanese participants included Kunio Maekawa, Yoshinobu Ashihara and Kazuo Shinohara.
After his 13 May lecture, Louis Kahn was invited to Kikutake's Sky House and had a long conversation with a number of Japanese architects including the Metabolists. He answered questions until after midnight with Maki acting as translator. Kahn spoke of his universal approach to design and used his own Richards Medical Research Laboratories as an example of how new design solutions can be reached with new thinking about space and movement. A number of the Metabolists were inspired by this.
The Metabolism name
Whilst discussing the organic nature of Kikutake's theoretical Marine City project, Kawazoe used the Japanese word shinchintaisha as being symbolic of the essential exchange of materials and energy between organisms and the exterior world (literally metabolism in a biological sense.) The Japanese meaning of the word has a feeling of replacement of the old with the new and the group further interpreted this to be equivalent to the continuous renewal and organic growth of the city. As the conference was to be a world conference, Kawazoe felt that they should use a more universal word and Kikutake looked up the definition of shinchintaisha in his Japanese-English dictionary. The translation he found was the word Metabolism.
The Metabolism manifesto
The group's manifesto, Metabolism: The Proposals for New Urbanism, was published at the World Design Conference. Two thousand copies of the 90 page book were printed and were sold for ¥500 by Kurokawa and Awazu at the entrance to the venue. The manifesto opened with the following statement:
The publication included projects by each member but a third of the document was dedicated to work by Kikutake who contributed essays and illustrations on the "Ocean City". Kurokawa contributed "Space City", Kawazoe contributed "Material and Man" and Otaka and Maki wrote "Towards the Group Form". Awazu designed the booklet and Kawazoe's wife, Yasuko, edited the layout.
Some of the projects included in the manifesto were subsequently displayed at the Museum of Modern Art's 1960 exhibition entitled Visionary Architecture and exposed the Japanese architects' work to a much wider international audience.
Unlike the more rigid membership structure of Team 10, the Metabolists saw their movement as having organic form with the members being free to come and go. Although the group had cohesion they saw themselves as individuals and their architecture reflected this. This was especially true for Tange who remained a mentor for the group rather than an "official" member.
Ocean City
Kikutake's Ocean City is the first essay in the pamphlet. It covered his two previously published projects "Tower-shaped City" and "Marine City" and included a new project "Ocean City" that was a combination of the first two. The first two of these projects introduced the Metabolist's idea of "artificial land" as well as "major" and "minor" structure. Kawazoe referred to "artificial land" in an article in the magazine Kindai Kenchiku in April 1960. In responding to the scarcity of land in large and expanding cities he proposed creating "artificial land" that would be composed of concrete slabs, oceans or walls (onto which capsules could be plugged). He said that the creation of this "artificial land" would allow people to use other land in a more natural way.
For Marine City, Kikutake proposed a city that would float free in the ocean and would be free of ties to a particular nation and therefore free from the threat of war. The artificial ground of the city would house agriculture, industry and entertainment and the residential towers would descend into the ocean to a depth of 200 metres. The city itself was not tied to the land and was free to float across the ocean and grow organically like an organism. Once it became too aged for habitation it would sink itself.
Ocean City was a combination of both Tower-shaped City and Marine City. It consisted of two rings that were tangent to one another, with housing on the inner ring and production on the outer one. Administrative buildings were found at the tangent point. The population would have been rigidly controlled at an upper limit of 500,000. Kikutake envisaged that the city would expand by multiplying itself as though it was undergoing cell division. This enforced the Metabolist idea that the expansion of cities could be a biological process.
Space City
In his essay "Space City", Kurokawa introduced four projects: Neo-Tokyo Plan, Wall City, Agricultural City and Mushroom-shaped house. In contrast to Tange's linear Tokyo City Bay Project, Kurokawa's Neo-Tokyo Plan proposed that Tokyo be decentralised and organised into cruciform patterns. He arranged Bamboo-shaped Cities along these cruciforms but unlike Kikutake he kept the city towers lower than 31 metres to conform with Tokyo's building code (these height limits were not revised until 1968).
The Wall City considered the problem of the ever-expanding distance between the home and the workplace. He proposed a wall-shaped city that could extend indefinitely. Dwellings would be on one side of the wall and workplaces on the other. The wall itself would contain transportation and services.
Surviving the Ise Bay Typhoon in 1959 inspired Kurokawa to design the Agricultural City. It consisted of a grid-like city supported on 4 metre stilts above the ground. The 500 metres square city sat on concrete slab that placed industry and infrastructure above agriculture and was an attempt to combine rural land and the city into one entity. He envisaged that his Mushroom Houses would sprout through the slab of Agriculture City. These houses were shrouded in a mushroom-like cap that was neither wall nor roof that enclosed a tea room and a living space.
Towards the Group Form
Maki and Otaka's essay on Group Form placed less emphasis on the megastructures of some of the other Metabolists and focused instead on a more flexible form of urban planning that could better accommodate the rapid and unpredictable requirements of the city.
Otaka had first thought about the relationship between infrastructure and architecture in his 1949 graduation thesis and he continued to explore ideas about "artificial ground" during his work at Maekawa's office. Likewise, during his travels abroad, Maki was impressed with the grouping and forms of vernacular buildings. The project they included to illustrate their ideas was a scheme for the redevelopment of Shinjuku station, placing retail, offices and entertainment on artificial ground over the station. Although Otaka's forms were heavy and sculptural and Maki's were lightweight with large spans, both contained the homogeneous clusters that were associated with group form.
Material and Man
Kawazoe contributed a brief essay entitled "I want to be a sea-shell, I want to be a mold, I want to be a spirit". The essay reflected Japan's cultural anguish after the Second World War and proposed the unity of man and nature.
Plan for Tokyo, 1960–2025
On 1 January 1961 Kenzo Tange presented his new plan for Tokyo Bay (1960) in a 45-minute television programme on NHK General TV. The design was a radical plan for the reorganization and expansion of the capital in order to cater for a population beyond 10 million. The design was for a linear city that used a series of nine-kilometre modules that stretched 80 km across Tokyo Bay from Ikebukuro in the north west to Kisarazu in the south east. The perimeter of each of the modules was organised into three levels of looping highways, as Tange was adamant that an efficient communication system would be the key to modern living. The modules themselves were organised into building zones and transport hubs and included office, government administration and retail districts as well as a new Tokyo train station and highway links to other parts of Tokyo. Residential areas were to be accommodated on parallel streets that ran perpendicular to the main linear axis and people would build their own houses within giant A-frame structures.
The project was designed by Tange and other members of his studio at Tokyo University, including Kurokawa and Arata Isozaki. Originally it was intended to publish the plan at the World Design Conference (hence its "1960" title) but it was delayed because the same members were working on the Conference organisation. Tange received interest and support from a number of government agencies but the project was never built. Tange went on to expand the idea of the linear city in 1964 with the Tōkaidō Megalopolis Plan. This was an ambitious proposal to extend Tokyo's linear city across the whole of the Tōkaidō region of Japan in order to re-distribute the population.
Both Kikutake and Kurokawa capitalised on the interest in Tange's 1960 plan by producing their own schemes for Tokyo. Kikutake's plan incorporated three elements on both the land and the sea and included a looped highway that connected all the prefectures around the bay. Unlike Tange's plan, however, its simple presentation graphics put many people off. Kurokawa's plan consisted of helix-shaped megastructures floating inside cells that extended out across the bay. Although the scheme's more convincing graphics were presented as part of a film, the project was not built.
With Japan's property boom in the 1980s, both Tange and Kurokawa revisited their earlier ideas: Tange with his Tokyo Plan 1986 and Kurokawa with his New Tokyo Plan 2025. Both projects used land that had been reclaimed from the sea since the 1960s in combination with floating structures.
Plan for Skopje
The competition to plan the reconstruction of the capital city of Skopje, then part of the Yugoslav Republic of Macedonia (now North Macedonia), following the major earthquake of 1963 was won by Tange's team. The project was significant because of its international influence and because it served as an international model case for urban reconstruction. It was a major breakthrough for the Metabolist movement, allowing it to realise its approach on an international scale.
Selected built projects
Yamanashi Press and Broadcaster Centre
In 1961 Kenzo Tange received a commission from the Yamanashi News Group to design a new office in Kōfu. As well as two news firms and a printing company, the building needed to incorporate a cafeteria and shops at ground-floor level to interface with the adjoining city. It also needed to be flexible in its design to allow future expansion.
Tange organised the spaces of the three firms by function to allow them to share common facilities. He stacked these functions vertically according to need; for example, the printing plant is on the ground floor to facilitate access to the street for loading and transportation. He then took all the service functions, including elevators, toilets and pipes, and grouped them into 16 reinforced concrete cylindrical towers, each 5 metres in diameter. These he placed on a grid, into which he inserted the grouped functional facilities and offices. These inserted elements were conceived of as containers that were independent of the structure and could be arranged flexibly as required. This conceived flexibility distinguished Tange's design from other architects' designs with open-floor offices and service cores – such as Kahn's Richards Medical Research Laboratories. Tange deliberately finished the cylindrical towers at different heights to imply that there was room for vertical expansion.
Although the building was expanded in 1974 as Tange had originally envisioned, it did not act as a catalyst for expansion into a megastructure across the rest of the city. The building was criticised for forsaking the human use of the building in favour of structure and adaptability.
Shizuoka Press and Broadcasting Tower
In 1966 Tange designed the Shizuoka Press and Broadcasting Tower in the Ginza district of Tokyo. This time using only a single core, Tange arranged the offices as cantilevered steel-and-glass boxes. The cantilever is emphasised by punctuating the three-storey blocks with single-storey glazed balconies. The concrete forms of the building were cast using aluminium formwork and the aluminium has been left on as a cladding. Although conceived as a "core-type" system of the kind included in Tange's other city proposals, the tower stands alone, robbed of other connections.
Nakagin Capsule Tower
The icon of Metabolism, Kurokawa's Nakagin Capsule Tower was erected in the Ginza district of Tokyo in 1972 and completed in just 30 days. Prefabricated in Shiga Prefecture in a factory that normally built shipping containers, it is constructed of 140 capsules plugged into two cores that are 11 and 13 stories in height. The capsules contained the latest gadgets of the day and were built to house small offices and pieds-à-terre for Tokyo salarymen.
The capsules were constructed of light welded steel trusses covered with steel sheeting and were mounted onto the reinforced concrete cores. The capsules were 2.5 metres wide and four metres long with a 1.3-metre-diameter window at one end. The units originally contained a bed, storage cabinets, a bathroom, a colour television set, clock, refrigerator and air conditioner, although optional extras such as a stereo were available. Although the capsules were designed with mass production in mind, there was never a demand for them. Nobuo Abe was a senior manager who led one of the design divisions during the construction of the tower.
In 1996, the tower was listed as architectural heritage by DoCoMoMo. However, in 2007 the residents voted to tear the tower down and build a new 14-story tower. In 2010, some of the remaining habitable pods were converted for use as budget hotel rooms. As of 2017, many capsules had been renovated and were being used as residential and office spaces, while short-stay renting such as Airbnb or other lodging provisions had been banned by the building's administration. The tower was demolished in April 2022.
Hillside Terrace, Tokyo
After the World Design Conference Maki began to distance himself from the Metabolist movement, although his studies in Group Form continued to be of interest to the Metabolists. In 1964 he published a booklet entitled Investigations in Collective Form in which he investigated three urban forms: Compositional-form, Megastructure and Group Form. The Hillside Terrace is a series of projects commissioned by the Asakura family and undertaken in seven phases from 1967 to 1992. It includes residential, office and cultural buildings as well as the Royal Danish Embassy and is situated on both sides of Kyū-Yamate avenue in the Daikanyama district of Tokyo.
The execution of the designs evolves through the phases with exterior forms becoming more independent of the interior functions and new materials being employed. For example, the first phase has a raised pedestrian deck that gives access to shops and a restaurant and this was designed to be extended in subsequent phases but the idea, along with the original master plan, was discarded in later phases. By the third phase Maki moved away from the Modernist maxim of form follows function and started to design the building exteriors to better match the immediate environment. The project acted as a catalyst to the redevelopment of the whole area around Daikanyama Station.
Metabolism in context
Metabolism developed during the post-war period in a Japan that questioned its cultural identity. Initially the group had chosen the name Burnt Ash School to reflect the ruined state of firebombed Japanese cities and the opportunity they presented for radical re-building. Ideas of nuclear physics and biological growth were linked with Buddhist concepts of regeneration. Although Metabolism rejected visual references from the past, they embraced concepts of prefabrication and renewal from traditional Japanese architecture, especially the twenty-year cycle of the rebuilding of the Ise Shrine (to which Tange and Kawazoe were invited in 1953). The sacred rocks on which the shrine is built were seen by the Metabolists as symbolising a Japanese spirit that predated Imperial aspirations and modernising influences from the West.
In his Investigations in Collective Form, Maki coined the term Megastructure to refer to structures that house the whole or part of a city in a single structure. He derived the idea from vernacular forms of village architecture, projected into vast structures with the aid of modern technology. Reyner Banham borrowed Megastructure for the title of his 1976 book, which contained numerous built and unbuilt projects. He defined megastructures as modular units (with a short life span) attached to a structural framework (with a longer life span). Maki would later criticise the Megastructure approach to design, advocating instead his idea of Group Form, which he thought would better accommodate the disorder of the city.
The architect Robin Boyd readily interchanged the words Metabolism and Archigram in his 1968 book New Directions in Japanese Architecture. Indeed, the two groups both emerged in the 1960s and disbanded in the 1970s, and both used imagery of megastructures and cells, but their urban and architectural proposals were quite different. Although utopian in their ideals, the Metabolists were concerned with improving the social structure of society through their biologically inspired architecture, whereas Archigram were influenced by mechanics, information and electronic media, and their architecture was more utopian and less social.
Osaka Expo, 1970
Japan was selected as the site for the 1970 World Exposition and 330 hectares in the Senri Hills in Osaka Prefecture were set aside as the location. Japan had originally wanted to host a World Exposition in 1940 but it was cancelled with the escalation of the war. The one million people who had bought tickets for 1940 were allowed to use them in 1970.
Kenzo Tange joined the Theme Committee for the Expo and, along with Uzō Nishiyama, had responsibility for master planning the site. The theme for the Expo became "Progress and Harmony for Mankind". Tange invited twelve architects, including Arata Isozaki, Otaka and Kikutake, to design individual elements. He also asked Ekuan to oversee the design of the furniture and transportation and Kawazoe to curate the Mid-Air Exhibition, which was sited in the huge space-frame roof.
Tange envisioned that the Expo should be primarily conceived as a big festival where human beings could meet. Central to the site he placed the Festival Plaza onto which were connected a number of themed displays, all of which were united under one huge roof. In his Tokyo Bay Project Tange spoke about the living body having two types of information transmission systems: fluid and electronic. That project used the idea of a tree trunk and branches that would carry out those types of transmission in relation to the city. Kawazoe likened the space frame roof of the Festival Plaza to the electronic transmission system and the aerial-themed displays that plugged into it to the hormonal system.
Kawazoe, Maki and Kurokawa had invited a selection of world architects to design displays for the Mid-Air Exhibition that was to be incorporated within the roof. The architects included Moshe Safdie, Yona Friedman, Hans Hollein and Giancarlo De Carlo. Although Tange was obsessed with the theory of flexibility that the space frame provided, he did concede that in reality it was not so practical for the actual fixing of the displays. The roof itself was designed by Koji Kamaya and Mamoru Kawaguchi, who conceived it as a huge space frame. Kawaguchi invented a welding-free ball joint to safely distribute the load and worked out a method of assembling the frame on the ground before raising it using jacks.
Kikutake's Expo Tower was situated on the highest hill in the grounds and acted as a landmark for visitors. It was built as a vertical ball-jointed space frame onto which a series of cabins was attached. The design was to have been a blueprint for flexible vertical living based upon a 360 m³ standard construction cabin, clad with a membrane of cast aluminium and glass, that could be flexibly arranged anywhere on the tower. This was demonstrated with a variety of cabins that served as observation platforms and VIP rooms, and one cabin at ground level that became an information booth.
Kurokawa had won commissions for two corporate pavilions: the Takara Beautillion and the Toshiba IHI pavilion. The former was composed of capsules plugged into six-pointed frames and was assembled in just six days; the latter was a space frame composed of tetrahedron modules, based upon his Helix City, that could grow in 14 different directions, resembling organic growth.
Expo '70 has been described as the apotheosis of the Metabolist movement. But even before Japan's period of rapid economic growth ended with the world energy crisis, critics were calling the Expo a dystopia that was removed from reality. The energy crisis demonstrated Japan's reliance on imported oil and led to a re-evaluation of design and planning, with architects moving away from utopian projects towards smaller urban interventions.
Later years
After the 1970 Expo, Tange and the Metabolists turned their attention away from Japan towards the Middle East and Africa. These countries were expanding on the back of income from oil and were fascinated by both Japanese culture and the expertise that the Metabolists brought to urban planning. Tange and Kurokawa secured the majority of the commissions, but Kikutake and Maki were involved too.
Tange's projects included a 57,000 seat stadium and sports center in Riyadh for King Faisal, and a sports city for Kuwait for the planned 1974 Pan Arab Games. However, both were put on hold by the outbreak of the Fourth Arab–Israeli War in 1973. Likewise, the plan for a new city center in Tehran was cancelled after the 1979 revolution. He did however complete the Kuwaiti Embassy in Tokyo in 1970 and Kuwait's International Airport, as well as the Presidential Palace in Damascus, Syria.
Kurokawa's work included a competition win for Abu Dhabi's National Theatre (1977), capsule-tower designs for a hotel in Baghdad (1975) and a city in the desert in Libya (1979–1984).
Kikutake's vision for floating towers was partly realised in 1975 when he designed and built the Aquapolis for the Okinawa Ocean Expo. The 100 × 100 metre floating city block contained accommodation that included a banquet hall, offices and residences for 40 staff; it was built in Hiroshima and then towed to Okinawa. Further unbuilt floating city projects were undertaken, including a floating city in Hawaii for ocean research and a plug-in floating A-frame unit containing housing and offices that could have been used to provide mobile homes in the event of a natural disaster.
Footnotes
References
Kikutake Assocs, May–June 1970, "EXPO Tower", The Japan Architect
Pflumio, Cyril (2011) Je est une cabane dans le désert. Notes sur l'espace et l'architecture japonaise. (in French) Master's thesis, Strasbourg, Institut national des Sciences appliquées.
Sasaki, Takabumi, May–June 1970, "reportage: A Passage Through the Dys-topia of EXPO'70", The Japan Architect
Tange & Kawazoe, May–June 1970, "Some thoughts about EXPO 70 - Dialogue between Kenzo Tange and Noboru Kawazoe", The Japan Architect
Further reading
Noboru Kawazoe, et al. (1960). Metabolism 1960: The Proposals for a New Urbanism. Bijutsu Shuppan Sha.
Kisho Kurokawa (1977). Metabolism in Architecture. Studio Vista.
Kisho Kurokawa (1992). From Metabolism to Symbiosis. John Wiley & Sons.
Architectural styles
Modernist architecture
Architectural history
Architecture in Japan | Metabolism (architecture) | [
"Engineering"
] | 6,720 | [
"Architectural history",
"Architecture"
] |
1,071,088 | https://en.wikipedia.org/wiki/High%20Accuracy%20Radial%20Velocity%20Planet%20Searcher | The High Accuracy Radial Velocity Planet Searcher (HARPS) is a high-precision echelle planet-finding spectrograph installed in 2002 on the ESO's 3.6m telescope at La Silla Observatory in Chile. The first light was achieved in February 2003. HARPS has discovered over 130 exoplanets to date, with the first one in 2004, making it the most successful planet finder behind the Kepler space telescope. It is a second-generation radial-velocity spectrograph, based on experience with the ELODIE and CORALIE instruments.
Characteristics
The HARPS can attain a precision of 0.97 m/s (3.5 km/h), making it one of only two instruments worldwide with such accuracy. This is due to a design in which the target star and a reference spectrum from a thorium lamp are observed simultaneously using two identical optical fibre feeds, and to careful attention to mechanical stability: the instrument sits in a vacuum vessel whose temperature is controlled to within 0.01 kelvin. The precision and sensitivity of the instrument are such that it incidentally produced the best available measurement of the thorium spectrum. Planet detection is in some cases limited by the seismic pulsations of the star observed rather than by limitations of the instrument.
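The scale of this precision can be put in context with the standard two-body radial-velocity relation. The minimal sketch below (constants rounded; the function name and example values are illustrative and not part of any HARPS software) shows why a roughly 1 m/s floor suffices for a Jupiter analogue but not for an Earth analogue around a Sun-like star:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
M_EARTH = 5.972e24   # kg
YEAR = 3.156e7       # seconds

def rv_semi_amplitude(m_planet, m_star, period, inclination=math.pi / 2, ecc=0.0):
    """Stellar radial-velocity semi-amplitude K in m/s (standard two-body result)."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * math.sin(inclination)
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

print(rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR))  # ~12.5 m/s: well above the floor
print(rv_semi_amplitude(M_EARTH, M_SUN, 1.0 * YEAR))  # ~0.09 m/s: below the 0.97 m/s floor
```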
The principal investigator on the HARPS is Michel Mayor, who, along with Didier Queloz and Stéphane Udry, has used the instrument to characterize the Gliese 581 planetary system, home to one of the smallest known exoplanets orbiting a normal star and to two super-Earths whose orbits lie in the star's habitable zone.
It was initially used for a survey of one thousand stars.
Since October 2012 the HARPS spectrograph has had the precision to detect a new category of planets: habitable super-Earths. This sensitivity was expected from simulations of stellar intrinsic signals and actual observations of planetary systems. Currently, the HARPS can detect habitable super-Earths only around low-mass stars, as these are more strongly affected by the gravitational tug of their planets and have habitable zones close to the host star.
Discoveries
This is an incomplete list of exoplanets discovered by the HARPS. The list is sorted by the date of the discovery's announcement. As of December 2017, the list contains 134 exoplanets.
Gallery
See also
Similar instruments:
HARPS-N is a copy of this instrument installed in the northern hemisphere in 2012.
HARPS3 is an updated design of this instrument that will be installed on an upgraded and roboticised Isaac Newton Telescope in 2024.
Fiber-optic Improved Next-generation Doppler Search for Exo-Earths, operating at Lick observatory since 2009
Anglo-Australian Planet Search or AAPS is another southern hemisphere planet search program.
ESPRESSO is a new-generation spectrograph for ESO's VLT.
Automated Planet Finder, at the Lick observatory, commissioned in 2013.
CAFE (Calar Alto Fibre-fed Echelle spectrograph) installed on the Calar Alto Observatory's 2.2-metre telescope in 2014, and the CARMENES mounted on the 3.5-metre telescope in 2016.
EXPRES is a third generation radial velocity spectrograph that is planned to be installed on the Lowell Discovery Telescope.
Space-based detectors:
CoRoT, spacecraft operating since 2007
Kepler space telescope, operational until 2018
Terrestrial Planet Finder, cancelled
Space Interferometry Mission, construction halted in 2010
Darwin, early studies for a multi-satellite mission
Notes
References
External links
(Contains list of discoveries from 2005 survey.)
Astronomical instruments
Telescope instruments
Exoplanet search projects
Spectrographs
European Southern Observatory
Articles containing video clips | High Accuracy Radial Velocity Planet Searcher | [
"Physics",
"Chemistry",
"Astronomy"
] | 755 | [
"Exoplanet search projects",
"Telescope instruments",
"Spectrum (physical sciences)",
"Spectrographs",
"Astronomical instruments",
"Astronomy projects",
"Spectroscopy"
] |
1,072,213 | https://en.wikipedia.org/wiki/Agricultural%20lime | Agricultural lime, also called aglime, agricultural limestone, garden lime or liming, is a soil additive made from pulverized limestone or chalk. The primary active component is calcium carbonate. Additional chemicals vary depending on the mineral source and may include calcium oxide. Unlike the types of lime called quicklime (calcium oxide) and slaked lime (calcium hydroxide), powdered limestone does not require lime burning in a lime kiln; it only requires milling. All of these types of lime are sometimes used as soil conditioners, with a common theme of providing a base to correct acidity, but lime for farm fields today is often crushed limestone. Historically, liming of farm fields in centuries past was often done with burnt lime; the difference is at least partially explained by the fact that affordable mass-production-scale fine milling of stone and ore relies on technologies developed since the mid-19th century.
Some effects of agricultural lime on soil are:
it increases the pH of acidic soil, reducing soil acidity and increasing alkalinity
it provides a source of calcium for plants
it improves water penetration for acidic soils
it improves the uptake of major plant nutrients (nitrogen, phosphorus, and potassium) of plants growing on acid soils.
Other forms of lime have common applications in agriculture and gardening, including dolomitic lime and hydrated lime. Dolomitic lime may be used as a soil input to provide similar effects as agricultural lime, while supplying magnesium in addition to calcium. In livestock farming, hydrated lime can be used as a disinfectant measure, producing a dry and alkaline environment in which bacteria do not readily multiply. In horticultural farming it can be used as an insect repellent, without causing harm to the pest or plant.
Spinner-style lime spreaders are generally used to spread agricultural lime on fields.
Agricultural lime is injected into coal burners at power plants to reduce pollutants such as NO2 and SO2 in the emissions.
Determining the need for agricultural lime
Lime can improve crop yield and the root system of plants and grass where soils are acidic. It does this by making the soil more basic, allowing the plants to absorb more nutrients. Lime is not a fertilizer but can be used in combination with fertilizers.
Soils become acidic in several ways. Locations that have high rainfall levels become acidic through leaching. Land used for crop and livestock purposes loses minerals over time through crop removal and becomes acidic. The application of modern chemical fertilizers is a major contributor to soil acidity through the processes by which the plant nutrients react in the soil.
Aglime can also benefit soils where the land is used for breeding and raising foraging animals. Bone growth is key to a young animal's development, and bones are composed primarily of calcium and phosphorus. Young mammals get their needed calcium through milk, which has calcium as one of its major components. Dairymen frequently apply aglime because it increases milk production.
The best way to determine if the soil is acidic or deficient in calcium or magnesium is with a soil test, which university agricultural extension departments in the United States typically provide for under $30. Farmers typically become interested in soil testing when they notice a decrease in crop response to applied fertilizer.
"Corrected lime potential" is used in soil testing laboratories to indicate whether lime is required.
Quality
The quality of agricultural limestone is determined by the chemical makeup of the limestone and how finely the stone is ground. To aid the farmer in determining the relative value of competing agricultural liming materials, the agricultural extension services of several universities use two rating systems. Calcium Carbonate Equivalent (CCE) and the Effective Calcium Carbonate Equivalent (ECCE) give a numeric value to the effectiveness of different liming materials.
The CCE compares the chemistry of a particular quarry's stone with the neutralizing power of pure calcium carbonate. Because magnesium carbonate has a lower molecular weight than calcium carbonate, and therefore more neutralizing molecules per unit mass, limestones containing magnesium carbonate (dolomite) can have a CCE greater than 100 percent.
Because the acids in soil are relatively weak, agricultural limestones must be ground to a small particle size to be effective. The extension services of different states rate the effectiveness of stone particle sizes slightly differently. They all agree, however, that the smaller the particle size, the more effective the stone is at reacting in the soil. Measuring the size of particles is based on the size of a mesh that the limestone would pass through. The mesh size is the number of wires per inch. Stone retained on an 8 mesh will be about the size of BB pellets. Material passing a 60 mesh screen will have the appearance of face powder. Particles larger than 8 mesh are of little or no value, particles between 8 mesh and 60 mesh are somewhat effective, and particles smaller than 60 mesh are 100 percent effective.
By combining the chemistry of a particular product (CCE) and its particle size, the Effective Calcium Carbonate Equivalent (ECCE) is determined. The ECCE is a percentage comparison of a particular agricultural limestone with pure calcium carbonate of which all particles are smaller than 60 mesh. Typically, the aglime materials in commercial use have ECCE values ranging from 45 percent to 110 percent.
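As an illustration of how the two ratings combine, the sketch below computes a CCE from a stone's carbonate content (weighting MgCO3 by the molar-mass ratio discussed above) and multiplies it by a fineness factor from a sieve analysis. The 50 percent effectiveness assigned to the 8–60 mesh fraction is an assumption standing in for "somewhat effective"; actual extension-service factors vary by state:

```python
# Molar masses (g/mol): pure CaCO3 defines 100% neutralizing value,
# so MgCO3 scores 100.09/84.31 ≈ 1.19 per unit mass.
CACO3, MGCO3 = 100.09, 84.31

def cce(pct_caco3, pct_mgco3):
    """Calcium Carbonate Equivalent (%) from the stone's carbonate content."""
    return pct_caco3 + pct_mgco3 * (CACO3 / MGCO3)

def fineness(frac_coarse, frac_medium, frac_fine, medium_eff=0.5):
    """Effectiveness-weighted fineness factor.

    frac_coarse: fraction retained on 8 mesh    (0% effective)
    frac_medium: fraction between 8 and 60 mesh (assumed 50% effective here)
    frac_fine:   fraction passing 60 mesh       (100% effective)
    """
    return 0.0 * frac_coarse + medium_eff * frac_medium + 1.0 * frac_fine

def ecce(pct_caco3, pct_mgco3, frac_coarse, frac_medium, frac_fine):
    return cce(pct_caco3, pct_mgco3) * fineness(frac_coarse, frac_medium, frac_fine)

# A dolomitic stone, 60% CaCO3 / 35% MgCO3, ground so that 10% is coarse,
# 30% medium and 60% fine:
print(ecce(60, 35, 0.10, 0.30, 0.60))  # ≈ 76 (percent ECCE)
```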
Brazil's case
Brazil's vast inland cerrado region was regarded as unfit for farming before the 1960s because the soil was too acidic and poor in nutrients, according to Nobel Peace Prize winner Norman Borlaug, an American plant scientist referred to as the father of the Green Revolution. However, from the 1960s, vast quantities of lime (pulverised chalk or limestone) were poured into the soil to reduce acidity. The effort continued, and in the late 1990s, between 14 million and 16 million tonnes of lime were spread on Brazilian fields each year. The quantity rose to 25 million tonnes in 2003 and 2004, equalling around five tonnes of lime per hectare. As a result, Brazil has become the world's second biggest soybean exporter, and thanks to the boom in animal feed production, Brazil is now the biggest exporter of beef and poultry in the world.
Effect on prehistoric mobility studies
A 2019 study demonstrated that agricultural lime affects strontium-based mobility studies, which attempt to identify where individual prehistoric people lived. Agricultural lime has a significant effect in areas with calcium-poor soils. In a systematic study of a river system in Denmark, the Karup River, more than half of the strontium in the river's catchment area was found to come from runoff of agricultural lime, and not from the surrounding natural environment. Such introduction of agricultural lime has resulted in researchers wrongly concluding that certain prehistoric individuals originated far abroad from their burial sites, because strontium isotopic results measured in their remains and personal effects were compared to burial sites contaminated by agricultural lime.
See also
Marl
Liming (soil)
Soil pH
References
Further reading
Transcription of 1919 text by Alva Agee.
"A Study of the Lime Potential, R.C. Turner, Research Branch, Canadian Department of Agriculture, 1965
Materials
Soil improvers
Limestone industry | Agricultural lime | [
"Physics"
] | 1,428 | [
"Materials",
"Matter"
] |
1,072,857 | https://en.wikipedia.org/wiki/Biosignature | A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life on a planet. Measurable attributes of life include its physical or chemical structures, its use of free energy, and the production of biomass and wastes.
The field of astrobiology uses biosignatures as evidence for the search for past or present extraterrestrial life.
Types
Biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically-formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments.
Atmospheric gases: Gases formed by metabolic processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether an observed feature is a true biosignature is complex. There are three criteria that a potential biosignature must meet to be considered viable for further research: Reliability, survivability, and detectability.
Reliability
A biosignature must be able to dominate over all other processes that may produce similar physical, spectral, and chemical features. When investigating a potential biosignature, scientists must carefully consider all other possible origins of the biosignature in question. Many forms of life are known to mimic geochemical reactions. One of the theories on the origin of life involves molecules developing the ability to catalyse geochemical reactions to exploit the energy being released by them. These are some of the earliest known metabolisms (see methanogenesis). In such case, scientists might search for a disequilibrium in the geochemical cycle, which would point to a reaction happening more or less often than it should. A disequilibrium such as this could be interpreted as an indication of life.
Survivability
A biosignature must be able to last for long enough so that a probe, telescope, or human can be able to detect it. A consequence of a biological organism's use of metabolic reactions for energy is the production of metabolic waste. In addition, the structure of an organism can be preserved as a fossil, and some fossils on Earth are as old as 3.5 billion years. These byproducts can make excellent biosignatures since they provide direct evidence for life. However, to be a viable biosignature, a byproduct must subsequently remain intact so that scientists may discover it.
Detectability
A biosignature must be detectable with the latest technology to be relevant in scientific investigation. This seems an obvious statement; however, there are many scenarios in which life may be present on a planet yet remain undetectable because of limitations in human technology.
False positives
Every possible biosignature is associated with its own set of unique false positive mechanisms, or non-biological processes that can mimic the detectable feature of a biosignature. An important example is using oxygen as a biosignature. On Earth, the majority of life is centred around oxygen. It is a byproduct of photosynthesis and is subsequently used by other life forms to breathe. Oxygen is also readily detectable in spectra, with multiple bands across a relatively wide wavelength range; it therefore makes a very good biosignature. However, finding oxygen alone in a planet's atmosphere is not enough to confirm a biosignature, because of the false-positive mechanisms associated with it. One possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of non-condensable gases or if the planet loses a lot of water. Finding and distinguishing a biosignature from its potential false-positive mechanisms is one of the most complicated parts of testing for viability, because it relies on human ingenuity to break an abiotic–biological degeneracy, if nature allows.
False negatives
Opposite to false positives, false negative biosignatures arise in a scenario where life may be present on another planet, but some processes on that planet make potential biosignatures undetectable. This is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres.
Human limitations
There are many ways in which humans may limit the viability of a potential biosignature. The resolution of a telescope becomes important when vetting certain false-positive mechanisms, and many current telescopes lack the capability to observe at the resolution needed to investigate some of them. In addition, probes and telescopes are developed by huge collaborations of scientists with varying interests. As a result, new probes and telescopes carry a variety of instruments that are a compromise between everyone's unique inputs. The capability of an instrument to search for biosignatures may be sacrificed so that other instruments can detect phenomena unrelated to biosignatures.
General examples
Geomicrobiology
The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox-sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements).
For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 carbon atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. Another example is the presence of straight-chain lipids in the form of alkanes, alcohols and fatty acids with 20–36 carbon atoms in soils or sediments. In peat deposits, these are an indication of an origin from the epicuticular wax of higher plants.
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct sizes, shapes, and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible.
Morphology
Another possible biosignature might be morphology since the shape and size of certain objects may potentially indicate the presence of past or present life. For example, microscopic magnetite crystals in the Martian meteorite ALH84001 are one of the longest-debated of several potential biosignatures in that specimen. The possible biomineral studied in the Martian ALH84001 meteorite includes putative microbial fossils, tiny rock-like structures whose shape was a potential biosignature because it resembled known bacteria. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence in addition to any morphological data that supports such extraordinary claims. Currently, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection". Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation.
Chemistry
No single compound will prove life once existed. Rather, it will be distinctive patterns present in any organic compounds showing a process of selection. For example, membrane lipids left behind by degraded cells will be concentrated, have a limited size range, and comprise an even number of carbons. Similarly, life only uses left-handed amino acids. Biosignatures need not be chemical, however; life can also be suggested by a distinctive magnetic signature.
Chemical biosignatures include any suite of complex organic compounds composed of carbon, hydrogen, and other elements or heteroatoms such as oxygen, nitrogen, and sulfur, which are found in crude oils, bitumen, petroleum source rock and eventually show simplification in molecular structure from the parent organic molecules found in all living organisms. They are complex carbon-based molecules derived from formerly living organisms. Each biomarker is quite distinctive when compared to its counterparts, as the time required for organic matter to convert to crude oil is characteristic. Most biomarkers also usually have high molecular mass.
Some examples of biomarkers found in petroleum are pristane, triterpanes, steranes, phytane and porphyrin. Such petroleum biomarkers are produced via chemical synthesis using biochemical compounds as their main constituents. For instance, triterpenes are derived from biochemical compounds found on land angiosperm plants. The abundance of petroleum biomarkers in small amounts in its reservoir or source rock make it necessary to use sensitive and differential approaches to analyze the presence of those compounds. The techniques typically used include gas chromatography and mass spectrometry.
Petroleum biomarkers are highly important in petroleum inspection as they help indicate the depositional territories and determine the geological properties of oils. For instance, they provide more details concerning their maturity and the source material. In addition to that they can also be good parameters of age, hence they are technically referred to as "chemical fossils". The ratio of pristane to phytane (pr:ph) is the geochemical factor that allows petroleum biomarkers to be successful indicators of their depositional environments.
Geologists and geochemists use biomarker traces found in crude oils and their related source rock to unravel the stratigraphic origin and migration patterns of presently existing petroleum deposits. The dispersion of biomarker molecules is also quite distinctive for each type of oil and its source; hence, they display unique fingerprints. Another factor that makes petroleum biomarkers more preferable than their counterparts is that they have a high tolerance to environmental weathering and corrosion. Such biomarkers are very advantageous and often used in the detection of oil spillage in the major waterways. The same biomarkers can also be used to identify contamination in lubricant oils. However, biomarker analysis of untreated rock cuttings can be expected to produce misleading results. This is due to potential hydrocarbon contamination and biodegradation in the rock samples.
Atmospheric
The atmospheric properties of exoplanets are of particular importance, as atmospheres provide the most likely observables for the near future, including habitability indicators and biosignatures. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth.
An exoplanet's color – or reflectance spectrum – can also be used as a biosignature due to the effect of pigments that are uniquely biological in origin, such as the pigments of phototrophic and photosynthetic life forms. Scientists use the Earth, as viewed from far away (see Pale Blue Dot), as an example for comparison with worlds observed outside of our solar system. Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths that may be detected by the new generation of space observatories under development.
Some scientists have reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. For example, the presence of oxygen and methane together could indicate the kind of extreme thermochemical disequilibrium generated by life. Two of the top 14,000 proposed atmospheric biosignatures are dimethyl sulfide and chloromethane (CH3Cl). An alternative biosignature is the combination of methane and carbon dioxide.
The detection of phosphine in the atmosphere of Venus is being investigated as a possible biosignature.
Atmospheric disequilibrium
A disequilibrium in the abundance of gas species in an atmosphere can be interpreted as a biosignature. Life has greatly altered the atmosphere on Earth in a way that would be unlikely for any other processes to replicate. Therefore, a departure from equilibrium is evidence for a biosignature. For example, the abundance of methane in the Earth's atmosphere is orders of magnitude above the equilibrium value due to the constant methane flux that life on the surface emits. Depending on the host star, a disequilibrium in the methane abundance on another planet may indicate a biosignature.
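The size of such a disequilibrium can be estimated with a one-box steady-state model, in which the atmospheric burden of a gas equals its source flux multiplied by its photochemical lifetime. The sketch below uses round, illustrative values for Earth's methane budget; it is a back-of-the-envelope model, not a photochemical calculation:

```python
# One-box steady state: burden = source flux × photochemical lifetime.
ATM_MASS = 5.15e18            # kg, mass of Earth's atmosphere
M_AIR, M_CH4 = 0.029, 0.016   # mean molar masses, kg/mol

flux = 550e9                  # kg/yr, rough global CH4 emission (largely biogenic)
lifetime = 9.0                # yr, approximate atmospheric lifetime of CH4

burden = flux * lifetime                      # kg of CH4 maintained in steady state
mixing_ratio = (burden / M_CH4) / (ATM_MASS / M_AIR)
print(f"{mixing_ratio * 1e9:.0f} ppb")        # ≈ 1700 ppb, close to the observed value

# Thermochemical equilibrium with atmospheric O2 would leave essentially zero CH4,
# so an abundance of this size is sustained only by a continuous (here, biological) flux.
```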
Agnostic biosignatures
Because the only form of known life is that on Earth, the search for biosignatures is heavily influenced by the products that life produces on Earth. However, life that is different from life on Earth may still produce biosignatures that are detectable by humans, even though nothing is known about their specific biology. This form of biosignature is called an "agnostic biosignature" because it is independent of the form of life that produces it. It is widely agreed that all life – no matter how different it is from life on Earth – needs a source of energy to thrive. This must involve some sort of chemical disequilibrium, which can be exploited for metabolism. Geological processes are independent of life, and if scientists can constrain the geology well enough on another planet, then they know what the particular geologic equilibrium for that planet should be. A deviation from geological equilibrium can be interpreted as an atmospheric disequilibrium and agnostic biosignature.
Antibiosignatures
In the same way that detecting a biosignature would be a significant discovery about a planet, finding evidence that life is not present can also be an important discovery about a planet. Life relies on redox imbalances to metabolize the resources available into energy. Evidence that nothing on a planet is taking advantage of the "free lunch" available from an observed redox imbalance is called an antibiosignature.
Polyelectrolytes
The Polyelectrolyte theory of the gene is a proposed generic biosignature. In 2002, Steven A. Benner and Daniel Hutter proposed that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. Benner and others proposed methods for concentrating and analyzing these polyelectrolyte genetic biopolymers on Mars, Enceladus, and Europa.
Specific examples
Methane on Mars
The presence of methane in the atmosphere of Mars is an area of ongoing research and a highly contentious subject. Because of its tendency to be destroyed in the atmosphere by photochemistry, the presence of excess methane on a planet can indicate that there must be an active source. With life being the strongest source of methane on Earth, observing a disequilibrium in the methane abundance on another planet could be a viable biosignature.
Since 2004, there have been several detections of methane in the Mars atmosphere by a variety of instruments onboard orbiters and ground-based landers on the Martian surface as well as Earth-based telescopes. These missions reported values ranging from a 'background level' of 0.24 to 0.65 parts per billion by volume (p.p.b.v.) up to as much as 45 ± 10 p.p.b.v.
However, recent measurements using the ACS and NOMAD instruments on board the ESA-Roscosmos ExoMars Trace Gas Orbiter have failed to detect any methane over a range of latitudes and longitudes on both Martian hemispheres. These highly sensitive instruments were able to put an upper bound on the overall methane abundance at 0.05 p.p.b.v. This nondetection is a major contradiction to what was previously observed with less sensitive instruments and will remain a strong argument in the ongoing debate over the presence of methane in the Martian atmosphere.
Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars and its reported rapid variations in space and time. Neither its fast appearance nor disappearance can be explained yet. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-13 in methane could distinguish between a biogenic and non-biogenic origin, similarly to the use of the δ13C standard for recognizing biogenic methane on Earth.
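For reference, δ13C expresses a sample's carbon-13 to carbon-12 ratio relative to the Vienna Pee Dee Belemnite (VPDB) standard, in parts per thousand (per mil). A minimal sketch, with an illustrative sample ratio:

```python
R_VPDB = 0.011180   # commonly quoted 13C/12C ratio of the VPDB standard

def delta13c(r_sample):
    """delta-13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Biogenic methane on Earth is strongly depleted in 13C (often below -60 per mil),
# whereas abiotic methane tends to be less depleted; the value below is illustrative.
print(delta13c(0.01052))   # ≈ -59 per mil, toward the biogenic range
```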
Martian atmosphere
The Martian atmosphere contains high abundances of photochemically produced CO and H2, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leading to a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can be observed, scientists use this as evidence for an antibiosignature. Scientists have used this concept as an argument against life on Mars.
Missions inside the Solar System
Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined not only by the probability of life creating it but also by the improbability of non-biological (abiotic) processes producing it. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered requires proving that a possible biosignature was produced by the activities or remains of life. As with most scientific discoveries, discovery of a biosignature will require evidence building up until no other explanation exists.
Possible examples of a biosignature include complex organic molecules or structures whose formation is virtually unachievable in the absence of life:
Cellular and extracellular morphologies
Biomolecules in rocks
Bio-organic molecular structures
Chirality
Biogenic minerals
Biogenic isotope patterns in minerals and organic compounds
Atmospheric gases
Photosynthetic pigments
The Viking missions to Mars
The Viking missions to Mars in the 1970s conducted the first experiments which were explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared inconclusive.
Mars Science Laboratory
The Mars Science Laboratory mission, with its Curiosity rover, is currently assessing the potential past and present habitability of the Martian environment and is attempting to detect biosignatures on the surface of Mars. Considering the MSL instrument payload package, the following classes of biosignatures are within the MSL detection window: organism morphologies (cells, body fossils, casts), biofabrics (including microbial mats), diagnostic organic molecules, isotopic signatures, evidence of biomineralization and bioalteration, spatial patterns in chemistry, and biogenic gases. The Curiosity rover targets outcrops to maximize the probability of detecting 'fossilized' organic matter preserved in sedimentary deposits.
ExoMars Orbiter
The 2016 ExoMars Trace Gas Orbiter (TGO) is a Mars telecommunications orbiter and atmospheric gas analyzer mission. It delivered the Schiaparelli EDM lander and then began to settle into its science orbit to map the sources of methane on Mars and other gases, and in doing so, will help select the landing site for the Rosalind Franklin rover to be launched in 2022. The primary objective of the Rosalind Franklin rover mission is the search for biosignatures on the surface and subsurface by using a drill able to collect samples down to a depth of two metres, away from the destructive radiation that bathes the surface.
Mars 2020 Rover
The Mars 2020 rover, which launched in 2020, is intended to investigate an astrobiologically relevant ancient environment on Mars, investigate its surface geological processes and history, including the assessment of its past habitability, the possibility of past life on Mars, and potential for preservation of biosignatures within accessible geological materials. In addition, it will cache the most interesting samples for possible future transport to Earth.
Titan Dragonfly
NASA's Dragonfly lander/aircraft concept is proposed to launch in 2025 and would seek evidence of biosignatures on the organic-rich surface and atmosphere of Titan, as well as study its possible prebiotic primordial soup. Titan is the largest moon of Saturn and is widely believed to have a large subsurface ocean consisting of a salty brine. In addition, scientists believe that Titan may have the conditions necessary to promote prebiotic chemistry, making it a prime candidate for biosignature discovery.
Europa Clipper
NASA's Europa Clipper probe is designed as a flyby mission to Jupiter's smallest Galilean moon, Europa. The mission launched in October 2024 and is set to reach Europa in April 2030, where it will investigate the potential for habitability on Europa. Europa is one of the best candidates for biosignature discovery in the Solar System because of the scientific consensus that it retains a subsurface ocean, with two to three times the volume of water on Earth. Evidence for this subsurface ocean includes:
Voyager 1 (1979): The first close-up photos of Europa are taken. Scientists propose that a subsurface ocean could cause the tectonic-like marks on the surface.
Galileo (1997): The magnetometer aboard this probe detected a subtle change in the magnetic field near Europa. This was later interpreted as a disruption in the expected magnetic field due to the current induction in a conducting layer on Europa. The composition of this conducting layer is consistent with a salty subsurface ocean.
Hubble Space Telescope (2012): An image was taken of Europa which showed evidence for a plume of water vapor coming off the surface.
The Europa Clipper probe includes instruments to help confirm the existence and composition of a subsurface ocean and thick icy layer. In addition, the instruments will be used to map and study surface features that may indicate tectonic activity due to a subsurface ocean.
Enceladus
Although there are no set plans to search for biosignatures on Saturn's sixth-largest moon, Enceladus, the prospects of biosignature discovery there are exciting enough to warrant several mission concepts that may be funded in the future. Similar to Jupiter's moon Europa, there is much evidence that a subsurface ocean also exists on Enceladus. Plumes of water vapor were first observed in 2005 by the Cassini mission and were later determined to contain salt as well as organic compounds. In 2014, further evidence was presented, using gravimetric measurements on Enceladus, to conclude that there is in fact a large reservoir of water underneath an icy surface. Mission design concepts include:
Enceladus Life Finder (ELF)
Enceladus Life Signatures and Habitability
Enceladus Organic Analyzer
Enceladus Explorer (En-Ex)
Explorer of Enceladus and Titan (E2T)
Journey to Enceladus and Titan (JET)
Life Investigation For Enceladus (LIFE)
Testing the Habitability of Enceladus's Ocean (THEO)
All of these concept missions have similar science goals: To assess the habitability of Enceladus and search for biosignatures, in line with the strategic map for exploring the ocean-world Enceladus.
Searching outside of the Solar System
At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). It is currently not feasible to send humans or even probes to search for biosignatures outside of the Solar System. The only way to search for biosignatures outside of the Solar System is by observing exoplanets with telescopes.
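The travel-time figure follows from a direct unit conversion, as the short sketch below shows (using the speed and distance quoted above):

```python
LIGHT_YEAR_KM = 9.461e12      # kilometres per light-year
speed_kmh = 250_000           # Juno's speed as quoted above, km/h
distance_km = 4.2 * LIGHT_YEAR_KM

hours = distance_km / speed_kmh
years = hours / (24 * 365.25)
print(f"{years:,.0f} years")  # ≈ 18,100 years
```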
There have been no plausible or confirmed biosignature detections outside of the Solar System. Despite this, it is a rapidly growing field of research due to the prospects of the next generation of telescopes. The James Webb Space Telescope, which launched in December 2021, is a promising next step in the search for biosignatures. Although its wavelength range and resolution are not compatible with some of the more important atmospheric biosignature gas bands, like oxygen, it will still be able to detect some evidence for oxygen false-positive mechanisms.
The new generation of ground-based 30-meter class telescopes (Thirty Meter Telescope and Extremely Large Telescope) will have the ability to take high-resolution spectra of exoplanet atmospheres at a variety of wavelengths. These telescopes will be capable of distinguishing some of the more difficult false positive mechanisms such as the abiotic buildup of oxygen via photolysis. In addition, their large collecting area will enable high angular resolution, making direct imaging studies more feasible.
See also
Bioindicator
MERMOZ (remote detection of lifeforms)
Taphonomy
Technosignature
References
Astrobiology
Astrochemistry
Bioindicators
Biology terminology
Search for extraterrestrial intelligence
Petroleum geology | Biosignature | [
"Chemistry",
"Astronomy",
"Biology",
"Environmental_science"
] | 5,521 | [
"Bioindicators",
"Origin of life",
"Speculative evolution",
"Environmental chemistry",
"Astrobiology",
"Petroleum",
"Astrochemistry",
"nan",
"Biological hypotheses",
"Petroleum geology",
"Astronomical sub-disciplines"
] |
1,074,464 | https://en.wikipedia.org/wiki/Well%20test | In hydrology, a well test is conducted to evaluate the amount of water that can be pumped from a particular water well. More specifically, a well test will allow prediction of the maximum rate at which water can be pumped from a well, and the distance that the water level in the well will fall for a given pumping rate and duration of pumping.
Well testing differs from aquifer testing in that the behaviour of the well is primarily of concern in the former, while the characteristics of the aquifer (the geological formation or unit that supplies water to the well) are quantified in the latter.
When water is pumped from a well the water level in the well falls. This fall is called drawdown. The amount of water that can be pumped is limited by the drawdown produced. Typically, drawdown also increases with the length of time that the pumping continues.
Well losses vs. aquifer losses
The components of observed drawdown in a pumping well were first described by Jacob (1947), and the test was refined independently by Hantush (1964) and Bierschenk (1963) as consisting of two related components,
s = BQ + CQ²,
where s is drawdown (units of length e.g., m), Q is the pumping rate (units of volume flowrate e.g., m³/day), B is the aquifer loss coefficient (which increases with time — as predicted by the Theis solution) and C is the well loss coefficient (which is constant for a given flow rate).
The first term of the equation (BQ) describes the linear component of the drawdown; i.e., the part in which doubling the pumping rate doubles the drawdown.
The second term (CQ²) describes what is often called the 'well losses'; the non-linear component of the drawdown. To quantify this it is necessary to pump the well at several different flow rates (commonly called steps). Rorabaugh (1953) added to this analysis by making the exponent an arbitrary power P (usually between 1.5 and 3.5).
To analyze this equation, both sides are divided by the discharge rate (Q), leaving s/Q on the left side, which is commonly referred to as specific drawdown. The right hand side of the equation, B + CQ, is that of a straight line in Q. Plotting the specific drawdown after a set amount of time since the beginning of each step of the test (since drawdown will continue to increase with time) versus pumping rate should therefore produce a straight line.
Fitting a straight line through the observed data, the slope of the best fit line will be C (well losses) and the intercept of this line at Q = 0 will be B (aquifer losses). This process fits an idealized model to real-world data, and determines which parameters of the model make it fit reality best. The assumption is then made that these fitted parameters best represent reality (given that the assumptions that went into the model are true).
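As a concrete illustration, the straight-line fit can be done with an ordinary least squares routine. The sketch below is a minimal example using synthetic step-test data (the flow rates, drawdowns, and recovered coefficients are illustrative, not from any real well):

```python
import numpy as np

# Step-drawdown test: pumping rate Q (m³/day) and drawdown s (m)
# observed after a fixed elapsed time in each step (synthetic data).
Q = np.array([500.0, 1000.0, 1500.0, 2000.0])
s = np.array([1.30, 3.00, 5.10, 7.60])

# Specific drawdown s/Q should fall on the line  s/Q = B + C*Q.
sQ = s / Q
C, B = np.polyfit(Q, sQ, 1)  # slope = C (well losses), intercept = B (aquifer losses)

print(f"B (aquifer loss coefficient) = {B:.2e} day/m²")
print(f"C (well loss coefficient)    = {C:.2e} day²/m⁵")
```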
The relationship above is for fully penetrating wells in confined aquifers (the same assumptions used in the Theis solution for determining aquifer characteristics in an aquifer test).
Well efficiency
Often the well efficiency is determined from this sort of test; this is a percentage indicating the fraction of total observed drawdown in a pumping well which is due to aquifer losses (as opposed to being due to flow through the well screen and inside the borehole). A perfectly efficient well, with a perfect well screen and where the water flows inside the well in a frictionless manner, would have 100% efficiency. Unfortunately well efficiency is hard to compare between wells because it also depends on the characteristics of the aquifer (the same amount of well losses in a more transmissive aquifer yields a lower efficiency).
Specific capacity
Specific capacity is the rate of discharge a water well can produce per unit of drawdown. It is normally obtained from a step drawdown test. Specific capacity is expressed as:
SC = Q / s
where
SC is the specific capacity ([L²T⁻¹]; m²/day or USgal/day/ft)
Q is the pumping rate ([L³T⁻¹]; m³/day or USgal/day), and
s is the drawdown ([L]; m or ft)
The specific capacity of a well is also a function of the pumping rate it is determined at. Due to non-linear well losses the specific capacity will decrease with higher pumping rates. This complication makes the absolute value of specific capacity of little use; though it is useful for comparing the efficiency of the same well through time (e.g., to see if the well requires rehabilitation).
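This decline follows directly from the Hantush–Bierschenk relation above: SC = Q/s = 1/(B + CQ). A minimal sketch, reusing hypothetical coefficients of the kind fitted in the step-test example:

```python
B, C = 2.0e-3, 8.0e-7                # hypothetical step-test coefficients
for Q in (500.0, 1000.0, 2000.0):    # pumping rates in m³/day
    SC = 1.0 / (B + C * Q)           # SC = Q/s with s = B*Q + C*Q²
    print(f"Q = {Q:6.0f} m³/day  ->  specific capacity = {SC:5.0f} m²/day")
```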
References
Bierschenk, William H., 1963. Determining well efficiency by multiple step-drawdown tests. International Association of Scientific Hydrology, 64:493-507.
Hantush, Mahdi S., 1964. Advances in Hydroscience, chapter Hydraulics of Wells, pp 281–442. Academic Press.
Jacob, C.E., 1947. Drawdown test to determine effective radius of artesian well. Transactions, American Society of Civil Engineers, 112(2312):1047-1070.
Rorabaugh, M.I., 1953. Graphical and theoretical analysis of step-drawdown test of artesian wells. Transactions, American Society of Civil Engineers, 79(separate 362):1-23.
Additional references on pumping test analysis methods other than the one described above (typically referred to as the Hantush-Bierschenk method) can be found in the general references on aquifer tests and hydrogeology.
See also
Hydrogeology
Aquifer and Groundwater
Aquifers
Hydrology
Hydraulic engineering
Water wells | Well test | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,164 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Water wells",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering"
] |
2,296,085 | https://en.wikipedia.org/wiki/Rietveld%20refinement | Rietveld refinement is a technique described by Hugo Rietveld for use in the characterisation of crystalline materials. The neutron and X-ray diffraction of powder samples results in a pattern characterised by reflections (peaks in intensity) at certain positions. The height, width and position of these reflections can be used to determine many aspects of the material's structure.
The Rietveld method uses a least squares approach to refine a theoretical line profile until it matches the measured profile. The introduction of this technique was a significant step forward in the diffraction analysis of powder samples as, unlike other techniques at that time, it was able to deal reliably with strongly overlapping reflections.
The method was first implemented in 1967, and reported in 1969 for the diffraction of monochromatic neutrons where the reflection-position is reported in terms of the Bragg angle, 2θ. This terminology will be used here although the technique is equally applicable to alternative scales such as x-ray energy or neutron time-of-flight. The only wavelength and technique independent scale is in reciprocal space units or momentum transfer Q, which is historically rarely used in powder diffraction but very common in all other diffraction and optics techniques. The relation is
Q = 4π sin θ / λ
Introduction
The most common powder X-ray diffraction (XRD) refinement technique used today is based on the method proposed in the 1960s by Hugo Rietveld. The Rietveld method fits a calculated profile (including all structural and instrumental parameters) to experimental data. It employs the non-linear least squares method, and requires a reasonable initial approximation of many free parameters, including the peak shape, unit cell dimensions and coordinates of all atoms in the crystal structure. Other parameters can be guessed and still be reasonably refined. In this way one can refine the crystal structure of a powder material from PXRD data. The successful outcome of the refinement is directly related to the quality of the data, the quality of the model (including initial approximations), and the experience of the user.
The Rietveld method is a powerful technique which began a remarkable era for powder XRD and materials science in general. Powder XRD is at heart a very basic experimental technique with diverse applications and experimental options. Despite the one-dimensionality of PXRD data and its limited resolution, the technique yields a surprising amount of information: the accuracy of a crystal structure model can be assessed by fitting a profile to a 1D plot of observed intensity vs angle. Rietveld refinement requires a crystal structure model and offers no way to come up with such a model on its own. However, it can be used to find structural details missing from a partial or complete ab initio structure solution, such as unit cell dimensions, phase quantities, crystallite sizes/shapes, atomic coordinates/bond lengths, micro strain in the crystal lattice, texture, and vacancies.
Powder diffraction profiles: peak positions and shapes
Before exploring Rietveld refinement, it is necessary to establish a greater understanding of powder diffraction data and what information is encoded therein in order to establish a notion of how to create a model of a diffraction pattern, which is of course necessary in Rietveld refinement. A typical diffraction pattern can be described by the positions, shapes, and intensities of multiple Bragg reflections. Each of the three mentioned properties encodes some information relating to the crystal structure, the properties of the sample, and the properties of the instrumentation. Some of these contributions are shown in Table 1, below.
The structure of a powder pattern is essentially defined by instrumental parameters and two crystallographic parameters: unit cell dimensions, and atomic content and coordination. So, a powder pattern model can be constructed as follows:
Establish peak positions: Bragg peak positions are established from Bragg's law using the wavelength and d-spacing for a given unit cell.
Determine peak intensity: Intensity depends on the structure factor, and can be calculated from the structural model for individual peaks. This requires knowledge of the specific atomic coordination in the unit cell and geometrical parameters.
Peak shape for individual Bragg peaks: Represented by functions of the FWHM (which vary with Bragg angle) called the peak shape functions. Realistically ab initio modelling is difficult, and so empirically selected peak shape functions and parameters are used for modelling.
Sum: The individual peak shape functions are summed and added to a background function, leaving behind the resultant powder pattern.
It is easy to model a powder pattern given the crystal structure of a material. The opposite, determining the crystal structure from a powder pattern, is much more complicated. A brief explanation of the process follows, though it is not the focus of this article.
To determine structure from a powder diffraction pattern the following steps should be taken. First, Bragg peak positions and intensities should be found by fitting to a peak shape function including background. Next, peak positions should be indexed and used to determine unit cell parameters, symmetry, and content. Third, peak intensities determine space group symmetry and atomic coordination. Finally, the model is used to refine all crystallographic and peak shape function parameters. To do this successfully, there is a requirement for excellent data which means good resolution, low background, and a large angular range.
Peak shape functions
For general application of the Rietveld method, irrespective of the software used, the observed Bragg peaks in a powder diffraction pattern are best described by the so-called peak shape function (PSF). The PSF is a convolution of three functions: the instrumental broadening Ω(θ), the wavelength dispersion Λ(θ), and the specimen function Ψ(θ), with the addition of a background function, b(θ). It is represented as follows:
PSF(θ) = Ω(θ) ⊗ Λ(θ) ⊗ Ψ(θ) + b(θ),
where ⊗ denotes convolution, which is defined for two functions f and g as the integral:
(f ⊗ g)(θ) = ∫ f(τ) g(θ − τ) dτ
The instrumental function depends on the location and geometry of the source, monochromator, and sample. Wavelength function accounts for the distribution of the wavelengths in the source, and varies with the nature of the source and monochromatizing technique. The specimen function depends on several things. First is dynamic scattering, and secondly the physical properties of the sample such as crystallite size, and microstrain.
A short aside: unlike the other contributions, those from the specimen function can be interesting in materials characterization. As such, the effects of average crystallite size, τ, and microstrain, ε, on Bragg peak broadening, β (in radians), can be described as follows, where k is a constant:
β = kλ / (τ cos θ) and β = kε tan θ.
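For illustration, the first relation (the Scherrer equation) can be inverted to estimate crystallite size from an observed peak width. In the sketch below the wavelength, angle, and width are hypothetical, and k is taken as 0.9, a common choice for roughly spherical crystallites:

```python
import math

k = 0.9                          # shape constant (dimensionless)
wavelength = 1.5406e-10          # metres; Cu Kα1 chosen as a hypothetical source
two_theta = math.radians(30.0)   # peak position 2θ
beta = math.radians(0.25)        # instrument-corrected FWHM in radians (hypothetical)

# Scherrer: β = kλ / (τ cos θ)  ->  τ = kλ / (β cos θ)
tau = k * wavelength / (beta * math.cos(two_theta / 2.0))
print(f"estimated crystallite size ≈ {tau * 1e9:.1f} nm")   # ≈ 33 nm
```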
Returning to the peak shape function, the goal is to correctly model the Bragg peaks which exist in the observed powder diffraction data. In the most general form, the intensity Y_i of the i-th point (1 ≤ i ≤ n, where n is the number of measured points) is the sum of the contributions from the m overlapped Bragg peaks (1 ≤ k ≤ m) and the background b_i, and can be described as follows:
Y_i = b_i + Σ_k I_k y_k(x_k)
where I_k is the intensity of the k-th Bragg peak, and x_k = 2θ_i − 2θ_k. Since I_k is a multiplier, it is possible to analyze the behaviour of different normalized peak functions independently of peak intensity, under the condition that the integral of the PSF over infinity is unity. There are various functions that can be chosen to do this with varying degrees of complexity. The most basic functions used in this way to represent Bragg reflections are the Gauss and Lorentz functions. Most common, though, is the pseudo-Voigt function, a weighted sum of the former two (the full Voigt profile is a convolution of the two, but is computationally more demanding). The pseudo-Voigt profile is the basis for most other PSFs. The pseudo-Voigt function can be represented as:
y(x) = η G(x) + (1 − η) L(x),
where
G(x) = (C_G)^1/2 / (√π H) · exp(−C_G x²)
and
L(x) = (C_L)^1/2 / (π H′) · (1 + C_L x²)⁻¹
are the Gaussian and Lorentzian contributions, respectively.
Thus,
y(x) = η (C_G)^1/2 / (√π H) · exp(−C_G x²) + (1 − η) (C_L)^1/2 / (π H′) · (1 + C_L x²)⁻¹
where:
H and H′ are the full widths at half maximum (FWHM) of the Gaussian and Lorentzian components
x = (2θ_i − 2θ_k)/H_k is essentially the Bragg angle of the i-th point in the powder pattern with its origin in the position of the k-th peak, divided by the peak's FWHM
C_G = 4 ln 2 and C_L = 4 are normalization factors such that ∫G(x)dx = 1 and ∫L(x)dx = 1, respectively
H = (U tan²θ + V tan θ + W)^1/2, known as the Caglioti formula, is the FWHM as a function of θ for the Gauss and pseudo-Voigt profiles; U, V, and W are free parameters
H′ = U′/cos θ + V′ tan θ is the FWHM vs. θ for the Lorentz function; U′ and V′ are free variables
η = η₀ + η₁(2θ) + η₂(2θ)², where η is the pseudo-Voigt mixing parameter, and η₀, η₁, and η₂ are free variables.
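A minimal numerical sketch of this profile, assuming the normalization constants C_G = 4 ln 2 and C_L = 4 given above and, for simplicity, a single common FWHM H for both components (all numbers are illustrative):

```python
import numpy as np

C_G, C_L = 4.0 * np.log(2.0), 4.0   # normalization constants from above

def pseudo_voigt(delta_2theta, H, eta):
    """Normalized pseudo-Voigt: eta*G + (1-eta)*L, both with FWHM H (degrees)."""
    x = delta_2theta / H
    G = np.sqrt(C_G) / (np.sqrt(np.pi) * H) * np.exp(-C_G * x**2)
    L = np.sqrt(C_L) / (np.pi * H) / (1.0 + C_L * x**2)
    return eta * G + (1.0 - eta) * L

# A peak centred at 2θ = 30° with FWHM 0.2° and mixing parameter η = 0.6
two_theta = np.linspace(29.0, 31.0, 2001)
y = pseudo_voigt(two_theta - 30.0, H=0.2, eta=0.6)
area = y.sum() * (two_theta[1] - two_theta[0])
print(f"area ≈ {area:.3f}")   # ≈ 1, up to the truncated Lorentzian tails
```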
The pseudo-Voigt function, like the Gaussian and Lorentz functions, is a centrosymmetric function, and as such does not model asymmetry. This can be problematic for non-ideal powder XRD data, such as those collected at synchrotron radiation sources, which generally exhibit asymmetry due to the use of multiple focusing optics.
The Finger–Cox–Jephcoat function is similar to the pseudo-Voigt, but has better handling of asymmetry, which is treated in terms of axial divergence. The function is a convolution of the pseudo-Voigt with the intersection of the diffraction cone and a finite receiving slit length, using two geometrical parameters S/L and D/L, where S and D are the sample and the detector slit dimensions in the direction parallel to the goniometer axis, and L is the goniometer radius.
Peak shape as described in Rietveld's paper
The shape of a powder diffraction reflection is influenced by the characteristics of the beam, the experimental arrangement, and the sample size and shape. In the case of monochromatic neutron sources the convolution of the various effects has been found to result in a reflex almost exactly Gaussian in shape. If this distribution is assumed then the contribution of a given reflection to the profile y_i at position 2θ_i is:
y_i = I_k exp(−(4 ln 2 / H_k²)(2θ_i − 2θ_k)²)
where H_k is the full width at half peak height (full-width half-maximum), 2θ_k is the centre of the reflex, and I_k is the calculated intensity of the reflex (determined from the structure factor, the Lorentz factor, and multiplicity of the reflection).
At very low diffraction angles the reflections may acquire an asymmetry due to the vertical divergence of the beam.
Rietveld used a semi-empirical correction factor, A_s, to account for this asymmetry:
A_s = 1 − P·s·(2θ_i − 2θ_k)² / tan θ_k
where P is the asymmetry factor and s is +1, 0, or −1 depending on the difference 2θ_i − 2θ_k being positive, zero, or negative respectively.
At a given position more than one diffraction peak may contribute to the profile. The intensity is simply the sum of all reflections contributing at the point .
Integrated intensity
For a Bragg peak k, the observed integrated intensity J_k, as determined from numerical integration, is
J_k = Σ_i Y_i(obs)  (i = 1, …, j),
where j is the total number of data points in the range of the Bragg peak. The integrated intensity depends on multiple factors, and can be expressed as the following product:
I_k = K · p_k · L_θ · P_θ · A_θ · T_k · E_k · |F_k|²
where:
K: scale factor
p_k: multiplicity factor, which accounts for symmetrically equivalent points in the reciprocal lattice
L_θ: Lorentz multiplier, defined by diffraction geometry
P_θ: polarization factor
A_θ: absorption multiplier
T_k: preferred orientation factor
E_k: extinction factor (often neglected as it is usually insignificant in powders)
F_k: structure factor as determined by the crystal structure of the material
Peak width as described in Rietveld's paper
The widths of the diffraction peaks are found to broaden at higher Bragg angles. This angular dependency was originally represented by
H_k² = U tan²θ_k + V tan θ_k + W
where U, V, and W are the half-width parameters and may be refined during the fit.
Preferred orientation
In powder samples there is a tendency for plate- or rod-like crystallites to align themselves along the axis of a cylindrical sample holder. In solid polycrystalline samples the production of the material may result in greater volume fraction of certain crystal orientations (commonly referred to as texture). In such cases the reflex intensities will vary from that predicted for a completely random distribution. Rietveld allowed for moderate cases of the former by introducing a correction factor:
I_corr = I_random · exp(−G α²)
where I_random is the intensity expected for a random sample, G is the preferred orientation parameter and α is the acute angle between the scattering vector and the normal of the crystallites.
Refinement
The principle of the Rietveld method is to minimize a function which analyzes the difference between a calculated profile and the observed data . Rietveld defined such an equation as:
M = Σ_i W_i (y_i^obs − (1/c)·y_i^calc)²
where W_i is the statistical weight and c is an overall scale factor such that y^calc = c·y^obs.
Least squares method
The fitting method used in Rietveld refinement is the non-linear least squares approach. A detailed derivation of non-linear least squares fitting will not be given here; further detail can be found in Chapter 6 of Pecharsky and Zavalij's text. There are a few things to note, however. First, non-linear least squares fitting has an iterative nature for which convergence may be difficult to achieve if the initial approximation is too far from correct, or when the minimized function is poorly defined. The latter occurs when correlated parameters are being refined at the same time, which may result in divergence and instability of the minimization. This iterative nature also means that convergence to a solution does not occur immediately, since the method is not exact. Each iteration depends on the results of the last, which dictate the new set of parameters used for refinement. Thus, multiple refinement iterations are required to eventually converge to a possible solution.
Rietveld method basics
Using non-linear least squares minimization, the following system is solved:
Σ_i w_i (Y_i^obs − Y_i^calc)² = min
where Y_i^calc and Y_i^obs are the calculated and observed intensities, respectively, of the i-th point in the powder pattern, K is a scale factor, and n is the number of measured data points. The minimized function is given by:
Φ = Σ_i w_i (Y_i^obs − K·Y_i^calc)²
where w_i = 1/σ²[Y_i^obs] is the weight, and K from the previous equation is unity (since K is usually absorbed in the phase scale factor). The summation extends to all n data points. Considering the peak shape functions and accounting for the overlapping of Bragg peaks because of the one-dimensionality of XRD data, the expanded form of the above equation for the case of a single phase measured with a single wavelength becomes:
Y_i^calc = b_i + K · Σ_k I_k y_k(x_k)
where:
b_i is the background at the i-th data point.
K is the phase scale factor.
m is the number of Bragg reflections contributing to the intensity of the i-th point (1 ≤ k ≤ m).
I_k is the integrated intensity of the k-th Bragg peak.
y_k(x_k) is the peak shape function.
For a material that contains several phases (l = 1, …, p), the contribution from each is accounted for by modifying the above equation as follows:
Y_i^calc = b_i + Σ_l K_l Σ_k I_(k,l) y_(k,l)(x_(k,l))
It can easily be seen from the above equations that experimentally minimizing the background, which holds no useful structural information, is paramount for a successful profile fitting. For a low background, the functions are defined by contributions from the integrated intensities and peak shape parameters. But with a high background, the function being minimized depends on the adequacy of the background and not integrated intensities or peak shapes. Thus, a structure refinement cannot adequately yield structural information in the presence of a large background.
It is also worth noting the increased complexity brought forth by the presence of multiple phases. Each additional phase adds to the fitting, more Bragg peaks, and another scale factor tied to corresponding structural parameters, and peak shape. Mathematically they are easily accounted for, but practically, due to the finite accuracy and limited resolution of experimental data, each new phase can lower the quality and stability of the refinement. It is advantageous to use single phase materials when interested in finding precise structural parameters of a material. However, since the scale factors of each phase are determined independently, Rietveld refinement of multi phase materials can quantitatively examine the mixing ratio of each phase in the material.
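To make the single-phase minimization concrete, here is a minimal sketch assuming a linear background and Gaussian peak shapes; the positions, intensities, and noise are all synthetic, and scipy's generic least-squares routine stands in for a real Rietveld engine:

```python
import numpy as np
from scipy.optimize import least_squares

two_theta = np.linspace(20.0, 40.0, 801)
peak_positions = np.array([25.0, 32.5])      # fixed Bragg positions 2θ_k (illustrative)
peak_intensities = np.array([100.0, 60.0])   # fixed integrated intensities I_k

def model(params, x):
    K, H, b0, b1 = params                    # scale factor, FWHM, background terms
    y = b0 + b1 * x
    a = 4.0 * np.log(2.0)
    for c, I in zip(peak_positions, peak_intensities):
        # area-normalized Gaussian peak of FWHM H centred at c
        y = y + K * I * np.sqrt(a / np.pi) / H * np.exp(-a * ((x - c) / H) ** 2)
    return y

# Synthetic "observed" pattern: known true parameters plus noise
rng = np.random.default_rng(0)
y_obs = model([1.0, 0.3, 5.0, 0.1], two_theta) + rng.normal(0.0, 0.5, two_theta.size)

# Weighted residuals sqrt(w_i)·(Y_obs − Y_calc), with w_i = 1/Y_obs
def residuals(p):
    return (y_obs - model(p, two_theta)) / np.sqrt(np.abs(y_obs))

fit = least_squares(residuals, x0=[0.5, 0.5, 0.0, 0.0])
print("refined K, H, b0, b1:", np.round(fit.x, 3))
```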
Refinement parameters
Background
Generally, the background is calculated as a Chebyshev polynomial. In GSAS and GSAS-II it appears as follows: the background is treated as a Chebyshev polynomial of the first kind ("Handbook of Mathematical Functions", M. Abramowitz and I.A. Stegun, Ch. 22), with intensity given by:
I_b = Σ_j B_j T′_(j−1)(x)
where B_j are the coefficients, and the T′ are the shifted Chebyshev polynomials taken from Table 22.3, pg. 795 of the Handbook. They have the form:
T′_n(x) = Σ_m a_m x^m
and the values of the coefficients a_m are found in the Handbook. The angular range (2θ) is converted to x to make the Chebyshev polynomials orthogonal by
x = 2·(2θ − 2θ_min)/(2θ_max − 2θ_min) − 1,
the orthogonal range for these functions being −1 to +1.
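A small sketch of this background model, using numpy's Chebyshev utilities in place of the Handbook's tabulated coefficients (the coefficient values B_j are arbitrary here):

```python
import numpy as np
from numpy.polynomial import chebyshev

two_theta = np.linspace(10.0, 80.0, 500)

# Map the angular range onto [-1, 1], where the Chebyshev polynomials are orthogonal.
x = 2.0 * (two_theta - two_theta.min()) / (two_theta.max() - two_theta.min()) - 1.0

B = [12.0, -3.5, 1.2, 0.4]             # illustrative background coefficients B_j
background = chebyshev.chebval(x, B)   # sum over j of B_j * T_{j-1}(x)
print(background[:3])
```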
Other parameters
Now, given the considerations of background, peak shape functions, integrated intensity, and non-linear least squares minimization, the parameters used in the Rietveld refinement which put these things together can be introduced. Below are the groups of independent least squares parameters generally refined in a Rietveld refinement.
Background parameters: usually 1 to 12 parameters.
Sample displacement: sample transparency, and zero shift corrections. (move peak position)
Multiple peak shape parameters.
FWHM parameters: i.e. the Caglioti parameters (see the peak shape functions above)
Asymmetry parameters (FCJ parameters)
Unit cell dimensions
one to six parameters (a, b, c, α, β, γ), depending on the crystal family/system, for each present phase.
Preferred orientation, and sometimes absorption, porosity, and extinction coefficients, which can be independent for each phase.
Scale factors (for each phase)
Positional parameters of all independent atoms in the crystal model (generally 0 to 3 per atom).
Population parameters
Occupation of site positions by atoms.
Atomic displacement parameters
Isotropic and anisotropic (temperature) parameters.
Each Rietveld refinement is unique and there is no prescribed sequence of parameters to include in a refinement. It is up to the user to determine and find the best sequence of parameters for refinement. It is worth noting that it is rarely possible to refine all relevant variables simultaneously from the beginning of a refinement, nor near the end since the least squares fitting will be destabilized or lead to a false minimum. It is important for the user to determine a stopping point for a given refinement. Given the complexity of Rietveld refinement it is important to have a clear grasp of the system being studied (sample, and instrumentation) to ensure that results are accurate, realistic, and meaningful. High data quality, a large enough range, and a good model – to serve as the initial approximation in the least squares fitting – are necessary for a successful, reliable, and meaningful Rietveld refinement.
Figures of merit
Since refinement depends on finding the best fit between a calculated and experimental pattern, it is important to have a numerical figure of merit quantifying the quality of the fit. Below are the figures of merit generally used to characterize the quality of a refinement. They provide insight to how well the model fits the observed data.
Profile residual (reliability factor):
R_p = (Σ_i |Y_i^obs − Y_i^calc| / Σ_i Y_i^obs) × 100%
Weighted profile residual:
R_wp = [Σ_i w_i (Y_i^obs − Y_i^calc)² / Σ_i w_i (Y_i^obs)²]^1/2 × 100%
Bragg residual:
R_B = (Σ_j |I_j^obs − I_j^calc| / Σ_j I_j^obs) × 100%
Expected profile residual:
R_exp = [(n − p) / Σ_i w_i (Y_i^obs)²]^1/2 × 100%
Goodness of fit:
χ² = (R_wp / R_exp)²
It is worth mentioning that all but one (R_B) of these figures of merit include a contribution from the background. There are some concerns about the reliability of these figures, and there is no threshold or accepted value which dictates what represents a good fit. The most popular and conventional figure of merit used is the goodness of fit, which should approach unity given a perfect fit, though this is rarely the case. In practice, the best way to assess quality is a visual analysis of the fit, by plotting the difference between the observed and calculated data on the same scale.
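These figures are straightforward to compute. A minimal sketch, assuming the common weighting choice w_i = 1/Y_i^obs (the toy pattern values are illustrative):

```python
import numpy as np

def figures_of_merit(y_obs, y_calc, n_params):
    w = 1.0 / y_obs                           # common weighting assumption
    Rp = np.sum(np.abs(y_obs - y_calc)) / np.sum(y_obs) * 100
    Rwp = np.sqrt(np.sum(w * (y_obs - y_calc)**2) / np.sum(w * y_obs**2)) * 100
    Rexp = np.sqrt((y_obs.size - n_params) / np.sum(w * y_obs**2)) * 100
    chi2 = (Rwp / Rexp)**2                    # goodness of fit
    return Rp, Rwp, Rexp, chi2

# Toy observed/calculated pattern
y_obs = np.array([105.0, 230.0, 480.0, 260.0, 110.0])
y_calc = np.array([100.0, 240.0, 470.0, 255.0, 115.0])
print(figures_of_merit(y_obs, y_calc, n_params=2))
```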
References
Notes
Crystallography
Diffraction
Neutron-related techniques
Least squares | Rietveld refinement | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,020 | [
"Spectrum (physical sciences)",
"Materials science",
"Diffraction",
"Crystallography",
"Condensed matter physics",
"Spectroscopy"
] |
2,296,159 | https://en.wikipedia.org/wiki/Powder%20diffraction | Powder diffraction is a scientific technique using X-ray, neutron, or electron diffraction on powder or microcrystalline samples for structural characterization of materials. An instrument dedicated to performing such powder measurements is called a powder diffractometer.
Powder diffraction stands in contrast to single crystal diffraction techniques, which work best with a single, well-ordered crystal.
Explanation
The most common type of powder diffraction is with X-rays, which are the focus of this article, although some aspects of neutron powder diffraction are mentioned. (Powder electron diffraction is more complex due to dynamical diffraction and is not discussed further herein.) Typical diffractometers use electromagnetic radiation (waves) with known wavelength and frequency, determined by their source. The source is often X-rays, but neutrons are also common, with their frequency determined by their de Broglie wavelength. When these waves reach the sample, the incoming beam is either reflected off the surface, or can enter the lattice and be diffracted by the atoms present in the sample. If the atoms are arranged symmetrically with a separation distance d, these waves will interfere constructively only where the path-length difference 2d sin θ is equal to an integer multiple of the wavelength, producing a diffraction maximum in accordance with Bragg's law. These waves interfere destructively at points between the intersections, where the waves are out of phase, and do not lead to bright spots in the diffraction pattern. Because the sample itself is acting as the diffraction grating, this spacing is the atomic spacing.
The distinction between powder and single crystal diffraction is the degree of texturing in the sample. Single crystals have maximal texturing, and are said to be anisotropic. In contrast, in powder diffraction, every possible crystalline orientation is represented equally in a powdered sample, the isotropic case. Powder X-ray diffraction (PXRD) operates under the assumption that the sample is randomly arranged. Therefore, a statistically significant number of each plane of the crystal structure will be in the proper orientation to diffract the X-rays. Therefore, each plane will be represented in the signal. In practice, it is sometimes necessary to rotate the sample orientation to eliminate the effects of texturing and achieve true randomness.
Mathematically, crystals can be described by a Bravais lattice with some regularity in the spacing between atoms. Because of this regularity, we can describe this structure in a different way using the reciprocal lattice, which is related to the original structure by a Fourier transform. This three-dimensional space can be described with reciprocal axes x*, y*, and z* or alternatively in spherical coordinates q, φ*, and χ*. In powder diffraction, intensity is homogeneous over φ* and χ*, and only q remains as an important measurable quantity. This is because orientational averaging causes the three-dimensional reciprocal space that is studied in single crystal diffraction to be projected onto a single dimension.
When the scattered radiation is collected on a flat plate detector, the rotational averaging leads to smooth diffraction rings around the beam axis, rather than the discrete Laue spots observed in single crystal diffraction. The angle between the beam axis and the ring is called the scattering angle and in X-ray crystallography always denoted as 2θ (in scattering of visible light the convention is usually to call it θ). In accordance with Bragg's law, each ring corresponds to a particular reciprocal lattice vector G in the sample crystal. This leads to the definition of the scattering vector as:
q = |G| = (4π/λ) sin θ
In this equation, G is the reciprocal lattice vector, q is the length of the reciprocal lattice vector, k is the momentum transfer vector, θ is half of the scattering angle, and λ is the wavelength of the source. Powder diffraction data are usually presented as a diffractogram in which the diffracted intensity, I, is shown as a function either of the scattering angle 2θ or as a function of the scattering vector length q. The latter variable has the advantage that the diffractogram no longer depends on the value of the wavelength λ. The advent of synchrotron sources has widened the choice of wavelength considerably. To facilitate comparability of data obtained with different wavelengths the use of q is therefore recommended and gaining acceptability.
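A small conversion utility illustrating this relation (the Cu Kα1 wavelength is used purely as an example):

```python
import math

def two_theta_to_q(two_theta_deg, wavelength_nm):
    """Convert a scattering angle 2θ (degrees) to q = 4π sin(θ)/λ (1/nm)."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength_nm

print(two_theta_to_q(30.0, 0.15406))   # ≈ 21.1 nm⁻¹ for Cu Kα1
```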
Uses
Relative to other methods of analysis, powder diffraction allows for rapid, non-destructive analysis of multi-component mixtures without the need for extensive sample preparation. This gives laboratories the ability to quickly analyze unknown materials and perform materials characterization in such fields as metallurgy, mineralogy, chemistry, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences. Identification is performed by comparison of the diffraction pattern to a known standard or to a database such as the International Centre for Diffraction Data's Powder Diffraction File (PDF) or the Cambridge Structural Database (CSD). Advances in hardware and software, particularly improved optics and fast detectors, have dramatically improved the analytical capability of the technique, especially relative to the speed of the analysis. The fundamental physics upon which the technique is based provides high precision and accuracy in the measurement of interplanar spacings, sometimes to fractions of an Ångström, resulting in authoritative identification frequently used in patents, criminal cases and other areas of law enforcement. The ability to analyze multiphase materials also allows analysis of how materials interact in a particular matrix such as a pharmaceutical tablet, a circuit board, a mechanical weld, a geologic core sampling, cement and concrete, or a pigment found in an historic painting. The method has been historically used for the identification and classification of minerals, but it can be used for nearly any material, even amorphous ones, so long as a suitable reference pattern is known or can be constructed.
Phase identification
The most widespread use of powder diffraction is in the identification and characterization of crystalline solids, each of which produces a distinctive diffraction pattern. Both the positions (corresponding to lattice spacings) and the relative intensity of the lines in a diffraction pattern are indicative of a particular phase and material, providing a "fingerprint" for comparison. A multi-phase mixture, e.g. a soil sample, will show more than one pattern superposed, allowing for the determination of the relative concentrations of phases in the mixture.
J.D. Hanawalt, an analytical chemist who worked for Dow Chemical in the 1930s, was the first to realize the analytical potential of creating a database. Today it is represented by the Powder Diffraction File (PDF) of the International Centre for Diffraction Data (formerly Joint Committee for Powder Diffraction Studies). This has been made searchable by computer through the work of global software developers and equipment manufacturers. There are now over 1,047,661 reference materials in the 2021 Powder Diffraction File Databases, and these databases are interfaced to a wide variety of diffraction analysis software and distributed globally. The Powder Diffraction File contains many subfiles, such as minerals, metals and alloys, pharmaceuticals, forensics, excipients, superconductors, semiconductors, etc., with large collections of organic, organometallic and inorganic reference materials.
Crystallinity
In contrast to a crystalline pattern consisting of a series of sharp peaks, amorphous materials (liquids, glasses etc.) produce a broad background signal. Many polymers show semicrystalline behavior, i.e. part of the material forms an ordered crystallite by folding of the molecule. A single polymer molecule may well be folded into two different, adjacent crystallites and thus form a tie between the two. The tie part is prevented from crystallizing. The result is that the crystallinity will never reach 100%. Powder XRD can be used to determine the crystallinity by comparing the integrated intensity of the background pattern to that of the sharp peaks. Values obtained from powder XRD are typically comparable but not quite identical to those obtained from other methods such as DSC.
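In practice this comparison reduces to a ratio of integrated intensities. A minimal sketch, assuming the sharp-peak and amorphous-halo areas have already been separated (the numbers are illustrative):

```python
crystalline_area = 840.0   # integrated intensity of the sharp peaks (a.u.)
amorphous_area = 560.0     # integrated intensity of the broad halo (a.u.)

crystallinity = crystalline_area / (crystalline_area + amorphous_area)
print(f"degree of crystallinity ≈ {crystallinity:.0%}")   # 60%
```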
Lattice parameters
The position of a diffraction peak is independent of the atomic positions within the cell and entirely determined by the size and shape of the unit cell of the crystalline phase. Each peak represents a certain lattice plane and can therefore be characterized by a Miller index. If the symmetry is high, e.g.: cubic or hexagonal it is usually not too hard to identify the index of each peak, even for an unknown phase. This is particularly important in solid-state chemistry, where one is interested in finding and identifying new materials. Once a pattern has been indexed, this characterizes the reaction product and identifies it as a new solid phase. Indexing programs exist to deal with the harder cases, but if the unit cell is very large and the symmetry low (triclinic) success is not always guaranteed.
Expansion tensors, bulk modulus
Cell parameters are somewhat temperature and pressure dependent. Powder diffraction can be combined with in situ temperature and pressure control. As these thermodynamic variables are changed, the observed diffraction peaks will migrate continuously to indicate higher or lower lattice spacings as the unit cell distorts. This allows for measurement of such quantities as the thermal expansion tensor and the isothermal bulk modulus, as well determination of the full equation of state of the material.
Phase transitions
At some critical set of conditions, for example 0 °C for water at 1 atm, a new arrangement of atoms or molecules may become stable, leading to a phase transition. At this point new diffraction peaks will appear or old ones disappear according to the symmetry of the new phase. If the material melts to an isotropic liquid, all sharp lines will disappear and be replaced by a broad amorphous pattern. If the transition produces another crystalline phase, one set of lines will suddenly be replaced by another set. In some cases however lines will split or coalesce, e.g. if the material undergoes a continuous, second order phase transition. In such cases the symmetry may change because the existing structure is distorted rather than replaced by a completely different one. For example, the diffraction peaks for the lattice planes (100) and (001) can be found at two different values of q for a tetragonal phase, but if the symmetry becomes cubic the two peaks will come to coincide.
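This coincidence can be checked from the d-spacing relation for a tetragonal cell, 1/d² = (h² + k²)/a² + l²/c². A small sketch with hypothetical lattice parameters:

```python
import math

def d_tetragonal(h, k, l, a, c):
    """d-spacing for a tetragonal cell: 1/d² = (h²+k²)/a² + l²/c²."""
    return 1.0 / math.sqrt((h*h + k*k) / a**2 + l*l / c**2)

a, c = 4.00, 4.10   # hypothetical tetragonal lattice parameters (Å)
print(d_tetragonal(1, 0, 0, a, c), d_tetragonal(0, 0, 1, a, c))  # two distinct peaks
print(d_tetragonal(1, 0, 0, a, a), d_tetragonal(0, 0, 1, a, a))  # coincide when cubic (c = a)
```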
Crystal structure refinement and determination
Crystal structure determination from powder diffraction data is extremely challenging due to the overlap of reflections in a powder experiment. A number of different methods exist for structural determination, such as simulated annealing and charge flipping. The crystal structures of known materials can be refined, i.e. as a function of temperature or pressure, using the Rietveld method. The Rietveld method is a so-called full pattern analysis technique. A crystal structure, together with instrumental and microstructural information, is used to generate a theoretical diffraction pattern that can be compared to the observed data. A least squares procedure is then used to minimize the difference between the calculated pattern and each point of the observed pattern by adjusting model parameters. Techniques to determine unknown structures from powder data do exist, but are somewhat specialized. A number of programs that can be used in structure determination are TOPAS, Fox, DASH, GSAS-II, EXPO2004, and a few others.
Size and strain broadening
There are many factors that determine the width B of a diffraction peak. These include:
instrumental factors
the presence of defects to the perfect lattice
differences in strain in different grains
the size of the crystallites
It is often possible to separate the effects of size and strain. When size broadening is independent of q (K = 1/d), strain broadening increases with increasing q-values. In most cases there will be both size and strain broadening. It is possible to separate these by combining the two equations in what is known as the Hall–Williamson method:
B cos θ = (kλ / D) + η sin θ
Thus, when we plot B cos θ vs. sin θ we get a straight line with slope η and intercept kλ/D.
The expression is a combination of the Scherrer equation for size broadening and the Stokes and Wilson expression for strain broadening. The value of η is the strain in the crystallites, the value of D represents the size of the crystallites. The constant k is typically close to unity and ranges from 0.8 to 1.39.
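A minimal fitting sketch for this plot, assuming synthetic instrument-corrected peak widths (the wavelength and k are illustrative choices):

```python
import numpy as np

wavelength, k = 0.15406, 0.9   # nm; illustrative Cu Kα1 and shape constant

# Peak positions 2θ (degrees) and instrument-corrected FWHM B (radians), synthetic
two_theta = np.array([28.0, 40.5, 50.2, 59.0, 67.5])
B = np.array([0.0042, 0.0046, 0.0050, 0.0055, 0.0060])

theta = np.radians(two_theta) / 2.0
slope, intercept = np.polyfit(np.sin(theta), B * np.cos(theta), 1)

strain = slope                        # η
size_nm = k * wavelength / intercept  # D from the intercept kλ/D
print(f"strain ≈ {strain:.2e}, crystallite size ≈ {size_nm:.1f} nm")
```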
Comparison of X-ray and neutron scattering
X-ray photons scatter by interaction with the electron cloud of the material, neutrons are scattered by the nuclei. This means that, in the presence of heavy atoms with many electrons, it may be difficult to detect light atoms by X-ray diffraction. In contrast, the neutron scattering lengths of most atoms are approximately equal in magnitude. Neutron diffraction techniques may therefore be used to detect light elements such as oxygen or hydrogen in combination with heavy atoms. The neutron diffraction technique therefore has obvious applications to problems such as determining oxygen displacements in materials like high temperature superconductors and ferroelectrics, or to hydrogen bonding in biological systems.
A further complication in the case of neutron scattering from hydrogenous materials is the strong incoherent scattering of hydrogen (80.27(6) barn). This leads to a very high background in neutron diffraction experiments, and may make structural investigations impossible. A common solution is deuteration, i.e., replacing the 1-H atoms in the sample with deuterium (2-H). The incoherent scattering length of deuterium is much smaller (2.05(3) barn) making structural investigations significantly easier. However, in some systems, replacing hydrogen with deuterium may alter the structural and dynamic properties of interest.
As neutrons also have a magnetic moment, they are additionally scattered by any magnetic moments in a sample. In the case of long range magnetic order, this leads to the appearance of new Bragg reflections. In most simple cases, powder diffraction may be used to determine the size of the moments and their spatial orientation.
Aperiodically arranged clusters
Predicting the scattered intensity in powder diffraction patterns from gases, liquids, and randomly distributed nano-clusters in the solid state is (to first order) done rather elegantly with the Debye scattering equation:
I(q) = Σ_i Σ_j f_i(q) f_j(q) · sin(q·r_ij)/(q·r_ij)  (summed over all N atoms i and j)
where the magnitude of the scattering vector q is in reciprocal lattice distance units, N is the number of atoms, f_i(q) is the atomic scattering factor for atom i and scattering vector q, while r_ij is the distance between atom i and atom j. One can also use this to predict the effect of nano-crystallite shape on detected diffraction peaks, even if in some directions the cluster is only one atom thick.
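A direct numerical sketch of the double sum, assuming a constant atomic scattering factor for simplicity (the atom coordinates are an arbitrary toy cluster, not a real structure):

```python
import numpy as np

def debye_intensity(q, coords, f=1.0):
    """Debye equation with a constant atomic scattering factor f (toy model)."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    qr = q * r
    safe = np.where(qr > 0.0, qr, 1.0)                  # avoid 0/0 on the diagonal
    sinc = np.where(qr > 0.0, np.sin(qr) / safe, 1.0)   # sin(x)/x -> 1 as x -> 0
    return f * f * sinc.sum()

# Toy cluster: four atoms on a 0.25 nm square
coords = 0.25 * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
for q in (5.0, 15.0, 30.0):   # q in nm⁻¹
    print(q, round(debye_intensity(q, coords), 3))
```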
Semi-quantitative analysis
Semi-quantitative analysis of polycrystalline mixtures can be performed by using traditional single-peak methods such as the Relative Intensity Ratio (RIR) or whole-pattern methods using Rietveld refinement or the PONCKS (Partial Or No Known Crystal Structures) method. The use of each method depends on the knowledge of the analyzed system, given that, for instance, Rietveld refinement requires the solved crystal structure of each component of the mixture. In recent decades, multivariate analysis has begun to spread as an alternative method for phase quantification.
Devices
Cameras
The simplest cameras for X-ray powder diffraction consist of a small capillary and either a flat plate detector (originally a piece of X-ray film, now more and more a flat-plate detector or a CCD-camera) or a cylindrical one (originally a piece of film in a cookie-jar, but increasingly bent position sensitive detectors are used). The two types of cameras are known as the Laue and the Debye–Scherrer camera.
In order to ensure complete powder averaging, the capillary is usually spun around its axis.
For neutron diffraction vanadium cylinders are used as sample holders. Vanadium has a negligible absorption and coherent scattering cross section for neutrons and is hence nearly invisible in a powder diffraction experiment. Vanadium does however have a considerable incoherent scattering cross section which may cause problems for more sensitive techniques such as neutron inelastic scattering.
A later development in X-ray cameras is the Guinier camera. It is built around a focusing bent crystal monochromator. The sample is usually placed in the focusing beam, e.g. as a dusting on a piece of sticky tape. A cylindrical piece of film (or electronic multichannel detector) is put on the focusing circle, but the incident beam prevented from reaching the detector to prevent damage from its high intensity.
Cameras based on hybrid photon counting technology, such as the PILATUS detector, are widely used in applications where high data acquisition speeds and increased data quality are required.
Diffractometers
Diffractometers can be operated both in transmission and reflection, but reflection is more common. The powder sample is loaded in a small disc-like container and its surface carefully flattened. The disc is put on one axis of the diffractometer and tilted by an angle θ while a detector (scintillation counter) rotates around it on an arm at twice this angle. This configuration is known under the name Bragg–Brentano θ-2θ.
Another configuration is the Bragg–Brentano θ-θ configuration in which the sample is stationary while the X-ray tube and the detector are rotated around it. The angle formed between the X-ray source and the detector is 2θ. This configuration is most convenient for loose powders.
Diffractometer settings for different experiments can schematically be illustrated by a hemisphere, in which the powder sample resides in the origin. The case of recording a pattern in the Bragg-Brentano θ-θ mode is shown in the figure, where K0 and K stand for the wave vectors of the incoming and diffracted beam that both make up the scattering plane. Various other settings for texture or stress/strain measurements can also be visualized with this graphical approach.
Position-sensitive detectors (PSD) and area detectors, which allow collection from multiple angles at once, are becoming more popular on currently supplied instrumentation.
Neutron diffraction
Sources that produce a neutron beam of suitable intensity and speed for diffraction are only available at a small number of research reactors and spallation sources in the world. Angle dispersive (fixed wavelength) instruments typically have a battery of individual detectors arranged in a cylindrical fashion around the sample holder, and can therefore collect scattered intensity simultaneously on a large 2θ range. Time of flight instruments normally have a small range of banks at different scattering angles which collect data at varying resolutions.
X-ray tubes
Laboratory X-ray diffraction equipment relies on the use of an X-ray tube, which is used to produce the X-rays. The most commonly used laboratory X-ray tube uses a copper anode, but cobalt and molybdenum are also popular. The wavelength in nm varies for each source. The table below shows these wavelengths, determined by Bearden (all values in nm):
According to the last re-examination of Hölzer et al. (1997), and quoted in the International Tables for Crystallography these values are respectively:
Other sources
In-house applications of X-ray diffraction have always been limited to the relatively few wavelengths shown in the table above. The available choice was much needed because the combination of certain wavelengths and certain elements present in a sample can lead to strong fluorescence which increases the background in the diffraction pattern. A notorious example is the presence of iron in a sample when using copper radiation. In general, elements just below the anode element in the periodic table need to be avoided.
Another limitation is that the intensity of traditional generators is relatively low, requiring lengthy exposure times and precluding any time dependent measurement. The advent of synchrotron sources has drastically changed this picture and caused powder diffraction methods to enter a whole new phase of development. Not only is there a much wider choice of wavelengths available, the high brilliance of the synchrotron radiation makes it possible to observe changes in the pattern during chemical reactions, temperature ramps, changes in pressure and the like.
The tunability of the wavelength also makes it possible to observe anomalous scattering effects when the wavelength is chosen close to the absorption edge of one of the elements of the sample.
Neutron diffraction has never been an in-house technique because it requires the availability of an intense neutron beam, only available at a nuclear reactor or spallation source. Typically the available neutron flux, and the weak interaction between neutrons and matter, require relatively large samples.
Advantages and disadvantages
Although it is possible to solve crystal structures from powder X-ray data alone, its single crystal analogue is a far more powerful technique for structure determination. This is directly related to the fact that information is lost by the collapse of the 3D space onto a 1D axis. Nevertheless, powder X-ray diffraction is a powerful and useful technique in its own right. It is mostly used to characterize and identify phases, and to refine details of an already known structure, rather than solving unknown structures.
Advantages of the technique are:
simplicity of sample preparation
rapidity of measurement
the ability to analyze mixed phases, e.g. soil samples
"in situ" structure determination
By contrast growth and mounting of large single crystals is notoriously difficult. In fact there are many materials for which, despite many attempts, it has not proven possible to obtain single crystals. Many materials are readily available with sufficient microcrystallinity for powder diffraction, or samples may be easily ground from larger crystals. In the field of solid-state chemistry that often aims at synthesizing new materials, single crystals thereof are typically not immediately available. Powder diffraction is therefore one of the most powerful methods to identify and characterize new materials in this field.
Particularly for neutron diffraction, which requires larger samples than X-ray diffraction due to a relatively weak scattering cross section, the ability to use large samples can be critical, although newer and more brilliant neutron sources are being built that may change this picture.
Since all possible crystal orientations are measured simultaneously, collection times can be quite short even for small and weakly scattering samples. This is not merely convenient, but can be essential for samples which are unstable either inherently or under X-ray or neutron bombardment, or for time-resolved studies. For the latter it is desirable to have a strong radiation source. The advent of synchrotron radiation and modern neutron sources has therefore done much to revitalize the powder diffraction field because it is now possible to study temperature dependent changes, reaction kinetics and so forth by means of time-resolved powder diffraction.
See also
Bragg diffraction
Condensed matter physics
Crystallographic database
Crystallography
Diffractometer
Electron crystallography
Electron diffraction
Materials science
Metallurgy
Neutron diffraction
Pair distribution function
Solid state chemistry
Texture (crystalline)
Ultrafast x-ray
X-ray crystallography
X-ray scattering techniques
References
Further reading
External links
International Centre for Diffraction Data
Powder Diffraction on the Web
Diffraction
Neutron-related techniques
Synchrotron-related techniques
Diffraction | Powder diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,824 | [
"Spectrum (physical sciences)",
"Diffraction",
"Materials",
"Powders",
"Crystallography",
"Spectroscopy",
"Matter"
] |
2,296,175 | https://en.wikipedia.org/wiki/Reactive%20distillation | Reactive distillation is a process where the chemical reactor is also the still. Separation of the product from the reaction mixture does not need a separate distillation step which saves energy (for heating) and materials. This technique can be useful for equilibrium-limited reactions such as esterification and ester hydrolysis reactions. Conversion can be increased beyond what is expected by the equilibrium due to the continuous removal of reaction products from the reactive zone. This approach can also reduce capital and investment costs.
The conditions in the reactive column are suboptimal both as a chemical reactor and as a distillation column, since the reactive column combines these. The introduction of an in situ separation process in the reaction zone or vice versa leads to complex interactions between vapor–liquid equilibrium, mass transfer rates, diffusion and chemical kinetics, which poses a great challenge for design and synthesis of these systems. Side reactors, where a separate column feeds a reactor and vice versa, are better for some reactions, if the optimal conditions of distillation and reaction differ too much.
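To see why removing a product raises conversion in an equilibrium-limited esterification, consider the toy calculation below. It assumes an equimolar acid/alcohol feed, an illustrative equilibrium constant K = 4 (a typical order of magnitude for esterifications, not a measured value), and a simple "fraction of water removed" model of the column:

```python
import math

K = 4.0  # illustrative equilibrium constant for acid + alcohol <-> ester + water

# Closed reactor, equimolar feed: K = x²/(1-x)²  ->  x = √K/(1+√K)
x_closed = math.sqrt(K) / (1.0 + math.sqrt(K))
print(f"equilibrium conversion, closed reactor: {x_closed:.0%}")  # ~67%

# Continuous removal of a fraction f of the water shifts the balance:
# K = x·x(1-f)/(1-x)², a quadratic in the conversion x.
for f in (0.5, 0.9, 0.99):
    a, b, c = (1.0 - f) - K, 2.0 * K, -K
    x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    print(f"water removal fraction {f:.0%}: conversion ≈ {x:.0%}")
```

Conversion climbs toward completion as the water removal fraction rises, which is the driving idea behind putting the separation inside the reactor.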
Applicable Processes
Reactive distillation can be used with a wide variety of chemistries, including the following:
Acetylation
Aldol condensation
Alkylation
Amination
Dehydration
Esterification
Etherification
Hydrolysis
Isomerization
Neutralization
Oligomerization
Transesterification
Hydrodesulfurization of light oil fractions
Examples
The esterification of acetic acid with alcohols including methanol, n-butanol, ethanol, isobutanol, and amyl alcohol.
Another interesting feature of this esterification system (e.g., acetic acid with n-butanol) is that it is associated with the formation of a minimum-boiling ternary azeotrope of ester, alcohol and water, which is heterogeneous in nature. Hence, in a typical reactive distillation column that consists of both reactive and non-reactive zones, the heterogeneous azeotrope or a composition close to the azeotrope can be obtained as the distillate product. Moreover, the aqueous phase that forms after the condensation of the vapor is almost pure water. Depending on the requirement, either of the phases can be withdrawn as a product and the other phase can be recycled back as reflux. The pure ester, i.e. butyl acetate, being the least volatile component in the system, is recovered as the bottom product.
Removing organic acids from aqueous alcohol (ethanol, isopropanol) in dewatering columns is a simple example. An aqueous base (NaOH, KOH) is added to the top of the column, acid-base reactions occur in the column, and the resulting organic salts and excess base exit the bottom of the column with the separated water.
References
Distillation
de:Destillation#Reaktivdestillation | Reactive distillation | [
"Chemistry"
] | 576 | [
"Distillation",
"Separation processes"
] |
2,297,111 | https://en.wikipedia.org/wiki/Belpaire%20firebox | The Belpaire firebox is a type of firebox used on steam locomotives. It was invented by Alfred Belpaire of Belgium in 1864. Today it generally refers to the shape of the outer shell of the firebox which is approximately flat at the top and square in cross-section, indicated by the longitudinal ridges on the top sides. However, it is the similar square cross-section inner firebox which provides the main advantages of this design i.e. it has a greater surface area at the top of the firebox where the heat is greatest, improving heat transfer and steam production, compared with a round-top shape.
The flat firebox top would make supporting it against pressure more difficult (e.g. by means of girders, or stays) compared to a round-top. However, the use of a similarly shaped square outer boiler shell allows simpler perpendicular stays to be used between the shells. The Belpaire outer firebox is, nevertheless, more complicated and expensive to manufacture than a round-top version.
Due to the increased expense involved in manufacturing this boiler shell, just two major US railroads adopted the Belpaire firebox, the Pennsylvania and the Great Northern. In Britain most locomotives employed the design after the 1920s, except notably those of the LNER.
Description
In steam boilers, the firebox is encased in a water jacket on five sides, (front, back, left, right and top) to ensure maximum heat transfer to the water. Stays are used to support the surfaces against the high pressure between the outside wall and the interior firebox wall, and partially to conduct heat into the boiler interior.
In many boiler designs, the top of the boiler is cylindrical above the firebox, matching the contour of the rest of the boiler and naturally resisting boiler pressure more easily. In the Belpaire design, the outer upper boiler wall sheets are roughly parallel with the flat upper firebox sheets giving it a squarer shape.
The advantage was a greater surface area for evaporation, and less susceptibility to priming (foaming), involving water getting into the cylinders, compared with the narrowing upper space of a classic cylindrical boiler. This allowed G.J. Churchward, the chief mechanical engineer of the Great Western Railway, to dispense with a steam dome to collect steam. Churchward also improved the Belpaire design, maximising the flow of water in a given size of boiler by tapering the firebox and boiler barrel outwards to the area of highest steam production at the front of the firebox.
The shape of the Belpaire firebox also allows easier placement of the boiler stays, because they are at right angles to the sheets.
Despite these claimed advantages, other locomotive boilers, such as those of the LNER Pacifics, had flat-topped inner fireboxes with round-topped outer shells and with as good a thermal performance as the Belpaire type, without suffering major problems with staying between shells.
In the USA, the Belpaire firebox was introduced in about 1882 or 1883 by R. P. C. Sanderson, who at the time was working for the Shenandoah Valley Railroad (essentially a subsidiary of the Pennsylvania Railroad, since they shared the same financial backing from E. W. Clark & Co.). Sanderson was an Englishman (later naturalized as an American citizen) who had attained his engineering degree from Cassel in Germany in 1875.
Having obtained knowledge of a special form of locomotive boiler (the Belpaire), Sanderson wrote to an old acquaintance from his college days who was working at the Henschel locomotive factory at Cassel. He sent Sanderson a tracing of Henschel's latest Belpaire boiler. When shown the design, Charles Blackwell, Superintendent of Motive Power for the Shenandoah Valley Railroad, was very pleased with it and placed an order with the Baldwin and Grant Locomotive Works for two passenger engines, afterwards numbered 94 and 95, and five freight engines, afterwards numbered 56, 57, 58, 59, and 60. That marked the beginning of the use of the Belpaire-type locomotive boiler in the United States. The Pennsylvania Railroad used Belpaire fireboxes on nearly all of its steam locomotives. The distinctive square shape of the boiler cladding at the firebox end of locomotives practically became a "Pennsy" trademark, as otherwise only the Great Northern used Belpaire fireboxes in significant numbers in the USA.
Gallery
See also
Wootten firebox
References
Steam locomotive fireboxes
Steam boilers | Belpaire firebox | [
"Engineering"
] | 924 | [
"Combustion engineering",
"Steam locomotive fireboxes"
] |
2,297,434 | https://en.wikipedia.org/wiki/Polylactic%20acid | Polylactic acid, also known as poly(lactic acid) or polylactide (PLA), is a plastic material. As a thermoplastic polyester (or polyhydroxyalkanoate) it has the backbone formula (C3H4O2)n or [–C(CH3)HC(=O)O–]n. PLA is formally obtained by condensation of lactic acid with loss of water (hence its name). It can also be prepared by ring-opening polymerization of lactide (C6H8O4), the cyclic dimer of the basic repeating unit. Often PLA is blended with other polymers. PLA can be biodegradable or long-lasting, depending on the manufacturing process, additives and copolymers.
PLA has become a popular material due to it being economically produced from renewable resources and the possibility of using it for compostable products. In 2022, PLA had the highest consumption volume of any bioplastic in the world, with a share of ca. 26% of total bioplastic demand. Although its production is growing, PLA is still not as important as traditional commodity polymers like PET or PVC. Its widespread application has been hindered by numerous physical and processing shortcomings. PLA is the most widely used plastic filament material in FDM 3D printing, due to its low melting point, high strength, low thermal expansion, and good layer adhesion, although it possesses poor heat resistance unless annealed.
Although the name "polylactic acid" is widely used, it does not comply with IUPAC standard nomenclature, which is "poly(lactic acid)". The name "polylactic acid" is potentially ambiguous or confusing, because PLA is not a polyacid (polyelectrolyte), but rather a polyester.
Chemical properties
Synthesis
The monomer is typically made from fermented plant starch such as from corn, cassava, sugarcane or sugar beet pulp.
Several industrial routes afford usable (i.e. high molecular weight) PLA. Two main monomers are used: lactic acid, and the cyclic di-ester, lactide. The most common route to PLA is the ring-opening polymerization of lactide with various metal catalysts (typically tin ethylhexanoate) in solution or as a suspension. The metal-catalyzed reaction tends to cause racemization of the PLA, reducing its stereoregularity compared to the starting material (usually corn starch).
The direct condensation of lactic acid monomers can also be used to produce PLA. This process needs to be carried out at less than 200 °C; above that temperature, the entropically favored lactide monomer is generated. This reaction generates one equivalent of water for every condensation (esterification) step. The condensation reaction is reversible and subject to equilibrium, so removal of water is required to generate high molecular weight species. Water removal by application of a vacuum or by azeotropic distillation is required to drive the reaction toward polycondensation. Molecular weights of 130 kDa can be obtained this way. Even higher molecular weights can be attained by carefully crystallizing the crude polymer from the melt. Carboxylic acid and alcohol end groups are thus concentrated in the amorphous region of the solid polymer, where they can react. Molecular weights of 128–152 kDa are thus obtainable.
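Because the condensation is a reversible step-growth polymerization, the attainable chain length depends sharply on how completely the condensation water is removed. A minimal sketch of this relationship using the standard Carothers equation (degree of polymerization DP = 1/(1 − p) at monomer conversion p); the 72 g/mol repeat-unit mass follows from the backbone formula above, and the 130 kDa target is the figure quoted in the text:

```python
# Carothers equation: conversion (i.e. extent of water removal) needed
# for a target molecular weight in step-growth polycondensation.
REPEAT_UNIT_MASS = 72.06  # g/mol for one -C(CH3)HC(=O)O- unit

def conversion_for_target_mn(target_mn: float) -> float:
    """Monomer conversion p required for number-average molecular weight
    target_mn, via DP = target_mn / repeat_mass and DP = 1 / (1 - p)."""
    dp = target_mn / REPEAT_UNIT_MASS
    return 1.0 - 1.0 / dp

# 130 kDa PLA corresponds to DP ~ 1800 and p ~ 0.99945, i.e. more than
# 99.9% of the condensation water must be removed to reach that weight.
print(f"p = {conversion_for_target_mn(130_000):.5f}")
```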
Another method devised is by contacting lactic acid with a zeolite. This condensation reaction is a one-step process, and runs about 100 °C lower in temperature.
Stereoisomers
Due to the chiral nature of lactic acid, several distinct forms of polylactide exist: poly-L-lactide (PLLA) is the product resulting from polymerization of L,L-lactide (also known as L-lactide).
Progress in biotechnology has resulted in the development of commercial production of the D enantiomer form.
Polymerization of a racemic mixture of L- and D-lactides usually leads to the synthesis of poly-DL-lactide (PDLLA), which is amorphous. Use of stereospecific catalysts can lead to heterotactic PLA, which has been found to show crystallinity. The degree of crystallinity, and hence many important properties, is largely controlled by the ratio of D to L enantiomers used, and to a lesser extent by the type of catalyst used. Apart from lactic acid and lactide, lactic acid O-carboxyanhydride ("lac-OCA"), a five-membered cyclic compound, has been used academically as well. This compound is more reactive than lactide, because its polymerization is driven by the loss of one equivalent of carbon dioxide per equivalent of lactic acid. Water is not a co-product.
The direct biosynthesis of PLA, in a manner similar to production of poly(hydroxyalkanoate)s, has been reported.
Physical properties
PLA polymers range from amorphous glassy polymers to semi-crystalline and highly crystalline polymers, with a glass transition temperature of 60–65 °C, a melting temperature of 130–180 °C, and a Young's modulus of 2.7–16 GPa. Heat-resistant PLA can withstand temperatures of 110 °C. The basic mechanical properties of PLA are between those of polystyrene and PET. The melting temperature of PLLA can be increased by 40–50 °C and its heat deflection temperature can be increased from approximately 60 °C to up to 190 °C by physically blending the polymer with PDLA (poly-D-lactide). PDLA and PLLA form a highly regular stereocomplex with increased crystallinity. The temperature stability is maximised when a 1:1 blend is used, but even at lower concentrations of 3–10% of PDLA, there is still a substantial improvement. In the latter case, PDLA acts as a nucleating agent, thereby increasing the crystallization rate. Biodegradation of PDLA is slower than for PLA due to the higher crystallinity of PDLA. The flexural modulus of PLA is higher than that of polystyrene, and PLA has good heat sealability.
Although PLA performs mechanically similarly to PET in tensile strength and elastic modulus, it is very brittle, with less than 10% elongation at break. This brittleness limits PLA's use in applications that require some level of plastic deformation at high stress levels. Efforts to increase the elongation at break of PLA have been underway, especially to bolster PLA's presence as a commodity plastic and improve the bioplastics landscape. For example, PLLA biocomposites have been of interest for improving these mechanical properties. By mixing PLLA with poly(3-hydroxybutyrate) (PHB), cellulose nanocrystals (CNC) and a plasticizer (TBC), a drastic improvement in mechanical properties was shown. Using polarized optical microscopy (POM), the PLLA biocomposites were seen to have smaller spherulites than pure PLLA, indicating improved nucleation density and contributing to an increase in elongation at break from 6% in pure PLLA to 140–190% in the biocomposites. Biocomposites such as these are of great interest for food packaging because of their improved strength and biodegradability.
Several technologies, such as annealing, adding nucleating agents, forming composites with fibers or nanoparticles, chain extending, and introducing crosslink structures, have been used to enhance the mechanical properties of PLA polymers. Annealing has been shown to significantly increase the degree of crystallinity of PLA polymers. In one study, increasing the duration of annealing directly affected thermal conductivity, density, and the glass transition temperature. Structural changes from this treatment further improved characteristics such as compressive strength and rigidity by nearly 80%. Processes such as this may boost PLA's presence in the plastics market, as improving the mechanical properties will be important for replacing current petroleum-derived plastics. It has also been demonstrated that the addition of a PLA-based, cross-linked nucleating agent improved the degree of crystallinity of the final PLA material. Alongside the use of the nucleating agent, annealing was shown to further improve the degree of crystallinity and, therefore, the toughness and flexural modulus of the material. This example shows that several of these processes can be combined to reinforce the mechanical properties of PLA. Polylactic acid can be processed like most thermoplastics into fiber (for example, using conventional melt spinning processes) and film. PLA has similar mechanical properties to PETE polymer, but has a significantly lower maximum continuous use temperature.
The backbone architecture of PLA and its effect on crystallization kinetics have also been investigated, specifically to better understand the most suitable processing conditions for PLA. The molecular weight of the polymer chains can play a significant role in the mechanical properties. One method of increasing molecular weight is to introduce branches of the same polymer chain onto the backbone. Through characterization of branched and linear grades of PLA, branched PLA was found to crystallize faster. Furthermore, branched PLA experiences much longer relaxation times at low shear rates, contributing to a higher viscosity than the linear grade, presumably arising from high molecular weight regions within the branched PLA. However, the branched PLA was observed to shear-thin more strongly, leading to a much lower viscosity at high shear rates. Understanding properties such as these is crucial when determining optimal processing conditions for materials, and shows that simple changes to the structure can alter behavior dramatically.
Racemic PLA and pure PLLA have low glass transition temperatures, making them undesirable because of low strength and melting point. A stereocomplex of PDLA and PLLA has a higher glass transition temperature, lending it more mechanical strength.
The high surface energy of PLA results in good printability, making it widely used in 3D printing. The tensile strength of 3D-printed PLA has also been characterized.
Solvents
PLA is soluble in a range of organic solvents. Ethyl acetate is widely used because of its ease of access and low risk. It is useful in 3D printers for cleaning the extruder heads and for removing PLA supports.
Other safe solvents include propylene carbonate, which is safer than ethyl acetate but is difficult to purchase commercially. Pyridine can be used, but it has a distinct fish odor and is less safe than ethyl acetate. PLA is also soluble in hot benzene, tetrahydrofuran, and dioxane.
Fabrication
PLA objects can be fabricated by 3D printing, casting, injection moulding, extrusion, machining, and solvent welding.
PLA is used as a feedstock material in desktop fused filament fabrication by 3D printers, such as RepRap printers.
PLA can be solvent welded using dichloromethane. Acetone also softens the surface of PLA, making it sticky without dissolving it, for welding to another PLA surface.
PLA-printed solids can be encased in plaster-like moulding materials, then burned out in a furnace, so that the resulting void can be filled with molten metal. This is known as "lost PLA casting", a type of investment casting.
Applications
PLA is mainly used for short-lived and disposable packaging. In 2022, of the total PLA production, ca. 35 % was used for flexible packaging (e.g. films, bags, labels) and 30 % for rigid packaging (e.g. bottles, jars, containers).
Consumer goods
PLA is used in a large variety of consumer products such as disposable tableware, cutlery, housings for kitchen appliances and electronics such as laptops and handheld devices, and microwavable trays. (However, PLA is not suitable for microwavable containers because of its low glass transition temperature.) It is used for compost bags, food packaging and loose-fill packaging material that is cast, injection molded, or spun. In the form of a film, it shrinks upon heating, allowing it to be used in shrink tunnels. In the form of fibers, it is used for monofilament fishing line and netting. In the form of nonwoven fabrics, it is used for upholstery, disposable garments, awnings, feminine hygiene products, and diapers.
PLA has applications in engineering plastics, where the stereocomplex is blended with a rubber-like polymer such as ABS. Such blends have good form stability and visual transparency, making them useful in low-end packaging applications.
PLA is used for automotive parts such as floor mats, panels, and covers. Its heat resistance and durability are inferior to those of the widely used polypropylene (PP), but its properties are improved by means such as capping of the end groups to reduce hydrolysis.
Agricultural
In the form of fibers, PLA is used for monofilament fishing line and netting for vegetation and weed prevention. It is used for sandbags, planting pots, binding tape and ropes.
Medical
PLA can degrade into innocuous lactic acid, making it suitable for use as medical implants in the form of anchors, screws, plates, pins, rods, and mesh. Depending on the type used, it breaks down inside the body within 6 months to 2 years. This gradual degradation is desirable for a support structure, because it gradually transfers the load to the body (e.g., to the bone) as that area heals. The strength characteristics of PLA and PLLA implants are well documented.
Thanks to its bio-compatibility and biodegradability, PLA found interest as a polymeric scaffold for drug delivery purposes.
The composite blend of poly(L-lactide-co-D,L-lactide) (PLDLLA) with tricalcium phosphate (TCP) is used as PLDLLA/TCP scaffolds for bone engineering.
Poly-L-lactic acid (PLLA) is the main ingredient in Sculptra, a facial volume enhancer used for treating lipoatrophy of the cheeks.
PLLA is used to stimulate collagen synthesis in fibroblasts via a foreign body reaction in the presence of macrophages. Macrophages act as a stimulant for the secretion of cytokines and mediators such as TGF-β, which stimulate fibroblasts to secrete collagen into the surrounding tissue. Therefore, PLLA has potential applications in dermatological studies.
PLLA is under investigation as a scaffold that can generate a small amount of electric current via the piezoelectric effect that stimulates the growth of mechanically robust cartilage in multiple animal models.
Degradation
PLA is generally considered to be compostable in industrial composting conditions but not in home compost, based on the results of tests using the EN 13432 and ASTM D6400 standards. However, certain isomers of PLA such as PLLA or PDLA have been shown to have varying rates of degradation.
PLA is degraded abiotically by three mechanisms:
Hydrolysis: The ester groups of the main chain are cleaved, thus reducing molecular weight.
Thermal decomposition: A complex phenomenon leading to the appearance of different compounds, such as lighter molecules, linear and cyclic oligomers of different molecular weights, and lactide.
Photodegradation: UV radiation induces degradation. This is a factor mainly where PLA is exposed to sunlight in its applications in plasticulture, packaging containers and films.
The hydrolytic reaction is:
-COO- + H2O → -COOH + -OH
The degradation rate is very slow at ambient temperatures. A 2017 study found that at 25 °C in seawater, PLA showed no loss of mass over a year, but the study did not measure breakdown of the polymer chains or water absorption. As a result, PLA degrades poorly in landfills and household composts, but is effectively digested in hotter industrial composts, usually degrading best at temperatures of over 60 °C.
Pure PLA foams are selectively hydrolysed in Dulbecco's modified Eagle's medium (DMEM) supplemented with fetal bovine serum (FBS) (a solution mimicking body fluid). After 30 days of submersion in DMEM+FBS, a PLLA scaffold lost about 20% of its weight.
PLA samples of various molecular weights were degraded into methyl lactate (a green solvent) by using a metal complex catalyst.
PLA can also be degraded by some bacteria, such as Amycolatopsis and Saccharothrix. A purified protease from Amycolatopsis sp., PLA depolymerase, can also degrade PLA. Enzymes such as pronase and, most effectively, proteinase K from Tritirachium album degrade PLA.
End of life
Four possible end-of-life scenarios are the most common:
Recycling: this can be either chemical or mechanical. Currently, the SPI resin identification code 7 ("others") is applicable for PLA. In Belgium, Galactic started the first pilot unit to chemically recycle PLA (Loopla). Unlike mechanical recycling, chemical recycling can accept waste material holding various contaminants. Polylactic acid can be chemically recycled to its monomer by thermal depolymerization or hydrolysis. When purified, the monomer can be used for the manufacturing of virgin PLA with no loss of original properties (cradle-to-cradle recycling). End-of-life PLA can also be chemically recycled to methyl lactate by transesterification.
Composting: PLA is biodegradable under industrial composting conditions, starting with a chemical hydrolysis process followed by microbial digestion, ultimately degrading the PLA. Under industrial composting conditions (58 °C), PLA can partly (about half) decompose into water and carbon dioxide in 60 days, after which the remainder decomposes much more slowly, at a rate depending on the material's degree of crystallinity. Environments without the necessary conditions will see very slow decomposition akin to that of non-bioplastics, not fully decomposing for hundreds or thousands of years.
Incineration: PLA can be incinerated without producing chlorine-containing chemicals or heavy metals because it contains only carbon, oxygen, and hydrogen atoms. Since it does not contain chlorine it does not produce dioxins or hydrochloric acid during incineration. PLA can be combusted with no remaining residue. This and other results suggest that incineration is an environmentally friendly disposal of waste PLA. Upon being incinerated, PLA can release carbon dioxide.
Landfill: the least preferable option is landfilling because PLA degrades very slowly in ambient temperatures, often as slowly as other plastics.
See also
Acrylonitrile butadiene styrene (ABS) - also used for 3D printing
Cellophane, polyglycolide, plastarch material, poly-3-hydroxybutyrate – biologically derived polymers
Polilactofate
Polycaprolactone
Zein, shellac – biologically derived coating materials
Poly(methyl methacrylate)
References
External links
"Your plastic pal" | The Economist
Biodegradable plastics
Bioplastics
Polyesters
Synthetic fibers
Transparent materials
Thermoplastics
Articles containing video clips
Fused filament fabrication
Food packaging | Polylactic acid | [
"Physics",
"Chemistry"
] | 4,134 | [
"Physical phenomena",
"Synthetic fibers",
"Synthetic materials",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |
2,299,119 | https://en.wikipedia.org/wiki/Decarburization | Decarburization (or decarbonization) is the process of decreasing carbon content, which is the opposite of carburization.
The term is typically used in metallurgy, describing the decrease of the content of carbon in metals (usually steel). Decarburization occurs when the metal is heated to temperatures of 700 °C or above, at which point carbon in the metal reacts with gases containing oxygen or hydrogen. The removal of carbon removes hard carbide phases, resulting in a softening of the metal, primarily at the surfaces that are in contact with the decarburizing gas.
Decarburization can be either advantageous or detrimental, depending on the application for which the metal will be used. It is thus either something done intentionally as a step in a manufacturing process, or something that happens as a side effect of a process (such as rolling) and must be either prevented or later reversed (such as via a carburization step).
The decarburization mechanism can be described as three distinct events: the reaction at the steel surface, the interstitial diffusion of carbon atoms and the dissolution of carbides within the steel.
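Since the overall rate is usually limited by the diffusion step, the depth of the decarburized layer is often estimated from the characteristic diffusion length x ≈ 2√(Dt). A minimal sketch under that assumption; the Arrhenius parameters below are rough textbook-order values for carbon diffusion in austenite, used purely for illustration:

```python
import math

# Decarburized depth estimate x ~ 2*sqrt(D*t), with an Arrhenius diffusion
# coefficient D = D0 * exp(-Q / (R*T)). D0 and Q are assumed values.
D0 = 2.3e-5      # m^2/s, assumed pre-exponential factor (carbon in austenite)
Q = 148_000.0    # J/mol, assumed activation energy
R = 8.314        # J/(mol*K), gas constant

def decarb_depth_mm(temp_c: float, hours: float) -> float:
    """Estimate decarburized depth (mm) after `hours` at `temp_c`."""
    T = temp_c + 273.15
    D = D0 * math.exp(-Q / (R * T))
    return 2.0 * math.sqrt(D * hours * 3600.0) * 1000.0

# One hour at 900 C gives a few tenths of a millimetre of affected depth,
# which is why surface grinding can restore rolled or forged parts.
print(f"{decarb_depth_mm(900.0, 1.0):.2f} mm")
```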
Chemical reactions
The most common reactions are:
C + CO2 ⇌ 2 CO (the Boudouard reaction)
C + H2O ⇌ CO + H2
C + 2 H2 ⇌ CH4
Other reactions are:
C + ½ O2 → CO
C + O2 → CO2
C + FeO → CO + Fe
Electrical steel
Electrical steel is one material that uses decarburization in its production. To prevent the atmospheric gases from reacting with the metal itself, electrical steel is annealed in an atmosphere of nitrogen, hydrogen, and water vapor, where oxidation of the iron is specifically prevented by the proportions of hydrogen and water vapor so that the only reacting substance is carbon being oxidized into carbon monoxide (CO).
Stainless steel
Stainless steel contains additives which are highly oxidizable, such as chromium and molybdenum. Such steels can only be decarburized with dry hydrogen, which contains no water; wet hydrogen, which carries some water from its production process and can otherwise be used for decarburization, would oxidize these additives.
As a secondary effect
Incidental decarburization can be detrimental to surface properties in products (where carbon content is desirable) when it occurs during heat treatment or after rolling or forging, because the material is only affected to a certain depth according to the temperature and duration of heating. It can be prevented by using an inert or reduced-pressure atmosphere, by applying resistive heating for a short duration, by limiting the time the material is subjected to high heat (as is done in a walking-beam furnace), or through restorative carburization, which uses a hydrocarbon atmosphere to transfer carbon back into the surface of the material during annealing. The decarburized surface of the material can also be removed by grinding.
See also
History of ferrous metallurgy
Steelmaking
References
External links
Protecting Against Decarburization with Cress Furnaces
Corrosion
Metal heat treatments
Steelmaking | Decarburization | [
"Chemistry",
"Materials_science"
] | 653 | [
"Metallurgical processes",
"Metallurgy",
"Steelmaking",
"Corrosion",
"Materials degradation",
"Electrochemistry",
"Metal heat treatments"
] |
2,299,504 | https://en.wikipedia.org/wiki/Steinhart%E2%80%93Hart%20equation | The Steinhart–Hart equation is a model relating the varying electrical resistance of a semiconductor to its varying temperatures. The equation is
1/T = A + B ln(R) + C (ln(R))^3
where
T is the temperature (in kelvins),
R is the resistance at T (in ohms),
A, B, and C are the Steinhart–Hart coefficients, which are characteristics specific to the bulk semiconductor material over a given temperature range of interest.
Application
When applying a thermistor device to measure temperature, the equation relates a measured resistance to the device temperature, or vice versa.
Finding temperature from resistance and characteristics
The equation model converts the resistance actually measured in a thermistor to its theoretical bulk temperature, with a closer approximation to actual temperature than simpler models, and it is valid over the entire working temperature range of the sensor. Steinhart–Hart coefficients for specific commercial devices are ordinarily reported by thermistor manufacturers as part of the device characteristics.
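As a concrete illustration, converting a measured resistance to a temperature is a direct evaluation of the equation. A minimal sketch; the coefficient values are typical of a 10 kΩ NTC thermistor and are illustrative assumptions, not data for any particular device:

```python
import math

# Illustrative Steinhart-Hart coefficients (typical order of magnitude for
# a 10 kOhm NTC thermistor; real values come from the manufacturer).
A, B, C = 1.129148e-3, 2.34125e-4, 8.76741e-8

def temperature_k(resistance_ohm: float) -> float:
    """Evaluate 1/T = A + B*ln(R) + C*(ln(R))**3 and return T in kelvins."""
    ln_r = math.log(resistance_ohm)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)

# 10 kOhm maps to about 25 C with these coefficients.
print(f"{temperature_k(10_000.0) - 273.15:.2f} C")
```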
Finding characteristics from measurements of resistance at known temperatures
Conversely, when the three Steinhart–Hart coefficients of a specimen device are not known, they can be derived experimentally by a curve fitting procedure applied to three measurements at various known temperatures. Given the three temperature-resistance observations, the coefficients are solved from three simultaneous equations.
Inverse of the equation
To find the resistance of a semiconductor at a given temperature, the inverse of the Steinhart–Hart equation must be used. See the Application Note, "A, B, C Coefficients for Steinhart–Hart Equation". The inverse is
R = exp( (y − x/2)^(1/3) − (y + x/2)^(1/3) )
where
x = (1/C)(A − 1/T) and y = sqrt( (B/(3C))^3 + (x/2)^2 ).
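A short sketch of this inverse computation, implementing the Cardano-style solution above (coefficients again illustrative):

```python
import math

A, B, C = 1.129148e-3, 2.34125e-4, 8.76741e-8  # illustrative coefficients

def resistance_ohm(temp_k: float) -> float:
    """Invert Steinhart-Hart: solve C*L^3 + B*L + (A - 1/T) = 0 for L = ln(R)."""
    x = (A - 1.0 / temp_k) / C
    y = math.sqrt((B / (3.0 * C)) ** 3 + (x / 2.0) ** 2)
    # For B, C > 0 we have y >= |x/2|, so both cube-root arguments are >= 0.
    ln_r = (y - x / 2.0) ** (1.0 / 3.0) - (y + x / 2.0) ** (1.0 / 3.0)
    return math.exp(ln_r)

# Round-trips with the forward equation: ~10 kOhm at 25 C.
print(f"{resistance_ohm(298.15):.0f} ohm")
```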
Steinhart–Hart coefficients
To find the coefficients of Steinhart–Hart, we need to know at least three operating points. For this, we use three values of resistance data for three known temperatures.
With R1, R2 and R3 the values of resistance at the temperatures T1, T2 and T3, one can express A, B and C as follows (writing L_i = ln(R_i) and Y_i = 1/T_i):
γ2 = (Y2 − Y1)/(L2 − L1), γ3 = (Y3 − Y1)/(L3 − L1)
C = ((γ3 − γ2)/(L3 − L2)) / (L1 + L2 + L3)
B = γ2 − C (L1^2 + L1 L2 + L2^2)
A = Y1 − (B + L1^2 C) L1
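A minimal sketch implementing this closed-form solution from three (temperature, resistance) calibration pairs; the calibration points themselves are hypothetical:

```python
import math

def steinhart_hart_coefficients(points):
    """Compute (A, B, C) from three (temperature_K, resistance_ohm) pairs."""
    (t1, r1), (t2, r2), (t3, r3) = points
    l1, l2, l3 = math.log(r1), math.log(r2), math.log(r3)
    y1, y2, y3 = 1.0 / t1, 1.0 / t2, 1.0 / t3
    g2 = (y2 - y1) / (l2 - l1)
    g3 = (y3 - y1) / (l3 - l1)
    c = ((g3 - g2) / (l3 - l2)) / (l1 + l2 + l3)
    b = g2 - c * (l1 ** 2 + l1 * l2 + l2 ** 2)
    a = y1 - (b + l1 ** 2 * c) * l1
    return a, b, c

# Hypothetical calibration: resistances measured at 0 C, 25 C and 50 C.
cal = [(273.15, 32650.0), (298.15, 10000.0), (323.15, 3603.0)]
print(steinhart_hart_coefficients(cal))
```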
History
The equation was developed by John S. Steinhart and Stanley R. Hart, who first published it in 1968.
Derivation and alternatives
The most general form of the equation can be derived from extending the B parameter equation to an infinite series:
1/T = a0 + a1 ln(R/R_ref) + a2 (ln(R/R_ref))^2 + a3 (ln(R/R_ref))^3 + ...
R_ref is a reference (standard) resistance value. The Steinhart–Hart equation assumes R_ref is 1 ohm. The curve fit is much less accurate when a2 = 0 is assumed and a different value of R_ref, such as 1 kΩ, is used. However, using the full set of coefficients avoids this problem, as it simply results in shifted parameters.
In the original paper, Steinhart and Hart remark that allowing a2 ≠ 0 degraded the fit. This is surprising, as allowing more freedom would usually improve the fit. It may be because the authors fitted 1/T instead of T, so the extra freedom increased the error in T. Subsequent papers have found great benefit in allowing a2 ≠ 0.
The equation was developed through trial-and-error testing of numerous equations, and selected due to its simple form and good fit. However, in its original form, the Steinhart–Hart equation is not sufficiently accurate for modern scientific measurements. For interpolation using a small number of measurements, the series expansion with a2 ≠ 0 has been found to be accurate within 1 mK over the calibrated range, and some authors recommend using that form. If there are many data points, standard polynomial regression can also generate accurate curve fits. Some manufacturers have begun providing regression coefficients as an alternative to Steinhart–Hart coefficients.
References
External links
Steinhart-Hart Coefficient Calculator Online
Steinhart-Hart Coefficient Calculator Java
Semiconductors | Steinhart–Hart equation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 665 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
5,735,440 | https://en.wikipedia.org/wiki/Virial%20stress | In mechanics, virial stress is a measure of stress on an atomic scale for homogeneous systems. The name is derived from Latin: "virial" stems from the word vires (plural of vis), meaning "forces". The expression of the (local) virial stress can be derived as the functional derivative of the free energy of a molecular system with respect to the deformation tensor.
Volume-averaged definition
The instantaneous volume averaged virial stress is given by
τ_ij = (1/Ω) Σ_(k∈Ω) [ −m^(k) (u_i^(k) − ū_i)(u_j^(k) − ū_j) + (1/2) Σ_(l∈Ω) (x_i^(l) − x_i^(k)) f_j^(kl) ]
where
k and l are atoms in the domain,
Ω is the volume of the domain,
m^(k) is the mass of atom k,
u_i^(k) is the i-th component of the velocity of atom k,
ū_j is the j-th component of the average velocity of atoms in the volume,
x_i^(k) is the i-th component of the position of atom k, and
f_i^(kl) is the i-th component of the force applied on atom k by atom l.
At zero kelvin, all velocities are zero so we have
τ_ij = (1/(2Ω)) Σ_(k,l∈Ω) (x_i^(l) − x_i^(k)) f_j^(kl)
This can be thought of as follows. The 11-component of stress is the force in the 1-direction divided by the area of a plane perpendicular to that direction. Consider two adjacent volumes separated by such a plane. The 11-component of stress on that interface is the sum of all pairwise forces between atoms on the two sides.
The volume averaged virial stress is then the ensemble average of the instantaneous volume averaged virial stress.
In a three dimensional, isotropic system, at equilibrium the "instantaneous" atomic pressure is usually defined as the average over the diagonals of the negative stress tensor:
P = −(τ_11 + τ_22 + τ_33)/3
The pressure then is the ensemble average of the instantaneous pressure. This pressure is the average pressure in the volume Ω.
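A minimal sketch of the zero-temperature pairwise term of this pressure, using a Lennard-Jones pair potential; the potential, its parameters, and the two-atom configuration are illustrative assumptions, not part of the definition:

```python
import numpy as np

def virial_pressure_0k(positions: np.ndarray, volume: float,
                       eps: float = 1.0, sigma: float = 1.0) -> float:
    """Instantaneous virial pressure at 0 K (velocities zero):
    P = (1 / (3*V)) * sum over pairs of (-r * dU/dr) for Lennard-Jones U."""
    n = len(positions)
    w = 0.0  # pair virial accumulator, sum of -r * dU/dr over pairs
    for k in range(n):
        for l in range(k + 1, n):
            r = np.linalg.norm(positions[l] - positions[k])
            # -r * dU/dr for U = 4*eps*((sigma/r)**12 - (sigma/r)**6);
            # positive for repulsive (pressure-raising) separations.
            w += 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6)
    return w / (3.0 * volume)

# Two atoms closer than the potential minimum at 2**(1/6)*sigma repel,
# so the pressure comes out positive.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(virial_pressure_0k(pos, volume=10.0))
```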
Equivalent definition
It is worth noting that some articles and textbooks use a slightly different but equivalent version of the equation, in which the pair term is written with the relative vector r^(kl), where r_i^(kl) is the i-th component of the vector oriented from the k-th atom to the l-th, calculated via the difference r^(kl) = x^(l) − x^(k).
Both forms being strictly equivalent, the definition of the vector can still lead to confusion.
Derivation
The virial pressure can be derived, using the virial theorem and splitting forces between particles and the container or, alternatively, via direct application of the defining equation and using scaled coordinates in the calculation.
Inhomogeneous systems
If the system is not homogeneous in a given volume, the above (volume averaged) pressure is not a good measure of the pressure. In inhomogeneous systems the pressure depends on the position and orientation of the surface on which the pressure acts. Therefore, in inhomogeneous systems a definition of a local pressure is needed. As a general example of a system with inhomogeneous pressure, consider the atmosphere of the Earth, whose pressure varies with height.
Instantaneous local virial stress
The (local) instantaneous virial stress is defined analogously to the volume-averaged expression above, with the sums restricted to a small averaging volume around the point of interest.
Measuring the virial pressure in molecular simulations
The virial pressure can be measured via the formulas above or using volume rescaling trial moves.
See also
Virial theorem
References
External links
Physical Interpretation of the volume averaged Virial Stress
Python and Fortran code examples for Computer Simulation of Liquids (second edition, 2017)
Continuum mechanics | Virial stress | [
"Physics"
] | 612 | [
"Classical mechanics",
"Continuum mechanics"
] |
5,735,510 | https://en.wikipedia.org/wiki/Hydrostatic%20stress | In continuum mechanics, hydrostatic stress, also known as isotropic stress or volumetric stress, is a component of stress which contains uniaxial stresses, but not shear stresses. A specialized case of hydrostatic stress is isotropic compressive stress, which changes only the volume, but not the shape. Pure hydrostatic stress can be experienced by a point in a fluid such as water. It is often used interchangeably with "mechanical pressure" and is also known as confining stress, particularly in the field of geomechanics.
Hydrostatic stress is equivalent to the average of the uniaxial stresses along three orthogonal axes, so it is one third of the first invariant of the stress tensor (i.e. the trace of the stress tensor):
σ_h = I_1/3 = tr(σ)/3
For example, in Cartesian coordinates (x, y, z) the hydrostatic stress is simply:
σ_h = (σ_xx + σ_yy + σ_zz)/3
Hydrostatic stress and thermodynamic pressure
In the particular case of an incompressible fluid, the thermodynamic pressure coincides with the mechanical pressure (i.e. the opposite of the hydrostatic stress):
p = −σ_h
In the general case of a compressible fluid, the thermodynamic pressure p is no longer proportional to the isotropic stress term (the mechanical pressure), since there is an additional term dependent on the trace of the strain rate tensor:
p = p̄ + ζ tr(ε̇)
where the coefficient ζ is the bulk viscosity. The trace of the strain rate tensor corresponds to the flow compression (the divergence of the flow velocity):
tr(ε̇) = ∇ · u
So the expression for the thermodynamic pressure is usually expressed as:
p = p̄ + ζ ∇ · u
where the mechanical pressure has been denoted with p̄.
In some cases, the second viscosity ζ can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as stated above.
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0. The assumption ζ = 0 is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monoatomic gases both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect.
Potential external field in a fluid
Its magnitude in a fluid, σ_h, can be given by Stevin's law:
σ_h = Σ_(i=1..n) ρ_i g h_i
where
i is an index denoting each distinct layer of material above the point of interest;
ρ_i is the density of each layer;
g is the gravitational acceleration (assumed constant here; this can be substituted with any acceleration that is important in defining weight);
h_i is the height (or thickness) of each given layer of material.
For example, the magnitude of the hydrostatic stress felt at a point under ten meters of fresh water would be
σ_h = ρ_w g h_w = 1000 kg/m^3 × 9.81 m/s^2 × 10 m ≈ 98 kPa
where the index w indicates "water".
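A minimal sketch of Stevin's law for a stack of fluid layers; the second, two-layer example (seawater above fresh water) is hypothetical:

```python
G = 9.81  # m/s^2, gravitational acceleration

def hydrostatic_stress_pa(layers) -> float:
    """Stevin's law: sum of rho_i * g * h_i over the (density kg/m^3,
    thickness m) layers above the point of interest, in pascals."""
    return sum(rho * G * h for rho, h in layers)

# Ten metres of fresh water reproduces the ~98 kPa worked example above.
print(hydrostatic_stress_pa([(1000.0, 10.0)]))                 # ~98.1 kPa
# Hypothetical: five metres of seawater (~1025 kg/m^3) above that column.
print(hydrostatic_stress_pa([(1025.0, 5.0), (1000.0, 10.0)]))  # ~148 kPa
```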
Because the hydrostatic stress is isotropic, it acts equally in all directions. In tensor form, the hydrostatic stress is equal to σ_h I, where I is the 3-by-3 identity matrix.
Hydrostatic compressive stress is used for the determination of the bulk modulus for materials.
Notes
References
Stress tensor
Volumetric strain
Deviatoric stress tensor
Flow velocity
Pressure
Bulk viscosity
Isotropy
Continuum mechanics
Orientation (geometry) | Hydrostatic stress | [
"Physics",
"Mathematics"
] | 676 | [
"Continuum mechanics",
"Classical mechanics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
5,742,067 | https://en.wikipedia.org/wiki/Biohydrogen | Biohydrogen is H2 that is produced biologically. Interest is high in this technology because H2 is a clean fuel and can be readily produced from certain kinds of biomass, including biological waste. Furthermore, some photosynthetic microorganisms are capable of producing H2 directly from water splitting, using light as the energy source.
Besides the promising possibilities of biological hydrogen production, many challenges characterize this technology. The first challenges are those intrinsic to H2 itself, such as the storage and transportation of an explosive, non-condensable gas. Additionally, hydrogen-producing organisms are poisoned by O2, and yields of H2 are often low.
Biochemical principles
The main reactions driving hydrogen formation involve the oxidation of substrates to obtain electrons. Then, these electrons are transferred to free protons to form molecular hydrogen. This proton reduction reaction is normally performed by an enzyme family known as hydrogenases.
In heterotrophic organisms, electrons are produced during the fermentation of sugars. Hydrogen gas is produced in many types of fermentation as a way to regenerate NAD+ from NADH. Electrons are transferred to ferredoxin, or can be accepted directly from NADH by a hydrogenase, producing H2. Because of this, most of the relevant fermentations start from glucose, which is converted to acetic acid.
C6H12O6 + 2 H2O → 2 CH3COOH + 2 CO2 + 4 H2
A related reaction gives formate instead of carbon dioxide:
C6H12O6 + 2 H2O → 2 CH3COOH + 2 HCOOH + 2 H2
These reactions are exergonic by 216 and 209 kcal/mol, respectively.
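As a worked illustration of this stoichiometry, the acetate pathway caps the yield at 4 mol H2 per mol glucose. A minimal sketch of the resulting mass balance; real fermentations fall well short of this theoretical ceiling:

```python
M_GLUCOSE = 180.16   # g/mol
M_H2 = 2.016         # g/mol
MOLAR_VOLUME = 22.4  # L/mol at STP (approximate)

def ideal_h2_yield(glucose_grams: float):
    """Ideal H2 yield (grams, litres at STP) for 4 mol H2 per mol glucose."""
    mol_h2 = glucose_grams / M_GLUCOSE * 4.0
    return mol_h2 * M_H2, mol_h2 * MOLAR_VOLUME

grams, litres = ideal_h2_yield(1000.0)  # one kilogram of glucose
print(f"{grams:.0f} g of H2, about {litres:.0f} L at STP")
```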
It has been estimated that 99% of all organisms utilize or produce dihydrogen (H2). Most of these species are microbes, and their ability to use or produce H2 as a metabolite arises from the expression of H2 metalloenzymes known as hydrogenases. Enzymes within this widely diverse family are commonly sub-classified into three different types based on the active site metal content: [FeFe]-hydrogenases (iron-iron), [NiFe]-hydrogenases (nickel-iron), and [Fe]-hydrogenases (iron-only). Many organisms express these enzymes; notable examples are members of the genera Clostridium, Desulfovibrio, Ralstonia and the pathogen Helicobacter, most of them strict anaerobes or facultative microorganisms. Other microorganisms, such as green algae, also express highly active hydrogenases, as is the case for members of the genus Chlamydomonas. Due to the extreme diversity of hydrogenase enzymes, ongoing efforts are focused on screening for novel enzymes with improved features, as well as on engineering already characterized hydrogenases to confer more desirable characteristics on them.
Production by algae
Biological hydrogen production with algae is a method of photobiological water splitting carried out in a closed photobioreactor, in which algae produce hydrogen as a solar fuel. Algae produce hydrogen under certain conditions. In 2000 it was discovered that if C. reinhardtii algae are deprived of sulfur they will switch from the production of oxygen, as in normal photosynthesis, to the production of hydrogen.
Green algae express [FeFe]-hydrogenases, some of which are considered the most efficient hydrogenases, with turnover rates above 10^4 s^-1. This remarkable catalytic efficiency is nonetheless overshadowed by their extreme sensitivity to oxygen: they are irreversibly inactivated by O2. When the cells are deprived of sulfur, oxygen evolution stops due to photo-damage of photosystem II; in this state the cells start consuming O2 and provide the ideal anaerobic environment for the native [FeFe]-hydrogenases to catalyze H2 production.
Photosynthesis
Photosynthesis in cyanobacteria and green algae splits water into hydrogen ions and electrons. The electrons are transported over ferredoxins, and [FeFe]-hydrogenases (enzymes) combine them into hydrogen gas. In Chlamydomonas reinhardtii, photosystem II produces, by direct conversion of sunlight, 80% of the electrons that end up in the hydrogen gas.
In 2020, scientists reported the development of algal-cell-based micro-emulsions for multicellular spheroid microbial reactors capable of producing hydrogen, alongside either oxygen or CO2, via photosynthesis in daylight under air. Enclosing the microreactors with synergistic bacteria was shown to increase levels of hydrogen production via reduction of O2 concentrations.
Improving production by light harvesting antenna reduction
The chlorophyll (Chl) antenna size in green algae is minimized, or truncated, to maximize photobiological solar conversion efficiency and H2 production. It has been shown that the photosystem II light-harvesting protein LHCBM9 promotes efficient light energy dissipation. The truncated Chl antenna size minimizes absorption and wasteful dissipation of sunlight by individual cells, resulting in better light utilization efficiency and greater photosynthetic efficiency when the green algae are grown as a mass culture in bioreactors.
Economics
With current reports for algae-based biohydrogen, it would take about 25,000 square kilometres of algal farming to produce biohydrogen equivalent to the energy provided by gasoline in the US alone. This area represents approximately 10% of the area devoted to growing soya in the US.
Bioreactor design issues
Restriction of photosynthetic hydrogen production by accumulation of a proton gradient.
Competitive inhibition of photosynthetic hydrogen production by carbon dioxide.
Requirement for bicarbonate binding at photosystem II (PSII) for efficient photosynthetic activity.
Competitive drainage of electrons by oxygen in algal hydrogen production.
The economics must reach a price competitive with other sources of energy, and they depend on several parameters.
A major technical obstacle is the efficiency in converting solar energy into chemical energy stored in molecular hydrogen.
Attempts are in progress to solve these problems via bioengineering.
Production by cyanobacteria
Biological hydrogen production is also observed in nitrogen-fixing cyanobacteria. These microorganisms can grow in filaments. Under nitrogen-limited conditions, some cells specialize into heterocysts, which maintain an anaerobic intracellular space to facilitate N2 fixation by the nitrogenase enzyme, which is also expressed inside them.
Under nitrogen-fixation conditions, the nitrogenase enzyme accepts electrons and consumes ATP to break the dinitrogen triple bond and reduce it to ammonia. During the catalytic cycle of the nitrogenase enzyme, molecular hydrogen is also produced:
N2 + 8 H+ + 8 NAD(P)H + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi + 8 NAD(P)+
Nevertheless, since the production of H2 is an important loss of energy for the cells, most nitrogen-fixing cyanobacteria also feature at least one uptake hydrogenase. Uptake hydrogenases exhibit a catalytic bias towards H2 oxidation, and can thus assimilate the produced H2 as a way to recover part of the energy invested during the nitrogen fixation process.
History
In 1933, Marjory Stephenson and her student Stickland reported that cell suspensions catalysed the reduction of methylene blue with H2. Six years later, Hans Gaffron observed that the green photosynthetic alga Chlamydomonas reinhardtii would sometimes produce hydrogen. In the late 1990s Anastasios Melis discovered that deprivation of sulfur induces the alga to switch from the production of oxygen (normal photosynthesis) to the production of hydrogen. He found that the enzyme responsible for this reaction is hydrogenase, but that the hydrogenase loses this function in the presence of oxygen. Melis also discovered that depleting the amount of sulfur available to the algae interrupted their internal oxygen flow, allowing the hydrogenase an environment in which it can react, causing the algae to produce hydrogen. Chlamydomonas moewusii is also a promising strain for the production of hydrogen.
Industrial hydrogen
Biohydrogen must compete, at least for commercial applications, with many mature industrial processes. Steam reforming of natural gas - sometimes referred to as steam methane reforming (SMR) - is the most common method of producing bulk hydrogen, accounting for about 95% of world production.
CH4 + H2O ⇌ CO + 3 H2
See also
References
External links
DOE - A Prospectus for Biological Production of Hydrogen
FAO
Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures
DIY Algae/Hydrogen Bioreactor 2004
EERE-CYCLIC PHOTOBIOLOGICAL ALGAL H2-PRODUCTION
Anaerobic digestion
Biodegradable waste management
Biodegradation
Biofuels
Biotechnology products
Fuel gas
Fuels
Hydrogen
Hydrogen biology
Hydrogen economy
Hydrogen production
Waste management | Biohydrogen | [
"Chemistry",
"Engineering",
"Biology"
] | 1,843 | [
"Biotechnology products",
"Chemical energy sources",
"Biodegradable waste management",
"Biodegradation",
"Anaerobic digestion",
"Fuels",
"Environmental engineering",
"Water technology"
] |
11,172,013 | https://en.wikipedia.org/wiki/Watershed%20management | Watershed management is the study of the relevant characteristics of a watershed aimed at the sustainable distribution of its resources and the process of creating and implementing plans, programs and projects to sustain and enhance watershed functions that affect the plant, animal, and human communities within the watershed boundary. Features of a watershed that agencies seek to manage include water supply, water quality, drainage, stormwater runoff, water rights and the overall planning and utilization of watersheds. Landowners, land use agencies, stormwater management experts, environmental specialists, water use surveyors and communities all play an integral part in watershed management.
Controlling pollution
In agricultural systems, common practices include the use of buffer strips, grassed waterways, the re-establishment of wetlands, and forms of sustainable agriculture practices such as conservation tillage, crop rotation and inter-cropping. After certain practices are installed, it is important to continuously monitor these systems to ensure that they are working properly in terms of improving environmental quality.
In urban settings, managing areas to prevent soil loss and control stormwater flow are a few of the areas that receive attention. A few practices that are used to manage stormwater before it reaches a channel are retention ponds, filtering systems and wetlands. It is important that storm-water is given an opportunity to infiltrate so that the soil and vegetation can act as a "filter" before the water reaches nearby streams or lakes. In the case of soil erosion prevention, a few common practices include the use of silt fences, landscape fabric with grass seed and hydroseeding. The main objective in all cases is to slow water movement to prevent soil transport.
Governance
The 2nd World Water Forum, held in The Hague in March 2000, raised controversies that exposed the multilateral nature of, and the imbalance between, the demand and supply management of freshwater. While donor organizations and private and government institutions backed by the World Bank believe that freshwater should be governed as an economic good by appropriate pricing, NGOs held that freshwater resources should be seen as a social good. The concept of network governance, where all stakeholders form partnerships and voluntarily share ideas towards forging a common vision, can be used to resolve this clash of opinion in freshwater management. Also, the implementation of any common vision presents a new role for NGOs because of their unique capabilities in local community coordination, thus making them a valuable partner in network governance.
Watersheds replicate this multilateral terrain, with private industries and local communities interconnected by a common watershed. Although these groups share a common ecological space that may transcend state borders, their interests, knowledge and use of resources within the watershed are mostly disproportionate and divergent, so that the activities of one group can adversely impact other groups. Examples include the Minamata Bay poisoning that occurred from 1932 to 1968, killing over 1,784 individuals, and the Wabigoon River incident of 1962. Furthermore, while some knowledgeable groups are shifting from efficient water resource exploitation to efficient utilization, the net gain for the watershed ecology can be lost when other groups seize the opportunity to exploit more resources.
Moreover, partnerships between donor organizations, private and government institutions and community representatives such as NGOs are needed in watersheds to enhance an "organizational society" among stakeholders.
Several riparian states have adopted this concept in managing the increasingly scarce resources of watersheds. These include the nine Rhine states, with a common vision of pollution control, and the Lake Chad and river Nile basins, whose common vision is to ensure environmental sustainability. As partners in the commonly shared vision, NGOs have adopted a new role in operationalizing the implementation of regional watershed management policies at the local level. For instance, essential local coordination and education are areas where the services of NGOs have been effective. This makes NGOs the "nuclei" for successful watershed management. Recently, artificial intelligence techniques such as neural networks have been utilized to address the problem of watershed management.
Environmental law
Environmental laws often dictate the planning and actions that agencies take to manage watersheds. Some laws require that planning be done, others can be used to make a plan legally enforceable and others set out the ground rules for what can and cannot be done in development and planning. Most countries and states have their own laws regarding watershed management.
Those concerned about aquatic habitat protection have a right to participate in the laws and planning processes that affect aquatic habitats. By having a clear understanding of whom to speak to and how to present the case for keeping waterways clean, a member of the public can become an effective watershed protection advocate.
See also
Biomanipulation
Integrated landscape management
Integrated water resources management
Source water protection
References
Further reading
Erickson, J.D., Messner, F. and I. Ring, eds. (2007). Economics of Sustainable Watershed Management. Elsevier, Amsterdam, the Netherlands.
Sabatier, P. A. (2005). Swimming Upstream: Collaborative Approaches to Watershed Management. The MIT Press.
Wagner, W., Gawel, J., Furumai, H., De Souza, M. P., Teixeira, D., Rios, L., et al. (2002). Sustainable watershed management: an international multi-watershed case study. Ambio, 2–13.
Freshwater ecology
Hydrology
Water and the environment
Fisheries protection
Natural resource management | Watershed management | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,061 | [
"Hydrology",
"Environmental engineering"
] |
11,172,217 | https://en.wikipedia.org/wiki/Protein%20A/G | Protein A/G is a recombinant fusion protein that combines IgG binding domains of both protein A and protein G. Protein A/G contains four Fc binding domains from protein A and two from protein G, yielding a final mass of 50,460 daltons. The binding of protein A/G is less pH-dependent than protein A, but otherwise has the additive properties of protein A and G.
Protein A/G binds to all subclasses of human IgG, making it useful for purifying polyclonal or monoclonal IgG antibodies whose subclasses have not been determined. In addition, it binds to IgA, IgE, IgM and (to a lesser extent) IgD. Protein A/G also binds to all subclasses of mouse IgG but does not bind mouse IgA, IgM or serum albumin. This allows Protein A/G to be used for purification and detection of mouse monoclonal IgG antibodies, without interference from IgA, IgM and serum albumin. Mouse monoclonal antibodies commonly have a stronger affinity to the chimeric protein A/G than to either protein A or protein G. Protein A/G also has been used for purification of macaque IgG.
Other antibody binding proteins
In addition to protein A/G, other immunoglobulin-binding bacterial proteins such as protein A, protein G and protein L are all commonly used to purify, immobilize or detect immunoglobulins. Each of these immunoglobulin-binding proteins has a different antibody binding profile in terms of the portion of the antibody that is recognized and the species and type of antibodies.
References
Proteins | Protein A/G | [
"Chemistry"
] | 354 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
11,172,587 | https://en.wikipedia.org/wiki/HEPPS%20%28buffer%29 | HEPPS (EPPS) is a buffering agent used in biology and biochemistry. The pKa of HEPPS is 8.00. It is one of Good's buffers.
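Since buffering is most effective near the pKa, a minimal sketch of the Henderson–Hasselbalch relation, pH = pKa + log10([base]/[acid]), applied to HEPPS; the target pH of 8.3 is an arbitrary illustration:

```python
PKA_HEPPS = 8.00  # as stated above

def base_acid_ratio(target_ph: float, pka: float = PKA_HEPPS) -> float:
    """Henderson-Hasselbalch rearranged: [base]/[acid] = 10**(pH - pKa)."""
    return 10.0 ** (target_ph - pka)

# A HEPPS buffer at pH 8.3 needs roughly 2 parts of the basic form
# per 1 part of the acidic form.
print(f"{base_acid_ratio(8.3):.2f}")
```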
Research on mice with Alzheimer's disease-like amyloid beta plaques has shown that HEPPS can cause the plaques to break up, reversing some of the symptoms in the mice. HEPPS was reported to dissociate amyloid beta oligomers in patients' plasma samples enabling blood diagnosis of Alzheimer's disease.
See also
CAPSO
CHES
HEPES
References
Buffer solutions
Sulfonic acids
Piperazines
Primary alcohols | HEPPS (buffer) | [
"Chemistry",
"Biology"
] | 128 | [
"Buffer solutions",
"Biotechnology stubs",
"Functional groups",
"Biochemistry stubs",
"Sulfonic acids",
"Biochemistry"
] |
11,172,708 | https://en.wikipedia.org/wiki/Nanometrology | Nanometrology is a subfield of metrology, concerned with the science of measurement at the nanoscale level. Nanometrology has a crucial role in producing nanomaterials and devices with a high degree of accuracy and reliability in nanomanufacturing.
A challenge in this field is to develop new measurement techniques and standards to meet the needs of next-generation advanced manufacturing, which will rely on nanometer-scale materials and technologies. The needs for measurement and characterization of new sample structures and characteristics far exceed the capabilities of current measurement science. Anticipated advances in emerging U.S. nanotechnology industries will require revolutionary metrology with higher resolution and accuracy than has previously been envisioned.
Introduction
Control of critical dimensions is the most important factor in nanotechnology. Nanometrology today is, to a large extent, based on developments in semiconductor technology. Nanometrology is the science of measurement at the nanoscale level. A nanometer (nm) is equivalent to 10^-9 m. In nanotechnology, accurate control of the dimensions of objects is important. Typical dimensions of nanosystems vary from 10 nm to a few hundred nm, and while fabricating such systems measurement resolution down to 0.1 nm is required.
At the nanoscale, due to the small dimensions, various new physical phenomena can be observed. For example, when the crystal size is smaller than the electron mean free path, the conductivity of the crystal changes. Another example is the discretization of stresses in the system. It becomes important to measure the relevant physical parameters so as to apply these phenomena to the engineering of nanosystems and their manufacture. The measurement of length or size, force, mass, electrical and other properties is included in nanometrology.
The problem is how to measure these with reliability and accuracy. The measurement techniques used for macro systems cannot be directly used for the measurement of parameters in nanosystems. Various techniques based on physical phenomena have been developed that can be used to measure or determine the parameters of nanostructures and nanomaterials. Some of the popular ones are X-ray diffraction, transmission electron microscopy, high-resolution transmission electron microscopy, atomic force microscopy, scanning electron microscopy, field emission scanning electron microscopy, and the Brunauer–Emmett–Teller (BET) method to determine specific surface area.
Nanotechnology is an important field because of the large number of applications it has, and it has become necessary to develop more precise measurement techniques and globally accepted standards. Hence progress is required in the field of nanometrology.
Development needs
Nanotechnology can be divided into two branches. The first being molecular nanotechnology which involves bottom up manufacturing and the second is engineering nanotechnology which involve the development and processing of materials and systems at nanoscale. The measurement and manufacturing tools and techniques required for the two branches are slightly different.
Furthermore, nanometrology requirements are different for industry and for research institutions. Research-oriented nanometrology has progressed faster than industrial nanometrology, mainly because implementing nanometrology in industry is difficult. In research-oriented nanometrology resolution is important, whereas in industrial nanometrology accuracy is given precedence over resolution. Further, for economic reasons it is important to have low time costs in industrial nanometrology, whereas this is less important for research nanometrology. The various measurement techniques available today require a controlled environment, such as a vacuum and a vibration- and noise-free setting. Industrial nanometrology also requires that the measurements be more quantitative, with a minimum number of parameters.
Standards
International standards
Metrology standards are objects or ideas that are designated as being authoritative for some accepted reason. Whatever value they possess is useful for comparison to unknowns for the purpose of establishing or confirming an assigned value based on the standard. The execution of measurement comparisons for the purpose of establishing the relationship between a standard and some other measuring device is calibration. The ideal standard is independently reproducible without uncertainty. The worldwide market for products with nanotechnology applications is projected to be at least a couple of hundred billion dollars in the near future. Until recently, there were almost no established internationally accepted standards for nanotechnology-related fields. The International Organization for Standardization TC-229 Technical Committee on Nanotechnology recently published a few standards for terminology and for the characterization of nanomaterials and nanoparticles using measurement tools like AFM, SEM, interferometers, optoacoustic tools, gas adsorption methods etc. Certain standards for the standardization of measurements of electrical properties have been published by the International Electrotechnical Commission.
Some important standards yet to be established include standards for measuring the thickness of thin films or layers, for characterizing surface features, for force measurement at the nanoscale, for characterizing the critical dimensions of nanoparticles and nanostructures, and for the measurement of physical properties like conductivity and elasticity.
National standards
Because of the importance of nanotechnology in the future, countries around the world have programmes to establish national standards for nanometrology and nanotechnology. These programmes are run by the national standards agencies of the respective countries. In the United States, the National Institute of Standards and Technology has been working on developing new techniques for measurement at the nanoscale and has also established some national standards for nanotechnology. These standards are for nanoparticle characterization, roughness characterization, magnification standards, calibration standards etc.
Calibration
It is difficult to provide samples with which precision instruments can be calibrated at the nanoscale. Reference or calibration standards are important to ensure repeatability. However, there are no international standards for calibration, and the calibration artefacts provided by a company along with its equipment are only good for calibrating that particular equipment. Hence it is difficult to select a universal calibration artefact with which repeatability can be achieved at the nanoscale. While calibrating at the nanoscale, care must be taken regarding the influence of external factors like vibration, noise, motions caused by thermal drift and creep, and nonlinear behaviour and hysteresis of the piezo scanner, as well as internal factors like the interaction between the artefact and the equipment, which can cause significant deviations.
Measurement techniques
In the last 70 years, various techniques for measuring at the nanoscale have been developed, most of them based on some physical phenomenon observed in particle interactions or forces at the nanoscale. Some of the most commonly used techniques are atomic force microscopy, X-ray diffraction, scanning electron microscopy, transmission electron microscopy, high-resolution transmission electron microscopy, and field emission scanning electron microscopy.
Atomic force microscopy (AFM) is one of the most common measurement techniques. It can be used to measure surface topography, grain size, frictional characteristics and different forces. It consists of a silicon cantilever with a sharp tip with a radius of curvature of a few nanometers. The tip is used as a probe on the specimen to be measured. The forces acting at the atomic level between the tip and the surface of the specimen cause the tip to deflect, and this deflection is detected using a laser spot reflected onto an array of photodiodes.
Scanning tunneling microscopy (STM) is another commonly used instrument. It is used to measure the 3-D topography of the specimen. The STM is based on the concept of quantum tunneling. When a conducting tip is brought very near to the surface to be examined, a bias (voltage difference) applied between the two can allow electrons to tunnel through the vacuum between them. Measurements are made by monitoring the current as the tip's position scans across the surface, which can then be used to display an image.
Another commonly used instrument is the scanning electron microscope (SEM), which, apart from measuring the shape and size of particles and the topography of a surface, can be used to determine the composition of the elements and compounds in a sample. In SEM, the specimen surface is scanned with a high-energy electron beam. The electrons in the beam interact with atoms in the specimen, and the resulting interactions, such as back-scattered electrons, transmitted electrons, and secondary electrons, are registered using detectors. To remove high-angle electrons, magnetic lenses are used.
The instruments mentioned above produce realistic pictures of the surface and are excellent measuring tools for research. Industrial applications of nanotechnology, however, require measurements that are more quantitative. The requirement in industrial nanometrology is for higher accuracy rather than higher resolution, as compared with research nanometrology.
Nano coordinate measuring machine
A coordinate measuring machine (CMM) that works at the nanoscale would have a smaller frame than a CMM used for macroscale objects, because a smaller frame can provide the necessary stiffness and stability to achieve nanoscale uncertainties in the x, y and z directions. The probes for such a machine need to be small to enable 3-D measurement of nanometre features from the sides and from inside, such as nanoholes. For accuracy, laser interferometers need to be used. NIST has developed a surface measuring instrument, called the Molecular Measuring Machine, which is essentially an STM whose x- and y-axes are read out by laser interferometers. Molecules on the surface can be identified individually, and at the same time the distance between any two molecules can be determined. Measuring with molecular resolution makes the measuring times very long, even for a very small surface area. The Ilmenau Machine is another nanomeasuring machine, developed by researchers at the Ilmenau University of Technology.
The components of a nano CMM include nanoprobes, control hardware, 3D-nanopositioning platform, and instruments with high resolution and accuracy for linear and angular measurement.
Traceability
In metrology at the macroscale, achieving traceability is quite easy, and artefacts such as scales, laser interferometers, step gauges, and straight edges are used. At the nanoscale, a crystalline surface of highly oriented pyrolytic graphite (HOPG), mica or silicon is considered a suitable calibration artefact for achieving traceability. But it is not always possible to ensure traceability: there is, for example, no clear equivalent of a straight edge at the nanoscale, and even if the same standard as at the macroscale is adopted, there is no way to calibrate it accurately at the nanoscale. This is because the requisite internationally or nationally accepted reference standards do not always exist, and the measurement equipment required to ensure traceability has not been fully developed. The artefacts generally used for traceability are miniaturisations of traditional metrology standards, hence there is a need to establish nanoscale-specific standards, along with some kind of uncertainty estimation model. Traceability is one of the fundamental requirements for the manufacturing and assembly of products when multiple producers are involved.
Tolerance
Tolerance is the permissible limit or limits of variation in dimensions, properties, or conditions without significantly affecting functioning of equipment or a process. Tolerances are specified to allow reasonable leeway for imperfections and inherent variability without compromising performance. In nanotechnology the systems have dimensions in the range of nanometers. Defining tolerances at nanoscale with suitable calibration standards for traceability is difficult for different nanomanufacturing methods. There are various integration techniques developed in the semiconductor industry that are used in nanomanufacturing.
Integration techniques
In hetero integration, nanosystems are fabricated directly from compound substrates. Geometric tolerances are required to achieve the functionality of the assembly.
In hybrid integration, nanocomponents are placed or assembled on a substrate to fabricate functioning nanosystems. In this technique, the most important control parameter is the positional accuracy of the components on the substrate.
In monolithic integration all the fabrication process steps are integrated on a single substrate and hence no mating of components or assembly is required. The advantage of this technique is that the geometric measurements are no longer of primary importance for achieving functionality of nanosystem or control of the fabrication process.
Classification of nanostructures
There are a variety of nanostructures, such as nanocomposites, nanowires, nanopowders, nanotubes, fullerenes, nanofibers, nanocages, nanocrystallites, nanoneedles, nanofoams, nanomeshes, nanoparticles, nanopillars, thin films, nanorods, nanofabrics, and quantum dots. The most common way to classify nanostructures is by their dimensions.
Classification of grain structure
Nanostructures can be classified on the basis of the grain structure and grain size they are made up of. This applies in the case of two-dimensional and three-dimensional nanostructures.
Surface area measurement
To determine the specific surface area of a nanopowder, the B.E.T. method is commonly used. The drop in pressure of nitrogen in a closed container, due to adsorption of nitrogen molecules onto the surface of the material placed in the container, is measured. The particles of the nanopowder are assumed to be spherical, so the effective diameter follows from
D = 6/(ρ*A)
where D is the effective diameter, ρ is the density and A is the specific surface area found from the B.E.T. method.
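As a minimal illustration of this relation, the following Python sketch computes the effective particle diameter from a measured B.E.T. surface area; the material values in the example are hypothetical.

```python
def bet_effective_diameter_nm(density_g_cm3, bet_area_m2_g):
    """Effective spherical-particle diameter D = 6/(rho*A), returned in nm."""
    rho = density_g_cm3 * 1e3        # g/cm^3 -> kg/m^3
    area = bet_area_m2_g * 1e3       # m^2/g  -> m^2/kg
    return 6.0 / (rho * area) * 1e9  # metres -> nanometres

# Hypothetical powder: density 4.2 g/cm^3, B.E.T. surface area 50 m^2/g
print(bet_effective_diameter_nm(4.2, 50.0))  # ~28.6 nm
```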
See also
Characterization of nanoparticles
References
General references
Nanotechnology
Metrology | Nanometrology | [
"Materials_science",
"Engineering"
] | 2,725 | [
"Nanotechnology",
"Materials science"
] |
11,172,872 | https://en.wikipedia.org/wiki/Linear%20stage | A linear stage or translation stage is a component of a precise motion system used to restrict an object to a single axis of motion. The term linear slide is often used interchangeably with "linear stage", though technically "linear slide" refers to a linear motion bearing, which is only a component of a linear stage. All linear stages consist of a platform and a base, joined by some form of guide or linear bearing in such a way that the platform is restricted to linear motion with respect to the base. In common usage, the term linear stage may or may not also include the mechanism by which the position of the platform is controlled relative to the base.
Principle of operation
In three-dimensional space, an object may either rotate about, or translate along any of three axes. Thus the object is said to have six degrees of freedom (3 rotational and 3 translational). A linear stage exhibits only one degree of freedom (translation along one axis). In other words, linear stages operate by physically restricting 3 axes of rotation and 2 axes of translation thus allowing for motion on only one translational axis.
Guide types
Linear stages consist of a platform that moves relative to a base. The platform and base are joined by some form of guide which restricts motion of the platform to only one dimension. A variety of different styles of guides are used, each with benefits and drawbacks making each guide type more appropriate for some applications than for others.
Rollers
Benefits: Inexpensive.
Drawbacks: Low load capacity, poor accuracy, short lifetime.
Applications: Optics lab stages, drawer slides.
Recirculating ball bearing
Benefits: Unlimited travel, relatively inexpensive.
Drawbacks: Low load capacity, quick to wear, oscillating positioning load as bearings recirculate.
Flexure
Benefits: Excellent accuracy, no backlash, no wear (infinite lifetime).
Drawbacks: Short travel (limited by flexure range), low load capacity, expensive.
Applications: Optic fiber alignment.
Cylindrical sleeve
Benefits: High load capacity, unlimited travel, inexpensive.
Drawbacks: Susceptible to binding if bending moments are present.
Applications: Radial arm saws, scanners, printers.
Dovetail
Benefits: Highest load capacity, unlimited travel, long lifetime, inexpensive.
Drawbacks: High positioning force required, susceptible to binding if bending moments are present, high backlash.
Applications: Machine shop equipment (e.g. mill and lathe tables).
Position control methods
The position of the moving platform relative to the fixed base is typically controlled by a linear actuator of some form, whether manual, motorized, or hydraulic/pneumatic. The most common method is to incorporate a lead screw passing through a lead nut in the platform. The rotation of such a lead screw may be controlled either manually or by a motor.
Manual
In manual linear stages, a control knob attached to a lead screw is typically used. The knob may be indexed to indicate its angular position. The linear displacement of the stage is related to the angular displacement of the knob by the lead screw pitch. For example, if the lead screw pitch is 0.5 mm, then one full revolution of the knob will move the stage platform 0.5 mm relative to the stage base. If the knob has 50 index marks around its circumference, then each index division is equivalent to 0.01 mm of linear motion of the stage platform.
Precision stages such as those used for optics do not use a lead screw, but instead use a fine-pitch screw or a micrometer which presses on a hardened metal pad on the stage platform. Rotating the screw or micrometer pushes the platform forward. A spring provides restoring force to keep the platform in contact with the actuator. This provides more precise motion of the stage. Stages designed to be mounted vertically use a slightly different arrangement, where the actuator is attached to the movable platform and its tip rests on a metal pad on the fixed base. This allows the weight of the platform and its load to be supported by the actuator rather than the spring.
Stepper motor
In some automated stages a stepper motor may be used in place of, or in addition to a manual knob. A stepper motor moves in fixed increments called steps. In this sense it behaves very much like an indexed knob. If the lead screw pitch is 0.5 mm and the stepper motor has 200 steps per revolution (as is common), then each revolution of the motor will result in 0.5 mm of linear motion of the stage platform, and each step will result in 0.0025 mm of linear motion.
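The knob-division and stepper-step arithmetic in the two sections above reduces to dividing the pitch by the number of divisions per revolution; a minimal Python sketch, using the example values from the text:

```python
def travel_per_division_mm(pitch_mm, divisions_per_rev):
    """Linear travel of the platform per knob index division (or motor step)."""
    return pitch_mm / divisions_per_rev

print(travel_per_division_mm(0.5, 50))   # manual knob: 0.01 mm per division
print(travel_per_division_mm(0.5, 200))  # stepper motor: 0.0025 mm per step
```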
DC motor with encoder
In other automated stages a DC motor may be used in place of a manual control knob. A DC motor does not move in fixed increments. Therefore an alternate means is required to determine stage position. A scale may be attached to the internals of the stage and an encoder used to measure the position of the stage relative to the scale and report this to the motor controller, allowing a motion controller to reliably and repeatably move the stage to set positions.
Multiple axis stage configurations
For position control in more than one direction, multiple linear stages may be used together. A "two-axis" or "X-Y" stage can be assembled from two linear stages, one mounted to the platform of the other such that the axis of motion of the second stage is perpendicular to that of the first. A two-axis stage with which many people are familiar is a microscope stage, used to position a slide under a lens. A "three-axis" or "X-Y-Z" stage is composed of three linear stages mounted to each other (often with the use of an additional angle bracket) such that the axes of motion of all stages are orthogonal. Some two-axis and three-axis stages are integrated designs rather than being assembled from separate single-axis stages. Some multiple-axis stages also include rotary or tilt elements such as rotary stages or positioning goniometers. By combining linear and rotary elements in various ways, four-axis, five-axis, and six-axis stages are also possible. Linear stages take an advanced form of high performance positioning systems in applications which require a combination of high speed, high precision and high force.
Application
Semiconductor manufacturing
Linear stages are used in the semiconductor device fabrication process for precise linear positioning of wafers, for purposes such as wafer mapping, dielectric characterization, and epitaxial layer monitoring, where positioning speed and precision are critical.
Variations
Linear Slide
Nano Linear Stage
Nano Positioning Linear Stage
Ultra Precision Machining Linear Stage
References
Positioning instruments
Systems engineering
Optomechanics | Linear stage | [
"Engineering"
] | 1,330 | [
"Systems engineering"
] |
11,174,253 | https://en.wikipedia.org/wiki/Jacques%20Antoine%20Charles%20Bresse | Jacques Antoine Charles Bresse (9 October 1822, in Vienne, Isère – 22 May 1883) was a French civil engineer who specialized in the design and use of hydraulic motors.
Bresse graduated from the École Polytechnique in 1843 and received his formal education in engineering at the École des Ponts et Chaussées. He returned to the École des Ponts et Chaussées in 1848 as an instructor for applied mechanics courses and in 1853 gained his professorship in applied mechanics, after which he taught at the school until his death in 1883.
His name is one of the 72 names inscribed on the Eiffel Tower.
Publications
Bresse, Jacques Antoine Charles, Water-wheels; Or, Hydraulic Motors, John Wiley & Sons, New York 1869.
Notes
French civil engineers
École Polytechnique alumni
École des Ponts ParisTech alumni
Corps des ponts
1822 births
1883 deaths
People from Vienne, Isère
Members of the French Academy of Sciences | Jacques Antoine Charles Bresse | [
"Engineering"
] | 197 | [
"Civil engineering",
"Civil engineering stubs"
] |
11,174,725 | https://en.wikipedia.org/wiki/Volumetric%20concrete%20mixer | A volumetric concrete mixer (also known as volumetric mobile mixer) is a concrete mixer mounted on a truck or trailer that contains separate compartments for sand, stone, cement and water.
On arrival at the job site, the machine mixes the materials to produce the exact amount of concrete needed.
How it works
Volumetric mixers batch, measure, mix and dispense all from one unit. Volumetric concrete mixers can produce exactly the amount of concrete needed when it is needed at any time. Some concrete suppliers offer general purpose concrete batched in a volumetric mixer as a practical alternative to ready-mix if quantities and schedules are not fully known, to eliminate waste and prevent premature stiffening of the mix.
The volumetric mixer varies in capacity up to 12 m3 and has a production rate of around 60 m3 an hour, depending on the mix design. Many volumetric concrete mixer manufacturers have improved the mixer in capacity and design, as well as added features including color, multiple admixes, fiber systems, and the ability to do gunite or shotcrete.
The advantages of a volumetric mixer include:
Reduces waste and associated costs by providing exact quantities.
No risk of premature stiffening of concrete if delays are encountered.
Permits delivery of smaller quantities of concrete.
Night time works do not require the re-opening of a concrete batch plant.
Flexibility to alternate between multiple concrete mixes as required for the application.
Ability to do continuous concrete pours
Sustainability
Less waste - exact amount of concrete is poured
Less water - volumetric concrete mixers use on average 8-10 gallons of water to clean out versus 200 gallons for a traditional barrel truck
Less emissions - the truck does not need to idle while waiting to pour concrete
History
In the mid-1960s, companies such as Cemen Tech, Reimer Mixers (manufactured under the name ProAll circa 2016), and Zimmerman began building their own versions of volumetric concrete mixers.
In 1999, equipment manufacturers created a trade association, Volumetric Mixer Manufacturers Bureau (VMMB). It had six charter members: Cemen Tech, Inc., Zimmerman Ind, Inc., ProAll Reimer, Bay-Lynx, Custom-Crete, and Elkin. Currently its members include (in alphabetical order): Bay-Lynx, Cemen Tech, Holcombe Mixers, ProAll Reimer Mixers, and Zimmerman Ind, Inc.
References
External links
National Ready Mixed Concrete Association Bureaus
Engineering vehicles | Volumetric concrete mixer | [
"Engineering"
] | 501 | [
"Engineering vehicles"
] |
11,177,823 | https://en.wikipedia.org/wiki/Multi-link%20trunking | Multi-link trunking (MLT) is a link aggregation technology developed at Nortel in 1999. It allows grouping several physical Ethernet links into one logical Ethernet link to provide fault-tolerance and high-speed links between routers, switches, and servers.
MLT allows the use of several links (from 2 up to 8) and combines them to create a single fault-tolerant link with increased bandwidth. This produces server-to-switch or switch-to-switch connections that are up to 8 times faster. Prior to MLT and other aggregation techniques, parallel links were underutilized due to Spanning Tree Protocol’s loop protection.
Fault-tolerant design is an important aspect of Multi-Link Trunking technology. Should one or more links fail, the MLT technology will automatically redistribute traffic across the remaining links. This automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses. Combining MLT technology with distributed split multi-link trunking (DSMLT), split multi-link trunking (SMLT), and R-SMLT technologies creates networks that support the most critical applications.
A general limitation of standard MLT is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and R-SMLT technologies remove this limitation by allowing the physical ports to be split between two switches.
Split multi-link trunking
Split multi-link trunking (SMLT) is a Layer-2 link aggregation technology in computer networking originally developed by Nortel as an enhancement to standard multi-link trunking (MLT) as defined in IEEE 802.3ad.
Link aggregation or MLT allows multiple physical network links between two network switches and another device (which could be another switch or a network device such as a server) to be treated as a single logical link and load balance the traffic across all available links. For each packet that needs to be transmitted, one of the physical links is selected based on a load-balancing algorithm (usually involving a hash function operating on the source and destination MAC address information). For real-world network traffic this generally results in an effective bandwidth for the logical link equal to the sum of the bandwidth of the individual physical links. Redundant links that were once unused due to Spanning Tree’s loop protection can now be used to their full potential.
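The hash used for link selection is vendor-specific, but a minimal Python sketch of the general idea, choosing a physical link from the source and destination MAC addresses so that frames of one conversation stay in order on one link, might look like this (the addresses and link count are hypothetical):

```python
def select_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Map a MAC address pair to a physical link index by XOR-folding
    the 12 address bytes; real switches use hardware-specific hashes."""
    raw = bytes.fromhex((src_mac + dst_mac).replace(":", ""))
    h = 0
    for b in raw:
        h ^= b
    return h % n_links

# Hypothetical frame between two stations on a 4-link trunk
print(select_link("00:1b:63:84:45:e6", "00:1a:2b:3c:4d:5e", 4))
```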
A general limitation of standard link aggregation, MLT or EtherChannel is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and RSMLT protocols remove this limitation by allowing the physical ports to be split between two switches, allowing for the creation of Active load sharing high availability network designs that meet five nines availability requirements.
SMLT topologies
The two switches between which the SMLT is split are known as aggregation switches and form a logical cluster which appears to the other end of the SMLT link as a single switch.
The split may be at one or at both ends of the MLT. If both ends of the link are split, the resulting topology is referred to as an "SMLT square" when there is no cross-connect between diagonally opposite aggregation switches, or an "SMLT mesh" when each aggregation switch has a SMLT connection with both aggregation switches in the other pair. If only one end is split, the topology is referred to as an SMLT triangle.
In an SMLT triangle, the end of the link which is not split does not need to support SMLT. This allows non-Avaya devices including third-party switches and servers to benefit from SMLT. The only requirement is that IEEE 802.3ad static mode must be supported.
Operation
The key to the operation of SMLT is the Inter-Switch Trunk (IST). The IST is a (standard) MLT connection between the aggregation switches which allows the exchange of information regarding traffic forwarding and the status of individual SMLT links.
For each SMLT connection, the aggregation switches have a standard MLT or individual port with which an SMLT identifier is associated. For a given SMLT connection, the same SMLT ID must be configured on each of the peer aggregation switches.
For example, when one switch receives a response to an ARP request from an end station on a port that is part of an SMLT, it will inform its peer switch across the IST and request the peer to update its own ARP table with a record pointing to its own connection with the corresponding SMLT ID.
In general, normal network traffic does not traverse the IST unless this is the only path to reach a host which is connected only to the peer switch. By ensuring all devices have SMLT connections to the aggregation switches, traffic never needs to traverse the IST and the total forwarding capacity of the switches in the cluster is also aggregated.
The communication between peer switches across the IST allows both unicast and multicast routing information to be exchanged allowing protocols such as Open Shortest Path First (OSPF) and Protocol Independent Multicast-Sparse Mode (PIM-SM) to operate correctly.
Failure scenarios
The use of SMLT not only allows traffic to be load-balanced across all the links in an aggregation group but also allows traffic to be redistributed very quickly in the event of link or switch failure. In general the failure of any one component results in a traffic disruption lasting less than half a second (normally less than 100 milliseconds), making SMLT appropriate in environments running time- and loss-sensitive applications such as voice and video.
In a network using SMLT, it is often no longer necessary to run a spanning tree protocol of any kind since there are no logical bridging loops introduced by the presence of the IST. This eliminates the need for spanning tree reconvergence or root-bridge failovers in failure scenarios which causes interruptions in network traffic longer than time-sensitive applications are able to cater for.
Product support
SMLT is supported within the following Avaya Ethernet Routing Switch (ERS) and Virtual Services Platform (VSP) Product Families: ERS 1600, ERS 5500, ERS 5600, ERS 7000, ERS 8300, ERS 8800, ERS 8600, MERS 8600, VSP 9000
SMLT is fully interoperable with devices supporting standard MLT (IEEE 802.3ad static mode).
R-SMLT
Routed-SMLT (R-SMLT) is a computer networking protocol developed at Nortel as an enhancement to split multi-link trunking (SMLT) enabling the exchange of Layer 3 information between peer nodes in a switch cluster for resiliency and simplicity for both L3 and L2.
In many cases, core network convergence time after a failure is dependent on the length of time a routing protocol requires to successfully converge (change or re-route traffic around the fault). Depending on the specific routing protocol, this convergence time can cause network interruptions ranging from seconds to minutes. The R-SMLT protocol works with SMLT and distributed split multi-link trunking (DSMLT) technologies to provide sub-second failover (normally less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses.
R-SMLT provides routing topologies with an active-active router concept for core SMLT networks. The protocol supports networks designed with SMLT or DSMLT triangles, squares, and SMLT or DSMLT full mesh topologies, with routing enabled on the core VLANs. R-SMLT takes care of packet forwarding during core router failures and works with any of the following protocol types: IP Unicast Static Routes, RIP1, RIP2, OSPF, BGP and IPX RIP.
Product support
R-SMLT is supported on Avaya's Ethernet Routing Switch ERS 8600, ERS 8800, VSP9000, ERS 8300 and MERS 8600 products.
Distributed multi-link trunking
Distributed multi-link trunking (DMLT) or distributed MLT is a proprietary computer networking protocol designed by Nortel Networks, and now owned by Extreme Networks, used to load balance the network traffic across connections and also across multiple switches or modules in a chassis. The protocol is an enhancement to the Multi-Link Trunking (MLT) protocol.
DMLT allows the ports in a trunk (MLT) to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or a card in a chassis fails.
DMLT is described in an expired United States Patent.
Distributed split multi-link trunking
Distributed split multi-link trunking (DSMLT) or Distributed SMLT is a computer networking technology developed at Nortel to enhance the Split Multi-Link Trunking (SMLT) protocol. DSMLT allows the ports in a trunk to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or one card in a chassis fails.
Fault tolerance is a very important aspect of distributed split multi-link trunking (DSMLT) technology. Should any switch, port, or link fail, the DSMLT technology will automatically redistribute traffic across the remaining links. Automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds), so no outage is noticed by end users. This high-speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses. Combining Multi-Link Trunking (MLT), DMLT, SMLT, DSMLT and R-SMLT technologies creates networks that support the most critical applications.
Product support
DSMLT is supported on Avaya's Ethernet Routing Switch 1600, 5500, 8300, ERS 8600, MERS 8600, VSP-7000 and VSP-9000 products.
References
US 7173934 Lapuh, Roger & Yili Zhao "System, device, and method for improving communication network reliability using trunk splitting"; (SMLT) issued 2007-02-06
Further reading
Technical Brief Split Multi-Link Trunking Ethernet Routing Switch 8600
Desktop Connectivity
Using Distributed Multi-Link Trunking
Distributed multi-link trunking method and apparatus Google Patents
Distributed multi-link trunking method and apparatus Patent Genius
Distributed multi-link trunking method and apparatus Patent Storm
External links
Tolly Benchmarks -Retrieved 29 July 2011
See IEEE.org for info on 802.3ad standard -Retrieved 29 July 2011
Avaya
Communication circuits
Ethernet
Link protocols
Network topology
Nortel protocols
Bonding protocols
Nortel
Network architecture
Reliability engineering
Articles containing video clips | Multi-link trunking | [
"Mathematics",
"Engineering"
] | 2,276 | [
"Systems engineering",
"Telecommunications engineering",
"Reliability engineering",
"Network topology",
"Network architecture",
"Computer networks engineering",
"Topology",
"Communication circuits"
] |
11,178,099 | https://en.wikipedia.org/wiki/Rata%20Die | Rata Die (R.D.) is a system for assigning numbers to calendar days (optionally with time of day), independent of any calendar, for the purposes of calendrical calculations. It was named (after the Latin ablative feminine singular for "from a fixed date") by Howard Jacobson.
Rata Die is somewhat similar to Julian Dates (JD), in that the values are plain real numbers that increase by 1 each day. The systems differ principally in that JD takes on a particular value at a particular absolute time, and is the same in all contexts, whereas R.D. values may be relative to time zone, depending on the implementation. This makes R.D. more suitable for work on calendar dates, whereas JD is more suitable for work on time per se. The systems also differ trivially by having different epochs: R.D. is 1 at midnight (00:00) local time on January 1, AD 1 in the proleptic Gregorian calendar, JD is 0 at noon (12:00) Universal Time on January 1, 4713 BC in the proleptic Julian calendar.
Forms
There are three distinct forms of R.D., defined here in terms of Julian Dates.
Dershowitz and Reingold do not explicitly distinguish between these three forms, using the abbreviation "R.D." for all of them.
Dershowitz and Reingold do not say that the R.D. is based on Greenwich time, but on page 10 they state that an R.D. with a decimal fraction is called a moment, with the function moment-from-jd taking a floating-point Julian Date as its argument and returning that argument minus 1,721,424.5. Consequently, there is no requirement or opportunity to supply a time zone offset.
Fractional days
The first form of R.D. is a continuously-increasing fractional number, taking integer values at midnight local time. It is defined as:
RD = JD − 1,721,424.5
Midnight local time on December 31, year 0 (1 BC) in the proleptic Gregorian calendar corresponds to Julian Date 1,721,424.5 and hence RD 0.
Day Number
In the second form, R.D. is an integer that labels an entire day, from midnight to midnight local time. This is the result of rounding the first form of R.D. downwards (towards negative infinity). It is the same as the relation between Julian Date and Julian Day Number (JDN). Thus:
RD = floor( JD − 1,721,424.5 )
Noon Number
In the third form, the R.D. is an integer labeling noon time, and incapable of labeling any other time of day. This is defined as
RD = JD − 1,721,425
where the R.D. value must be an integer, thus constraining the choice of JD. This form of R.D. is used by Dershowitz and Reingold for conversion of calendar dates between calendars that separate days on different boundaries.
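A minimal Python sketch of the three forms defined above:

```python
import math

def rd_fractional(jd: float) -> float:
    """First form: continuously increasing, integer at local midnight."""
    return jd - 1_721_424.5

def rd_day_number(jd: float) -> int:
    """Second form: integer labelling the whole local day containing jd."""
    return math.floor(jd - 1_721_424.5)

def rd_noon_number(jd: float) -> int:
    """Third form: integer labelling noon; jd must be a noon instant."""
    rd = jd - 1_721_425
    assert rd == int(rd), "this form labels only noon instants"
    return int(rd)

print(rd_fractional(1_721_424.5))   # 0.0 (midnight, December 31, 1 BC)
print(rd_day_number(1_721_426.0))   # 1   (January 1, AD 1)
print(rd_noon_number(1_721_426.0))  # 1   (noon, January 1, AD 1)
```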
See also
Decimal time#Fractional days
Julian date
Lilian date
References
Applied mathematics
Calendars
Calendaring standards | Rata Die | [
"Physics",
"Mathematics"
] | 652 | [
"Calendars",
"Physical quantities",
"Time",
"Applied mathematics",
"Spacetime"
] |
11,178,898 | https://en.wikipedia.org/wiki/Nullator | In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time.
Inserting a nullator in a circuit schematic imposes a mathematical constraint on how that circuit must behave, forcing the circuit itself to adopt whatever arrangements needed to meet the condition. For example, the inputs of an ideal operational amplifier (with negative feedback) behave like a nullator, as they draw no current and have no voltage across them, and these conditions are used to analyze the circuitry surrounding the operational amplifier.
A nullator is normally paired with a norator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short circuit (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage).
References
External links
Nullator article from Analog Insydes reference
Electrical components
Control theory
Signal processing
Analog circuits
Electronic design | Nullator | [
"Mathematics",
"Technology",
"Engineering"
] | 246 | [
"Electrical components",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Electronic design",
"Applied mathematics",
"Control theory",
"Analog circuits",
"Electronic engineering",
"Electrical engineering",
"Design",
"Components",
"Dynamical systems"
] |
13,726,331 | https://en.wikipedia.org/wiki/Inertia%20wheel%20pendulum | An inertia wheel pendulum is a pendulum with an inertia wheel attached. It can be used as a pedagogical problem in control theory. This type of pendulum is often confused with the gyroscopic effect, which has completely different physical nature.
See also
Inverted pendulum
Robotic unicycle
Spinning top
References
Mark W. Spong, Peter Corke, Rogelio Lozano. Nonlinear Control of the Gyroscopic Pendulum.
Pendulums
Control engineering | Inertia wheel pendulum | [
"Physics",
"Engineering"
] | 98 | [
"Control engineering",
"Classical mechanics stubs",
"Classical mechanics"
] |
13,727,381 | https://en.wikipedia.org/wiki/Solids%20with%20icosahedral%20symmetry |
Solids with full icosahedral symmetry
Platonic solids - regular polyhedra (all faces of the same type)
Archimedean solids - polyhedra with more than one polygon face type.
Catalan solids - duals of the Archimedean solids.
Platonic solids
Achiral Archimedean solids
Achiral Catalan solids
Kepler-Poinsot solids
Achiral nonconvex uniform polyhedra
Chiral Archimedean and Catalan solids
Archimedean solids:
Catalan solids:
Chiral nonconvex uniform polyhedra
See also
The Fifty Nine Icosahedra
Rotational symmetry | Solids with icosahedral symmetry | [
"Physics"
] | 128 | [
"Symmetry",
"Rotational symmetry"
] |
13,731,710 | https://en.wikipedia.org/wiki/Windbelt | The Windbelt is a wind power harvesting device invented by Shawn Frayne in 2004 for converting wind power to electricity. It consists of a flexible polymer ribbon stretched between supports transverse to the wind direction, with magnets glued to it. When the wind blows across it, the ribbon vibrates due to vortex shedding, similar to the action of an aeolian harp. The vibrating movement of the magnets induces current in nearby pickup coils by electromagnetic induction.
One prototype has powered two LEDs, a radio, and a clock (separately) using wind generated from a household fan. The cost of the materials was well under US$10. $2–$5 for 40 mW is a cost of $50–$125 per watt.
There are three sizes in development:
The microBelt, a 12 cm version. This could be put into production in around six months. It is expected to produce 1 milliwatt on average. Charging a pair of ideal rechargeable AA cells (2.5 Ah, 1.2 V) at this rate would take 6000 hours, or 250 days (see the charge-time sketch after this list).
The Windcell, a 1-metre version that could be used to power meshed WiFi repeaters, charge cellphones, or run LED lights. This could go into production within 18 to 24 months. It is hoped that a square metre panel at 6 m/s average windspeed can generate 10 W average.
An experimental 10-metre model that has no production date.
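A minimal Python sketch of the microBelt charge-time arithmetic cited above (ideal cells, constant 1 mW output, conversion losses ignored):

```python
def charge_time(n_cells, capacity_ah, cell_voltage, power_w):
    """Return (hours, days) to store the cells' energy at constant power."""
    energy_wh = n_cells * capacity_ah * cell_voltage  # stored energy, Wh
    hours = energy_wh / power_w
    return hours, hours / 24

print(charge_time(2, 2.5, 1.2, 0.001))  # (6000.0 hours, 250.0 days)
```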
The Windbelt's inventor, Shawn Frayne, was a winner of the 2007 Breakthrough Award from the publishers of the magazine, Popular Mechanics. He is trying to make the Windbelt cheaper.
The inventor's claims that the device is 10–30 times more efficient than small wind turbines have been refuted by tests. The microWindbelt could generate 0.2 mW at a wind speed of 3.5 m/s and 5 mW at 7.5 m/s, which represent efficiencies (ηCp) of 0.21% and 0.53% respectively. Wind turbines typically have efficiencies of 1% to 10%. Since the Windbelt, a number of other "flutter" wind harvester devices have been designed, but like the Windbelt almost all have efficiencies below those of turbine machines.
Footnotes
References
Instructions for building a proof-of-concept windbelt-powered lamp with parts recovered from an old hard drive
Windbelt cheap micro wind generator REUK.co.uk 17 October 2007
Windbelt - reinventing wind power physics.org 22 April 2010
External links
Humdinger Wind, company founded by the inventor, and holder of the patents.
Energy Harvesting Journal, 30 Mar 2010 Wind energy harvester from Humdinger
Wind power
Electrical generators
Energy harvesting | Windbelt | [
"Physics",
"Technology"
] | 559 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
13,733,666 | https://en.wikipedia.org/wiki/Chronozone | A chronozone or chron is a unit in chronostratigraphy, defined by events such as
geomagnetic reversals (magnetozones), or based on the presence of specific fossils (biozone or biochronozone).
According to the International Commission on Stratigraphy, the term "chronozone" refers to the rocks formed during a particular time period, while "chron" refers to that time period.
Although non-hierarchical, chronozones have been recognized as useful markers or benchmarks of time in the rock record. Chronozones are non-hierarchical in that they do not need to correspond across geographic or geologic boundaries, nor be equal in length. An early constraint, however, required that a chronozone be defined as smaller than a geological stage. Another early use was hierarchical, in that Harland et al. (1989) used "chronozone" for the slice of time smaller than a faunal stage defined in biostratigraphy.
The ICS superseded these earlier usages in 1994.
The key factor in designating an internationally acceptable chronozone is whether the overall fossil column is clear, unambiguous, and widespread. Some accepted chronozones contain others, and certain larger chronozones have been designated which span whole defined geological time units, both large and small.
For example, the chronozone Pliocene is a subset of the chronozone Neogene, and the chronozone Pleistocene is a subset of the chronozone Quaternary.
See also
Body form
Chronology (geology)
European Mammal Neogene
Geologic time scale
North American Land Mammal Age
Type locality (geology)
List of GSSPs
References
Hedberg, H.D., (editor), International stratigraphic guide: A guide to stratigraphic classification, terminology, and procedure, New York, John Wiley and Sons, 1976
External links
International Stratigraphic Chart from the International Commission on Stratigraphy
USA National Park Service
Washington State University
Web Geological Time Machine
The Global Boundary Stratotype Section and Point (GSSP): overview
Chart of The Global Boundary Stratotype Sections and Points (GSSP): chart
Geotime chart displaying geologic time periods compared to the fossil record
Chronostratigraphy
Geochronology
Geologic time scales
Geology terminology
Geological units
Historical geology
Paleogeography
Paleobiology
Stratigraphy
Units of time | Chronozone | [
"Physics",
"Mathematics",
"Biology"
] | 507 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Paleobiology",
"Spacetime",
"Units of measurement"
] |
13,733,769 | https://en.wikipedia.org/wiki/Darcy%20friction%20factor%20formulae | In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the Darcy friction factor, a dimensionless quantity used in the Darcy–Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow.
The Darcy friction factor is also known as the Darcy–Weisbach friction factor, resistance coefficient or simply friction factor; by definition it is four times larger than the Fanning friction factor.
Notation
In this article, the following conventions and definitions are to be understood:
The Reynolds number Re is taken to be Re = V D / ν, where V is the mean velocity of fluid flow, D is the pipe diameter, and ν is the kinematic viscosity μ / ρ, with μ the fluid's dynamic viscosity and ρ the fluid's density.
The pipe's relative roughness ε / D, where ε is the pipe's effective roughness height and D the pipe (inside) diameter.
f stands for the Darcy friction factor. Its value depends on the flow's Reynolds number Re and on the pipe's relative roughness ε / D.
The log function is understood to be base-10 (as is customary in engineering fields): if x = log(y), then y = 10^x.
The ln function is understood to be base-e: if x = ln(y), then y = e^x.
Flow regime
Which friction factor formula may be applicable depends upon the type of flow that exists:
Laminar flow
Transition between laminar and turbulent flow
Fully turbulent flow in smooth conduits
Fully turbulent flow in rough conduits
Free surface flow.
Transition flow
Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime.
Turbulent flow in smooth conduits
The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is sometimes used in rough pipes because of its simplicity. The Blasius correlation is valid up to a Reynolds number of 100000.
Turbulent flow in rough conduits
The Darcy friction factor for fully turbulent flow (Reynolds number greater than 4000) in rough conduits can be modeled by the Colebrook–White equation.
Free surface flow
The last formula in the Colebrook equation section of this article is for free surface flow. The approximations elsewhere in this article are not applicable for this type of flow.
Choosing a formula
Before choosing a formula it is worth knowing that in the paper on the Moody chart, Moody stated the accuracy is about ±5% for smooth pipes and ±10% for rough pipes. If more than one formula is applicable in the flow regime under consideration, the choice of formula may be influenced by one or more of the following:
Required accuracy
Speed of computation required
Available computational technology:
calculator (minimize keystrokes)
spreadsheet (single-cell formula)
programming/scripting language (subroutine).
Colebrook–White equation
The phenomenological Colebrook–White equation (or Colebrook equation) expresses the Darcy friction factor f as a function of Reynolds number Re and pipe relative roughness ε / Dh, fitting the data of experimental studies of turbulent flow in smooth and rough pipes.
The equation can be used to (iteratively) solve for the Darcy–Weisbach friction factor f.
For a conduit flowing completely full of fluid at Reynolds numbers greater than 4000, it is expressed as:

1/√f = −2 log(ε / (3.7 Dh) + 2.51 / (Re √f))

or

1/√f = −2 log(ε / (14.8 Rh) + 2.51 / (Re √f))
where:
Dh = hydraulic diameter (m, ft) – for fluid-filled, circular conduits, Dh = D = inside diameter
Rh = hydraulic radius (m, ft) – for fluid-filled, circular conduits, Rh = D/4 = (inside diameter)/4
Note: Some sources use a constant of 3.71 in the denominator for the roughness term in the first equation above.
Solving
The Colebrook equation is usually solved numerically due to its implicit nature. Recently, the Lambert W function has been employed to obtain an exact solution in an explicit reformulation of the Colebrook equation.
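Because the right-hand side depends only weakly on f, simple fixed-point iteration on 1/√f converges quickly. A minimal Python sketch of such a numerical solution (the starting guess and tolerance are arbitrary choices):

```python
import math

def colebrook_f(re, rel_rough, tol=1e-12, max_iter=100):
    """Darcy friction factor from the Colebrook-White equation,
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 8.0  # initial guess (corresponds to f ~ 0.016)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new
    return 1.0 / x ** 2

print(colebrook_f(1e5, 1e-4))  # ~0.0185 for Re = 10^5, eps/D = 10^-4
```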
Expanded forms
Additional, mathematically equivalent forms of the Colebrook equation are:

1/√f = 1.7384... − 2 log(2 ε / Dh + 18.574 / (Re √f))
where:
1.7384... = 2 log (2 × 3.7) = 2 log (7.4)
18.574 = 2.51 × 3.7 × 2
and

1/√f = 1.1364... − 2 log(ε / Dh + 9.287 / (Re √f))
where:
1.1364... = 1.7384... − 2 log (2) = 2 log (7.4) − 2 log (2) = 2 log (3.7)
9.287 = 18.574 / 2 = 2.51 × 3.7.
The additional equivalent forms above assume that the constants 3.7 and 2.51 in the formula at the top of this section are exact. The constants are probably values which were rounded by Colebrook during his curve fitting; but they are effectively treated as exact when comparing (to several decimal places) results from explicit formulae (such as those found elsewhere in this article) to the friction factor computed via Colebrook's implicit equation.
Equations similar to the additional forms above (with the constants rounded to fewer decimal places, or perhaps shifted slightly to minimize overall rounding errors) may be found in various references. It may be helpful to note that they are essentially the same equation.
Free surface flow
Another form of the Colebrook–White equation exists for free surfaces. Such a condition may exist in a pipe that is flowing partially full of fluid; that form of the equation is valid only for turbulent flow. Another approach for estimating f in free surface flows, valid under all the flow regimes (laminar, transition and turbulent), expresses f through the Lambert W function in terms of two parameters, a and b, which depend on the Reynolds number Reh and on the hydraulic scale of the flow, where h is the characteristic hydraulic length (hydraulic radius for 1D flows or water depth for 2D flows) and Rh is the hydraulic radius (for 1D flows) or the water depth (for 2D flows).
Approximations of the Colebrook equation
Haaland equation
The Haaland equation was proposed in 1983 by Professor S.E. Haaland of the Norwegian Institute of Technology. It is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation, but the discrepancy from experimental data is well within the accuracy of the data.
The Haaland equation is expressed:

1/√f = −1.8 log((ε / (3.7 D))^1.11 + 6.9 / Re)
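Unlike the Colebrook equation, the Haaland form can be evaluated directly, with no iteration; a minimal Python sketch:

```python
import math

def haaland_f(re, rel_rough):
    """Darcy friction factor via the Haaland approximation."""
    inv_sqrt_f = -1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / re)
    return inv_sqrt_f ** -2

print(haaland_f(1e5, 1e-4))  # ~0.0183, close to the Colebrook value
```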
Swamee–Jain equation
The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation:

f = 0.25 / (log(ε / (3.7 D) + 5.74 / Re^0.9))^2
Serghides's solution
Serghides's solution is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. It was derived using Steffensen's method.
The solution involves calculating three intermediate values and then substituting those values into a final equation:

A = −2 log(ε / (3.7 D) + 12 / Re)

B = −2 log(ε / (3.7 D) + 2.51 A / Re)

C = −2 log(ε / (3.7 D) + 2.51 B / Re)

f = (A − (B − A)^2 / (C − 2B + A))^−2
The equation was found to match the Colebrook–White equation within 0.0023% for a test set with a 70-point matrix consisting of ten relative roughness values (in the range 0.00004 to 0.05) by seven Reynolds numbers (2500 to 10^8).
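A minimal Python sketch of the three-step substitution:

```python
import math

def serghides_f(re, rel_rough):
    """Darcy friction factor via Serghides's explicit approximation."""
    s = rel_rough / 3.7
    a = -2.0 * math.log10(s + 12.0 / re)
    b = -2.0 * math.log10(s + 2.51 * a / re)
    c = -2.0 * math.log10(s + 2.51 * b / re)
    return (a - (b - a) ** 2 / (c - 2.0 * b + a)) ** -2

print(serghides_f(1e5, 1e-4))  # ~0.0185, matching Colebrook very closely
```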
Goudar–Sonnad equation
The Goudar equation is the most accurate approximation for solving directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation.
Brkić solution
Brkić shows one approximation of the Colebrook equation based on the Lambert W-function
The equation was found to match the Colebrook–White equation within 3.15%.
Brkić-Praks solution
Brkić and Praks show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function.
The equation was found to match the Colebrook–White equation within 0.0497%.
Praks-Brkić solution
Praks and Brkić show one approximation of the Colebrook equation based on the Wright ω-function, a cognate of the Lambert W-function.
The equation was found to match the Colebrook–White equation within 0.0012%.
Niazkar's solution
Since Serghides's solution was found to be one of the most accurate approximations of the implicit Colebrook–White equation, Niazkar modified it to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe.
Niazkar's solution was found to be the most accurate correlation based on a comparative analysis conducted in the literature among 42 different explicit equations for estimating Colebrook friction factor.
Blasius correlations
Early approximations for smooth pipes by Paul Richard Heinrich Blasius in terms of the Darcy–Weisbach friction factor are given in one article of 1913:

f = 0.3164 Re^(−1/4).
Johann Nikuradse in 1932 proposed that this corresponds to a power law correlation for the fluid velocity profile.
Mishra and Gupta in 1979 proposed a correction for curved or helically coiled tubes, taking into account the equivalent curve radius, Rc:

f = 0.316 Re^(−0.25) + 0.0075 √(D / (2 Rc)),

with

Rc = R (1 + (H / (2 π R))^2),
where f is a function of:
Pipe diameter, D (m, ft)
Curve radius, R (m, ft)
Helicoidal pitch, H (m, ft)
Reynolds number, Re (dimensionless)
valid for:
Retr < Re < 10^5
6.7 < 2Rc/D < 346.0
0 < H/D < 25.4
Swamee equation
The Swamee equation is used to solve directly for the Darcy–Weisbach friction factor (f) for a full-flowing circular pipe for all flow regimes (laminar, transitional, turbulent). It is an exact solution for the Hagen–Poiseuille equation in the laminar flow regime and an approximation of the implicit Colebrook–White equation in the turbulent regime with a maximum deviation of less than 2.38% over the specified range. Additionally, it provides a smooth transition between the laminar and turbulent regimes to be valid as a full-range equation, 0 < Re < 10^8.
Table of Approximations
The following table lists historical approximations to the Colebrook–White relation for pressure-driven flow. Churchill equation (1977) is the only equation that can be evaluated for very slow flow (Reynolds number < 1), but the Cheng (2008), and Bellos et al. (2018) equations also return an approximately correct value for friction factor in the laminar flow region (Reynolds number < 2300). All of the others are for transitional and turbulent flow only.
References
Further reading
Brkić, Dejan; Praks, Pavel (2019). "Accurate and efficient explicit approximations of the Colebrook flow friction equation based on the Wright ω-function". Mathematics 7 (1): article 34. https://doi.org/10.3390/math7010034. ISSN 2227-7390
Praks, Pavel; Brkić, Dejan (2020). "Review of new flow friction equations: Constructing Colebrook’s explicit correlations accurately". Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería 36 (3): article 41. https://doi.org/10.23967/j.rimni.2020.09.001. ISSN 1886-158X (online version) - ISSN 0213-1315 (printed version)
External links
Web-based calculator of Darcy friction factors by Serghides' solution.
Open source pipe friction calculator.
Equations of fluid dynamics
Piping
Fluid mechanics
"Physics",
"Chemistry",
"Engineering"
] | 2,551 | [
"Equations of fluid dynamics",
"Equations of physics",
"Building engineering",
"Chemical engineering",
"Civil engineering",
"Mechanical engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
13,735,033 | https://en.wikipedia.org/wiki/Helium%20atom%20scattering | Helium atom scattering (HAS) is a surface analysis technique used in materials science. It provides information about the surface structure and lattice dynamics of a material by measuring the diffracted atoms from a monochromatic helium beam incident on the sample.
History
The first recorded helium diffraction experiment was completed in 1930 by Immanuel Estermann and Otto Stern on the (100) crystal face of lithium fluoride. This experimentally established the feasibility of atom diffraction when the de Broglie wavelength, λ, of the impinging atoms is on the order of the interatomic spacing of the material. At the time, the major limit to the experimental resolution of this method was due to the large velocity spread of the helium beam. It wasn't until the development of high pressure nozzle sources capable of producing intense and strongly monochromatic beams in the 1970s that HAS gained popularity for probing surface structure. Interest in studying the collision of rarefied gases with solid surfaces was helped by a connection with aeronautics and space problems of the time. Plenty of studies showing the fine structures in the diffraction pattern of materials using helium atom scattering were published in the 1970s. However, it wasn't until a third generation of nozzle beam sources was developed, around 1980, that studies of surface phonons could be made by helium atom scattering. These nozzle beam sources were capable of producing helium atom beams with an energy resolution of less than 1meV, making it possible to explicitly resolve the very small energy changes resulting from the inelastic collision of a helium atom with the vibrational modes of a solid surface, so HAS could now be used to probe lattice dynamics. The first measurement of such a surface phonon dispersion curve was reported in 1981, leading to a renewed interest in helium atom scattering applications, particularly for the study of surface dynamics.
Basic principles
Surface sensitivity
Generally speaking, surface bonding is different from the bonding within the bulk of a material. In order to accurately model and describe the surface characteristics and properties of a material, it is necessary to understand the specific bonding mechanisms at work at the surface. To do this, one must employ a technique that is able to probe only the surface, we call such a technique "surface-sensitive". That is, the 'observing' particle (whether it be an electron, a neutron, or an atom) needs to be able to only 'see' (gather information from) the surface. If the penetration depth of the incident particle is too deep into the sample, the information it carries out of the sample for detection will contain contributions not only from the surface, but also from the bulk material. While there are several techniques that probe only the first few monolayers of a material, such as low-energy electron diffraction (LEED), helium atom scattering is unique in that it does not penetrate the surface of the sample at all! In fact, the scattering 'turnaround' point of the helium atom is 3-4 angstroms above the surface plane of atoms on the material. Therefore, the information carried out in the scattered helium atom comes solely from the very surface of the sample.
Helium at thermal energies can be modeled classically as scattering from a hard potential wall, with the location of scattering points representing a constant electron density surface. Since single scattering dominates the helium-surface interactions, the collected helium signal easily gives information on the surface structure without the complications of considering multiple electron scattering events (such as in LEED).
Scattering mechanism
The elastic one-dimensional interaction potential between the incident helium atom and an atom on the surface of the sample can be described qualitatively as follows.
This potential can be broken down into an attractive portion due to Van der Waals forces, which dominates at large separation distances, and a steep repulsive force due to electrostatic repulsion of the positive nuclei, which dominates at short distances. To modify the potential for a two-dimensional surface, a function is added to describe the surface atomic corrugations of the sample. The resulting three-dimensional potential can be modeled as a corrugated Morse potential with two terms: the first term is the laterally averaged surface potential, a potential well with a depth D at the minimum z = zm and a fitting parameter α; the second term is the repulsive potential modified by the corrugation function, ξ(x,y), which has the same periodicity as the surface, with fitting parameter β.
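The exact corrugated form differs between authors. As an illustrative sketch only, the following Python snippet assumes one common choice, modulating the repulsive exponential by a simple sinusoidal corrugation; the functional form, the corrugation function, and all parameter values here are assumptions for illustration, not a definitive potential.

```python
import math

def corrugated_morse(x, y, z, D=0.008, alpha=1.1, beta=0.05, zm=3.5, a=3.6):
    """Illustrative corrugated Morse potential (energies in eV, lengths in
    angstroms; all parameter values hypothetical).  A laterally averaged
    Morse well plus a repulsive term modulated by a sinusoidal corrugation
    xi(x, y) with the same period a as the surface lattice."""
    xi = math.cos(2 * math.pi * x / a) + math.cos(2 * math.pi * y / a)
    well = D * (math.exp(-2 * alpha * (z - zm)) - 2 * math.exp(-alpha * (z - zm)))
    repulsive_corr = beta * D * xi * math.exp(-2 * alpha * (z - zm))
    return well + repulsive_corr

print(corrugated_morse(0.0, 0.0, 3.5))  # near the bottom of the well
```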
Helium atoms, in general, can be scattered either elastically (with no energy transfer to or from the crystal surface) or inelastically through excitation or deexcitation of the surface vibrational modes (phonon creation or annihilation). Each of these scattering results can be used in order to study different properties of a material's surface.
Why use helium atoms?
There are several advantages to using helium atoms as compared with x-rays, neutrons, and electrons to probe a surface and study its structures and phonon dynamics. As mentioned previously, the lightweight helium atoms at thermal energies do not penetrate into the bulk of the material being studied. This means that in addition to being strictly surface-sensitive they are truly non-destructive to the sample. Their de Broglie wavelength is also on the order of the interatomic spacing of materials, making them ideal probing particles. Since they are neutral, helium atoms are insensitive to surface charges. As a noble gas, the helium atoms are chemically inert. When used at thermal energies, as is the usual scenario, the helium atomic beam is an inert probe (chemically, electrically, magnetically, and mechanically). It is therefore capable of studying the surface structure and dynamics of a wide variety of materials, including those with reactive or metastable surfaces. A helium atom beam can even probe surfaces in the presence of electromagnetic fields and during ultra-high vacuum surface processing without interfering with the ongoing process. Because of this, helium atoms can be useful to make measurements of sputtering or annealing, and adsorbate layer depositions. Finally, because the thermal helium atom has no rotational and vibrational degrees of freedom and no available electronic transitions, only the translational kinetic energy of the incident and scattered beam need be analyzed in order to extract information about the surface.
Instrumentation
The accompanying figure is a general schematic of a helium atom scattering experimental setup. It consists of a nozzle beam source, an ultra high vacuum scattering chamber with a crystal manipulator, and a detector chamber. Every system can have a different particular arrangement and setup, but most will have this basic structure.
Sources
The helium atom beam, with a very narrow energy spread of less than 1 meV, is created through free adiabatic expansion of helium at a pressure of ~200 bar into a low-vacuum chamber through a small ~5–10 μm nozzle. Depending on the system operating temperature range, typical helium atom energies produced can be 5–200 meV. A conical aperture between A and B called the skimmer extracts the center portion of the helium beam. At this point, the atoms of the helium beam should be moving with nearly uniform velocity. Also contained in section B is a chopper system, which is responsible for creating the beam pulses needed to generate the time-of-flight measurements to be discussed later.
Scattering chamber
The scattering chamber, area C, generally contains the crystal manipulator and any other analytical instruments that can be used to characterize the crystal surface. Equipment that can be included in the main scattering chamber includes a LEED screen (to make complementary measurements of the surface structure), an Auger analysis system (to determine the contamination level of the surface), a mass spectrometer (to monitor the vacuum quality and residual gas composition), and, for working with metal surfaces, an ion gun (for sputter cleaning of the sample surface). In order to maintain clean surfaces, the pressure in the scattering chamber needs to be in the range of 10⁻⁸ to 10⁻⁹ Pa. This requires the use of turbomolecular or cryogenic vacuum pumps.
Crystal manipulator
The crystal manipulator allows for at least three different motions of the sample. The azimuthal rotation allows the crystal to change the direction of the surface atoms, the tilt angle is used to set the normal of the crystal to be in the scattering plane, and the rotation of the manipulator around the z-axis alters the beam incidence angle. The crystal manipulator should also incorporate a system to control the temperature of the crystal.
Detector
After the beam scatters off the crystal surface, it goes into the detector area D. The most commonly used detector setup is an electron bombardment ion source followed by a mass filter and an electron multiplier. The beam is directed through a series of differential pumping stages that improve the signal-to-noise ratio before it hits the detector. A time-of-flight analyzer can follow the detector to take energy-loss measurements.
Elastic measurements
Under conditions for which elastic diffractive scattering dominates, the relative angular positions of the diffraction peaks reflect the geometric properties of the surface being examined. That is, the locations of the diffraction peaks reveal the symmetry of the two-dimensional space group that characterizes the observed surface of the crystal. The width of the diffraction peaks reflects the energy spread of the beam. The elastic scattering is governed by two kinematic conditions - conservation of energy and conservation of the momentum component parallel to the crystal surface:
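In symbols, with K the wavevector component parallel to the surface and G a surface reciprocal lattice vector, these two conditions take the standard form:

\[
|\mathbf{k}_G|^2 = |\mathbf{k}_i|^2, \qquad \mathbf{K}_G = \mathbf{K}_i + \mathbf{G}.
\]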
Here G is a reciprocal lattice vector, k_G and k_i are the final and initial (incident) wave vectors of the helium atom. The Ewald sphere construction will determine the diffracted beams to be seen and the scattering angles at which they will appear. A characteristic diffraction pattern will appear, determined by the periodicity of the surface, in a similar manner to that seen for Bragg scattering in electron and x-ray diffraction. Most helium atom scattering studies will scan the detector in a plane defined by the incoming atomic beam direction and the surface normal, reducing the Ewald sphere to a circle of radius R = k_0 intersecting only reciprocal lattice rods that lie in the scattering plane as shown here:
The intensities of the diffraction peaks provide information about the static gas-surface interaction potentials. Measuring the diffraction peak intensities under different incident beam conditions can reveal the surface corrugation (the surface electron density) of the outermost atoms on the surface.
Note that the detection of the helium atoms is much less efficient than for electrons, so the scattered intensity can only be determined for one point in k-space at a time. For an ideal surface, there should be no elastic scattering intensity between the observed diffraction peaks. If there is intensity seen here, it is due to a surface imperfection, such as steps or adatoms. From the angular position, width and intensity of the peaks, information is gained regarding the surface structure and symmetry, and the ordering of surface features.
Inelastic measurements
The inelastic scattering of the helium atom beam reveals the surface phonon dispersion for a material. At scattering angles far away from the specular or diffraction angles, the scattering intensity of the ordered surface is dominated by inelastic collisions.
In order to study the inelastic scattering of the helium atom beam due only to single-phonon contributions, an energy analysis needs to be made of the scattered atoms. The most popular way to do this is through the use of time-of-flight (TOF) analysis. The TOF analysis requires the beam to be pulsed through the mechanical chopper, producing collimated beam 'packets' that have a 'time-of-flight' (TOF) to travel from the chopper to the detector. The beams that scatter inelastically will lose some energy in their encounter with the surface and therefore have a different velocity after scattering than they were incident with. The creation or annihilation of surface phonons can be measured, therefore, by the shifts in the energy of the scattered beam. By changing the scattering angles or incident beam energy, it is possible to sample inelastic scattering at different values of energy and momentum transfer, mapping out the dispersion relations for the surface modes. Analyzing the dispersion curves reveals sought-after information about the surface structure and bonding. A TOF analysis plot would show intensity peaks as a function of time. The main peak (with the highest intensity) is that for the unscattered helium beam 'packet'. A peak to the left is that for the annihilation of a phonon. If a phonon creation process occurred, it would appear as a peak to the right:
The qualitative sketch above shows what a time-of-flight plot might look like near a diffraction angle. However, as the crystal rotates away from the diffraction angle, the elastic (main) peak drops in intensity. The intensity never shrinks to zero even far from diffraction conditions, however, due to incoherent elastic scattering from surface defects. The intensity of the incoherent elastic peak and its dependence on scattering angle can therefore provide useful information about surface imperfections present on the crystal.
The kinematics of the phonon annihilation or creation process are extremely simple - conservation of energy and momentum can be combined to yield equations for the energy exchange and momentum exchange during the collision process. This inelastic scattering process is described as the creation or annihilation of a phonon of energy ħω and wavevector Q. The vibrational modes of the lattice can then be described by the dispersion relations ω(Q), which give the possible phonon frequencies ω as a function of the phonon wavevector Q.
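Written out, with K again the parallel wavevector component and Q the phonon wavevector, the single-phonon conservation laws take the standard form:

\[
\Delta E = E_f - E_i = \pm\hbar\omega(\mathbf{Q}), \qquad \mathbf{K}_f - \mathbf{K}_i = \mathbf{Q} + \mathbf{G},
\]

with the plus sign corresponding to phonon annihilation and the minus sign to phonon creation.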
In addition to detecting surface phonons, because of the low energy of the helium beam, low-frequency vibrations of adsorbates can be detected as well, leading to the determination of their potential energy.
References
Other
E. Hulpke (Ed.), Helium Atom Scattering from Surfaces, Springer Series in Surface Sciences 27 (1992)
G. Scoles (Ed.), Atomic and Molecular Beam Methods, Vol. 2, Oxford University Press, New York (1992)
J. B. Hudson, Surface Science - An Introduction, John Wiley & Sons, Inc, New York (1998)
Materials science
Scientific techniques | Helium atom scattering | [
"Physics",
"Materials_science",
"Engineering"
] | 2,964 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
13,737,048 | https://en.wikipedia.org/wiki/Bioretention | Bioretention is the process in which contaminants and sedimentation are removed from stormwater runoff. The main objective of the bioretention cell is to attenuate peak runoff as well as to remove stormwater runoff pollutants.
Construction of a bioretention area
Stormwater is first directed into the designed treatment area, which conventionally consists of a sand bed (which serves as a transition to the actual soil), a filter media layer (which consists of layered materials of various composition), and plants atop the filter media. Various soil amendments, such as water treatment residue (WTR), coconut husk, and biochar, have been proposed over the years; these materials were reported to enhance performance in terms of pollutant removal. Runoff passes first over or through the sand bed, which slows the runoff's velocity and distributes it evenly along the length of the ponding area, which consists of a surface organic layer and/or groundcover and the underlying planting soil. Stored water in the bioretention area's planting soil exfiltrates over a period of days into the underlying soils.
Filtration
Each of the components of the bioretention area is designed to perform a specific function. The grass buffer strip reduces incoming runoff velocity and filters particulates from the runoff. The sand bed also reduces the velocity, filters particulates, and spreads flow over the length of the bioretention area. Aeration and drainage of the planting soil are provided by the deep sand bed. The ponding area provides a temporary storage location for runoff prior to its evaporation or infiltration. Some particulates not filtered out by the grass filter strip or the sand bed settle within the ponding area.
The organic or mulch layer also filters pollutants and provides an environment conducive to the growth of microorganisms, which degrade petroleum-based products and other organic material. This layer acts in a similar way to the leaf litter in a forest and prevents the erosion and drying of underlying soils. Planted groundcover reduces the potential for erosion as well, slightly more effectively than mulch. The maximum sheet flow velocity prior to erosive conditions is 0.3 meters per second (1 foot per second) for planted groundcover and 0.9 meters per second (3 feet per second) for mulch.
The clay in the planting soil provides adsorption sites for hydrocarbons, heavy metals, nutrients and other pollutants. Stormwater storage is also provided by the voids in the planting soil. The stored water and nutrients in the water and soil are then available to the plants for uptake. The layout of the bioretention area is determined after site constraints such as location of utilities, underlying soils, existing vegetation, and drainage are considered. Sites with loamy sand soils are especially appropriate for bioretention because the excavated soil can be backfilled and used as the planting soil, thus eliminating the cost of importing planting soil. An unstable surrounding soil stratum and soils with a clay content greater than 25 percent may preclude the use of bioretention, as would a site with slopes greater than 20 percent or a site with mature trees that would be removed during construction of the best management practices.
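The numeric site constraints quoted above can be encoded as a simple screening check. A minimal sketch (the function name and inputs are illustrative; the thresholds are the ones stated in this section):

```python
# Screen a candidate site against the bioretention constraints above.
def bioretention_site_ok(clay_fraction: float, slope: float,
                         removes_mature_trees: bool) -> bool:
    """clay_fraction and slope are given as fractions, e.g. 0.25 for 25%."""
    return (clay_fraction <= 0.25          # >25% clay may preclude bioretention
            and slope <= 0.20              # >20% slope may preclude bioretention
            and not removes_mature_trees)  # construction should spare mature trees

print(bioretention_site_ok(0.10, 0.05, False))  # True
```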
Heavy metal remediation
Contaminant trace metals such as zinc, lead, and copper are found in stormwater runoff from impervious surfaces (e.g. roadways and sidewalks). Treatment systems such as rain gardens and stormwater planters utilize a bioretention layer to remove heavy metals in stormwater runoff. Dissolved forms of heavy metals may bind to sediment particles in the roadway that are then captured by the bioretention system. Additionally, heavy metals may adsorb to soil particles in the bioretention media as the runoff filters through. In laboratory experiments, bioretention cells removed 94%, 88%, 95%, and >95% of zinc, copper, lead, and cadmium, respectively from water with metal concentrations typical of stormwater runoff. While this is a great benefit for water quality improvement, bioretention systems have a finite capacity for heavy metal removal. This will ultimately control the lifetime of bioretention systems, especially in areas with high heavy metal loads.
Metal removal by bioretention cells in cold climates was similar or slightly lower than that in warmer environments. Plants are less active in colder seasons, suggesting that most of the heavy metals remain in the bioretention media rather than being taken up by plant roots. Therefore, removal and replacement of the bioretention layer will become necessary in areas with heavy metal pollutants in stormwater runoff to extend the life of the treatment system.
See also
Bioswale
Constructed wetland
Groundwater recharge
Infiltration basin
Organisms involved in water purification
Phytoremediation
Rain garden
Tree box filter
Urban runoff
References
Sustainable design
Environmental engineering
Environmental soil science
Hydrology and urban planning
Landscape architecture
Sustainable gardening
Stormwater management
Water conservation | Bioretention | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,034 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Phytoremediation plants",
"Landscape architecture",
"Water pollution",
"Biodegradation",
"Ecological techniques",
"Civil engineering",
"Hydrology and urban planning",
"Environmental engineering",
"Bioremediatio... |
4,305,411 | https://en.wikipedia.org/wiki/Vasopressin%20receptor%201B | Vasopressin V1b receptor (V1BR) also known as vasopressin 3 receptor (VPR3) or antidiuretic hormone receptor 1B is a protein that in humans is encoded by the AVPR1B (arginine vasopressin receptor 1B) gene.
V1BR acts as a receptor for vasopressin. AVPR1B belongs to the subfamily of G protein-coupled receptors. Its activity is mediated by G proteins which stimulate a phosphatidylinositol-calcium second messenger system. It is a major contributor to homeostasis and the control of water, glucose, and salts in the blood. Arginine vasopressin has four receptors, each of which are located in different tissues and have specific functions. AVPR1b is a G protein-coupled pituitary receptor that has only recently been characterized because of its rarity.
It has been found that the 420-amino-acid sequence of the AVPR1B receptor shares the most overall similarity with the AVPR1A, AVPR2 and oxytocin receptors. AVPR1B maps to chromosome region 1q32 and is a member of the vasopressin/oxytocin receptor subfamily.
Tissue distribution
AVPR1B was initially described as a novel vasopressin receptor located in the anterior pituitary, where it stimulates ACTH release. Subsequent studies have shown that it is also present in the brain and some peripheral tissues.
Clinical significance
Behavioral
Inactivation of the Avpr1b gene in mice (knockout) produces mice with greatly reduced aggression and a reduced ability to recognize recently investigated mice. Defensive and predatory behaviours appear normal in these knockout mice, but there is evidence that social motivation or awareness is reduced. The AVPR1B antagonist SSR149415 has been shown to have anti-aggressive actions in hamsters and antidepressant-like and anxiolytic-like effects in rats. A single nucleotide polymorphism (SNP) has been associated with susceptibility to depression in humans.
Metabolic
Various stress-induced elevations of ACTH are blunted in the Avpr1b knockout mouse.
Oncology
AVPR1B is expressed at high levels in ACTH-secreting pituitary adenomas as well as in bronchial carcinoids responsible for the ectopic ACTH syndrome.
Ligands
Nelivaptan (SSR149415) and D-[Leu4-Lys8]-vasopressin are a specific antagonist and agonist for the vasopressin 1b receptor, respectively.
Function
AVPR1B is found in different parts of the body and thus has several influences and regulatory actions. Arginine vasopressin influences several symptoms related to affective disorders including significant memory processes, pain sensitivity, synchronization of biological rhythms and the timing and quality of REM sleep. Studies have shown that AVPR1B deficiencies produce behavioral changes that can be reversed when the peptide is replaced. These effects are expressed through contact with specific plasma membrane receptors. AVPR1B is responsible for fueling the effects of vasopressin on ACTH release. This interaction takes place as Arginine Vasopressin works with corticotropin releasing hormone to stimulate the pituitary gland to secrete ACTH. AVPR1b is then responsible for mediating the stimulatory effect of vasopressin on ACTH release.
Several G proteins are also involved in the signal transduction pathways linked with AVPR1B. These relationships depend on the level of receptor expression and concentration of vasopressin. For example, AVPR1B causes secretion of ACTH from the anterior pituitary cells in a dose-dependent relationship by activating protein kinase C via the Gq/11 protein.
Application
There have been several experiments which have studied these interactions further and revealed AVPR1B's role in psychological disorders and regulatory functions. Haplotypes of AVPR1B are associated with increased protective effects against recurrent major depression. AVPR1B has also been associated with higher cortisol responses to psychosocial stress in children with psychiatric disorders compared with carriers of glucocorticoid receptor gene. AVPR1b has also shown involvement in the regulation of brain water content and cerebral edema. This was revealed as increased levels of AVPR1B mRNA in the choroid plexus were discovered as a result of increased plasma osmolality. The increase after a reduction of brain water content from salt water loading indicated AVPR1B's role in the neuroendocrine feedback loop maintaining normal central nervous system fluid balance.
References
External links
G protein-coupled receptors | Vasopressin receptor 1B | [
"Chemistry"
] | 989 | [
"G protein-coupled receptors",
"Signal transduction"
] |
4,311,169 | https://en.wikipedia.org/wiki/Verifiable%20secret%20sharing | In cryptography, a secret sharing scheme is verifiable if auxiliary information is included that allows players to verify their shares as consistent. More formally, verifiable secret sharing ensures that even if the dealer is malicious there is a well-defined secret that the players can later reconstruct. (In standard secret sharing, the dealer is assumed to be honest.)
The concept of verifiable secret sharing (VSS) was first introduced in 1985 by Benny Chor, Shafi Goldwasser, Silvio Micali and Baruch Awerbuch.
In a VSS protocol a distinguished player who wants to share the secret is referred to as the dealer. The protocol consists of two phases: a sharing phase and a reconstruction phase.
Sharing: Initially the dealer holds secret as input and each player holds an independent random input. The sharing phase may consist of several rounds. At each round each player can privately send messages to other players and can also broadcast a message. Each message sent or broadcast by a player is determined by its input, its random input and messages received from other players in previous rounds.
Reconstruction: In this phase each player provides its entire view from the sharing phase and a reconstruction function is applied and is taken as the protocol's output.
An alternative definition given by Oded Goldreich defines VSS as a secure multi-party protocol for computing the randomized functionality corresponding to some (non-verifiable) secret sharing scheme. This definition is stronger than that of the other definitions and is very convenient to use in the context of general secure multi-party computation.
Verifiable secret sharing is important for secure multiparty computation. Multiparty computation is typically accomplished by making secret shares of the inputs, and manipulating the shares to compute some function. To handle "active" adversaries (that is, adversaries that corrupt nodes and then make them deviate from the protocol), the secret sharing scheme needs to be verifiable to prevent the deviating nodes from throwing off the protocol.
Feldman's scheme
A commonly used example of a simple VSS scheme is the protocol by Paul Feldman, which is based on Shamir's secret sharing scheme combined with any encryption scheme which satisfies a specific homomorphic property (that is not necessarily satisfied by all homomorphic encryption schemes).
The following description gives the general idea, but is not secure as written. (Note, in particular, that the published value g^s leaks information about the dealer's secret s.)
First, a cyclic group G of prime order q, along with a generator g of G, is chosen publicly as a system parameter. The group G must be chosen such that computing discrete logarithms is hard in this group. (Typically, one takes an order-q subgroup of (Z/pZ)×, where q is a prime dividing p − 1.)
The dealer then computes (and keeps secret) a random polynomial P of degree t with coefficients in Zq, such that P(0) = s, where s is the secret. Each of the n share holders will receive a value P(1), ..., P(n) modulo q. Any t + 1 share holders can recover the secret s by using polynomial interpolation modulo q, but any set of at most t share holders cannot. (In fact, at this point any set of at most t share holders has no information about s.)
So far, this is exactly Shamir's scheme. To make these shares verifiable, the dealer distributes commitments to the coefficients of P modulo q. If P(x) = s + a_1 x + ... + a_t x^t, then the commitments that must be given are:
c_0 = g^s,
c_1 = g^(a_1),
...
c_t = g^(a_t).
Once these are given, any party can verify their share. For instance, to verify that its share v_i equals P(i) modulo q, party i can check that
g^(v_i) = c_0 · c_1^i · c_2^(i^2) · ... · c_t^(i^t).
This scheme is, at best, secure against computationally bounded adversaries, since its security rests on the intractability of computing discrete logarithms. Pedersen later proposed a scheme in which no information about the secret is revealed even to an adversary with unlimited computing power.
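The check above can be exercised end to end with toy numbers. The following is a minimal, insecure sketch (the parameters p = 23, q = 11, g = 2 and the polynomial coefficients are arbitrary illustrative choices; real deployments use cryptographically large groups):

```python
# Toy Feldman VSS: deal shares of a secret and verify them against commitments.
p, q, g = 23, 11, 2          # q divides p - 1; g = 2 has order q modulo p
t, n = 2, 5                  # degree-t polynomial, n share holders
coeffs = [7, 3, 10]          # P(x) = 7 + 3x + 10x^2 over Z_q; the secret is 7

def P(x):
    return sum(a * x**j for j, a in enumerate(coeffs)) % q

shares = {i: P(i) for i in range(1, n + 1)}
commitments = [pow(g, a, p) for a in coeffs]       # c_j = g^(a_j) mod p

def verify(i, share):
    lhs = pow(g, share, p)                         # g^(P(i))
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(i, j, q), p) % p    # product of c_j^(i^j)
    return lhs == rhs                              # exponents live in Z_q

assert all(verify(i, v) for i, v in shares.items())
```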
Benaloh's scheme
Once n shares are distributed to their holders, each holder should be able to verify that all shares are collectively t-consistent (i.e., any subset t of n shares will yield the same, correct, polynomial without exposing the secret).
In Shamir's secret sharing scheme the shares are t-consistent if and only if the interpolation of the points yields a polynomial of degree at most t.
Based on that observation, Benaloh's protocol can be shown to allow the share holders to perform the required validation while also verifying the dealer's authenticity and integrity.
A second observation is that, if the degree of the sum of two polynomials F and G is less than or equal to t, then either the degrees of both F and G are less than or equal to t, or the degrees of both F and G are greater than t. The claim holds because a sum in which exactly one summand has degree greater than t must itself have degree greater than t. For instance, with t = 1:
case 1:
F = x, G = 2x + 1, F + G = 3x + 1 (both degrees ≤ t, and so is the degree of the sum),
case 2:
F = x² + x, G = −x², F + G = x (both degrees > t, yet the leading terms cancel in the sum),
the case that is ruled out:
F = x, G = x², F + G = x² + x (one degree ≤ t and one > t forces the sum's degree above t).
Interactive proof: The following 5 steps verify the integrity of the dealer to the Share holders:
Dealer chooses polynomial P, distributes shares (as per Shamir's secret sharing scheme).
Dealer constructs a very large number (k) of random polynomials of degree t, and distributes shares.
Each share-holder chooses a random subset of m of the k polynomials.
The dealer reveals the m chosen polynomials in full, and also reveals the sums of P with the remaining polynomials, sharing those results as well.
Each share-holder or verifier ascertains that all revealed polynomials are degree-t, and corresponds to its own known share.
The secret s remains safe and unexposed.
These 5 steps will be done in a small number of iterations to achieve a high-probability result about the dealer's integrity.
Diagnosis 1: Because the degree of each revealed polynomial and sum is less than or equal to t and because the dealer reveals the other polynomials (step 4), the degree of the polynomial P must be less than or equal to t (second observation, case 1, with high probability when these steps are repeated in different iterations).
Diagnosis 2: One of the parameters of the problem was to avoid exposing the secret which we are attempting to verify. This property is kept through the use of algebraic homomorphism to perform validation. (A set of random polynomials of degree at most t together with a set of sums of P and other polynomials of degree at most t gives no useful information about P.)
Secret ballot elections
Verifiable secret sharing can be used to build end-to-end auditable voting systems.
Using the technique of verifiable secret sharing, one can solve the election problem described here.
In the election problem each voter can vote either 0 (to oppose) or 1 (for support), and the sum of all votes will determine election's result. For the election to execute, it is necessary to make sure that the following conditions are fulfilled:
The voters' privacy should not be compromised.
The election administrator must verify that no voter committed fraud.
If using verifiable secret sharing, n tellers will replace the single election administrator. Each voter will distribute one share of its secret vote to every one of the n tellers. This way the privacy of the voter is preserved and the first condition is satisfied.
Reconstruction of the election's result is easy, if there exist enough tellers to discover polynomial P.
The interactive proof can be generalized slightly to allow verification of the vote shares. Each voter will prove (in the distribution of the secret share phase) to the tellers that his vote is legitimate using the five steps of the interactive proof.
Round-optimal and efficient verifiable secret sharing
The round complexity of a VSS protocol is defined as the number of communication rounds in its sharing phase; reconstruction can always be done in a single round. There is no 1-round VSS with t ≥ 2, regardless of the number of players. Bounds on perfectly secure (not relying on a computational hardness assumption) and efficient (polynomial time) VSS protocols are established in the round-complexity papers listed in the bibliography.
See also
Secret sharing
Secure multiparty computation
Publicly Verifiable Secret Sharing
Verifiable computing
Bibliography
T. Rabin and M. Ben-Or, Verifiable secret sharing and multiparty protocols with honest majority. In Proceedings of the Twenty-First Annual ACM Symposium on theory of Computing (Seattle, Washington, United States, May 14–17, 1989).
Rosario Gennaro, Yuval Ishai, Eyal Kushilevitz, Tal Rabin, The Round Complexity of Verifiable Secret Sharing and Secure Multicast. In Proceedings of the thirty-third annual ACM symposium on Theory of computing ( Hersonissos, Greece, Pages: 580 - 589, 2001 )
Matthias Fitzi, Juan Garay, Shyamnath Gollakota, C. Pandu Rangan, and Kannan Srinathan, Round-Optimal and Efficient Verifiable Secret Sharing. Theory of Cryptography, Third Theory of Cryptography Conference, TCC 2006, ( New York, NY, USA, March 4–7, 2006 )
Oded Goldreich, Secure multi-party computation
Josh Cohen Benaloh, Secret Sharing Homomorphisms: Keeping Shares of a Secret. Proceedings on Advances in cryptology, CRYPTO '86. pp. 251–260, 1987
References
Notes
Cryptography | Verifiable secret sharing | [
"Mathematics",
"Engineering"
] | 1,952 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
9,701,718 | https://en.wikipedia.org/wiki/Dice-S%C3%B8rensen%20coefficient | The Dice-Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice and Thorvald Sørensen, who published in 1945 and 1948 respectively.
Name
The index is known by several other names, especially Sørensen–Dice index, Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are Sorenson, Soerenson and Sörenson, and all three can also be seen with the –sen ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).
Other names include:
F1 score
Czekanowski's binary (non-quantitative) index
Measure of genetic similarity
Zijdenbos similarity index, referring to a 1994 paper of Zijdenbos et al.
Formula
Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as
DSC = 2|X ∩ Y| / (|X| + |Y|)
where |X| and |Y| are the cardinalities of the two sets (i.e. the number of elements in each set).
The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets.
When applied to Boolean data, using the definition of true positive (TP), false positive (FP), and false negative (FN), it can be written as
DSC = 2TP / (2TP + FP + FN).
It is different from the Jaccard index which only counts true positives once in both the numerator and denominator. DSC is the quotient of similarity and ranges between 0 and 1. It can be viewed as a similarity measure over sets.
Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors a and b:
s_v = 2 |a · b| / (|a|² + |b|²)
which gives the same outcome over binary vectors and also gives a more general similarity metric over vectors in general terms.
For sets X and Y of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities:
DSC = 2|X ∩ Y| / (|X| + |Y|)
When taken as a string similarity measure, the coefficient may be calculated for two strings, x and y, using bigrams as follows:
s = 2 n_t / (n_x + n_y)
where n_t is the number of character bigrams found in both strings, n_x is the number of bigrams in string x and n_y is the number of bigrams in string y. For example, to calculate the similarity between:
night
nacht
We would find the set of bigrams in each word:
{ni,ig,gh,ht}
{na,ac,ch,ht}
Each set has four elements, and the intersection of these two sets has only one element: ht.
Inserting these numbers into the formula, we calculate s = (2 · 1) / (4 + 4) = 0.25.
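The bigram computation above is straightforward to implement. A minimal sketch that reproduces the "night"/"nacht" example (using sets of bigrams, which matches the example's counting):

```python
# Dice similarity over character bigrams; reproduces the example above.
def dice_bigrams(x: str, y: str) -> float:
    bx = {x[i:i + 2] for i in range(len(x) - 1)}   # bigrams of x
    by = {y[i:i + 2] for i in range(len(y) - 1)}   # bigrams of y
    return 2 * len(bx & by) / (len(bx) + len(by))

print(dice_bigrams("night", "nacht"))  # 0.25
```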
Continuous Dice Coefficient
For a discrete (binary) ground truth A and continuous measures B in the interval [0,1], the following formula can be used:
cDC = 2 Σᵢ aᵢbᵢ / (c Σᵢ aᵢ + Σᵢ bᵢ)
where aᵢ ∈ {0, 1} and bᵢ ∈ [0, 1] are the elements of A and B, respectively.
The scaling constant c can be computed as follows:
c = Σᵢ aᵢbᵢ / Σᵢ aᵢ sign(bᵢ).
If Σᵢ aᵢbᵢ = 0, which means no overlap between A and B, c is set to 1 arbitrarily.
Difference from Jaccard
This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value for the Sørensen–Dice coefficient S, one can calculate the respective Jaccard index value J and vice versa, using the equations J = S / (2 − S) and S = 2J / (1 + J).
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.
The function ranges between zero and one, like Jaccard. Unlike Jaccard, the corresponding difference function
d = 1 − DSC
is not a proper distance metric as it does not satisfy the triangle inequality. The simplest counterexample of this is given by the three sets {a}, {b} and {a, b}. We have d({a}, {b}) = 1 and d({a}, {a, b}) = d({b}, {a, b}) = 1/3. To satisfy the triangle inequality, the sum of any two sides must be greater than or equal to that of the remaining side. However, d({a}, {a, b}) + d({a, b}, {b}) = 1/3 + 1/3 = 2/3 < 1 = d({a}, {b}).
Applications
The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets). As compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers. Recently the Dice score (and its variations, e.g. logDice taking a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.
logDice is also used as part of the Mash Distance for genome and metagenome distance estimation
Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.
Abundance version
The expression is easily extended to abundance instead of presence/absence of species. This quantitative version is known by several names:
Quantitative Sørensen–Dice index
Quantitative Sørensen index
Quantitative Dice index
Bray–Curtis similarity (1 minus the Bray-Curtis dissimilarity)
Czekanowski's quantitative index
Steinhaus index
Pielou's percentage similarity
1 minus the Hellinger distance
Proportion of specific agreement or positive agreement
See also
Correlation
F1 score
Jaccard index
Hamming distance
Mantel test
Morisita's overlap index
Overlap coefficient
Renkonen similarity index
Tversky index
Universal adaptive strategy theory (UAST)
References
External links
Information retrieval evaluation
String metrics
Measure theory
Similarity measures | Dice-Sørensen coefficient | [
"Physics"
] | 1,221 | [
"Similarity measures",
"Physical quantities",
"Distance"
] |
9,702,340 | https://en.wikipedia.org/wiki/Salt%20fingering | Salt fingering is a mixing process, example of double diffusive instability, that occurs when relatively warm, salty water overlies relatively colder, fresher water. It is driven by the fact that heated water diffuses more readily than salty water. A small parcel of warm, salty water sinking downwards into a colder, fresher region will lose its heat before losing its salt, making the parcel of water increasingly denser than the water around it and sinking further. Likewise, a small parcel of colder, fresher water will be displaced upwards and gain heat by diffusion from surrounding water, which will then make it lighter than the surrounding waters, and cause it to rise further. Paradoxically, the fact that salinity diffuses less readily than temperature means that salinity mixes more efficiently than temperature due to the turbulence caused by salt fingers.
Salt fingering was first described mathematically by Professor Melvin Stern of Florida State University in 1960 and important field measurements of the process have been made by Raymond Schmitt of the Woods Hole Oceanographic Institution and Mike Gregg and Eric Kunze of the University of Washington, Seattle. A particularly interesting area for salt fingering is found in the Caribbean Sea, where it is responsible for producing a "staircase" of well-mixed layers a few metres in thickness that extend for hundreds of kilometres.
Pre-dating the work of Stern, a paper by the American oceanographer Henry Stommel discussed the creation of a large-scale salt finger in which a column of water would be surrounded by a membrane that would allow diffusion of temperature but not salinity. Once primed by the upward movement of the colder and fresher intermediate water, the resultant "perpetual salt fountain" would be able to draw energy (heat) from the local ocean water stratification.
References
External links
Salt Fingering
Physical oceanography | Salt fingering | [
"Physics"
] | 372 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
9,702,754 | https://en.wikipedia.org/wiki/Motion%20ratio | The motion ratio of a mechanism is the ratio of the displacement of the point of interest to that of another point.
The most common example is in a vehicle's suspension, where it is used to describe the displacement and forces in the springs and shock absorbers. The force in the spring is (roughly) the vertical force at the contact patch divided by the motion ratio, and the spring rate is the wheel rate divided by the motion ratio squared.
This is described as the installation ratio in the reference. Motion ratio is the more common term in the industry, but it is sometimes used to mean the inverse of the above definition.
Motion ratio in the suspension of a vehicle describes the amount of shock-absorber travel for a given amount of wheel travel. Mathematically, it is the ratio of shock travel to wheel travel. The amount of force transmitted to the vehicle chassis decreases as the motion ratio increases. A motion ratio close to one is desired in the vehicle for better ride and comfort. One should know the desired wheel travel of the vehicle before calculating the motion ratio, which depends largely on the type of track the vehicle will run on.
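A minimal sketch of the rate relation stated earlier in the article (spring rate = wheel rate divided by the motion ratio squared); the numbers are arbitrary illustrative values:

```python
# Relate wheel rate and spring rate through the motion ratio
# (MR = shock travel / wheel travel, as defined above).
def spring_rate(wheel_rate: float, motion_ratio: float) -> float:
    return wheel_rate / motion_ratio ** 2

print(spring_rate(30.0, 0.7))  # ~61.2 (e.g. N/mm) for a motion ratio of 0.7
```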
Selecting the appropriate ratio depends on multiple factors:
Bending moment: To reduce the bending moment the strut point should be close to the wheel.
Suspension stiffness: The suspension tends to stiffen as the inclination of the shock absorber to the horizontal approaches 90 degrees.
Half-shafts: In suspensions of driven wheels, wheel travel is in many cases constrained by the universal joints of the half-shafts. The motion ratio should be designed such that, at maximum bounce and rebound, the shock absorbers are the first components to bottom out by hitting their bump stops.
References
Automotive suspension technologies
Engineering ratios | Motion ratio | [
"Mathematics",
"Engineering"
] | 334 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
9,703,026 | https://en.wikipedia.org/wiki/Indium%20%28111In%29%20capromab%20pendetide | {{DISPLAYTITLE:Indium (111In) capromab pendetide}}
Indium (111In) capromab pendetide (trade name Prostascint) is used to image the extent of prostate cancer. Capromab is a mouse monoclonal antibody which recognizes a protein found on both prostate cancer cells and normal prostate tissue. It is linked to pendetide, a derivative of DTPA. Pendetide acts as a chelating agent for the radionuclide indium-111. Following an intravenous injection of Prostascint, imaging is performed using single-photon emission computed tomography (SPECT).
Early trials with yttrium (90Y) capromab pendetide were also conducted.
References
Radiopharmaceuticals
Antibody-drug conjugates
Indium compounds | Indium (111In) capromab pendetide | [
"Chemistry",
"Biology"
] | 182 | [
"Antibody-drug conjugates",
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
9,703,555 | https://en.wikipedia.org/wiki/Indium%20%28111In%29%20satumomab%20pendetide | {{DISPLAYTITLE:Indium (111In) satumomab pendetide}}
Indium (111In) satumomab pendetide (trade name OncoScint CR103) is a mouse monoclonal antibody which is used for cancer diagnosis. The antibody, satumomab, is linked to pendetide, a derivative of DTPA. Pendetide acts as a chelating agent for the radionuclide indium-111.
References
External links
Drugs.com Satumomab pendetide
Radiopharmaceuticals
Antibody-drug conjugates
Indium compounds | Indium (111In) satumomab pendetide | [
"Chemistry",
"Biology"
] | 137 | [
"Antibody-drug conjugates",
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
9,704,027 | https://en.wikipedia.org/wiki/Ongoing%20reliability%20test | The ongoing reliability test (ORT) is a hardware test process usually used in manufacturing to ensure that quality of the products is still of the same specifications as the day it first went to production or general availability.
The products currently in the manufacturing line are randomly picked every day, at a predefined percentage or count, and then put in a control drop tower or an environmental chamber. The control drop simulates physical interactions on the product, while the environmental chamber simulates a stress profile of thermal cycling, elevated temperature, or combined environmental stresses to induce fatigue damage. The profile should stimulate the precipitation of latent defects that may have been introduced by the manufacturing process, but should not remove significant life from the product or introduce flaws that risk failure during its intended mission. A highly accelerated stress test is an ongoing reliability test that uses the empirical operational limits as the reference for the combined vibration, thermal cycling, and other stresses applied to find latent defects.
Quality of the products is then measured with the results of this test. If a unit fails, it goes under investigation to determine what caused the failure and then to remove the cause, whether it came from an assembly process, from a component being incorrectly manufactured, or from any other source. If it is proven that a real failure occurred, the batch of units that were produced along with the failed unit is then tagged for re-test or repair to either verify or fix the problem.
External links
OPS A La Carte's Reliability Services in the Manufacturing Phase On-Going [sic] Reliability Testing (ORT)
Sample OEM contracts with contract manufacturers (CM) which specifies ORT to be a standard process (see section 7.8)
Accelerated Reliability Engineering: HALT and HASS, Gregg K. Hobbs, John Wiley & Sons Ltd., 2000.
Statistical process control
Hardware testing | Ongoing reliability test | [
"Engineering"
] | 361 | [
"Statistical process control",
"Engineering statistics"
] |
9,704,668 | https://en.wikipedia.org/wiki/Decision-to-decision%20path | A decision-to-decision path, or DD-path, is a path of execution (usually through a flow graph representing a program, such as a flow chart) between two decisions. More recent versions of the concept also include the decisions themselves in their own DD-paths.
Definition
In Huang's 1975 paper, a decision-to-decision path is defined as path in a program's flowchart such that all the following hold (quoting from the paper):
its first constituent edge emanates either from an entry node or a decision box;
its last constituent edge terminates either at a decision box or at an exit node; and
there are no decision boxes on the path except those at both ends
Jorgensen's more recent textbooks restate it in terms of a program's flow graph (called a "program graph" in that textbook). First, define two preliminary notions: a chain and a maximal chain. A chain is defined as a path in which:
initial and terminal nodes are distinct, and
all interior nodes have in-degree = 1 and out-degree = 1.
A maximal chain is a chain that is not part of a bigger chain.
A DD-path is a set of nodes in a program graph such that one of the following holds (quoting and keeping Jorgensen's numbering, with comments added in parentheses; see the sketch after this list):
It consists of a single node with in-degree = 0 (initial node)
It consists of a single node with out-degree = 0 (terminal node)
It consists of a single node with in-degree ≥ 2 or out-degree ≥ 2 (decision/merge points)
It consists of a single node with in-degree = 1 and out-degree = 1
It is a maximal chain of length ≥ 1.
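The five cases can be identified mechanically from node degrees. A minimal sketch (the adjacency-dict graph encoding is an assumption; cases 4 and 5 are reported together, since telling them apart requires grouping nodes into maximal chains):

```python
# Classify a flow-graph node into Jorgensen's DD-path cases.
def dd_path_case(graph: dict, node) -> str:
    out_deg = len(graph.get(node, []))
    in_deg = sum(node in succs for succs in graph.values())
    if in_deg == 0:
        return "case 1: initial node"
    if out_deg == 0:
        return "case 2: terminal node"
    if in_deg >= 2 or out_deg >= 2:
        return "case 3: decision/merge node"
    return "cases 4-5: chain node (in-degree = out-degree = 1)"

g = {"start": ["a"], "a": ["b"], "b": ["a", "end"], "end": []}
print(dd_path_case(g, "b"))  # case 3 (out-degree 2)
```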
According to Jorgensen (2013), in Great Britain and ISTQB literature, the same notion is called linear code sequence and jump (LCSAJ).
Properties
From the latter definition (of Jorgensen) we can conclude the following:
Every node on a flow graph of a program belongs to one DD-path.
If the first node on a DD-path is traversed, then all other nodes on that path will also be traversed.
The DD-path graph is used to find independent paths for testing.
Traversing every DD-path guarantees that every statement in the program has been executed at least once.
DD-path testing
According to Jorgensen's 2013 textbook, DD-path testing is the best known code-based testing method, incorporated in numerous commercial tools.
DD-path testing is also called C2 testing or branch coverage.
See also
Basic block
Basis path testing and its ancillary articles
Cyclomatic complexity
Essential complexity
Code coverage
White-box testing
References
External links
http://www.eecs.yorku.ca/course_archive/2011-12/W/4313/slides/11-Paths.pdf
Software testing | Decision-to-decision path | [
"Engineering"
] | 583 | [
"Software engineering",
"Software engineering stubs",
"Software testing"
] |
9,710,396 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Hilbert%20problem | In mathematics, Riemann–Hilbert problems, named after Bernhard Riemann and David Hilbert, are a class of problems that arise in the study of differential equations in the complex plane. Several existence theorems for Riemann–Hilbert problems have been produced by Mark Krein, Israel Gohberg and others.
The Riemann problem
Suppose that Σ is a smooth, simple, closed contour in the complex plane. Divide the plane into two parts denoted by Σ_+ (the inside) and Σ_- (the outside), determined by the index of the contour with respect to a point. The classical problem, considered in Riemann's PhD dissertation, was that of finding a function
M_+(z)
analytic inside Σ_+, such that the boundary values of M_+ along Σ satisfy the equation
a(z) Re M_+(z) + b(z) Im M_+(z) = c(z)
for z ∈ Σ, where a, b, and c are given real-valued functions. For example, in the special case where a = 1, b = 0 and Σ is the unit circle, the problem reduces to deriving the Poisson formula.
By the Riemann mapping theorem, it suffices to consider the case when Σ is the circle group T. In this case, one may seek M_+(z) along with its Schwarz reflection
M_-(z) = conj(M_+(1/conj(z))).
For z ∈ T, one has 1/conj(z) = z and so
M_-(z) = conj(M_+(z)).
Hence the problem reduces to finding a pair of analytic functions M_+ and M_- on the inside and outside, respectively, of the unit disk, so that on the unit circle
a(z) (M_+(z) + M_-(z))/2 + b(z) (M_+(z) − M_-(z))/(2i) = c(z),
and, moreover, so that the condition at infinity holds:
M_-(∞) = conj(M_+(0)).
The Hilbert problem
Hilbert's generalization of the problem attempted to find a pair of analytic functions M_+ and M_- on the inside and outside, respectively, of the curve Σ, such that for z ∈ Σ one has
a(z) M_+(z) + b(z) M_-(z) = c(z),
where a, b, and c are given complex-valued functions (no longer just complex conjugates).
Riemann–Hilbert problems
In the Riemann problem as well as Hilbert's generalization, the contour was simple. A full Riemann–Hilbert problem allows that the contour may be composed of a union of several oriented smooth curves, with no intersections. The "+" and "−" sides of the "contour" may then be determined according to the index of a point with respect to Σ. The Riemann–Hilbert problem is to find a pair of analytic functions M_+ and M_- on the "+" and "−" sides of Σ, respectively, such that for z ∈ Σ one has
a(z) M_+(z) + b(z) M_-(z) = c(z),
where a, b, and c are given complex-valued functions.
Matrix Riemann–Hilbert problems
Given an oriented contour Σ (technically: an oriented union of smooth curves without points of infinite self-intersection in the complex plane), a Riemann–Hilbert factorization problem is the following.
Given a matrix function G(λ) defined on the contour Σ, find a holomorphic matrix function M(λ) defined on the complement of Σ, such that the following two conditions are satisfied
If M_+ and M_- denote the non-tangential limits of M as we approach Σ, then M_+(λ) = M_-(λ) G(λ), at all points of non-intersection in Σ.
M(λ) tends to the identity matrix as λ → ∞ along any direction outside Σ.
In the simplest case G is smooth and integrable. In more complicated cases it could have singularities. The limits M_+ and M_- could be classical and continuous or they could be taken in the L²-sense.
At end-points or intersection points of the contour Σ, the jump condition is not defined; constraints on the growth of M near those points have to be posed to ensure uniqueness (see the scalar problem below).
Example: Scalar Riemann–Hilbert factorization problem
Suppose G(λ) = 2 and Σ = [−1, 1]. Assuming M is bounded, what is the solution M?
To solve this, let's take the logarithm of the equation M_+ = M_- G:
log M_+(λ) = log M_-(λ) + log 2.
Since M(λ) tends to 1, log M(λ) → 0 as λ → ∞.
A standard fact about the Cauchy transform (Cf)(λ) = (1/2πi) ∫_Σ f(ζ)/(ζ − λ) dζ is that C_+ − C_- = I, where C_+ and C_- are the limits of the Cauchy transform from above and below Σ; therefore, we get
log M(λ) = (1/2πi) ∫_{−1}^{1} (log 2)/(ζ − λ) dζ
when λ ∉ Σ. Because the solution M of a Riemann–Hilbert factorization problem is unique (an easy application of Liouville's theorem (complex analysis)), the Sokhotski–Plemelj theorem gives the solution. We get
log M(λ) = ((log 2)/(2πi)) log((λ − 1)/(λ + 1))
and therefore
M(λ) = ((λ − 1)/(λ + 1))^((log 2)/(2πi)),
which has a branch cut at contour Σ.
Check: for λ on the cut, the ratio (λ − 1)/(λ + 1) is negative and its logarithm jumps by 2πi from the lower to the upper side, so
M_+(λ)/M_-(λ) = exp(((log 2)/(2πi)) · 2πi) = exp(log 2) = 2;
therefore, M_+(λ) = G(λ) M_-(λ) as required.
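This solution can be checked numerically. A minimal sketch (the jump constant 2, the cut [−1, 1], and the sample point x = 0.3 are the ones assumed in the example above):

```python
# Check the jump M_+ = 2 M_- across the cut [-1, 1] and M -> 1 at infinity.
import cmath

a = cmath.log(2) / (2j * cmath.pi)       # exponent (log 2)/(2 pi i)
M = lambda z: ((z - 1) / (z + 1)) ** a   # candidate solution from above

x, eps = 0.3, 1e-9
print(abs(M(x + 1j * eps) / M(x - 1j * eps)))  # ~2.0 (the prescribed jump)
print(abs(M(1e6)))                             # ~1.0 (normalization at infinity)
```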
CAVEAT 1: If the problem is not scalar one cannot easily take logarithms. In general explicit solutions are very rare.
CAVEAT 2: The boundedness (or at least a constraint on the blow-up) of M near the special points λ = 1 and λ = −1 is crucial. Otherwise any function of the form
M(λ) ((λ − 1)/(λ + 1))^n, n an integer,
is also a solution. In general, conditions on growth are necessary at special points (the end-points of the jump contour or intersection point) to ensure that the problem is well-posed.
Generalizations
DBAR problem
Suppose D is some simply connected domain of the complex plane z = x + iy. Then the scalar equation
∂M(z, z̄)/∂z̄ = f(z, z̄), z ∈ D,
is a generalization of a Riemann-Hilbert problem, called the DBAR problem (or ∂-bar problem). It is the complex form of the nonhomogeneous Cauchy-Riemann equations. To show this, let
M(z, z̄) = u(x, y) + i v(x, y)
with u and v real-valued functions of the real variables x and y. Then, using
∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y),
the DBAR problem yields
(1/2)[(∂u/∂x − ∂v/∂y) + i(∂u/∂y + ∂v/∂x)] = f(z, z̄).
As such, if M is holomorphic for z ∈ D (so that f = 0), then the Cauchy-Riemann equations must be satisfied.
In case M → 0 as z → ∞ and f is integrable, the solution of the DBAR problem is
M(z, z̄) = (1/2πi) ∬ f(ζ, ζ̄)/(ζ − z) dζ ∧ dζ̄,
integrated over the entire complex plane; denoted by ℝ², and where the wedge product is defined as (with ζ = ξ + iη)
dζ ∧ dζ̄ = (dξ + i dη) ∧ (dξ − i dη) = −2i dξ dη.
Generalized analytic functions
If a function M(z) is holomorphic in some complex region R, then
∂M/∂z̄ = 0
in R. For generalized analytic functions, this equation is replaced by
∂M/∂z̄ = A(z, z̄) M + B(z, z̄) M̄
in a region R, where M̄ is the complex conjugate of M and A and B are functions of z and z̄.
Generalized analytic functions have applications in differential geometry, in solving certain type of multidimensional nonlinear partial differential equations and multidimensional inverse scattering.
Applications to integrability theory
Riemann–Hilbert problems have applications to several related classes of problems.
A. Integrable models: The inverse scattering or inverse spectral problem associated to the Cauchy problems for 1+1 dimensional partial differential equations on the line, or to periodic problems, or even to initial-boundary value problems, can be stated as a Riemann–Hilbert problem. Likewise the inverse monodromy problem for Painlevé equations can be stated as a Riemann–Hilbert problem.
B. Orthogonal polynomials, random matrices: Given a weight on a contour, the corresponding orthogonal polynomials can be computed via the solution of a Riemann–Hilbert factorization problem. Furthermore, the distribution of eigenvalues of random matrices in several classical ensembles is reduced to computations involving orthogonal polynomials.
C. Combinatorial probability: The most celebrated example is the theorem of Baik, Deift and Johansson on the distribution of the length of the longest increasing subsequence of a random permutation. Together with the study of B above, it is one of the original rigorous investigations of so-called "integrable probability". But the connection between the theory of integrability and various classical ensembles of random matrices goes back to the work of Dyson.
D. Connection to Donaldson–Thomas theory: The work of Bridgeland studies a class of Riemann-Hilbert problems coming from Donaldson-Thomas theory and makes connections with Gromov-Witten theory and exact WKB.
The numerical analysis of Riemann–Hilbert problems can provide an effective way for numerically solving integrable PDEs.
Use for asymptotics
In particular, Riemann–Hilbert factorization problems are used to extract asymptotic values for the three problems above (say, as time goes to infinity, or as the dispersion coefficient goes to zero, or as the polynomial degree goes to infinity, or as the size of the permutation goes to infinity). There exists a method for extracting the asymptotic behavior of solutions of Riemann–Hilbert problems, analogous to the method of stationary phase and the method of steepest descent applicable to exponential integrals.
By analogy with the classical asymptotic methods, one "deforms" Riemann–Hilbert problems which are not explicitly solvable to problems that are. The so-called "nonlinear" method of stationary phase is due to Deift and Zhou, expanding on a previous idea by Its and using earlier technical background results on singular integral operators. A crucial ingredient of the Deift–Zhou analysis is the asymptotic analysis of singular integrals on contours. The relevant kernel is the standard Cauchy kernel (cf. the scalar example above).
An essential extension of the nonlinear method of stationary phase has been the introduction of the so-called finite gap g-function transformation by Deift, Venakides and Zhou, which has been crucial in most applications. This was inspired by work of Lax, Levermore and Venakides, who reduced the analysis of the small dispersion limit of the KdV equation to the analysis of a maximization problem for a logarithmic potential under some external field: a variational problem of "electrostatic" type. The g-function is the logarithmic transform of the maximizing "equilibrium" measure. The analysis of the small dispersion limit of the KdV equation has in fact provided the basis for the analysis of most of the work concerning "real" orthogonal polynomials (i.e. with the orthogonality condition defined on the real line) and Hermitian random matrices.
Perhaps the most sophisticated extension of the theory so far is the one applied to the "non self-adjoint" case, i.e. when the underlying Lax operator (the first component of the Lax pair) is not self-adjoint, by Kamvissis, McLaughlin and Miller. In that case, actual "steepest descent contours" are defined and computed. The corresponding variational problem is a max-min problem: one looks for a contour that minimizes the "equilibrium" measure. The study of the variational problem and the proof of existence of a regular solution, under some conditions on the external field, were carried out subsequently; the contour arising is an "S-curve", as defined and studied in the 1980s by Herbert R. Stahl, Andrei A. Gonchar and Evguenii A. Rakhmanov.
An alternative asymptotic analysis of Riemann–Hilbert factorization problems, especially convenient when jump matrices do not have analytic extensions, is based on the analysis of d-bar problems, rather than the asymptotic analysis of singular integrals on contours. Further alternative ways of dealing with jump matrices with no analytic extensions have also been introduced.
Another extension of the theory addresses the case where the underlying space of the Riemann–Hilbert problem is a compact hyperelliptic Riemann surface. The correct factorization problem is no longer holomorphic, but rather meromorphic, by reason of the Riemann–Roch theorem. The related singular kernel is not the usual Cauchy kernel, but rather a more general kernel involving meromorphic differentials defined naturally on the surface. The Riemann–Hilbert problem deformation theory is applied to the problem of stability of the infinite periodic Toda lattice under a "short range" perturbation (for example a perturbation of a finite number of particles).
Most Riemann–Hilbert factorization problems studied in the literature are 2-dimensional, i.e., the unknown matrices are of dimension 2. Higher-dimensional problems have been studied by Arno Kuijlaars and collaborators.
See also
Riemann–Hilbert correspondence
Wiener-Hopf method
Notes
References
Complex analysis
Exactly solvable models
Integrable systems
Solitons
Scattering theory
Harmonic analysis
Microlocal analysis
Ordinary differential equations
Partial differential equations
Mathematical problems
Bernhard Riemann
David Hilbert | Riemann–Hilbert problem | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,375 | [
"Scattering theory",
"Integrable systems",
"Theoretical physics",
"Scattering",
"Mathematical problems"
] |
9,710,517 | https://en.wikipedia.org/wiki/Embankment%20%28earthworks%29 | An embankment is a raised wall, bank or mound made of earth or stones, that are used to hold back water or carry a roadway. A road, railway line, or canal is normally raised onto an embankment made of compacted soil (typically clay or rock-based) to avoid a change in level required by the terrain, the alternatives being either to have an unacceptable change in level or detour to follow a contour. A cutting is used for the same purpose where the land is originally higher than required.
Materials
Embankments are often constructed using material obtained from a cutting. Embankments need to be constructed using non-aerated and waterproofed, compacted (or entirely non-porous) material to provide adequate support to the formation and a long-term level surface with stability. An example material for road embankment building is sand-bentonite mixture often used as a protective to protect underground utility cables and pipelines.
Intersection of embankments
To cross an embankment without a high flyover, the options include a tunnel through it, a section of high-tensile-strength viaduct (typically built of brick and/or metal), or a pair of facing abutments carrying a bridge.
Notable embankments
Burnley Embankment: The largest canal embankment in Britain.
Harsimus Stem Embankment: The remains of a railway built by the Pennsylvania Railroad in Jersey City, New Jersey, United States
Stanley Embankment: A railway, road and cycleway that connects the Island of Anglesey and Holy Island, Wales. It carries the North Wales Coast Line and the A5 road.
See also
Causeway
Cut and fill
Cut (earthmoving)
Fill dirt
Grade (slope)
Land reclamation
Levee
Roadbed
Track bed
Retaining wall
Notes
References
Further reading
Slope landforms
Rail infrastructure
Road infrastructure
Building engineering
Fills (earthworks) | Embankment (earthworks) | [
"Engineering"
] | 359 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
9,712,439 | https://en.wikipedia.org/wiki/Filled%20Julia%20set | The filled-in Julia set of a polynomial consists of the Julia set and its interior; equivalently, it is the non-escaping set.
Formal definition
The filled-in Julia set $K(f)$ of a polynomial $f$ is defined as the set of all points $z$ of the dynamical plane that have bounded orbit with respect to $f$:
$K(f) \overset{\mathrm{def}}{=} \{ z \in \mathbb{C} : f^{(n)}(z) \not\to \infty \text{ as } n \to \infty \}$
where:
$\mathbb{C}$ is the set of complex numbers
$f^{(n)}$ is the $n$-fold composition of $f$ with itself = iteration of function $f$
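As a hedged illustration (not part of the original article): for the quadratic family $f_c(z) = z^2 + c$, an orbit that leaves the disk of radius 2 is guaranteed to escape to infinity, so bounded iteration up to a cutoff approximates membership in $K(f_c)$. The function name, iteration cap and sample parameter below are illustrative choices.

```python
# Minimal escape-time sketch of membership in the filled Julia set
# K(f_c) for f_c(z) = z^2 + c: a point belongs if its orbit stays
# bounded (approximated here by |z| <= 2 for max_iter iterations).
def in_filled_julia(z, c, max_iter=200):
    for _ in range(max_iter):
        if abs(z) > 2.0:          # orbit escaped: z is outside K(f_c)
            return False
        z = z * z + c
    return True                   # orbit stayed bounded up to max_iter

c = -0.123 + 0.745j               # the "Douady rabbit" parameter
print(in_filled_julia(0 + 0j, c)) # critical orbit bounded -> K connected
```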
Relation to the Fatou set
The filled-in Julia set is the (absolute) complement of the attractive basin of infinity.
The attractive basin of infinity is one of the components of the Fatou set.
In other words, the filled-in Julia set is the complement of the unbounded Fatou component: $K(f) = \mathbb{C} \setminus A_f(\infty)$.
Relation between Julia, filled-in Julia set and attractive basin of infinity
The Julia set is the common boundary of the filled-in Julia set and the attractive basin of infinity:
$J(f) = \partial K(f) = \partial A_f(\infty)$
where $A_f(\infty)$ denotes the attractive basin of infinity = exterior of the filled-in Julia set = set of escaping points for $f$.
If the filled-in Julia set has no interior then the Julia set coincides with the filled-in Julia set. This happens when all the critical points of $f$ are pre-periodic. Such critical points are often called Misiurewicz points.
Spine
The most studied polynomials are probably those of the form $f(z) = z^2 + c$, which are often denoted by $f_c$, where $c$ is any complex number. In this case, the spine $S_c$ of the filled Julia set $K$ is defined as the arc between the $\beta$-fixed point and $-\beta$,
with such properties:
the spine lies inside $K$. This makes sense when $K$ is connected and full,
the spine is invariant under 180-degree rotation,
the spine is a finite topological tree,
the critical point $z_{cr} = 0$ always belongs to the spine,
the $\beta$-fixed point is a landing point of the external ray of angle zero $\mathcal{R}^K_0$,
$-\beta$ is the landing point of the external ray $\mathcal{R}^K_{1/2}$.
Algorithms for constructing the spine:
a detailed version is described by A. Douady
Simplified version of algorithm:
connect $-\beta$ and $\beta$ within $K$ by an arc,
when $K$ has empty interior then the arc is unique,
otherwise take the shortest way that contains $0$.
The curve $R$, defined as $R \overset{\mathrm{def}}{=} R_{1/2} \cup S_c \cup R_0$,
divides the dynamical plane into two components.
Images
Names
airplane
Douady rabbit
dragon
basilica or San Marco fractal or San Marco dragon
cauliflower
dendrite
Siegel disc
Notes
References
Peitgen, Heinz-Otto; Richter, P. H.: The Beauty of Fractals: Images of Complex Dynamical Systems. Springer-Verlag, 1986.
Branner, Bodil: Holomorphic dynamical systems in the complex plane. Department of Mathematics, Technical University of Denmark, MAT-Report no. 1996-42.
Fractals
Limit sets
Complex dynamics | Filled Julia set | [
"Mathematics"
] | 508 | [
"Limit sets",
"Mathematical analysis",
"Functions and mappings",
"Complex dynamics",
"Mathematical objects",
"Fractals",
"Topology",
"Mathematical relations",
"Dynamical systems"
] |
9,712,648 | https://en.wikipedia.org/wiki/Bing%20metrization%20theorem | In topology, the Bing metrization theorem, named after R. H. Bing, characterizes when a topological space is metrizable.
Formal statement
The theorem states that a topological space $X$ is metrizable if and only if it is regular and T0 and has a σ-discrete basis. A family of sets is called σ-discrete when it is a union of countably many discrete collections, where a family $\mathcal{F}$ of subsets of a space $X$ is called discrete when every point of $X$ has a neighborhood that intersects at most one member of $\mathcal{F}$.
History
The theorem was proven by Bing in 1951 and was discovered independently of the Nagata–Smirnov metrization theorem, which was proved independently by Nagata (1950) and Smirnov (1951). The two theorems are often merged into the Bing–Nagata–Smirnov metrization theorem. It is a common tool for proving other metrization theorems; e.g., the Moore metrization theorem – a collectionwise normal Moore space is metrizable – is a direct consequence.
Comparison with other metrization theorems
Unlike Urysohn's metrization theorem, which provides a sufficient condition for metrization, this theorem provides both a necessary and sufficient condition for a space to be metrizable.
See also
References
"General Topology", Ryszard Engelking, Heldermann Verlag Berlin, 1989.
Theorems in topology
"Mathematics"
] | 292 | [
"Mathematical problems",
"Mathematical theorems",
"Topology",
"Theorems in topology"
] |
9,713,058 | https://en.wikipedia.org/wiki/PCAF | P300/CBP-associated factor (PCAF), also known as K(lysine) acetyltransferase 2B (KAT2B), is a human gene and transcriptional coactivator associated with p53.
Structure
Several domains of PCAF can act independently or in unison to enable its functions. PCAF has separate acetyltransferase and E3 ubiquitin ligase domains as well as a bromodomain for interaction with other proteins. PCAF also possesses sites for its own acetylation and ubiquitination.
Function
CBP and p300 are large nuclear proteins that bind to many sequence-specific factors involved in cell growth and/or differentiation, including c-jun and the adenoviral oncoprotein E1A. The protein encoded by the PCAF gene associates with p300/CBP. It has in vitro and in vivo binding activity with CBP and p300, and competes with E1A for binding sites in p300/CBP. It has histone acetyltransferase activity with core histones and nucleosome core particles, indicating that this protein plays a direct role in transcriptional regulation.
Regulation
The acetyltransferase activity and cellular location of PCAF are regulated through acetylation of PCAF itself. PCAF may be autoacetylated (acetylated by itself) or by p300. Acetylation leads to migration to the nucleus and enhances its acetyltransferase activity. PCAF interacts with and is deacetylated by HDAC3, leading to a reduction in PCAF acetyltransferase activity and cytoplasmic localisation.
Protein interactions
PCAF forms complexes with numerous proteins that guide its activity. For example, PCAF is recruited by ATF to acetylate histones and promote transcription of ATF4 target genes.
Targets
There are various protein targets of PCAF's acetyltransferase activity including transcription factors such as Fli1, p53 and numerous histone residues. Hdm2, itself a ubiquitin ligase that targets p53, has also been demonstrated to be a target of the ubiquitin-ligase activity of PCAF.
Interactions
PCAF has been shown to interact with:
BRCA2,
CTNNB1,
CREBBP,
EVI1,
HNF1A,
IRF1,
IRF2,
KLF13,
Mdm2
Myc,
NCOA1,
POLR2A,
RBPJ,
TCF3,
TRRAP, and
TWIST1.
See also
Transcription coregulator
Acetyltransferase
References
External links
Further reading
Gene expression
Transcription coregulators | PCAF | [
"Chemistry",
"Biology"
] | 582 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
3,166,203 | https://en.wikipedia.org/wiki/Luminosity%20distance | Luminosity distance DL is defined in terms of the relationship between the absolute magnitude M and apparent magnitude m of an astronomical object:
$M = m - 5 \log_{10}\left(\frac{D_L}{10\,\mathrm{pc}}\right)$
which gives:
$D_L = 10^{(m - M + 5)/5}$
where DL is measured in parsecs. For nearby objects (say, in the Milky Way) the luminosity distance gives a good approximation to the natural notion of distance in Euclidean space.
The relation is less clear for distant objects like quasars far beyond the Milky Way since the apparent magnitude is affected by spacetime curvature, redshift, and time dilation. Calculating the relation between the apparent and actual luminosity of an object requires taking all of these factors into account. The object's actual luminosity is determined using the inverse-square law and the proportions of the object's apparent distance and luminosity distance.
Another way to express the luminosity distance is through the flux-luminosity relationship,
$F = \frac{L}{4\pi D_L^2}$
where $F$ is flux (W·m−2), and $L$ is luminosity (W). From this the luminosity distance (in meters) can be expressed as:
$D_L = \sqrt{\frac{L}{4\pi F}}$
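A small numerical check of this relation (an illustrative sketch, not from the original article; the constants are the IAU nominal solar luminosity and the measured solar flux at Earth):

```python
# Recover a luminosity distance from flux and luminosity via the
# inverse-square law: D_L = sqrt(L / (4*pi*F)).
import math

L_sun = 3.828e26    # solar luminosity in watts (IAU nominal value)
flux = 1361.0       # solar flux at Earth in W/m^2 (the solar constant)

d_l = math.sqrt(L_sun / (4.0 * math.pi * flux))
print(f"D_L = {d_l:.3e} m")   # ~1.496e11 m, about one astronomical unit
```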
The luminosity distance is related to the "comoving transverse distance" $D_M$ by
$D_L = (1 + z)\, D_M$
and to the angular diameter distance $D_A$ by Etherington's reciprocity theorem:
$D_L = (1 + z)^2\, D_A$
where z is the redshift. $D_M$ is a factor that allows calculation of the comoving distance between two objects with the same redshift but at different positions of the sky; if the two objects are separated by an angle $\delta\theta$, the comoving distance between them would be $D_M\,\delta\theta$. In a spatially flat universe, the comoving transverse distance $D_M$ is exactly equal to the radial comoving distance $D_C$, i.e. the comoving distance from ourselves to the object.
See also
Distance measure
Distance modulus
Notes
External links
Ned Wright's Javascript Cosmology Calculator
iCosmos: Cosmology Calculator (With Graph Generation )
Observational astronomy
Physical quantities
Equations of astronomy | Luminosity distance | [
"Physics",
"Astronomy",
"Mathematics"
] | 386 | [
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Quantity",
"Observational astronomy",
"Equations of astronomy",
"Physical properties",
"Astronomical sub-disciplines"
] |
3,166,565 | https://en.wikipedia.org/wiki/Entrance%20pupil | In an optical system, the entrance pupil is the optical image of the physical aperture stop, as 'seen' through the optical elements in front of the stop. The corresponding image of the aperture stop as seen through the optical elements behind it is called the exit pupil. The entrance pupil defines the cone of rays that can enter and pass through the optical system. Rays that fall outside of the entrance pupil will not pass through the system.
If there is no lens in front of the aperture (as in a pinhole camera), the entrance pupil's location and size are identical to those of the aperture. Optical elements in front of the aperture will produce a magnified or diminished image of the aperture that is displaced from the aperture location. The entrance pupil is usually a virtual image: it lies behind the first optical surface of the system.
The entrance pupil is a useful concept for determining the size of the cone of rays that an optical system will accept. Once the size and the location of the entrance pupil of an optical system is determined, the maximum cone of rays that the system will accept from a given object plane is determined solely by the size of the entrance pupil and its distance from the object plane, without any need to consider ray refraction by the optics.
In photography, the size of the entrance pupil (rather than the size of the physical aperture stop) is used to calibrate the opening and closing of the diaphragm aperture. The f-number (also called the focal ratio), $N$, is defined by $N = f/D$, where $f$ is the focal length and $D$ is the diameter of the entrance pupil. Increasing the focal length of a lens (i.e., zooming in) will usually cause the f-number to increase, and the entrance pupil location to move further back along the optical axis.
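A one-line numerical illustration of the definition $N = f/D$ (the values are arbitrary examples, not from the original article):

```python
# Entrance-pupil diameter D implied by N = f / D for a 50 mm lens
# set to f/1.8 (illustrative numbers only).
focal_length_mm = 50.0
f_number = 1.8
print(f"D = {focal_length_mm / f_number:.1f} mm")   # ~27.8 mm
```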
The center of the entrance pupil is the vertex of a camera's angle of view and consequently its center of perspective, perspective point, view point, projection center or no-parallax point. This point is important in panoramic photography without digital image processing, because the camera must be rotated around the center of the entrance pupil to avoid parallax errors in the final, stitched panorama. Panoramic photographers often incorrectly refer to the entrance pupil as a nodal point, which is a different concept. Depending on the lens design, the entrance pupil location on the optical axis may be behind, within or in front of the lens system; and even at infinite distance from the lens in the case of telecentric systems.
The entrance pupil of the human eye, which is not quite the same as the physical pupil, is typically about 4 mm in diameter. It can range from about 2 mm in a very brightly lit place to 8 mm in the dark.
An optical system is typically designed with a single aperture stop, and therefore has a single entrance pupil at designed working conditions. In general, though, the determination of which element is the aperture stop depends on the object distance, so a system may have different entrance pupils for different object planes. Similarly, vignetting can cause different lateral locations at a given object plane to have different aperture stops, and therefore different entrance pupils.
See also
Exit pupil
Transmittance
Pupil magnification
References
External links
Stops and Pupils in Field Guide to Geometrical Optics Greivenkamp, John E, 2004
Entrance and exit pupil, RP Photonics Encyclopedia
Optics
Science of photography | Entrance pupil | [
"Physics",
"Chemistry"
] | 686 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
3,168,650 | https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier%20estimator | The Kaplan–Meier estimator, also known as the product limit estimator, is a non-parametric statistic used to estimate the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment. In other fields, Kaplan–Meier estimators may be used to measure the length of time people remain unemployed after a job loss, the time-to-failure of machine parts, or how long fleshy fruits remain on plants before they are removed by frugivores. The estimator is named after Edward L. Kaplan and Paul Meier, who each submitted similar manuscripts to the Journal of the American Statistical Association. The journal editor, John Tukey, convinced them to combine their work into one paper, which has been cited more than 34,000 times since its publication in 1958.
The estimator of the survival function $S(t)$ (the probability that life is longer than $t$) is given by:
$\widehat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$
with $t_i$ a time when at least one event happened, $d_i$ the number of events (e.g., deaths) that happened at time $t_i$, and $n_i$ the individuals known to have survived (have not yet had an event or been censored) up to time $t_i$.
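As a hedged illustration of the product-limit formula (a sketch, not part of the original article), the following plain-Python implementation computes the estimator from durations and event flags; the function name and the toy data are invented for the example.

```python
# Kaplan-Meier product-limit estimator: S(t) = product over event
# times t_i <= t of (1 - d_i / n_i). observed[i] is True for an event
# (e.g. death) and False for right-censoring.
from collections import Counter

def kaplan_meier(durations, observed):
    """Return (event times, survival probabilities) at each event time."""
    d = Counter(t for t, e in zip(durations, observed) if e)      # events
    c = Counter(t for t, e in zip(durations, observed) if not e)  # censored
    at_risk = len(durations)
    s = 1.0
    times, survival = [], []
    for t in sorted(set(durations)):
        if d[t] > 0:                      # only event times create steps
            s *= 1.0 - d[t] / at_risk     # product-limit update
            times.append(t)
            survival.append(s)
        at_risk -= d[t] + c[t]            # drop events and censored cases
    return times, survival

times, surv = kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
print(list(zip(times, surv)))   # steps at t = 2, 3, 5, 8
```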
Basic concepts
A plot of the Kaplan–Meier estimator is a series of declining horizontal steps which, with a large enough sample size, approaches the true survival function for that population. The value of the survival function between successive distinct sampled observations ("clicks") is assumed to be constant.
An important advantage of the Kaplan–Meier curve is that the method can take into account some types of censored data, particularly right-censoring, which occurs if a patient withdraws from a study, is lost to follow-up, or is alive without event occurrence at last follow-up. On the plot, small vertical tick-marks state individual patients whose survival times have been right-censored. When no truncation or censoring occurs, the Kaplan–Meier curve is the complement of the empirical distribution function.
In medical statistics, a typical application might involve grouping patients into categories, for instance, those with Gene A profile and those with Gene B profile. In the graph, patients with Gene B die much quicker than those with Gene A. After two years, about 80% of the Gene A patients survive, but less than half of patients with Gene B.
To generate a Kaplan–Meier estimator, at least two pieces of data are required for each patient (or each subject): the status at last observation (event occurrence or right-censored), and the time to event (or time to censoring). If the survival functions between two or more groups are to be compared, then a third piece of data is required: the group assignment of each subject.
Problem definition
Let $\tau \ge 0$ be a random variable: the time that passes between the start of the possible exposure period, $t_0$, and the time that the event of interest takes place, $t_{\mathrm{ev}}$. As indicated above, the goal is to estimate the survival function $S$ underlying $\tau$. Recall that this function is defined as
$S(t) = \operatorname{Prob}(\tau > t)$, where $t$ is the time.
Let be independent, identically distributed random variables, whose common distribution is that of : is the random time when some event happened. The data available for estimating is not , but the list of pairs where for , is a fixed, deterministic integer, the censoring time of event and . In particular, the information available about the timing of event is whether the event happened before the fixed time and if so, then the actual time of the event is also available. The challenge is to estimate given this data.
Derivation of the Kaplan–Meier estimator
Two derivations of the Kaplan–Meier estimator are shown. Both are based on rewriting the survival function in terms of what is sometimes called hazard, or mortality rates. However, before doing this it is worthwhile to consider a naive estimator.
A naive estimator
To understand the power of the Kaplan–Meier estimator, it is worthwhile to first describe a naive estimator of the survival function.
Fix and let . A basic argument shows that the following proposition holds:
Proposition 1: If the censoring time of event exceeds (), then if and only if .
Let be such that . It follows from the above proposition that
Let and consider only those , i.e. the events for which the outcome was not censored before time . Let be the number of elements in . Note that the set is not random and so neither is . Furthermore, is a sequence of independent, identically distributed Bernoulli random variables with common parameter . Assuming that , this suggests to estimate using
where the second equality follows because implies , while the last equality is simply a change of notation.
The quality of this estimate is governed by the size of . This can be problematic when is small, which happens, by definition, when a lot of the events are censored. A particularly unpleasant property of this estimator, that suggests that perhaps it is not the "best" estimator, is that it ignores all the observations whose censoring time precedes . Intuitively, these observations still contain information about : For example, when for many events with , also holds, we can infer that events often happen early, which implies that is large, which, through means that must be small. However, this information is ignored by this naive estimator. The question is then whether there exists an estimator that makes a better use of all the data. This is what the Kaplan–Meier estimator accomplishes. Note that the naive estimator cannot be improved when censoring does not take place; so whether an improvement is possible critically hinges upon whether censoring is in place.
The plug-in approach
By elementary calculations,
where the second to last equality used that is integer valued and for the last line we introduced
By a recursive expansion of the equality , we get
Note that here .
The Kaplan–Meier estimator can be seen as a "plug-in estimator" where each is estimated based on the data and the estimator of is obtained as a product of these estimates.
It remains to specify how is to be estimated. By Proposition 1, for any such that , and both hold. Hence, for any such that ,
By a similar reasoning that led to the construction of the naive estimator above, we arrive at the estimator
(think of estimating the numerator and denominator separately in the definition of the "hazard rate" ). The Kaplan–Meier estimator is then given by
The form of the estimator stated at the beginning of the article can be obtained by some further algebra. For this, write where, using the actuarial science terminology, is the number of known deaths at time , while is the number of those persons who are alive (and not being censored) at time .
Note that if , . This implies that we can leave out from the product defining all those terms where . Then, letting be the times when , and , we arrive at the form of the Kaplan–Meier estimator given at the beginning of the article:
As opposed to the naive estimator, this estimator can be seen to use the available information more effectively: In the special case mentioned beforehand, when there are many early events recorded, the estimator will multiply many terms with a value below one and will thus take into account that the survival probability cannot be large.
Derivation as a maximum likelihood estimator
The Kaplan–Meier estimator can be derived from maximum likelihood estimation of the discrete hazard function. More specifically, given $d_i$ as the number of events and $n_i$ as the total individuals at risk at time $t_i$, the discrete hazard rate $h_i$ can be defined as the probability of an individual having an event at time $t_i$. Then the survival rate can be defined as:
$S(t) = \prod_{i:\, t_i \le t} (1 - h_i)$
and the likelihood function for the hazard function up to time $t_i$ is proportional to:
$L \propto \prod_{j=1}^{i} h_j^{d_j} (1 - h_j)^{n_j - d_j}$
therefore the log likelihood will be (up to an additive constant):
$\log L = \sum_{j=1}^{i} \left( d_j \log h_j + (n_j - d_j) \log(1 - h_j) \right)$
finding the maximum of the log likelihood with respect to $h_i$ yields:
$\widehat{h}_i = \frac{d_i}{n_i}$
where the hat is used to denote maximum likelihood estimation. Given this result, we can write:
$\widehat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \widehat{h}_i\right) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$
More generally (for continuous as well as discrete survival distributions), the Kaplan-Meier estimator may be interpreted as a nonparametric maximum likelihood estimator.
Benefits and limitations
The Kaplan–Meier estimator is one of the most frequently used methods of survival analysis. The estimate may be useful to examine recovery rates, the probability of death, and the effectiveness of treatment. It is limited in its ability to estimate survival adjusted for covariates; parametric survival models and the Cox proportional hazards model may be useful to estimate covariate-adjusted survival.
The Kaplan-Meier estimator is directly related to the Nelson-Aalen estimator and both maximize the empirical likelihood.
Statistical considerations
The Kaplan–Meier estimator is a statistic, and several estimators are used to approximate its variance. One of the most common estimators is Greenwood's formula:
$\widehat{\operatorname{Var}}\left(\widehat{S}(t)\right) = \widehat{S}(t)^2 \sum_{i:\, t_i \le t} \frac{d_i}{n_i (n_i - d_i)}$
where $d_i$ is the number of cases and $n_i$ is the total number of observations, for $t_i < t$.
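A matching sketch for Greenwood's formula under the same caveats (all names are illustrative; the inputs are the per-event-time counts and Kaplan–Meier values, e.g. as produced by the sketch above):

```python
def greenwood_variance(d, n, surv):
    """Greenwood variance estimates at successive event times:
    d[i], n[i] are events and number at risk at the i-th event time,
    surv[i] the Kaplan-Meier estimate there."""
    var, running = [], 0.0
    for di, ni, si in zip(d, n, surv):
        running += di / (ni * (ni - di))   # cumulative Greenwood sum
        var.append(si * si * running)      # S(t)^2 times the sum
    return var

# Event times 2, 3, 5 from the example above: d = (1, 1, 1),
# n = (6, 5, 3), S = (5/6, 2/3, 4/9).
print(greenwood_variance([1, 1, 1], [6, 5, 3], [5/6, 2/3, 4/9]))
```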
In some cases, one may wish to compare different Kaplan–Meier curves. This can be done by the log rank test, and the Cox proportional hazards test.
Other statistics that may be of use with this estimator are pointwise confidence intervals, the Hall–Wellner band and the equal-precision band.
Software
Mathematica: the built-in function SurvivalModelFit creates survival models.
SAS: The Kaplan–Meier estimator is implemented in the proc lifetest procedure.
R: the Kaplan–Meier estimator is available as part of the survival package.
Stata: the command sts returns the Kaplan–Meier estimator.
Python: the lifelines and scikit-survival packages each include the Kaplan–Meier estimator.
MATLAB: the ecdf function with the 'function','survivor' arguments can calculate or plot the Kaplan–Meier estimator.
StatsDirect: The Kaplan–Meier estimator is implemented in the Survival Analysis menu.
SPSS: The Kaplan–Meier estimator is implemented in the Analyze > Survival > Kaplan-Meier... menu.
Julia: the Survival.jl package includes the Kaplan–Meier estimator.
Epi Info: Kaplan–Meier estimator survival curves and results for the log rank test are obtained with the KMSURVIVAL command.
See also
Survival Analysis
Frequency of exceedance
Median lethal dose
Nelson–Aalen estimator
References
Further reading
External links
Estimator
Actuarial science
Survival analysis
Reliability engineering | Kaplan–Meier estimator | [
"Mathematics",
"Engineering"
] | 2,206 | [
"Applied mathematics",
"Systems engineering",
"Actuarial science",
"Reliability engineering"
] |
3,170,723 | https://en.wikipedia.org/wiki/Maxwell%20stress%20tensor | The Maxwell stress tensor (named after James Clerk Maxwell) is a symmetric second-order tensor in three dimensions that is used in classical electromagnetism to represent the interaction between electromagnetic forces and mechanical momentum. In simple situations, such as a point charge moving freely in a homogeneous magnetic field, it is easy to calculate the forces on the charge from the Lorentz force law. When the situation becomes more complicated, this ordinary procedure can become impractically difficult, with equations spanning multiple lines. It is therefore convenient to collect many of these terms in the Maxwell stress tensor, and to use tensor arithmetic to find the answer to the problem at hand.
In the relativistic formulation of electromagnetism, the nine components of the Maxwell stress tensor appear, negated, as components of the electromagnetic stress–energy tensor, which is the electromagnetic component of the total stress–energy tensor. The latter describes the density and flux of energy and momentum in spacetime.
Motivation
As outlined below, the electromagnetic force is written in terms of $\mathbf{E}$ and $\mathbf{B}$. Using vector calculus and Maxwell's equations, symmetry is sought in the terms containing $\mathbf{E}$ and $\mathbf{B}$, and introducing the Maxwell stress tensor simplifies the result.
In the above relation for conservation of momentum, the Maxwell stress tensor $\overleftrightarrow{\boldsymbol{\sigma}}$ is the momentum flux density and plays a role similar to that of the Poynting vector $\mathbf{S}$ in Poynting's theorem.
The above derivation assumes complete knowledge of both $\rho$ and $\mathbf{J}$ (both free and bound charges and currents). For the case of nonlinear materials (such as magnetic iron with a B-H curve), the nonlinear Maxwell stress tensor must be used.
Equation
In physics, the Maxwell stress tensor is the stress tensor of an electromagnetic field. As derived above, it is given by:
$\sigma_{ij} = \epsilon_0 \left( E_i E_j - \frac{1}{2} \delta_{ij} E^2 \right) + \frac{1}{\mu_0} \left( B_i B_j - \frac{1}{2} \delta_{ij} B^2 \right)$,
where $\epsilon_0$ is the electric constant and $\mu_0$ is the magnetic constant, $\mathbf{E}$ is the electric field, $\mathbf{B}$ is the magnetic field and $\delta_{ij}$ is Kronecker's delta. In the Gaussian system, it is given by:
$\sigma_{ij} = \frac{1}{4\pi} \left( E_i E_j + H_i H_j - \frac{1}{2} \delta_{ij} \left( E^2 + H^2 \right) \right)$,
where $\mathbf{H}$ is the magnetizing field.
An alternative way of expressing this tensor is:
$\overleftrightarrow{\boldsymbol{\sigma}} = \epsilon_0\, \mathbf{E} \otimes \mathbf{E} + \frac{1}{\mu_0}\, \mathbf{B} \otimes \mathbf{B} - \frac{1}{2} \left( \epsilon_0 E^2 + \frac{1}{\mu_0} B^2 \right) \mathbf{1}$
where $\otimes$ is the dyadic product, and the last tensor is the unit dyad:
$\mathbf{1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
The element $\sigma_{ij}$ of the Maxwell stress tensor has units of momentum per unit of area per unit time and gives the flux of momentum parallel to the $i$th axis crossing a surface normal to the $j$th axis (in the negative direction) per unit of time.
These units can also be seen as units of force per unit of area (negative pressure), and the element $\sigma_{ij}$ of the tensor can also be interpreted as the force parallel to the $i$th axis suffered by a surface normal to the $j$th axis per unit of area. Indeed, the diagonal elements give the tension (pulling) acting on a differential area element normal to the corresponding axis. Unlike forces due to the pressure of an ideal gas, an area element in the electromagnetic field also feels a force in a direction that is not normal to the element. This shear is given by the off-diagonal elements of the stress tensor.
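As a numerical illustration (an assumption-laden sketch, not part of the original article), the function below assembles the SI vacuum form of the tensor from the component formula above; the field values are arbitrary examples.

```python
# sigma_ij = eps0 (E_i E_j - delta_ij E^2 / 2)
#          + (1/mu0) (B_i B_j - delta_ij B^2 / 2)
import numpy as np

eps0 = 8.8541878128e-12      # electric constant, F/m
mu0 = 4e-7 * np.pi           # magnetic constant, H/m

def maxwell_stress(E, B):
    E, B = np.asarray(E, float), np.asarray(B, float)
    I = np.eye(3)
    return (eps0 * (np.outer(E, E) - 0.5 * I * E.dot(E))
            + (np.outer(B, B) - 0.5 * I * B.dot(B)) / mu0)

sigma = maxwell_stress([1e5, 0, 0], [0, 0.1, 0])   # E in V/m, B in T
print(sigma)   # diagonal: tension along each field, pressure across
```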
It has recently been shown that the Maxwell stress tensor is the real part of a more general complex electromagnetic stress tensor whose imaginary part accounts for reactive electrodynamical forces.
In magnetostatics
If the field is only magnetic (which is largely true in motors, for instance), some of the terms drop out, and the equation in SI units becomes:
$\sigma_{ij} = \frac{1}{\mu_0} \left( B_i B_j - \frac{1}{2} \delta_{ij} B^2 \right)$
For cylindrical objects, such as the rotor of a motor, this is further simplified to:
$\sigma_{rr} = \frac{1}{2\mu_0} \left( B_r^2 - B_t^2 \right), \qquad \sigma_{rt} = \frac{1}{\mu_0} B_r B_t$
where $\sigma_{rr}$ is the stress in the radial (outward from the cylinder) direction, and $\sigma_{rt}$ is the shear in the tangential (around the cylinder) direction. It is the tangential force which spins the motor. $B_r$ is the flux density in the radial direction, and $B_t$ is the flux density in the tangential direction.
In electrostatics
In electrostatics the effects of magnetism are not present. In this case the magnetic field vanishes, i.e. $\mathbf{B} = \mathbf{0}$, and we obtain the electrostatic Maxwell stress tensor. It is given in component form by
$\sigma_{ij} = \epsilon_0 \left( E_i E_j - \frac{1}{2} \delta_{ij} E^2 \right)$
and in symbolic form by
$\overleftrightarrow{\boldsymbol{\sigma}} = \epsilon_0 \left( \mathbf{E} \otimes \mathbf{E} - \frac{1}{2} E^2\, \mathbf{1} \right)$
where $\mathbf{1}$ is the appropriate identity tensor (usually $3 \times 3$).
Eigenvalue
The eigenvalues of the Maxwell stress tensor are given by:
These eigenvalues are obtained by iteratively applying the matrix determinant lemma, in conjunction with the Sherman–Morrison formula.
Noting that the characteristic equation matrix, , can be written as
where
we set
Applying the matrix determinant lemma once, this gives us
Applying it again yields,
From the last multiplicand on the RHS, we immediately see that is one of the eigenvalues.
To find the inverse of , we use the Sherman-Morrison formula:
Factoring out a term in the determinant, we are left with finding the zeros of the rational function:
Thus, once we solve
we obtain the other two eigenvalues.
See also
Ricci calculus
Energy density of electric and magnetic fields
Poynting vector
Electromagnetic stress–energy tensor
Magnetic pressure
Magnetic tension
References
David J. Griffiths, "Introduction to Electrodynamics" pp. 351–352, Benjamin Cummings Inc., 2008
John David Jackson, "Classical Electrodynamics, 3rd Ed.", John Wiley & Sons, Inc., 1999
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
Tensor physical quantities
Electromagnetism
James Clerk Maxwell | Maxwell stress tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 1,060 | [
"Electromagnetism",
"Physical phenomena",
"Tensors",
"Physical quantities",
"Quantity",
"Tensor physical quantities",
"Fundamental interactions"
] |
16,471,126 | https://en.wikipedia.org/wiki/Groundwater%20energy%20balance | The groundwater energy balance is the energy balance of a groundwater body in terms of incoming hydraulic energy associated with groundwater inflow into the body, energy associated with the outflow, energy conversion into heat due to friction of flow, and the resulting change of energy status and groundwater level.
Theory
When multiplying the horizontal velocity of groundwater (dimension, for example, m/day, i.e. m³/day of flow per m² of cross-sectional area) with the groundwater potential (dimension energy per volume of water, or N·m/m³) one obtains an energy flux, i.e. an energy flow per unit cross-sectional area, in (for example) N·m/m² per day for the given flow and cross-sectional area.
Summation or integration of the energy flux in a vertical cross-section of unit width (say 1 m) from the lower flow boundary (the impermeable layer or base) up to the water table in an unconfined aquifer gives the energy flow through the cross-section, in N·m/day per metre width of the aquifer.
While flowing, the groundwater loses energy due to friction of flow, i.e. hydraulic energy is converted into heat. At the same time, energy may be added with the recharge of water coming into the aquifer through the water table. Thus one can make an hydraulic energy balance of a block of soil between two nearby cross-sections. In steady state, i.e. without change in energy status and without accumulation or depletion of water stored in the soil body, the energy flow in the first section plus the energy added by groundwater recharge between the sections minus the energy flow in the second section must equal the energy loss due to friction of flow.
In mathematical terms this balance can be obtained by differentiating the cross-sectional integral of Fe in the direction of flow using the Leibniz rule, taking into account that the level of the water table may change in the direction of flow.
The mathematics is simplified using the Dupuit–Forchheimer assumption.
The hydraulic friction losses can be described in analogy to Joule's law in electricity (see Joule's law#Hydraulic equivalent), where the friction losses are proportional to the square value of the current (flow) and the electrical resistance of the material through which the current occurs. In groundwater hydraulics (fluid dynamics, hydrodynamics) one often works with hydraulic conductivity (i.e. permeability of the soil for water), which is inversely proportional to the hydraulic resistance.
The resulting equation of the energy balance of groundwater flow can be used, for example, to calculate the shape of the water table between drains under specific aquifer conditions. For this a numerical solution can be used, taking small steps along the impermeable base. The drainage equation is to be solved by trial and error (iterations), because the hydraulic potential is taken with respect to a reference level taken as the level of the water table at the water divide midway between the drains. When calculating the shape of the water table, its level at the water divide is initially not known. Therefore, this level is to be assumed before the calculations on the shape of the water table can be started. According to the findings of the calculation procedure, the initial assumption is to be adjusted and the calculations are to be restarted until the level of the water table at the divide does not differ significantly from the assumed level.
The trial and error procedure is cumbersome and therefore computer programs may be developed to aid in the calculations.
Application
The energy balance of groundwater flow can be applied to flow of groundwater to subsurface drains. The computer program EnDrain compares the outcome of the traditional drain spacing equation, based on Darcy's law together with the continuity equation (i.e. conservation of mass), with the solution obtained by the energy balance and it can be seen that drain spacings are wider in the latter case. This is owing to the introduction of the energy supplied by the incoming recharge.
See also
DPHM-RS
Drainage equation
Groundwater discharge
Groundwater flow equation
Hydrogeology
References
External links
Aquifers
Hydrology
Hydrogeology
Hydraulic engineering
Conservation laws | Groundwater energy balance | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 842 | [
"Hydrology",
"Hydrogeology",
"Equations of physics",
"Conservation laws",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering",
"Symmetry",
"Physics theorems"
] |
16,473,495 | https://en.wikipedia.org/wiki/Cement-mortar%20lined%20ductile%20iron%20pipe | Cement-mortar lined ductile iron pipe is a ductile iron pipe with cement lining on the inside surface, and is commonly used for water distribution.
Cement-mortar lined ductile iron pipe is governed by standards set forth by DIPRA (Ductile Iron Pipe Research Association), and was first used in 1922 in Charleston, South Carolina.
Ductile iron is commonly used in place of cast iron pipe for fluid distribution systems. The purpose of installing a cement/mortar lining on the interior wall of the pipe is to reduce the process of tuberculation inside the pipe network. The cement/mortar lining provides an area of high pH near the pipe wall and provides a barrier between the water and the pipe, reducing its susceptibility to corrosion.
References
Piping
1922 introductions | Cement-mortar lined ductile iron pipe | [
"Chemistry",
"Engineering"
] | 157 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
16,479,948 | https://en.wikipedia.org/wiki/West%20Ice | The West Ice (Norwegian: Vestisen, or Vesterisen, Danish: see below) is a patch of the Greenland Sea covered by pack ice during winter time. It is located north of Iceland, between East Greenland and Jan Mayen island. In Greenland and the Danish language, vestisen refers to the sea ice-covered waters off Greenland's west coast.
West Ice in the Greenland Sea
The West Ice is a major breeding ground for seals, especially harp seals and hooded seals. It was discovered in the early 18th century by British whalers. At the time, whalers were not interested in seal hunting as long as there was ample stock of bowhead whales in the area. However, after the 1750s, the whale population had been depleted in the area, and systematic seal hunting started, first by British ships and then by German, Dutch, Danish, Norwegian, and Russian ships. The annual catches were 120,000 animals around 1900, mostly by Norway and Russia, and rose to 350,000 by the 1920s. They then declined, first because of imposed restrictions on total allowable catch and then in response to decreasing market demand. Nevertheless, the seal population in the West Ice was rapidly falling, from an estimated 1,000,000 in 1956 to 100,000 in the 1980s. In the 1980s–1990s, takings of harp seals totaled 8,000–10,000, and annual catches of hooded seals totaled a few thousand between 1997 and 2001. Norway accounts for all recent seal hunting in the West Ice, as Russia has not hunted hooded seals since 1995, and catches harp seals at the East Ice in the White Sea – Barents Sea.
Seal hunting in the West Ice was a dangerous occupation, as floating ice, storms and winds posed constant threat to the ships; in the 19th century, the hunters often encountered frozen human bodies on the West Ice. A major accident occurred around 5 April 1952 when a sudden storm surprised 53 ships hunting in the area. Seven of them sank and five vanished, namely Ringsel, Brattind and Vårglimt from Troms and Buskøy and Pels from Sunnmøre, with 79 men on board. The search for them involved ships and planes and continued for many days, but no trace of the missing boats was found.
The word "West" contrasts with the East Ice (Østisen), which refers to the ice-covered waters east and south of Svalbard, including Barents Sea and White Sea.
West Ice in the Baffin Bay
The word vestisen ("the west ice") in a Greenland-specific context in the Danish language refers to the sea ice off Greenland's west coast in the Davis Strait and Baffin Bay. This could cause confusion when comparing or translating Danish and Norwegian sources. The band of sea ice in the East Greenland Current is referred to as Storisen, which translates as The Large or Grand Ice, in reference to the density of multi-year sea ice and icebergs. The Storisen is a band of ice rather than a specific area, typically spanning the entire east coast and round Cape Farewell. The word East Ice is occasionally used to more generally refer to all sea ice waters off the east coast, which thus includes the patch that in Norwegian and English is named West Ice.
References
Bodies of ice of Iceland
European seas
Seas of Greenland
Seas of North America
Geography of North America
Geography of Europe
Sea ice
Seal hunting
Atlantic Ocean | West Ice | [
"Physics"
] | 699 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |
16,481,963 | https://en.wikipedia.org/wiki/Lefschetz%20duality | In mathematics, Lefschetz duality is a version of Poincaré duality in geometric topology, applying to a manifold with boundary. Such a formulation was introduced by Solomon Lefschetz, who at the same time introduced relative homology, for application to the Lefschetz fixed-point theorem. There are now numerous formulations of Lefschetz duality or Poincaré–Lefschetz duality, or Alexander–Lefschetz duality.
Formulations
Let $M$ be an orientable compact manifold of dimension $n$, with boundary $\partial M$, and let $z \in H_n(M, \partial M)$ be the fundamental class of the manifold $M$. Then cap product with $z$ (or its dual class in cohomology) induces a pairing of the (co)homology groups of $M$ and the relative (co)homology of the pair $(M, \partial M)$. Furthermore, this gives rise to isomorphisms of $H^k(M, \partial M)$ with $H_{n-k}(M)$, and of $H^k(M)$ with $H_{n-k}(M, \partial M)$ for all $k$.
Here $\partial M$ can in fact be empty, so Poincaré duality appears as a special case of Lefschetz duality.
There is a version for triples. Let $\partial M$ decompose into subspaces $A$ and $B$, themselves compact orientable manifolds with common boundary $Z$, which is the intersection of $A$ and $B$. Then, for each $k$, there is an isomorphism
$H_k(M, A) \cong H^{n-k}(M, B)$
Notes
References
Duality theories
Manifolds | Lefschetz duality | [
"Mathematics"
] | 265 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Category theory",
"Duality theories",
"Geometry",
"Manifolds"
] |
16,487,457 | https://en.wikipedia.org/wiki/Modern%20dress | Modern dress is a term used in theatre and film to refer to productions of plays from the past in which the setting is updated to the present day (or at least to a more recent time period), but the text is left relatively unchanged. For example, Baz Luhrmann's film Romeo + Juliet uses a relatively unaltered text of Shakespeare's play but updates the setting to contemporary America.
The first performances of Shakespeare in modern dress were produced by Barry Jackson at the Birmingham Repertory Theatre in Birmingham, England from 1923. The production of Cymbeline that opened in Birmingham in April of that year "bewildered" critics, leading to what Jackson called "a national and worldwide controversy".
References
Costume design | Modern dress | [
"Engineering"
] | 146 | [
"Costume design",
"Design"
] |
1,639,710 | https://en.wikipedia.org/wiki/Pullulanase | Pullulanase (, limit dextrinase, amylopectin 6-glucanohydrolase, bacterial debranching enzyme, debranching enzyme, α-dextrin endo-1,6-α-glucosidase, R-enzyme, pullulan α-1,6-glucanohydrolase) is a specific kind of glucanase, an amylolytic exoenzyme, that degrades pullulan. It is produced as an extracellular, cell surface-anchored lipoprotein by Gram-negative bacteria of the genus Klebsiella. Type I pullulanases specifically attack α-1,6 linkages, while type II pullulanases are also able to hydrolyse α-1,4 linkages. It is also produced by some other bacteria and archaea. Pullulanase is used as a processing aid in grain processing biotechnology (production of ethanol and sweeteners).
Pullulanase is also known as pullulan-6-glucanohydrolase (a debranching enzyme). Its substrate, pullulan, is regarded as a chain of maltotriose units linked by α-1,6-glycosidic bonds. Pullulanase hydrolytically cleaves pullulan (an α-glucan polysaccharide).
Pullulanase enzyme in the food industry
In the food industry, pullulanase works well as an ingredient. Pullulan can be applied directly to foods as a protective glaze or edible film due to its ability to form films. It can be used as a spice and flavoring agent for micro-encapsulation. It is used in mayonnaise to maintain consistency and quality. It is additionally used in low-calorie food formulations as a starch replacement.
Pullulanase can be used to convert starches in grains into fermentable sugars, which yeast can use to produce alcohol during fermentation.
References
External links
EC 3.2.1 | Pullulanase | [
"Biology"
] | 440 | [
"Bacteria stubs",
"Bacteria"
] |
1,639,746 | https://en.wikipedia.org/wiki/Stink%20bomb | A stink bomb, sometimes called a stinkpot, is a device designed to create an unpleasant smell. They range in effectiveness from being used as simple pranks to military grade malodorants or riot control chemical agents.
History
A stink bomb that could be launched with arrows was invented by Leonardo da Vinci.
The 1972 U.S. presidential campaign of Edmund Muskie was disrupted at least four times in Florida in 1972 with the use of stink bombs during the Florida presidential primary. Stink bombs were set off at campaign picnics in Miami and Tampa, at the Muskie campaign headquarters in Tampa and at offices in Tampa where the campaign's telephone bank was located. The stink bomb plantings served to disrupt the picnics and campaign operations, and was deemed by the U.S. Select Committee on Presidential Campaign Activities of the U.S. Senate to have "disrupted, confused, and unnecessarily interfered with a campaign for the office of the Presidency".
In 2004, it was reported that the Israeli weapons research and development directorate had created a liquid stink bomb, dubbed the "skunk bomb", with an odor that lingers for five years on clothing. It is a synthetic stink bomb based upon the chemistry of the spray that is emitted from the anal glands of the skunk. It was designed as a crowd control tool to be used as a deterrent that causes people to scatter, such as at a protest. It has been described as a less than lethal weapon.
Range
At the lower end of the spectrum, relatively harmless stink bombs consist of a mixture of ammonium sulfide, vinegar and bicarbonate, which smells strongly of rotten eggs. When exposed to air, the ammonium sulfide reacts with moisture, hydrolyzes, and a mixture of hydrogen sulfide (rotten egg smell) and ammonia is released. Another mixture consists of hydrogen sulfide and ammonia mixed together directly.
Other popular substances on which to base stink bombs are thiols with lower molecular weight such as methyl mercaptan and ethyl mercaptan—the chemicals that are added in minute quantities to natural gas in order to make gas leaks detectable by smell. A variation on this idea is the scent bomb, or perfume bomb, filled with an overpowering "cheap perfume" smell.
At the upper end of the spectrum, the governments of Israel and the United States of America are developing stink bombs for use by their law enforcement agencies and militaries as riot control and area denial weapons. Using stink bombs for these purposes has advantages over traditional riot control agents: unlike pepper spray and tear gas, stink bombs are believed not to be dangerous, although their psychological effects can make people sick.
Prank stink bombs and perfume bombs are usually sold as a 1- or 2-mL sealed glass ampoule, which can be broken by throwing against a hard surface or by crushing under one's shoe sole, thus releasing the malodorous liquid contained therein. Another variety of prank stink bomb comprises two bags, one smaller and inside the other. The inner one contains a liquid and the outer one a powder. When the inner one is ruptured by squeezing it, the liquid reacts with the powder, producing hydrogen sulfide, which expands and bursts the outer bag, releasing an unpleasant odor.
Chemicals used
Typically, lower molecular weight volatile organic compounds are used. Generally, the higher the molecular weight for a given class of compounds, the lower the volatility and initial concentration but the longer the persistence. Some chemicals (typically thiols) have a certain concentration threshold over which the smell is not perceived significantly stronger; therefore a lower-volatility compound is capable of providing comparable stench intensity to a higher-volatility compound, but for longer time. Another issue is the operating temperature, on which the compound's volatility strongly depends. Care should be taken as some compounds are toxic either in higher concentration or after prolonged exposure in low concentration.
Some plants may be used as improvised stink bombs; one such plant is the Parkia speciosa or 'stinky bean', which grows in India, Southeast Asia and Eastern Australia. The pods from this plant are collected when partly dried and stamped on, to release the stink.
Some common components are:
Organosulfur compounds
Methanethiol (used rarely; it is a gas and therefore more difficult to handle than liquids)
Ethanethiol, smelling similar to leeks, onions, durian or cooked cabbage
Propanethiol
Butanethiol, smelling similar to skunk spray
Inorganic sulfur compounds
Ammonium sulfide, rotten eggs
Carboxylic acids
Propionic acid, sweat
Butyric acid, rancid dairy or vomit
Valeric acid, smelling of dirty feet
Caproic acid, smelling of cheese and goats
Aldehydes (e.g. butanal)
Amines
Triethylamine, old fish
Ethanolamine, unpleasant
Putrescine, rotten meat
Cadaverine, rotten meat
Heterocyclic compounds
Indole, smelling of feces
Skatole, smelling of feces
Standard bathroom malodor
The US Government Standard Bathroom Malodor, said to be one of the worst-smelling substances, is a concoction of several compounds with a standardized composition.
See also
Chemical warfare
Practical joke
Stink bug
Thioacetone, a very foul smelling chemical
Stink Blasters, stinking action figure toys
References
Further reading
Trivedi, Bijal P. (January 7, 2002). "U.S. Military Is Seeking Ultimate 'Stink Bomb'". National Geographic News.
External links
Chemical weapons
Incapacitating agents
Non-lethal weapons
Practical joke devices
Foul-smelling chemicals | Stink bomb | [
"Chemistry",
"Biology"
] | 1,148 | [
"Incapacitating agents",
"Chemical accident",
"Biochemistry",
"Chemical weapons"
] |
1,640,198 | https://en.wikipedia.org/wiki/Helenite | Helenite, also known as Mount St. Helens obsidian, emerald obsidianite, and ruby obsidianite, is a glass made from the fused volcanic rock dust from Mount St. Helens and marketed as a gemstone. Helenite was first created accidentally after the eruption of Mount St. Helens in 1980. Workers from the Weyerhaeuser Timber Company were attempting to salvage equipment damaged after the volcanic eruption. Using acetylene torches, they noticed that the intense heat was melting the nearby volcanic ash and rock and turning it a greenish color. The silica, aluminium, iron, and trace amounts of chromium and copper present in the rocks and ash in the area, combined with the heat of the torches, transformed the volcanic particles into a compound that would be later commercially replicated as helenite.
As word of the discovery spread, jewelry companies took note and began to find ways to reproduce the helenite. Helenite is made by heating rock dust and particles from the Mount St. Helens area in a furnace to a high temperature. Although helenite and obsidian are both forms of glass, helenite differs from obsidian in that it is man-made. The stone has been marketed by the jewelry industry because of its emerald-like color and good refractive index, although its durability is low. It has a hardness of just 5 to 5½ and chips about as easily as obsidian or window glass. It is best used in earrings, pendants, brooches, and other types of jewelry where it will not encounter impact or abrasion. Even in these uses it should be considered to be a very delicate stone. If it is used as a ring stone, the facet edges will be easily abraded, the faces will be easily scratched, and the stone might be chipped with even a slight impact. It is seen as an inexpensive alternative to naturally-occurring green gemstones, such as emerald and peridot. Helenite can also come in various red, green and blue varieties.
See also
Goldstone (glass)
Olivine
References
External links
Glossary: H, Illustrated Dictionary of Jewelry at enchantedlearning.com
Helenite at Geology.com
Helenite Gemstone & Information at JTV.com
Gemstones
Glass
Mount St. Helens | Helenite | [
"Physics",
"Chemistry"
] | 460 | [
"Glass",
"Unsolved problems in physics",
"Homogeneous chemical mixtures",
"Materials",
"Gemstones",
"Amorphous solids",
"Matter"
] |
1,641,139 | https://en.wikipedia.org/wiki/Temperature%20gradient%20gel%20electrophoresis | Temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE) are forms of electrophoresis which use either a temperature or chemical gradient to denature the sample as it moves across an acrylamide gel. TGGE and DGGE can be applied to nucleic acids such as DNA and RNA, and (less commonly) proteins. TGGE relies on temperature dependent changes in structure to separate nucleic acids. DGGE separates genes of the same size based on their different denaturing ability which is determined by their base pair sequence. DGGE was the original technique, and TGGE a refinement of it.
History
DGGE was invented by Leonard Lerman, while he was a professor at SUNY Albany.
The same equipment can be used for analysis of protein, which was first done by Thomas E. Creighton of the MRC Laboratory of Molecular Biology, Cambridge, England. Similar looking patterns are produced by proteins and nucleic acids, but the fundamental principles are quite different.
TGGE was first described by Thatcher and Hodson and by Roger Wartell of Georgia Tech. Extensive work was done by the group of Riesner in Germany. Commercial equipment for DGGE is available from Bio-Rad, INGENY and CBS Scientific; a system for TGGE is available from Biometra.
Temperature gradient gel electrophoresis
DNA has a negative charge and so will move to the positive electrode in an electric field. A gel is a molecular mesh, with holes roughly the same size as the diameter of the DNA string. When an electric field is applied, the DNA will begin to move through the gel, at a speed roughly inversely proportional to the length of the DNA molecule (shorter lengths of DNA travel faster) — this is the basis for size dependent separation in standard electrophoresis.
In TGGE there is also a temperature gradient across the gel. At room temperature, the DNA will exist stably in a double-stranded form. As the temperature is increased, the strands begin to separate (melting), and the speed at which they move through the gel decreases drastically. Critically, the temperature at which melting occurs depends on the sequence (GC base pairs are more stable than AT due to stacking interactions, not due to the difference in hydrogen bonds: there are three hydrogen bonds between a cytosine–guanine base pair, but only two between adenine and thymine), so TGGE provides a "sequence dependent, size independent method" for separating DNA molecules. TGGE separates molecules and gives additional information about melting behavior and stability (Biometra, 2000).
Denaturing gradient gel electrophoresis
Denaturing gradient gel electrophoresis (DGGE) works by applying a small sample of DNA (or RNA) to an electrophoresis gel that contains a denaturing agent. Researchers have found that certain denaturing gels are capable of inducing DNA to melt at various stages. As a result of this melting, the DNA spreads through the gel and can be analyzed for single components, even those as small as 200-700 base pairs.
What is unique about the DGGE technique is that as the DNA is subjected to increasingly extreme denaturing conditions, the melted strands fragment completely into single strands. The process of denaturation on a denaturing gel is very sharp: "Rather than partially melting in a continuous zipper-like manner, most fragments melt in a step-wise process. Discrete portions or domains of the fragment suddenly become single-stranded within a very narrow range of denaturing conditions" (Helms, 1990). This makes it possible to discern differences in DNA sequences or mutations of various genes: sequence differences in fragments of the same length often cause them to partially melt at different positions in the gradient and therefore "stop" at different positions in the gel. By comparing the melting behavior of the polymorphic DNA fragments side by side on denaturing gradient gels, it is possible to detect fragments that have mutations in the first melting domain (Helms, 1990). Placing two samples side by side on the gel and allowing them to denature together, researchers can easily see even the smallest differences in two samples or fragments of DNA.
There are a number of disadvantages to this technique: "Chemical gradients such as those used in DGGE are not as reproducible, are difficult to establish and often do not completely resolve heteroduplexes" (Westburg, 2001). These problems are addressed by TGGE, which uses a temperature, rather than chemical, gradient to denature the sample.
Method
To separate nucleic acids by TGGE, the following steps must be performed: preparing and pouring the gels, electrophoresis, staining, and elution of DNA. Because a buffered system must be chosen, it is important that the system remain stable within the context of increasing temperature. Thus, urea is typically utilized for gel preparation; however, researchers need to be aware that the amount of urea used will affect the overall temperature required to separate the DNA. The gel is loaded, the sample is placed on the gel according to the type of gel that is being run—i.e. parallel or perpendicular—the voltage is adjusted and the sample can be left to run. Depending on which type of TGGE is to be run, either perpendicular or parallel, varying amounts of sample need to be prepared and loaded. A larger amount of one sample is used with perpendicular, while a smaller amount of many samples are used with parallel TGGE. Once the gel has been run, the gel must be stained to visualize the results. While there are a number of stains that can be used for this purpose, silver staining has proven to be the most effective tool. The DNA can be eluted from the silver stain for further analysis through PCR amplification.
Applications
TGGE and DGGE are broadly useful in biomedical and ecological research; selected applications are described below.
Mutations in mtDNA
According to a recent investigation by Wong, Liang, Kwon, Bai, Alper and Gropman, TGGE can be utilized to examine the mitochondrial DNA of an individual. According to these authors, TGGE was utilized to determine two novel mutations in the mitochondrial genome: "A 21-year-old woman who has been suspected of mitochondrial cytopathy, but negative for common mitochondrial DNA (mtDNA) point mutations and deletions, was screened for unknown mutations in the entire mitochondrial genome by temperature gradient gel electrophoresis".
p53 mutation in pancreatic secretions
Lohr and coworkers (2001) report that in a comprehensive study of pancreatic secretions of individuals without pancreatic carcinoma, p53 mutations could be found in the pancreatic juices of a small percentage of participants. Because mutations of p53 has been extensively found in pancreatic carcinomas, the researchers for this investigation were attempting to determine if the mutation itself can be linked to the development of pancreatic cancer. While Lohr was able to find p53 mutations via TGGE in a few subjects, none subsequently developed pancreatic carcinoma. Thus, the researchers conclude by noting that the p53 mutation may not be the sole indicator of pancreatic carcinoma oncogenesis.
Microbial ecology
DGGE of small ribosomal subunit coding genes was first described by Gerard Muyzer, while he was a postdoctoral researcher at Leiden University, and has become a widely used technique in microbial ecology.
PCR amplification of DNA extracted from mixed microbial communities with PCR primers specific for 16S rRNA gene fragments of bacteria and archaea, and 18S rRNA gene fragments of eukaryotes results in mixtures of PCR products.
Because these amplicons all have the same length, they cannot be separated from each other by agarose gel electrophoresis. However, sequence variations (i.e. differences in GC content and distribution) between different microbial rRNAs result in different denaturation properties of these DNA molecules.
Hence, DGGE banding patterns can be used to visualize variations in microbial genetic diversity and provide a rough estimate of the richness and abundance of predominant microbial community members. This method is often referred to as community fingerprinting. Recently, several studies have shown that DGGE of functional genes (e.g. genes involved in sulfur reduction, nitrogen fixation, and ammonium oxidation) can provide information about microbial function and phylogeny simultaneously. For instance, Tabatabaei et al. (2009) applied DGGE to reveal, for the first time, the microbial community pattern during the anaerobic fermentation of palm oil mill effluent (POME).
References
Sailey, Charles J., M.D., M.S. (2003). "TGGE" (summary paper). The University of the Sciences in Philadelphia. Parts of this article were taken from this paper.
Biochemistry methods
Electrophoresis
Gene tests | Temperature gradient gel electrophoresis | [
"Chemistry",
"Biology"
] | 1,886 | [
"Biochemistry methods",
"Genetics techniques",
"Instrumental analysis",
"Gene tests",
"Biochemical separation processes",
"Molecular biology techniques",
"Biochemistry",
"Electrophoresis"
] |
1,641,247 | https://en.wikipedia.org/wiki/Anti-reflective%20coating | An antireflective, antiglare or anti-reflection (AR) coating is a type of optical coating applied to the surface of lenses, other optical elements, and photovoltaic cells to reduce reflection. In typical imaging systems, this improves the efficiency since less light is lost due to reflection. In complex systems such as cameras, binoculars, telescopes, and microscopes the reduction in reflections also improves the contrast of the image by elimination of stray light. This is especially important in planetary astronomy. In other applications, the primary benefit is the elimination of the reflection itself, such as a coating on eyeglass lenses that makes the eyes of the wearer more visible to others, or a coating to reduce the glint from a covert viewer's binoculars or telescopic sight.
Many coatings consist of transparent thin film structures with alternating layers of contrasting refractive index. Layer thicknesses are chosen to produce destructive interference in the beams reflected from the interfaces, and constructive interference in the corresponding transmitted beams. This makes the structure's performance change with wavelength and incident angle, so that color effects often appear at oblique angles. A wavelength range must be specified when designing or ordering such coatings, but good performance can often be achieved for a relatively wide range of frequencies: usually a choice of IR, visible, or UV is offered.
Applications
Anti-reflective coatings are used in a wide variety of applications where light passes through an optical surface, and low loss or low reflection is desired. Examples include anti-glare coatings on corrective lenses and camera lens elements, and antireflective coatings on solar cells.
Corrective lenses
Opticians may recommend "anti-reflection lenses" because the decreased reflection enhances the cosmetic appearance of the lenses. Such lenses are often said to reduce glare, but the reduction is very slight. Eliminating reflections allows slightly more light to pass through, producing a slight increase in contrast and visual acuity.
Antireflective ophthalmic lenses should not be confused with polarized lenses, which are found only in sunglasses and decrease (by absorption) the visible glare of sun reflected off surfaces such as sand, water, and roads. The term "antireflective" relates to the reflection from the surface of the lens itself, not the origin of the light that reaches the lens.
Many anti-reflection lenses include an additional coating that repels water and grease, making them easier to keep clean. Anti-reflection coatings are particularly suited to high-index lenses, as these reflect more light without the coating than a lower-index lens (a consequence of the Fresnel equations). It is also generally easier and cheaper to coat high index lenses.
Photolithography
Antireflective coatings (ARC) are often used in microelectronic photolithography to help reduce image distortions associated with reflections off the surface of the substrate. Different types of antireflective coatings are applied either before (Bottom ARC, or BARC) or after the photoresist, and help reduce standing waves, thin-film interference, and specular reflections.
Solar cells
Solar cells are often coated with an anti-reflective coating. Materials that have been used include magnesium fluoride, silicon nitride, silicon dioxide, titanium dioxide, and aluminum oxide.
Types
Index-matching
The simplest form of anti-reflective coating was discovered by Lord Rayleigh in 1886. The optical glass available at the time tended to develop a tarnish on its surface with age, due to chemical reactions with the environment. Rayleigh tested some old, slightly tarnished pieces of glass, and found to his surprise that they transmitted more light than new, clean pieces. The tarnish replaces the air-glass interface with two interfaces: an air-tarnish interface and a tarnish-glass interface. Because the tarnish has a refractive index between those of glass and air, each of these interfaces exhibits less reflection than the air-glass interface did. In fact, the total of the two reflections is less than that of the "naked" air-glass interface, as can be calculated from the Fresnel equations.
One approach is to use graded-index (GRIN) anti-reflective coatings, that is, ones with nearly continuously varying indices of refraction. With these, it is possible to curtail reflection for a broad band of frequencies and incidence angles.
Single-layer interference
The simplest interference anti-reflective coating consists of a single thin layer of transparent material with refractive index equal to the square root of the substrate's refractive index. In air, such a coating theoretically gives zero reflectance for light with wavelength (in the coating) equal to four times the coating's thickness. Reflectance is also decreased for wavelengths in a broad band around the center. A layer of thickness equal to a quarter of some design wavelength is called a "quarter-wave layer".
The most common type of optical glass is crown glass, which has an index of refraction of about 1.52. An optimal single-layer coating would have to be made of a material with an index of about 1.23. There are no solid materials with such a low refractive index. The closest materials with good physical properties for a coating are magnesium fluoride, MgF2 (with an index of 1.38), and fluoropolymers, which can have indices as low as 1.30, but are more difficult to apply. MgF2 on a crown glass surface gives a reflectance of about 1%, compared to 4% for bare glass. MgF2 coatings perform much better on higher-index glasses, especially those with index of refraction close to 1.9. MgF2 coatings are commonly used because they are cheap and durable. When the coatings are designed for a wavelength in the middle of the visible band, they give reasonably good anti-reflection over the entire band.
Researchers have produced films of mesoporous silica nanoparticles with refractive indices as low as 1.12, which function as antireflection coatings.
Multi-layer interference
By using alternating layers of a low-index material like silica and a higher-index material, it is possible to obtain reflectivities as low as 0.1% at a single wavelength. Coatings that give very low reflectivity over a broad band of frequencies can also be made, although these are complex and relatively expensive. Optical coatings can also be made with special characteristics, such as near-zero reflectance at multiple wavelengths, or optimal performance at angles of incidence other than 0°.
Absorbing
An additional category of anti-reflection coatings is the so-called "absorbing ARC". These coatings are useful in situations where high transmission through a surface is unimportant or undesirable, but low reflectivity is required. They can produce very low reflectance with few layers, and can often be produced more cheaply, or at greater scale, than standard non-absorbing AR coatings. (See, for example, US Patent 5,091,244.) Absorbing ARCs often make use of unusual optical properties exhibited in compound thin films produced by sputter deposition. For example, titanium nitride and niobium nitride are used in absorbing ARCs. These can be useful in applications requiring contrast enhancement or as a replacement for tinted glass (for example, in a CRT display).
Moth eye
Moths' eyes have an unusual property: their surfaces are covered with a natural nanostructured film, which eliminates reflections. This allows the moth to see well in the dark, without reflections to give its location away to predators. The structure consists of a hexagonal pattern of bumps, each roughly 200 nm high and spaced on 300 nm centers. This kind of antireflective coating works because the bumps are smaller than the wavelength of visible light, so the light sees the surface as having a continuous refractive index gradient between the air and the medium, which decreases reflection by effectively removing the air-lens interface. Practical anti-reflective films have been made by humans using this effect; this is a form of biomimicry. Canon uses the moth-eye technique in their SWC subwavelength structure coating, which significantly reduces lens flare.
Such structures are also used in photonic devices, for example, moth-eye structures grown from tungsten oxide and iron oxide can be used as photoelectrodes for splitting water to produce hydrogen. The structure consists of tungsten oxide spheroids several hundred micrometers in diameter, coated with a few nanometers of iron oxide.
Circular polarizer
A circular polarizer laminated to a surface can be used to eliminate reflections. The polarizer transmits light with one chirality ("handedness") of circular polarization. Light reflected from the surface after the polarizer is transformed into the opposite "handedness". This light cannot pass back through the circular polarizer because its chirality has changed (e.g. from right circular polarized to left circularly polarized). A disadvantage of this method is that if the input light is unpolarized, the transmission through the assembly will be less than 50%.
Theory
There are two separate causes of optical effects due to coatings, often called thick-film and thin-film effects. Thick-film effects arise because of the difference in the index of refraction between the layers above and below the coating (or film); in the simplest case, these three layers are the air, the coating, and the glass. Thick-film effects do not depend on how thick the coating is, so long as the coating is much thicker than a wavelength of light. Thin-film effects arise when the thickness of the coating is approximately the same as a quarter or a half of a wavelength of light. In this case, the reflections of a steady source of light can be made to add destructively and hence reduce reflections by a separate mechanism. In addition to depending very much on the thickness of the film and the wavelength of light, thin-film coatings depend on the angle at which the light strikes the coated surface.
Reflection
Whenever a ray of light moves from one medium to another (for example, when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the interface) between the two media. This can be observed when looking through a window, for instance, where a (weak) reflection from the front and back surfaces of the window glass can be seen. The strength of the reflection depends on the ratio of the refractive indices of the two media, as well as the angle of the surface to the beam of light. The exact value can be calculated using the Fresnel equations.
When the light meets the interface at normal incidence (perpendicularly to the surface), the intensity of light reflected is given by the reflection coefficient, or reflectance, R:

R = \left( \frac{n_0 - n_S}{n_0 + n_S} \right)^2

where n_0 and n_S are the refractive indices of the first and second media respectively. The value of R varies from 0 (no reflection) to 1 (all light reflected) and is usually quoted as a percentage. Complementary to R is the transmission coefficient, or transmittance, T. If absorption and scattering are neglected, then the value T is always 1 − R. Thus if a beam of light with intensity I is incident on the surface, a beam of intensity RI is reflected, and a beam with intensity TI is transmitted into the medium.
For the simplified scenario of visible light travelling from air (n_0 ≈ 1.0) into common glass (n_S ≈ 1.5), the value of R is 0.04, or 4%, on a single reflection. So at most 96% of the light (T = 1 − R = 0.96) actually enters the glass, and the rest is reflected from the surface. The amount of light reflected is known as the reflection loss.
In the more complicated scenario of multiple reflections, say with light travelling through a window, light is reflected both when going from air to glass and at the other side of the window when going from glass back to air. The size of the loss is the same in both cases. Light also may bounce from one surface to another multiple times, being partially reflected and partially transmitted each time it does so. In all, the combined reflection coefficient is given by 2R/(1 + R). For glass in air, this is about 7.7%.
Rayleigh's film
As observed by Lord Rayleigh, a thin film (such as tarnish) on the surface of glass can reduce the reflectivity. This effect can be explained by envisioning a thin layer of material with refractive index n1 between the air (index n0) and the glass (index nS). The light ray now reflects twice: once from the surface between air and the thin layer, and once from the layer-to-glass interface.
From the equation above and the known refractive indices, reflectivities for both interfaces can be calculated, denoted R_01 and R_1S respectively. The transmission at each interface is therefore T_01 = 1 − R_01 and T_1S = 1 − R_1S. The total transmittance into the glass is thus T_1S T_01. Calculating this value for various values of n_1, it can be found that at one particular value of optimal refractive index of the layer, the transmittance of both interfaces is equal, and this corresponds to the maximal total transmittance into the glass.
This optimal value is given by the geometric mean of the two surrounding indices:

n_1 = \sqrt{n_0 \, n_S}

For the example of glass (n_S ≈ 1.5) in air (n_0 ≈ 1.0), this optimal refractive index is n_1 ≈ 1.225.
The reflection loss of each interface is approximately 1.0% (with a combined loss of 2.0%), and an overall transmission T1ST01 of approximately 98%. Therefore, an intermediate coating between the air and glass can halve the reflection loss.
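To make the arithmetic above concrete, here is a minimal Python sketch of the single-interface Fresnel reflectance and the two-interface Rayleigh film. It assumes normal incidence and the representative indices used in this section, and, as in Rayleigh's thick-film picture, neglects interference within the film:

```python
# Sketch: Fresnel reflectance at normal incidence, and the effect of a
# Rayleigh-style intermediate film between air and crown glass.

def reflectance(n0, n1):
    """Fresnel reflectance at normal incidence between indices n0 and n1."""
    return ((n0 - n1) / (n0 + n1)) ** 2

n_air, n_glass = 1.0, 1.5

# Bare air-glass interface: R = 0.04 (4%), so T = 96%.
R_bare = reflectance(n_air, n_glass)

# Optimal intermediate film index: geometric mean of the surrounding indices.
n_film = (n_air * n_glass) ** 0.5            # ~1.225

# Two interfaces, each with ~1% loss; total transmittance T01 * T1S ~ 98%.
T_total = (1 - reflectance(n_air, n_film)) * (1 - reflectance(n_film, n_glass))

print(f"bare interface: R = {R_bare:.3f}")        # 0.040
print(f"optimal film index: {n_film:.3f}")        # 1.225
print(f"transmittance with film: {T_total:.3f}")  # ~0.980
```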
Interference coatings
The use of an intermediate layer to form an anti-reflection coating can be thought of as analogous to the technique of impedance matching of electrical signals. (A similar method is used in fibre optic research, where an index-matching oil is sometimes used to temporarily defeat total internal reflection so that light may be coupled into or out of a fiber.) Further reduced reflection could in theory be made by extending the process to several layers of material, gradually blending the refractive index of each layer between the index of the air and the index of the substrate.
Practical anti-reflection coatings, however, rely on an intermediate layer not only for its direct reduction of reflection coefficient, but also use the interference effect of a thin layer. Assume the layer's thickness is controlled precisely, such that it is exactly one quarter of the wavelength of light in the layer (d = λ_0 / (4 n_1), where λ_0 is the vacuum wavelength and n_1 the refractive index of the layer). The layer is then called a quarter-wave coating. For this type of coating a normally incident beam I, when reflected from the second interface, will travel exactly half its own wavelength further than the beam reflected from the first surface, leading to destructive interference. This is also true for thicker coating layers (3λ/4, 5λ/4, etc.), however the anti-reflective performance is worse in this case due to the stronger dependence of the reflectance on wavelength and the angle of incidence.
If the intensities of the two beams R1 and R2 are exactly equal, they will destructively interfere and cancel each other, since they are exactly out of phase. Therefore, there is no reflection from the surface, and all the energy of the beam must be in the transmitted ray, T. In the calculation of the reflection from a stack of layers, the transfer-matrix method can be used.
Real coatings do not reach perfect performance, though they are capable of reducing a surface reflection coefficient to less than 0.1%. Also, the layer will have the ideal thickness for only one distinct wavelength of light. Other difficulties include finding suitable materials for use on ordinary glass, since few useful substances have the required refractive index (n ≈ 1.23) that will make both reflected rays exactly equal in intensity. Magnesium fluoride (MgF2) is often used, since this is hard-wearing and can be easily applied to substrates using physical vapor deposition, even though its index is higher than desirable (n = 1.38).
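A short sketch of the quarter-wave case follows. It uses the standard thin-film result that a quarter-wave layer of index n1 between media n0 and nS has reflectance R = ((n0·nS − n1²)/(n0·nS + n1²))² at its design wavelength; the 550 nm design wavelength is an illustrative choice:

```python
# Sketch: quarter-wave anti-reflection layer at normal incidence.

def quarter_wave_R(n0, n1, nS):
    """Reflectance of a quarter-wave layer at its design wavelength."""
    return ((n0 * nS - n1 ** 2) / (n0 * nS + n1 ** 2)) ** 2

n_air, n_MgF2, n_crown = 1.0, 1.38, 1.52
lambda0 = 550e-9  # design wavelength (m), middle of the visible band

thickness = lambda0 / (4 * n_MgF2)           # physical thickness ~ 99.6 nm
R = quarter_wave_R(n_air, n_MgF2, n_crown)   # ~0.013, vs ~0.043 for bare glass

print(f"layer thickness: {thickness * 1e9:.1f} nm")
print(f"reflectance at design wavelength: {R:.3f}")

# Zero reflectance would require n1 = sqrt(n0 * nS) ~ 1.23; no durable solid
# has such a low index, which is why MgF2 (1.38) is the usual compromise.
```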
Further reduction is possible by using multiple coating layers, designed such that reflections from the surfaces undergo maximal destructive interference. One way to do this is to add a second quarter-wave thick higher-index layer between the low-index layer and the substrate. The reflection from all three interfaces produces destructive interference and anti-reflection. Other techniques use varying thicknesses of the coatings. By using two or more layers, each of a material chosen to give the best possible match of the desired refractive index and dispersion, broadband anti-reflection coatings covering the visible range (400–700 nm) with maximal reflectivity of less than 0.5% are commonly achievable.
The exact nature of the coating determines the appearance of the coated optic; common AR coatings on eyeglasses and photographic lenses often look somewhat bluish (since they reflect slightly more blue light than other visible wavelengths), though green and pink-tinged coatings are also used.
If the coated optic is used at non-normal incidence (that is, with light rays not perpendicular to the surface), the anti-reflection capabilities are degraded somewhat. This occurs because the phase accumulated in the layer relative to the phase of the light immediately reflected decreases as the angle increases from normal. This is counterintuitive, since the ray experiences a greater total phase shift in the layer than for normal incidence. This paradox is resolved by noting that the ray will exit the layer spatially offset from where it entered and will interfere with reflections from incoming rays that had to travel further (thus accumulating more phase of their own) to arrive at the interface. The net effect is that the relative phase is actually reduced, shifting the coating, such that the anti-reflection band of the coating tends to move to shorter wavelengths as the optic is tilted. Non-normal incidence angles also usually cause the reflection to be polarization-dependent.
Textured coatings
Reflection can be reduced by texturing the surface with 3D pyramids or 2D grooves (gratings). These kinds of textured coatings can be created using, for example, the Langmuir–Blodgett method.
If the wavelength is greater than the texture size, the texture behaves like a gradient-index film with reduced reflection. To calculate reflection in this case, effective medium approximations can be used. To minimize reflection, various profiles of pyramids have been proposed, such as cubic, quintic or integral exponential profiles.
If the wavelength is smaller than the texture size, the reduction in reflection can be explained with the help of the geometric optics approximation: rays must be reflected many times before they are sent back toward the source. In this case the reflection can be calculated using ray tracing.
Using texture reduces reflection for wavelengths comparable with the feature size as well. In this case no approximation is valid, and reflection can be calculated by solving Maxwell equations numerically.
Antireflective properties of textured surfaces are well discussed in literature for a wide range of size-to-wavelength ratios (including long- and short-wave limits) to find the optimal texture size.
History
As mentioned above, natural index-matching "coatings" were discovered by Lord Rayleigh in 1886. Harold Dennis Taylor of the Cooke company developed a chemical method for producing such coatings in 1904.
Interference-based coatings were invented and developed in 1935 by Olexander Smakula, who was working for the Carl Zeiss optics company. These coatings remained a German military secret for several years, until the Allies discovered the secret at the end of World War II.
Katharine Burr Blodgett and Irving Langmuir developed organic anti-reflection coatings known as Langmuir–Blodgett films in the late 1930s.
See also
Anti-scratch coating
Dichroic filter
Lens flare, which AR coating helps to reduce.
References
Sources
External links
Browser-based thin film design and optimization software
Browser-based numerical calculator of single-layer thin film reflectivity
Thin-film optics | Anti-reflective coating | [
"Materials_science",
"Mathematics"
] | 4,110 | [
"Thin-film optics",
"Planes (geometry)",
"Thin films"
] |
1,641,702 | https://en.wikipedia.org/wiki/Lamella%20%28materials%29 | A lamella (plural: lamellae) is a small plate or flake, from the Latin, and may also refer to collections of fine sheets of material held adjacent to one another in a gill-shaped structure, often with fluid in between though sometimes simply a set of "welded" plates. The term is used in biological contexts for thin membranes or plates of tissue. In the context of materials science, the microscopic structures in bone and nacre are called lamellae. Moreover, the term lamella is often used to describe the crystal structure of some materials.
Uses of the term
In surface chemistry (especially mineralogy and materials science), lamellar structures are fine layers, alternating between different materials. They can be produced by chemical effects (as in eutectic solidification), biological means, or a deliberate process of lamination, such as pattern welding. Lamellae can also describe the layers of atoms in the crystal lattices of materials such as metals.
In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between.
In chemical engineering, the term is used for devices such as filters and heat exchangers.
In mycology, a lamella (or gill) is a papery hymenophore rib under the cap of some mushroom species, most often agarics.
The term has been used to describe the construction of lamellar armour, as well as the layered structures that can be described by a lamellar vector field.
In medical professions, especially orthopedic surgery, the term is used to refer to 3D printed titanium technology which is used to create implantable medical devices (in this case, orthopedic implants).
In context of water-treatment, lamellar filters may be referred to as plate filters or tube filters.
This term is used to describe a certain type of ichthyosis, a congenital skin condition. Lamellar ichthyosis often presents with a "collodion" membrane at birth. It is characterized by generalized dark scaling.
The term lamella(e) is used in the flooring industry to describe the finished top-layer of an engineered wooden floor. For example, an engineered walnut floor will have several layers of wood and a top walnut lamella.
In archaeology, the term is used for a variety of small flat and thin objects, such as Amulet MS 5236, a very thin gold plate with a stamped text from Ancient Greece in the 6th century BC.
In crystallography, the term was first used by Christopher Chantler and refers to a very thin layer of a perfect crystal, from which curved crystal physics may be derived.
In textile industry, a lamella is a thin metallic strip used alone or wound around a core thread for goldwork embroidery and tapestry weaving.
In September 2010, the U.S. Food and Drug Administration (FDA) announced a recall of two medications which contained "extremely thin glass flakes (lamellae) that are barely visible in most cases. The lamellae result from the interaction of the formulation with glass vials over the shelf life of the product."
See also
Lamella (cell biology)
Middle lamella
Annulate lamella
Lamella (structure)
References
Materials science | Lamella (materials) | [
"Physics",
"Materials_science",
"Engineering"
] | 666 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
1,641,770 | https://en.wikipedia.org/wiki/Globe%20valve | A globe valve, distinct from a ball valve, is a type of valve used for regulating flow in a pipeline, consisting of a movable plug or disc element and a stationary ring seat in a generally spherical body.
Globe valves are named for their spherical body shape with the two halves of the body being separated by an internal baffle. This has an opening that forms a seat onto which a movable plug can be screwed in to close (or shut) the valve. The plug is also called a disc. In globe valves, the plug is connected to a stem which is operated by screw action using a handwheel in manual valves. Typically, automated globe valves use smooth stems rather than threaded and are opened and closed by an actuator assembly.
Information
Although globe valves in the past had the spherical bodies which gave them their name, many modern globe valves do not have much of a spherical shape. However, the term globe valve is still often used for valves that have such an internal mechanism. In plumbing, valves with such a mechanism are also often called stop valves, since they do not have the spherical housing; however, the term stop valve may refer to valves which are used to stop flow even when they have other mechanisms or designs.
Parts of a typical globe valve
Body
The body is the main pressure-containing structure of the valve and the most easily identified as it forms the mass of the valve. It contains all of the valve's internal parts that will come in contact with the substance being controlled by the valve. The bonnet is connected to the body and provides the containment of the fluid, gas, or slurry that is being controlled.
Globe valves are typically two-port valves, although three-port valves are also produced, mostly in straight-flow configuration. Ports are openings in the body for fluid flowing in or out. The two ports may be oriented straight across from each other or anywhere on the body, or oriented at an angle (such as a 90°). Globe valves with ports at such an angle are called angle globe valves. Angle globe valves are mainly used for corrosive or highly viscous fluids that solidify at room temperature. This is because straight valves are designed so that the outlet pipe is in line with the inlet pipe, and the fluid has a good chance of staying there in the case of horizontal piping. In the case of angle valves, the outlet pipe is directed towards the bottom, which allows the fluid to drain off. In turn, this prevents clogging and/or corrosion of the valve components over a period of time.
A globe valve can also have a body in the shape of a "Y". This will allow the construction of the valve to be straight at the bottom as opposed to the conventional pot-type construction (to arrange bottom seat) in case of other valves. This will again allow the fluid to pass through without difficulty and minimizes fluid clogging/corrosion in the long term.
Bonnet
The bonnet provides a leak-proof closure for the valve body. The threaded section of the stem goes through a hole with matching threads in the bonnet. Globe valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest bonnet, offering a durable, pressure-tight seal. A union bonnet is suitable for applications that require frequent inspection and cleaning, since it can be repeatedly removed and resealed.
Plug or disc
The valve's closure mechanism involves plugs that connect to a stem, which is adjusted either by sliding or screwing it up or down to regulate flow. Plugs come in balanced or unbalanced types. Unbalanced plugs, typically solid, are suitable for smaller valves or those with low pressure drops. They offer advantages such as simpler design, with potential leakage only at the seat, and usually lower cost. However, they are limited in size, as larger unbalanced plugs may require impractical forces to seal and control flow. On the other hand, balanced plugs feature holes through the plug itself. They offer advantages such as easier shut-off due to reduced static forces required. However, they introduce a second potential leak path between the plug and the cage, and they tend to be more expensive.
Stem
The stem serves as a connector from the actuator to the inside of the valve and transmits this actuation force. Stems are either smooth for actuator-controlled valves or threaded for manual valves. The smooth stems are surrounded by packing material to prevent leaking material from the valve. This packing is a wearable material and will have to be replaced during maintenance. With a smooth stem the ends are threaded to allow connection to the plug and the actuator. The stem must not only withstand a large amount of compression force during valve closure, but also have high tensile strength during valve opening. In addition, the stem must be very straight, or have low run-out, in order to ensure good valve closure. This minimum run-out also minimizes wear of the packing contained in the bonnet, which provides the seal against leakage. The stem may be provided with a shroud over the packing nut to prevent foreign bodies entering the packing material, which would accelerate wear.
Cage
The cage is a part of the valve that surrounds the plug and is located inside the body of the valve. Typically, the cage is one of the greatest determiners of flow within the valve. As the plug is moved, more of the openings in the cage are exposed and flow is increased and vice versa. The design and layout of the openings can have a large effect on flow of material (the flow characteristics of different materials at temperatures, pressures that are in a range). Cages are also used to guide the plug to the seat of the valve for a good shutoff, substituting the guiding from the bonnet.
Seat
The seat ring provides a stable, uniform and replaceable shut-off surface. The seat is usually screwed in or torqued. This pushes the cage down on the lip of the seat and holds it firmly to the body of the valve. The seat may also be threaded and screwed into a thread cut in the same area of the body. However this method makes removal of the seat ring during maintenance difficult if not impossible. Seat rings are also typically beveled at the seating surface to allow for some guiding during the final stages of closing the valve.
Economical globe valves or stop valves with a similar mechanism used in plumbing often have a rubber washer at the bottom of the disc for the seating surface, so that rubber can be compressed against the seat to form a leak-tight seal when shut.
See also
Ball valve
Butterfly valve
Control valve
Diaphragm valve
Gate valve
Needle valve
Pinch valve
References
External links
Control Valve Handbook – Fisher Controls International (4th Edition) – a complete 297-page online book.
Process Instrumentation (Lecture 8): Control valves – an article from a University of South Australia website.
Piping
Valves
Plumbing valves | Globe valve | [
"Physics",
"Chemistry",
"Engineering"
] | 1,384 | [
"Building engineering",
"Chemical engineering",
"Physical systems",
"Valves",
"Hydraulics",
"Mechanical engineering",
"Piping"
] |
1,642,813 | https://en.wikipedia.org/wiki/Fluorescence-lifetime%20imaging%20microscopy | Fluorescence-lifetime imaging microscopy or FLIM is an imaging technique based on the differences in the exponential decay rate of the photon emission of a fluorophore from a sample. It can be used as an imaging technique in confocal microscopy, two-photon excitation microscopy, and multiphoton tomography.
The fluorescence lifetime (FLT) of the fluorophore, rather than its intensity, is used to create the image in FLIM. The fluorescence lifetime depends on the local micro-environment of the fluorophore, which rules out erroneous measurements caused by changes in the brightness of the light source, in background light intensity, or by limited photobleaching. This technique also has the advantage of minimizing the effect of photon scattering in thick layers of sample. Being dependent on the micro-environment, lifetime measurements have been used as indicators for pH, viscosity and chemical species concentration.
Fluorescence lifetimes
A fluorophore which is excited by a photon will drop to the ground state with a certain probability based on the decay rates through a number of different (radiative and/or nonradiative) decay pathways. To observe fluorescence, one of these pathways must be by spontaneous emission of a photon. In the ensemble description, the fluorescence emitted will decay with time according to

I(t) = I_0 \, e^{-t/\tau}

where

\tau = \frac{1}{\sum_i k_i}.

In the above, t is time, τ is the fluorescence lifetime, I_0 is the initial fluorescence at t = 0, and k_i are the rates for each decay pathway, at least one of which must be the fluorescence decay rate k_f. More importantly, the lifetime, τ, is independent of the initial intensity and of the emitted light. This can be utilized for making non-intensity-based measurements in chemical sensing.
Measurement
Fluorescence-lifetime imaging yields images with the intensity of each pixel determined by the lifetime τ, which allows one to view contrast between materials with different fluorescence decay rates (even if those materials fluoresce at exactly the same wavelength), and also produces images which show changes in other decay pathways, such as in FRET imaging.
Pulsed illumination
Fluorescence lifetimes can be determined in the time domain by using a pulsed source.
When a population of fluorophores is excited by an ultrashort or delta pulse of light, the time-resolved fluorescence will decay exponentially as described above. However, if the excitation pulse or detection response is wide, the measured fluorescence, d(t), will not be purely exponential. The instrumental response function, IRF(t), will be convolved or blended with the decay function, F(t), so that the measured signal is d(t) = IRF(t) ⊗ F(t), where ⊗ denotes convolution.
The instrumental response of the source, detector, and electronics can be measured, usually from scattered excitation light. Recovering the decay function (and corresponding lifetimes) poses additional challenges as division in the frequency domain tends to produce high noise when the denominator is close to zero.
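A minimal numerical sketch of this measurement model follows; the Gaussian IRF shape, its 0.3 ns width, and the 2.5 ns lifetime are all illustrative assumptions, not real hardware values:

```python
# Sketch of the time-domain measurement model: the detected decay d(t) is the
# convolution of the instrument response IRF(t) with the pure decay F(t).
import numpy as np

dt = 0.01                            # time bin width (ns)
t = np.arange(0, 20, dt)             # time axis (ns)

tau = 2.5                            # assumed fluorescence lifetime (ns)
F = np.exp(-t / tau)                 # pure exponential decay

# Gaussian IRF of ~0.3 ns width, centered at 1 ns (an assumption).
irf = np.exp(-0.5 * ((t - 1.0) / 0.3) ** 2)
irf /= irf.sum()                     # normalize so convolution preserves area

d = np.convolve(F, irf)[: t.size]    # measured decay, blurred by the IRF

# Naive deconvolution by division in the frequency domain amplifies noise
# wherever the IRF spectrum is near zero, which is why iterative
# re-convolution fitting is preferred in practice.
```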
TCSPC
Time-correlated single-photon counting (TCSPC) is usually employed because it compensates for variations in source intensity and single photon pulse amplitudes.
Using commercial TCSPC equipment a fluorescence decay curve can be recorded with a time resolution down to 405 fs.
The recorded fluorescence decay histogram obeys Poisson statistics, which is taken into account in determining goodness of fit during fitting.
More specifically, TCSPC records times at which individual photons are detected by a fast single-photon detector (typically a photo-multiplier tube (PMT) or a single photon avalanche photo diode (SPAD)) with respect to the excitation laser pulse.
The recordings are repeated for multiple laser pulses and after enough recorded events, one is able to build a histogram of the number of events across all of these recorded time points.
This histogram can then be fit to an exponential function that contains the exponential lifetime decay function of interest, and the lifetime parameter can accordingly be extracted.
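The following is a minimal sketch of that workflow on synthetic data: simulate exponentially distributed photon delays, histogram them, and fit a mono-exponential model with Poisson weighting. The 3 ns lifetime, photon count, and bin settings are arbitrary assumptions:

```python
# Sketch: build a TCSPC-style histogram from simulated photon arrival times
# and recover the lifetime with a weighted least-squares fit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau_true = 3.0                                  # ns, assumed
arrivals = rng.exponential(tau_true, 100_000)   # photon delays vs. laser pulse

counts, edges = np.histogram(arrivals, bins=256, range=(0, 25))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(t, A, tau):
    return A * np.exp(-t / tau)

# Poisson statistics: weight each bin by sqrt(counts) in the fit.
mask = counts > 0
popt, _ = curve_fit(model, centers[mask], counts[mask],
                    p0=(counts.max(), 1.0), sigma=np.sqrt(counts[mask]))
print(f"fitted lifetime: {popt[1]:.2f} ns")     # ~3.0
```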
Multi-channel PMT systems with 16 to 64 elements have been commercially available, whereas the recently demonstrated CMOS single-photon avalanche diode (SPAD)-TCSPC FLIM systems can offer even higher number of detection channels and additional low-cost options.
Gating method
Pulse excitation is still used in this method. Before the pulse reaches the sample, some of the light is reflected by a dichroic mirror and is detected by a photodiode that activates a delay generator controlling a gated optical intensifier (GOI) that sits in front of the CCD detector. The GOI only allows detection for the fraction of time when it is open after the delay. Thus, with an adjustable delay generator, one is able to collect fluorescence emission after multiple delay times encompassing the time range of the fluorescence decay of the sample. In recent years integrated intensified CCD cameras entered the market. These cameras consist of an image intensifier, a CCD sensor and an integrated delay generator. ICCD cameras with gating times down to 200 ps and delay steps of 10 ps allow sub-nanosecond-resolution FLIM. In combination with an endoscope this technique is used for intraoperative diagnosis of brain tumors.
Phase modulation
Fluorescence lifetimes can be determined in the frequency domain by a phase-modulation method. The method uses a light source that is pulsed or modulated at high frequency (up to 500 MHz), such as an LED, diode laser or a continuous-wave source combined with an electro-optic modulator or an acousto-optic modulator. The fluorescence is (a) demodulated and (b) phase-shifted; both quantities are related to the characteristic decay times of the fluorophore. In addition, the y-components of the excitation and fluorescence sine waves are modulated, and the lifetime can also be determined from the modulation ratio of these y-components. Hence, two values for the lifetime can be determined from the phase-modulation method. The lifetimes are determined through fitting procedures of these experimental parameters. An advantage of PMT-based or camera-based frequency-domain FLIM is its fast lifetime image acquisition, making it suitable for applications such as live-cell research.
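For a single-exponential decay, the measured phase shift φ and demodulation m at angular frequency ω give the standard estimates τ_φ = tan(φ)/ω and τ_m = (1/ω)·√(1/m² − 1). A small sketch with illustrative measured values (the 80 MHz modulation frequency, φ and m below are arbitrary assumptions):

```python
# Sketch of frequency-domain lifetime estimation for a single-exponential
# fluorophore: phase and modulation each yield a lifetime estimate.
import numpy as np

f_mod = 80e6                     # modulation frequency (Hz), assumed
omega = 2 * np.pi * f_mod

phi = 0.93                       # measured phase shift (rad), illustrative
m = 0.60                         # measured demodulation ratio, illustrative

tau_phase = np.tan(phi) / omega                # lifetime from phase
tau_mod = np.sqrt(1.0 / m**2 - 1.0) / omega    # lifetime from modulation

# The two estimates agree only for a true single-exponential decay;
# a mismatch indicates multiple lifetime components.
print(f"tau(phase) = {tau_phase * 1e9:.2f} ns, "
      f"tau(mod) = {tau_mod * 1e9:.2f} ns")
```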
Analysis
The goal of the analysis algorithm is to extract the pure decay curve from the measured decay and to estimate the lifetime(s). The latter is usually accomplished by fitting single or multi-exponential functions. A variety of methods have been developed to solve this problem. The most widely used technique is least-squares iterative re-convolution, which is based on the minimization of the weighted sum of the residuals. In this technique theoretical exponential decay curves are convoluted with the instrument response function, which is measured separately, and the best fit is found by iterative calculation of the residuals for different inputs until a minimum is found. For a set of observations D(i) of the fluorescence signal in time bin i, the lifetime estimation is carried out by minimization of

\chi^2 = \sum_i \frac{\left[ D(i) - D_c(i) \right]^2}{D(i)}

where D_c(i) is the calculated (model) decay in bin i, i.e. the theoretical decay convolved with the instrument response; the weighting by D(i) reflects the Poisson statistics of photon counting.
Besides experimental difficulties, including the wavelength dependent instrument response function, mathematical treatment of the iterative de-convolution problem is not straightforward and it is a slow process which in the early days of FLIM made it impractical for a pixel-by-pixel analysis.
Non-fitting methods are attractive because they offer a very fast solution to lifetime estimation. One of the major and straightforward techniques in this category is the rapid lifetime determination (RLD) method. RLD calculates the lifetimes and their amplitudes directly by dividing the decay curve into two parts of equal width Δt. The analysis is performed by integrating the decay curve over the two equal time intervals:

D_j = \sum_{i=(j-1)K+1}^{jK} I_i \, \delta t , \qquad j = 1, 2

where I_i is the recorded signal in the i-th channel and K is the number of channels in each interval. The lifetime can be estimated using

\tau = \frac{-\Delta t}{\ln\left( D_2 / D_1 \right)}.

For multi-exponential decays this equation provides the average lifetime. This method can be extended to analyze bi-exponential decays. One major drawback of this method is that it cannot take into account the instrument response effect, and for this reason the early part of the measured decay curves should be ignored in the analyses. This means that part of the signal is discarded and the accuracy for estimating short lifetimes goes down.
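A minimal sketch of the two-window RLD estimator on a synthetic noise-free decay (the window width and the 4 ns lifetime are arbitrary assumptions):

```python
# Sketch of rapid lifetime determination (RLD): split the decay into two
# equal windows, integrate, and estimate tau directly -- no iterative fitting.
import numpy as np

def rld_lifetime(decay, dt):
    """Two-window RLD estimate: tau = -T / ln(D2/D1), T = window width."""
    half = decay.size // 2
    D1 = decay[:half].sum()
    D2 = decay[half:2 * half].sum()
    return -half * dt / np.log(D2 / D1)

# Synthetic mono-exponential decay, tau = 4 ns, sampled every 0.05 ns.
dt = 0.05
t = np.arange(0, 20, dt)
decay = np.exp(-t / 4.0)

print(f"RLD estimate: {rld_lifetime(decay, dt):.2f} ns")   # ~4.0
```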
One of the interesting features of the convolution theorem is that the integral of the convolution is the product of the factors that make up the integral. There are a few techniques which work in transformed space that exploit this property to recover the pure decay curve from the measured curve. Laplace and Fourier transformation along with Laguerre–Gauss expansion have been used to estimate the lifetime in transformed space. These approaches are faster than the deconvolution-based methods but they suffer from truncation and sampling problems. Moreover, application of methods like the Laguerre–Gauss expansion is mathematically complicated. In Fourier methods the lifetime of a single exponential decay curve is given by

\tau = \frac{1}{\omega} \, \frac{\operatorname{Im} D(\omega)}{\operatorname{Re} D(\omega)}

where

\omega = \frac{2\pi n}{T}

and n is the harmonic number and T is the total time range of detection.
Applications
FLIM has primarily been used in biology as a method to detect photosensitizers in cells and tumors as well as FRET in instances where ratiometric imaging is difficult.
The technique was developed in the late 1980s and early 1990s (gating method: Bugiel et al. 1989, König 1989; phase modulation: Lakowicz et al. 1992) before being more widely applied in the late 1990s. In cell culture, it has been used to study EGF receptor signaling and trafficking.
Time domain FLIM (tdFLIM) has also been used to show the interaction of both types of nuclear intermediate filament proteins lamins A and B1 in distinct homopolymers at the nuclear envelope, which further interact with each other in higher order structures. FLIM imaging is particularly useful in neurons, where light scattering by brain tissue is problematic for ratiometric imaging. In neurons, FLIM imaging using pulsed illumination has been used to study Ras, CaMKII, Rac, and Ran family proteins. FLIM has been used in clinical multiphoton tomography to detect intradermal cancer cells as well as pharmaceutical and cosmetic compounds.
More recently FLIM has also been used to detect flavanols in plant cells.
Autofluorescent coenzymes NAD(P)H and FAD
Multi-photon FLIM is increasingly used to detect auto-fluorescence from coenzymes as markers for changes in mammalian metabolism.
FRET imaging
Since the fluorescence lifetime of a fluorophore depends on both radiative (i.e. fluorescence) and non-radiative (i.e. quenching, FRET) processes, energy transfer from the donor molecule to the acceptor molecule will decrease the lifetime of the donor.
Thus, FRET measurements using FLIM can provide a method to discriminate between the states/environments of the fluorophore.
In contrast to intensity-based FRET measurements, the FLIM-based FRET measurements are also insensitive to the concentration of fluorophores and can thus filter out artifacts introduced by variations in the concentration and emission intensity across the sample.
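A one-line computation captures the FLIM-based FRET readout, using the standard relation E = 1 − τ_DA/τ_D; the lifetimes below are illustrative values, not measurements from any particular dye pair:

```python
# Sketch: FLIM-based FRET readout. Energy transfer adds a non-radiative decay
# pathway, shortening the donor lifetime; FRET efficiency follows from the
# donor lifetime with (tau_DA) and without (tau_D) the acceptor.
tau_D = 2.6    # ns, donor alone (illustrative value)
tau_DA = 1.7   # ns, donor in presence of acceptor (illustrative value)

E = 1.0 - tau_DA / tau_D
print(f"FRET efficiency: {E:.2f}")   # ~0.35

# Unlike intensity-based FRET, this ratio is independent of fluorophore
# concentration and excitation intensity.
```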
See also
Phasor approach to fluorescence lifetime and spectral imaging
References
External links
Fluorescence Excited-State Lifetime Imaging
Lifetime and spectral analysis tools in ImageJ: http://spechron.com
Fluorescence Lifetime Imaging Microscopy
Principle of TCSPC FLIM (Becker&Hickl GmbH)
Fluorescence techniques
Optical microscopy techniques | Fluorescence-lifetime imaging microscopy | [
"Biology"
] | 2,325 | [
"Fluorescence techniques"
] |
1,643,266 | https://en.wikipedia.org/wiki/Spectroscopic%20notation | Spectroscopic notation provides a way to specify atomic ionization states, atomic orbitals, and molecular orbitals.
Ionization states
Spectroscopists customarily refer to the spectrum arising from a given ionization state of a given element by the element's symbol followed by a Roman numeral. The numeral I is used for spectral lines associated with the neutral element, II for those from the first ionization state, III for those from the second ionization state, and so on. For example, "He I" denotes lines of neutral helium, and "C IV" denotes lines arising from the third ionization state, C3+, of carbon. This notation is used for example to retrieve data from the NIST Atomic Spectrum Database.
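A small sketch of this convention as code; the helper name and the fixed Roman-numeral list are illustrative choices:

```python
# Sketch: convert an element symbol and ionization state to spectroscopic
# notation -- charge 0 -> "I", +1 -> "II", etc. Covers charges up to +9 here.
ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def spectrum_label(element: str, charge: int) -> str:
    """E.g. spectrum_label('He', 0) -> 'He I'; spectrum_label('C', 3) -> 'C IV'."""
    return f"{element} {ROMAN[charge]}"

print(spectrum_label("He", 0))   # He I  (neutral helium)
print(spectrum_label("C", 3))    # C IV  (C3+, third ionization state)
```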
Atomic and molecular orbitals
Before atomic orbitals were understood, spectroscopists discovered various distinctive series of spectral lines in atomic spectra, which they identified by letters. These letters were later associated with the azimuthal quantum number, ℓ. The letters, "s", "p", "d", and "f", for the first four values of ℓ were chosen to be the first letters of properties of the spectral series observed in alkali metals. Other letters for subsequent values of ℓ were assigned in alphabetical order, omitting the letter "j" because some languages do not distinguish between the letters "i" and "j":
| letter | name        | ℓ   |
|--------|-------------|-----|
| s      | sharp       | 0   |
| p      | principal   | 1   |
| d      | diffuse     | 2   |
| f      | fundamental | 3   |
| g      |             | 4   |
| h      |             | 5   |
| i      |             | 6   |
| k      |             | 7   |
| l      |             | 8   |
| m      |             | 9   |
| n      |             | 10  |
| o      |             | 11  |
| q      |             | 12  |
| r      |             | 13  |
| t      |             | 14  |
| u      |             | 15  |
| v      |             | 16  |
| ...    |             | ... |
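The mapping from ℓ to letter is easy to encode; this sketch simply hard-codes the sequence from the table above (the function name is an illustrative choice):

```python
# Sketch: the spectroscopic letter for a given azimuthal quantum number.
# After f, letters continue alphabetically, omitting "j" (and letters
# already used), as in the table above.
LETTERS = ["s", "p", "d", "f", "g", "h", "i", "k", "l",
           "m", "n", "o", "q", "r", "t", "u", "v"]

def orbital_letter(ell: int) -> str:
    return LETTERS[ell]

print(orbital_letter(0))   # s (sharp)
print(orbital_letter(3))   # f (fundamental)
print(orbital_letter(7))   # k (j is skipped)
```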
This notation is used to specify electron configurations and to create the term symbol for the electron states in a multi-electron atom. When writing a term symbol, the above scheme for a single electron's orbital quantum number is applied to the total orbital angular momentum associated to an electron state.
Molecular spectroscopic notation
The spectroscopic notation of molecules uses Greek letters to represent the modulus of the orbital angular momentum along the internuclear axis.
The quantum number that represents this angular momentum is Λ, which takes the values Λ = 0, 1, 2, 3, ..., denoted by the symbols Σ, Π, Δ, Φ, ...
For Σ states, a superscript + is used to denote that the wave function is symmetric with respect to reflection in a plane containing the nuclei; a superscript − indicates that it is not.
For homonuclear diatomic molecules, the index g or u denotes the existence of a center of symmetry (or inversion center) and indicates the symmetry of the vibronic wave function with respect to the point-group inversion operation i. Vibronic states that are symmetric with respect to i are denoted g for gerade (German for "even"), and unsymmetric states are denoted u for ungerade (German for "odd").
Quarkonium
For mesons whose constituents are a heavy quark and its own antiquark (quarkonium) the same notation applies as for atomic states. However, uppercase letters are used.
Furthermore, the first number is n_r + 1 (as in nuclear physics), where n_r is the number of nodes in the radial wave function, while in atomic physics the principal quantum number n = n_r + ℓ + 1 is used. Hence, a 1P state in quarkonium corresponds to a 2p state in an atom or positronium.
See also
References
Atomic physics
Spectroscopy | Spectroscopic notation | [
"Physics",
"Chemistry"
] | 945 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
" molecular",
"Atomic physics",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
1,643,492 | https://en.wikipedia.org/wiki/Cosmic%20latte | Cosmic latte is the average color of the galaxies of the universe as perceived from the Earth, found by a team of astronomers from Johns Hopkins University (JHU). In 2002, Karl Glazebrook and Ivan Baldry determined that the average color of the universe was a greenish white, but they soon corrected their analysis in a 2003 paper in which they reported that their survey of the light from over 200,000 galaxies averaged to a slightly beigeish white. The hex triplet value for cosmic latte is #FFF8E7.
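As a quick check of the hex triplet, this snippet unpacks #FFF8E7 into its 8-bit RGB components:

```python
# Sketch: decode the cosmic latte hex triplet #FFF8E7 into RGB values.
hex_triplet = "FFF8E7"
r, g, b = (int(hex_triplet[i:i + 2], 16) for i in range(0, 6, 2))
print(r, g, b)   # 255 248 231 -- a very slightly warm, beigeish white
```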
Discovery of the color
Finding the average color of the universe was not the focus of the study. Rather, the study examined spectral analysis of different galaxies to study star formation. Like Fraunhofer lines, the dark lines displayed in the study's spectral ranges reveal older and younger stars, allowing Glazebrook and Baldry to determine the ages of different galaxies and star systems. The study revealed that the overwhelming majority of stars formed about 5 billion years ago. Because these stars would have been "brighter" in the past, the color of the universe changes over time, shifting from blue to red as more blue stars change to yellow and eventually red giants.
As light from distant galaxies reaches the Earth, the average "color of the universe" (as seen from Earth) tends towards pure white, due to the light coming from the stars when they were much younger and bluer.
Naming the color
The corrected color was initially published on the Johns Hopkins University (JHU) News website and updated on the team's initial announcement. Multiple news outlets, including NPR and BBC, displayed the color in stories and some relayed the request by Glazebrook on the announcement asking for suggestions for names, jokingly adding all were welcome as long as they were not "beige".
These were the results of a vote of the JHU astronomers involved based on the new color:
Though Drum's suggestion of "cappuccino cosmico" received the most votes, the researchers favored Drum's other suggestion, "cosmic latte". "Latte" means "milk" in Italian, Galileo's native language, and the similar "lattea" means "milky", as in the Italian term for the Milky Way, "Via Lattea". They enjoyed the fact that the color would be similar to the Milky Way's average color as well, as it is part of the sum of the universe. They also claimed to be "caffeine biased".
See also
References
External links
Official project website: The Cosmic Spectrum (archived 2016) from Professor Karl Glazebrook's website
Color
Physical cosmology
Shades of white
de:Kosmisch-Latte | Cosmic latte | [
"Physics",
"Astronomy"
] | 545 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
1,643,511 | https://en.wikipedia.org/wiki/Multiplicative%20quantum%20number | In quantum field theory, multiplicative quantum numbers are conserved quantum numbers of a special kind. A given quantum number q is said to be additive if in a particle reaction the sum of the q-values of the interacting particles is the same before and after the reaction. Most conserved quantum numbers are additive in this sense; the electric charge is one example. A multiplicative quantum number q is one for which the corresponding product, rather than the sum, is preserved.
Any conserved quantum number is a symmetry of the Hamiltonian of the system (see Noether's theorem). Symmetry groups which are examples of the abstract group called Z2 give rise to multiplicative quantum numbers. This group consists of an operation, P, whose square is the identity, P² = 1. Thus, all symmetries which are mathematically similar to parity give rise to multiplicative quantum numbers.
In principle, multiplicative quantum numbers can be defined for any abelian group. An example would be to trade the electric charge, Q, (related to the abelian group U(1) of electromagnetism), for the new quantum number exp(2iπ Q). Then this becomes a multiplicative quantum number by virtue of the charge being an additive quantum number. However, this route is usually followed only for discrete subgroups of U(1), of which Z2 finds the widest possible use.
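A small numerical sketch of this construction; the quark-like fractional charges are arbitrary examples, chosen only to show that a conserved sum becomes a conserved product:

```python
# Sketch: turning an additive quantum number into a multiplicative one via
# q -> exp(2*pi*i*q). Sums of charges map to products of phases, so a
# conserved sum becomes a conserved product.
import cmath

def multiplicative(q):
    return cmath.exp(2j * cmath.pi * q)

charges_in = [2/3, -1/3]   # illustrative fractional charges, summing to 1/3
charges_out = [1/3]        # same total charge after the reaction

prod_in = 1
for q in charges_in:
    prod_in *= multiplicative(q)
prod_out = 1
for q in charges_out:
    prod_out *= multiplicative(q)

# Additive conservation (equal sums of charge) implies the products match.
print(abs(prod_in - prod_out) < 1e-12)   # True
```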
See also
Parity, C-symmetry, T-symmetry and G-parity
References
M. Hamermesh, Group Theory and Its Applications to Physical Problems (Dover Publications, 1990)
Quantum field theory
Nuclear physics | Multiplicative quantum number | [
"Physics"
] | 342 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs",
"Nuclear physics"
] |
1,643,621 | https://en.wikipedia.org/wiki/List%20of%20wort%20plants | This is an alphabetical listing of wort plants, meaning plants that employ the syllable wort in their English-language common names.
According to the Oxford English Dictionary's Ask Oxford site, "A word with the suffix -wort is often very old. The Old English word was wyrt. The modern variation, root, comes from Old Norse. It was often used in the names of herbs and plants that had medicinal uses, the first part of the word denoting the complaint against which it might be specially efficacious. By the middle of the 17th century -wort was beginning to fade from everyday use."
The Naturalist Newsletter states, "Wort derives from the Old English wyrt, which simply meant plant. The word goes back even further, to the common ancestor of English and German, to the Germanic wurtiz. Wurtiz also evolved into the modern German word Wurzel, meaning root."
Wort plants
Adderwort, adder's wort - Persicaria bistorta.
American lungwort - Mertensia virginica.
Asterwort - Any composite plant of the family Asteraceae.
Awlwort - Subularia aquatica. The plant bears awl-shaped leaves.
Banewort - Ranunculus flammula or Atropa belladonna
Barrenwort - Epimedium, especially Epimedium alpinum.
Bearwort - Meum athamanticum
Bellwort - Uvularia or plants in the family Campanulaceae.
Birthwort - Aristolochia. Also, birthroot (Trillium erectum).
Birthwort - Aristolochiaceae, the birthwort family.
Bishop's wort - Stachys officinalis. Also, fennel flower.
Bitterwort - Gentiana lutea.
Bladderwort - Utricularia (aquatic plants).
Blawort - A flower, commonly called harebell. Also, a certain plant bearing blue flowers.
Bloodwort - Sanguinaria canadensis. Produces escharotic alkaloids that corrode skin, leaving wounds. More commonly known as bloodroot, or sometimes tetterwort.
Blue navelwort - Cynoglossum omphaloides
Blue throatwort - Trachelium caeruleum.
Blushwort - A member of the gentian family. Shame flower.
Bogwort - The bilberry or whortleberry.
Bollockwort - A Middle English name for some types of orchid.
Boragewort - Any plant of the borage family, Boraginaceae.
Bridewort - Filipendula ulmaria and Spiraea spp., also known as meadowsweet.
Brimstonewort - Same as sulphurwort.
Brotherwort - Wild thyme.
Brownwort - Scrophularia vernalis, also known as yellow figwort
Bruisewort - Any plant considered to be useful in treating bruises, as herb Margaret.
Bullwort - Ammi majus. Bishop's wood.
Bullock's or Cow's Lungwort - Verbascum thapsus, the common Mullein.
Burstwort - Herniaria glabra. Formerly used to treat rupture.
Butterwort - Pinguicula vulgaris. Other species of Pinguicula have "butterwort" in their English names.
Cancerwort - Linaria vulgaris. Toadflax. Also includes some members of the Kickxia genus.
Catwort - A plant of the genus Nepeta. Catnip.
Clown's Ringwort - plant featured in the 1605 panel of the New World Tapestry.
Colewort - Brassica oleracea. Cabbage.
Coralwort - Tooth violet.
Crosswort - Eupatorium perfoliatum. Lysimachia quadrifolia. Boneset. Also, maywort, a species of Galium, and species of Phuopsis.
Damewort - Hesperis matronalis. Dame's violet or damask violet or rocket.
Danewort - Sambucus ebulus. The dwarf elder. Also, daneweed.
Dragonwort - An Artemisia, or Polygonum bistorta.
Dropwort - Filipendula vulgaris, Oenanthe, Oxypolis, Tiedemannia
Dungwort - Helleborus foetidus. Stinking hellebore.
Ebony spleenwort - Fern.
Elderwort - Sambucus ebulus.
European Pillwort - Pilularia globulifera. Peppergrass.
Felonwort - Solanum dulcamara. Felonwood or bittersweet.
Feltwort - Another common name for the Mullein, the genus Verbascum.
Felwort - A common name for various species of gentian.
Feverwort - Horse gentian.
Figwort - Some plants in the family Scrophulariaceae, including Euphrasia officinalis (eyebright); Veronica officinalis (speedwell or fluellen); Veronica anagallis-aquatica (water speedwell); Gratiola officinalis (herb of grace) (hedge hyssop); Herpestis monniera (water hyssop); Scoparia dulcis (sweet broomweed), or Ilysanthes riparia (false pimpernel). Also the Cape figwort, Phygelius capensis.
Fleawort - A plantain. Also some composites, such as the Marsh Fleawort.
Flukewort - Hydrocotyle vulgaris.
French or golden lungwort - Hieracium murorum.
Frostwort - Helianthemum canadense. Frostweed or rockrose.
Fumewort - Genus Corydalis.
Galewort - Myrica gale. Sweet gale.
Garlicwort - Alliaria officinalis. The hedge garlic.
Gentianwort - Any plant of the family Gentianaceae.
German Madwort - Asperugo procumbens.
Gipsywort - Any plant of the genus Lycopus, as Lycopus europaeus. Gipsy herb.
Glasswort - Any plant of the genus Salicornia. Frog grass. Also, a seaweed yielding kelp.
Golden ragwort - Senecio aureus. Squawweed.
Goutwort - Aegopodium podagraria. Acheweed; Herb Gerard; Goat's foot; Bishop weed; Goutweed.
Greater Spearwort - Ranunculus lingua.
Gutwort - Globularia alypum. Used as a purgative.
Hammerwort - Parietaria officinalis. The plant pellitory.
Hartwort - Any of certain plants of the genera Seseli, Tordylium, and Bupleurum.
Heathwort - Any plant of the genus Erica, the heath family.
Hemlock dropwort - Oenanthe fistulosa. A plant of the parsley family, Apiaceae or Umbelliferae.
Hillwort - Wild thyme, or Mentha pulegium, a kind of pennyroyal.
Hogwort - Croton capitatus. The plant name is said to have inspired J.K. Rowling's Hogwarts School of Witchcraft and Wizardry, albeit with an altered spelling.
Holewort - Hollowwort. Corydalis cava.
Honewort - A plant formerly used as a remedy for hone, a swelling of the cheek.
Honeywort - A bee plant of the genus Cerinthe.
Hoodwort - Scutellaria lateriflora. Also called skullcap and madweed.
Hornwort - Aquatic flowering plants of the genus Ceratophyllum, such as rigid hornwort; also non-vascular plants of the bryophyte division Anthocerotophyta.
Ironwort - A plant of the genus Sideritis.
Kelpwort - Macrocystis pyrifera, a kind of glasswort.
Kidneywort - Cotyledon umbilicus. Also called pennywort and navelwort.
Knotwort - Any plant of the genus Illecebrum.
Laserwort - Any plant of the genus Laserpitium.
Lazarwort - Laserwort.
Leadwort - Any plant of the genus Plumbago.
Lesser Spearwort - Ranunculus flammula. Banewort.
Lichwort - The wall pellitory, Parietaria officinalis.
Lilywort - A plant of the genus Funkia. Day lily. Also, any plant of the family Liliaceae.
Liverwort - Any species of Marchantiophyta, a division of non-vascular plants (a type of bryophyte). Also, plants that resemble the liverworts; as, liverleaf (Hepatica).
Lousewort - Pedicularis canadensis. Cockscomb; head betony; wood betony (herb Christopher). Any plant of the genus Pedicularis, as Pedicularis palustris (marsh lousewort).
Lungwort - A plant of the genus Mertensia, the lungworts. Also, a boraginaceous plant of the genus Pulmonaria.
Lustwort - Any plant of the genus Drosera; the sundew.
Madderwort - Any plant of the madder family, Rubiaceae.
Madwort - Alyssum saxatile. Gold-dust.
Maidenhair spleenwort - Asplenium trichomanes
Mallowwort - Any plant of the mallow family, Malvaceae.
Marsh Pennywort - Hydrocotyle vulgaris.
Marsh St.-John's-wort - Elodes virginica.
Marshwort - Vaccinium oxycoccos, the creeping cranberry. Also, a European umbelliferous plant.
Masterwort - Peucedanum ostruthium (formerly Imperatoria ostruthium) or Astrantia major. Also, in the United States, Heracleum lanatum, the cow parsnip.
Maudlinwort - Leucanthemum vulgare.
Maywort - Bedstraw or mugweed. A species of Galium. Also, crosswort, Lysimachia quadrifolia.
Meadowwort - Filipendula, Spiraea
Milkwort - Polygalaceae, the milkwort family, one of which yields buaze. Polygala vulgaris is the milkwort of Europe.
Miterwort or mitrewort (British) - Bishop's cap. Any plant of the genus Mitella.
Moneywort - loosestrife. Herb twopence, an evergreen trailing plant. A popular name for various plants of the genus Lysimachia, especially Lysimachia nummularia, of the primrose family, Primulaceae.
Moonwort - Honesty, a herb of the genus Lunaria. Also, any fern of the genus Botrychium.
Motherwort - A herb, Leonurus cardiaca, of the mint family, Lamiaceae. Also, mugwort.
Mountain spiderwort - Lloydia serotina.
Mudwort - Limosella aquatica, found growing in muddy places.
Mugwort - Artemisia vulgaris.
Mulewort - Any plant of the genus Hemionitis.
Nailwort - Any species of Paronychia. Also Draba verna, Saxifraga tridactylites.
Navelwort - Plants in the genera Cotyledon and Omphalodes.
Nettlewort - Any plant of the nettle family, Urticaceae.
Nipplewort - Lapsana communis.
Peachwort - Lady's Thumb, Polygonum persicaria.
Pearlwort - Pearl grass; pearl plant; any species of the genus Sagina.
Pennywort - Linaria cymbalaria. Also, any one of a number of peltate-leaved plants. Watercup; trumpetleaf. Also Gotu Kola (Centella asiatica) and species from genus Hydrocotyle.
Pepperwort - Lepidium latifolium; Lepidium campestre; Spanish cress, Lepidium cardamines. Peppergrass; cockweed; dittander; Marsilea minuta
Peterwort - Saint Peter's wort.
Pilewort - Lesser celandine.
Pipewort - Eriocaulon.
Quillwort - Isoetes, of the quillwort family; certain seedless plants or "fern allies".
Quinsywort - Asperula cynanchica.
Common ragwort - Jacobaea vulgaris, and some other plants of the genus Jacobaea (once called cankerweed).
Rattlewort - rattlebox, Crotalaria sagittalis.
Ribwort - Plantago lanceolata. Hen plant. English plantain, the common plantain introduced into the United States from Europe.
Rosewort - A plant of the rose family, Rosaceae. Also, roseroot, Rhodiola rosea, whose root has the fragrance of a rose.
Rupturewort - Alternanthera polygonoides. Also species of Herniaria, such as Smooth Rupturewort, H. glabra.
Saltwort - A vague and indefinite name applied to any halophyte of the genus Salsola. Salsola kali is the prickly saltwort. Also, some species of Salicornia, the glassworts, are called saltworts.
Sandwort - A plant of the genus Arenaria. One of the Caryophyllaceae.
Sawwort - A plant of the genus Serratula, especially Serratula tinctoria. Also, plants in the genus Saussurea.
Scorpionwort - either Ornithopus scorpioides or scorpion grass, the forget-me-not.
Scurvywort - Lesser celandine.
Sea Lungwort - Mertensia maritima.
Sea Milkwort - Glaux maritima, of the primrose family, Primulaceae.
Sea ragwort - Dusty miller, Jacobaea maritima.
Sea sandwort - Honckenya peploides, a plant of the North Atlantic seacoast.
Sea starwort - Aster tripolium.
Setterwort - Helleborus foetidus. Pegroots; setter grass; bear's foot.
Sicklewort - Prunella vulgaris, the healall.
Sleepwort - Lettuce, especially Lactuca virosa
Slipperwort - A plant of the genus Calceolaria.
Sneezewort - Achillea ptarmica. Goosetongue; Bastard pellitory.
Soapwort - Saponaria officinalis; one of the bruiseworts.
Sparrowwort - A general name for the plants of the genus Passerina
Spearwort - A plant of the genus Ranunculus.
Spiderwort - Tradescantia virginiana. Also species in the genus Commelina, such as Blue Spiderwort, C. coelestis.
Spleenwort - Asplenium. A large genus of ferns; formerly used for spleen disorders.
Spoonwort - Any plant of the genus Cochlearia, as Cochlearia officinalis. Scurvy grass.
Springwort - Euphorbia lathyris. Caper spurge.
Spurwort - Sherardia arvensis. Field madder.
Stabwort - Wood sorrel, Oxalis acetosella.
Staggerwort - Same as staverwort.
Staithwort - Same as colewort.
Standerwort - Standergrass, Orchis mascula.
Starwort - Any plant of the genus Aster.
Staverwort - Staggerwort; ragwort, Jacobaea vulgaris.
Stinkwort – Various plants including Helleborus foetidus, the stinking hellebore; Dittrichia graveolens and Inula graveolens; and Datura stramonium, jimson weed.
Stitchwort - Any of several plants of the genus Stellaria. Also spelled stichwort.
St. James' Wort - Senecio jacobaea or Senecio aureus, two species of ragwort.
St. John's Wort - Can refer to any species of Hypericum.
Stonewort - A general name for plants of the genera Chara and Nitella; water horsetail.
St. Paul's Wort – species of the genus Sigesbeckia, such as Eastern St Paul's-wort, Sigesbeckia orientalis.
St. Peter's Wort - Any plant of the genus Ascyrum (now included in Hypericum), such as Hypericum quadrangulum.
Strapwort - Corrigiola litoralis.
Sulphurwort - Peucedanum officinale. Hog fennel.
Swallowwort - The Greater Celandine, Chelidonium majus. Also, any plant of the genus Asclepias or Cynanchum. Black Swallow-wort is Vincetoxicum nigrum.
Sweetwort - Any plant of a sweet taste.
Talewort - Borago officinalis. Formerly considered a valuable remedy.
Tetterwort - According to 1913 Webster's Dictionary, Chelidonium majus in England or Sanguinaria canadensis in the Americas. Used to treat tetter.
Thoroughwort - Eupatorium perfoliatum. Boneset.
Throatwort - A name for one or two plants of the genus Campanula.
Thrumwort - Amaranthus caudatus; velvet flower or love-lies-bleeding.
Toothwort - Cardamine, Lathraea squamaria, and Lathraea clandestina
Towerwort - the tower mustard and some allied species of Arabis.
Tree lungwort - Lobaria pulmonaria, a lichen.
Trophywort - The Indian cress; nasturtium, Tropaeolum majus.
Venus's-navelwort - Either blue navelwort or white navelwort.
Wall Pennywort - Umbilicus rupestris.
Wartwort - Euphorbia helioscopia. An attractive European weed whose flowers turn towards the sun.
Water dropwort - Any plant in the Oenanthe genus, so called because of the resemblance of some species to dropwort, Filipendula vulgaris (see above).
Water figwort - Water betony.
Water pennywort - Hydrocotyle; marsh pennyworts.
Water starwort - Callitriche verna, of the water-milfoil family, and other species of the genus Callitriche, such as Common Water-starwort, Callitriche stagnalis.
Waterwort - An aquatic plant of the genus Elatine. Any plant of the family Philydraceae.
White navelwort - A plant of the genus Omphalodes.
White Swallowwort - Vincetoxicum officinale.
Willowwort - Any plant of the willow family, Salicaceae. A species of Lythrum. Grasspoly; a variety of loosestrife.
Woundwort - The name of several plants of the genus Stachys, a genus of labiate plants. Also, Anthyllis vulneraria and Achillea millefolium.
Yellow starwort - Elecampane, Inula helenium.
Yellow-wort - Chlora perfoliata, of the gentian family, Gentianaceae.
References
Wort plants
Plant common names | List of wort plants | [
"Biology"
] | 4,188 | [
"Lists of plants",
"Common names of organisms",
"Plants",
"Lists of biota",
"Plant common names"
] |
17,688,404 | https://en.wikipedia.org/wiki/Urology%20robotics | Urology Robotics, or URobotics, is a new interdisciplinary field for the application of robots in urology and for the development of such systems and novel technologies in this clinical discipline. Urology is among the medical fields with the highest rate of technology advances, which for several years have included the use of medical robots.
Applications
The first surgical robot approved by the FDA is the da Vinci system. Even though this was designed to assist in general laparoscopy, most of its applications are in the urology field for radical prostatectomy. Pioneered at the Vattikuti Urology Institute, robotic radical prostatectomy has now become the gold standard for the treatment of prostate cancer.
Other URobotic systems are under development. These include image-guided robots that, in addition to the direct visual feedback, use medical images for guiding the intervention. Since MRI provides enhanced visualization of soft tissues compared to x-ray-based imaging, MRI-compatible robots are being developed to assist the physician in performing the intervention in the MRI scanner. If prostate cancer lesions can be delineated in the image, robots can accurately target those lesions for biopsy or focal ablations.
References
Urology
Medical robotics | Urology robotics | [
"Biology"
] | 245 | [
"Medical robotics",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
17,690,698 | https://en.wikipedia.org/wiki/Zirconium%20diboride | Zirconium diboride (ZrB2) is a highly covalent refractory ceramic material with a hexagonal crystal structure. ZrB2 is an ultra-high temperature ceramic (UHTC) with a melting point of 3246 °C. This along with its relatively low density of ~6.09 g/cm3 (measured density may be higher due to hafnium impurities) and good high temperature strength makes it a candidate for high temperature aerospace applications such as hypersonic flight or rocket propulsion systems. It is an unusual ceramic, having relatively high thermal and electrical conductivities, properties it shares with isostructural titanium diboride and hafnium diboride.
ZrB2 parts are usually hot pressed (pressure applied to the heated powder) and then machined to shape. Sintering of ZrB2 is hindered by the material's covalent nature and presence of surface oxides which increase grain coarsening before densification during sintering. Pressureless sintering of ZrB2 is possible with sintering additives such as boron carbide and carbon which react with the surface oxides to increase the driving force for sintering but mechanical properties are degraded compared to hot pressed ZrB2.
Additions of ~30 vol% SiC are often made to ZrB2 to improve its oxidation resistance, the SiC creating a protective oxide layer similar to aluminium's protective alumina layer.
ZrB2 is used in ultra-high temperature ceramic matrix composites (UHTCMCs).
Carbon fiber reinforced zirconium diboride composites show high toughness, while silicon carbide fiber reinforced zirconium diboride composites are brittle and fail catastrophically.
Preparation
ZrB2 can be synthesized by stoichiometric reaction between constituent elements, in this case Zr and B. This reaction provides for precise stoichiometric control of the materials. At 2000 K, the formation of ZrB2 via stoichiometric reaction is thermodynamically favorable (ΔG=−279.6 kJ mol−1) and therefore, this route can be used to produce ZrB2 by self-propagating high-temperature synthesis (SHS). This technique takes advantage of the high exothermic energy of the reaction to cause high temperature, fast combustion reactions. Advantages of SHS include higher purity of ceramic products, increased sinterability, and shorter processing times. However, the extremely rapid heating rates can result in incomplete reactions between Zr and B, the formation of stable oxides of Zr, and the retention of porosity. Stoichiometric reactions have also been carried out by reaction of attrition milled (wearing materials by grinding) Zr and B powder (and then hot pressing at 600 °C for 6 h), and nanoscale particles have been obtained by reacting attrition milled Zr and B precursor crystallites (10 nm in size).
Reduction of ZrO2 and HfO2 to their respective diborides can also be achieved via metallothermic reduction. Inexpensive precursor materials are used and reacted according to the reaction below:
ZrO2 + B2O3 + 5Mg → ZrB2 + 5MgO
Mg is used as a reactant to allow for acid leaching of unwanted oxide products. Stoichiometric excesses of Mg and B2O3 are often required during metallothermic reductions to consume all available ZrO2. These reactions are exothermic and can be used to produce the diborides by SHS. Production of ZrB2 from ZrO2 via SHS often leads to incomplete conversion of reactants, and therefore double SHS (DSHS) has been employed by some researchers. A second SHS reaction with Mg and H3BO3 as reactants along with the ZrB2/ZrO2 mixture yields increased conversion to the diboride, and particle sizes of 25–40 nm at 800 °C. After metallothermic reduction and DSHS reactions, MgO can be separated from ZrB2 by mild acid leaching.
Synthesis of UHTCs by boron carbide reduction is one of the most popular methods for UHTC synthesis. The precursor materials for this reaction (ZrO2/TiO2/HfO2 and B4C) are less expensive than those required by the stoichiometric and borothermic reactions. ZrB2 is prepared at greater than 1600 °C for at least 1 hour by the following reaction:
2ZrO2 + B4C + 3C → 2ZrB2 + 4CO
This method requires a slight excess of boron, as some boron is oxidized during boron carbide reduction. ZrC has also been observed as a product from the reaction, but if the reaction is carried out with 20–25% excess B4C, the ZrC phase disappears, and only ZrB2 remains. Lower synthesis temperatures (~1600 °C) produce UHTCs that exhibit finer grain sizes and better sinterability. Boron carbide must be subjected to grinding prior to the boron carbide reduction to promote oxide reduction and diffusion processes.
Boron carbide reductions can also be carried out via reactive plasma spraying if a UHTC coating is desired. Precursor or powder particles react with plasma at high temperatures (6000–15000 °C) which greatly reduces the reaction time. ZrB2 and ZrO2 phases have been formed using a plasma voltage and current of 50 V and 500 A, respectively. These coating materials exhibit uniform distribution of fine particles and porous microstructures, which increased hydrogen flow rates.
Another method for the synthesis of UHTCs is the borothermic reduction of ZrO2, TiO2, or HfO2 with B. At temperatures higher than 1600 °C, pure diborides can be obtained from this method. Due to the loss of some boron as boron oxide, excess boron is needed during borothermic reduction. Mechanical milling can lower the reaction temperature required during borothermic reduction. This is due to the increased particle mixing and lattice defects that result from decreased particle sizes of ZrO2 and B after milling. This method is also not very useful for industrial applications due to the loss of expensive boron as boron oxide during the reaction.
Nanocrystals of ZrB2 were successfully synthesized by Zoli's reaction, a reduction of ZrO2 with NaBH4 using a molar ratio M:B of 1:4 at 700 °C for 30 min under argon flow.
ZrO2 + 3NaBH4 → ZrB2 + 2Na(g,l) + NaBO2 + 6H2(g)
ZrB2 can be prepared from solution-based synthesis methods as well, although few substantial studies have been conducted. Solution-based methods allow for low temperature synthesis of ultrafine UHTC powders. Yan et al. have synthesized ZrB2 powders using the inorganic-organic precursors ZrOCl2•8H2O, boric acid and phenolic resin at 1500 °C. The synthesized powders exhibit 200 nm crystallite size and low oxygen content (~ 1.0 wt%). ZrB2 preparation from polymeric precursors has also been recently investigated. ZrO2 and HfO2 can be dispersed in boron carbide polymeric precursors prior to reaction. Heating the reaction mixture to 1500 °C results in the in situ generation of boron carbide and carbon, and the reduction of ZrO2 to ZrB2 soon follows. The polymer must be stable, processable, and contain boron and carbon to be useful for the reaction. Dinitrile polymers formed from the condensation of dinitrile with decaborane satisfy these criteria.
Chemical vapor deposition can be used to prepare zirconium diboride. Hydrogen gas is used to reduce vapors of zirconium tetrachloride and boron trichloride at substrate temperatures greater than 800 °C.
Recently, high-quality thin films of ZrB2 can also be prepared by physical vapor deposition.
Defects and secondary phases in zirconium diboride
Zirconium diboride gains its high-temperature mechanical stability from the high atomic defect energies (i.e. the atoms do not deviate easily from their lattice sites). This means that the concentration of defects will remain low, even at high temperatures, preventing failure of the material.
The bonding within each layer is also very strong, but the layered structure makes the ceramic highly anisotropic, with a different thermal expansion in the 'z' <001> direction. Although the material has excellent high temperature properties, the ceramic has to be produced extremely carefully as any excess of either zirconium or boron will not be accommodated in the ZrB2 lattice (i.e. the material does not deviate from stoichiometry). Instead it will form extra lower melting point phases which may initiate failure under extreme conditions.
Diffusion and transmutation in zirconium diboride
Zirconium diboride is also investigated as a possible material for nuclear reactor control rods due to the presence of boron.
10B + nth → [11B] → α + 7Li + 2.31 MeV.
The layered structure provides a plane for helium diffusion to occur. He is formed as a transmutation product of boron-10 (it is the alpha particle in the above reaction) and will rapidly migrate through the lattice between the layers of zirconium and boron, although not in the 'z' direction. Interestingly, the other transmutation product, lithium, is likely to be trapped in the boron vacancies produced by the boron-10 transmutation and not be released from the lattice.
Application in solar energy sector
Direct absorption solar collectors, in combination with colloidal systems dispersed in suitable liquids, are attracting great interest. The resulting fluids can simultaneously serve as volumetric solar absorbers and as heat transfer media. The optical properties of ZrB2 have been investigated for this purpose.
References
Zirconium(II) compounds
Borides
Ceramic materials
Refractory materials | Zirconium diboride | [
"Physics",
"Engineering"
] | 2,136 | [
"Refractory materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
9,015,477 | https://en.wikipedia.org/wiki/Alpha%20sheet | Alpha sheet (also known as alpha pleated sheet or polar pleated sheet) is an atypical secondary structure in proteins, first proposed by Linus Pauling and Robert Corey in 1951. The hydrogen bonding pattern in an alpha sheet is similar to that of a beta sheet, but the orientation of the carbonyl and amino groups in the peptide bond units is distinctive; in a single strand, all the carbonyl groups are oriented in the same direction on one side of the pleat, and all the amino groups are oriented in the same direction on the opposite side of the sheet. Thus the alpha sheet accumulates an inherent separation of electrostatic charge, with one edge of the sheet exposing negatively charged carbonyl groups and the opposite edge exposing positively charged amino groups. Unlike the alpha helix and beta sheet, the alpha sheet configuration does not require all component amino acid residues to lie within a single region of dihedral angles; instead, the alpha sheet contains residues of alternating dihedrals in the traditional right-handed (αR) and left-handed (αL) helical regions of Ramachandran space. Although the alpha sheet is only rarely observed in natural protein structures, it has been speculated to play a role in amyloid disease and it was found to be a stable form for amyloidogenic proteins in molecular dynamics simulations. Alpha sheets have also been observed in X-ray crystallography structures of designed peptides.
The regular formation of alpha sheet by unfolded proteins inevitably requires many L-amino acid residues to adopt the αL conformation, which at first sight contradicts textbook chemistry, according to which glycine is the one amino acid that strongly favours this conformation. The conundrum is resolved by recognizing that the αL region of Ramachandran space comprises two overlapping areas, here called γL and αL, which should be considered separately. While the γL conformation is adopted almost exclusively by glycine, the αL conformation found in alpha sheet is adopted as commonly, or more commonly, by 15 of the L-amino acids as by glycine; the exceptions are proline, threonine, valine and isoleucine, which are rare in this conformation. Hence, of the 20 amino acids, 16 readily adopt the αL conformation.
Experimental evidence
When Pauling and Corey first proposed the alpha sheet, they suggested that it agreed well with fiber diffraction results from beta-keratin fibers. However, since the alpha sheet did not appear to be energetically favorable, they argued that beta sheets would occur more commonly among normal proteins, and subsequent demonstration that beta-keratin is made of beta sheets consigned the alpha sheet proposal to obscurity. However the alpha strand conformation is observed in isolated instances in native state proteins as solved by X-ray crystallography or protein NMR, although an extended alpha sheet is not identified in any known natural protein. Native proteins containing alpha-strand regions or alpha-sheet-patterned hydrogen bonding include synaptotagmin, lysozyme, and potassium channels, where the alpha-strands line the ion-conducting pore.
Evidence for the existence of alpha-sheet in a mutant form of transthyretin has been presented. Alpha-sheet conformations have been observed in crystal structures of short non-natural peptides, especially those containing a mixture of L and D amino acids. The first crystal structure containing an alpha sheet was observed in the capped tripeptide Boc–AlaL–a-IleD–IleL–OMe. Other peptides that assume alpha-sheet structures include capped diphenyl-glycine-based dipeptides and tripeptides.
Role in amyloidogenesis
The alpha sheet has been proposed as a possible intermediate state in the conformational change in the formation of amyloid fibrils by peptides and proteins such as amyloid beta, poly-glutamine repeats, lysozyme, prion proteins, and transthyretin, all of which are associated with protein misfolding disease. For example, amyloid beta is a major component of amyloid plaques in the brains of Alzheimer's disease patients, and polyglutamine repeats in the huntingtin protein are associated with Huntington's disease. These proteins undergo a conformational change from largely random coil or alpha helix structures to the highly ordered beta sheet structures found in amyloid fibrils. Most beta sheets in known proteins are "twisted" about 15° for optimal hydrogen bonding and steric packing; however, some evidence from electron crystallography suggests that at least some amyloid fibrils contain "flat" sheets with only 1–2.5° of twist. An alpha-sheet amyloid intermediate is suggested to explain some anomalous features of the amyloid fibrillization process, such as the evident amino acid sequence dependence of amyloidogenesis despite the belief that the amyloid fold is mainly stabilized by the protein backbone.
Xu, using atomic force microscopy, has shown that formation of amyloid fibers is a two-step process in which proteins first aggregate into colloidal spheres of ≈20 nm diameter. The spheres then join together spontaneously to form linear chains, which evolve into mature amyloid fibers. The formation of these linear chains appears to be driven by the development of an electrostatic dipole in each of the colloidal spheres strong enough to overcome coulomb repulsion. This suggests a possible mechanism by which alpha sheet may promote amyloid aggregation; the peptide bond has a relatively large intrinsic electrostatic dipole, but normally the dipoles of nearby bonds cancel each other out. In the alpha sheet, unlike other conformations, the peptide bonds are oriented in parallel so that the dipoles of the individual bonds can add up to create a strong overall electrostatic dipole.
Notably, the protein lysozyme is among the few native-state proteins shown to contain an alpha-strand region; lysozyme from both chickens and humans contains an alpha strand located close to the site of a mutation known to cause hereditary amyloidosis in humans, usually an autosomal dominant genetic disease. Molecular dynamics simulations of the mutant protein reveal that the region around the mutation assumes an alpha strand conformation. Lysozyme is among the naturally occurring proteins known to form amyloid fibers under experimental conditions, and both the natively alpha-strand region and the mutation site fall within the larger region identified as the core of lysozyme amyloid fibrillogenesis.
A mechanism for direct alpha sheet and beta sheet interconversion has also been suggested, based on peptide plane flipping in which the αRαL dipeptide inverts to produce a ββ dihedral angle conformation. This process has also been observed in simulations of transthyretin and implicated as occurring naturally in certain protein families by examination of their dihedral angle conformations in crystal structures. It is suggested that alpha-sheet folds into multi-strand solenoids.
Evidence employing retro-enantio N-methylated peptides, or those with alternating L and D amino acids, as inhibitors of beta-amyloid aggregation is consistent with alpha-sheet being the main material of the amyloid precursor.
References
Protein structural motifs | Alpha sheet | [
"Biology"
] | 1,491 | [
"Protein structural motifs",
"Protein classification"
] |
9,015,776 | https://en.wikipedia.org/wiki/Electromagnetic%20clutch | Electromagnetic clutches operate electrically but transmit torque mechanically, which is why they were originally referred to as electro-mechanical clutches. Over the years, "EM" came to stand for electromagnetic rather than electro-mechanical, referring to the actuation method rather than the physical operation. Since the clutches started becoming popular over 60 years ago, the variety of applications and clutch designs has increased dramatically, but the basic operation remains the same today.
Single-face clutches make up approximately 90% of all electromagnetic clutch sales.
Electromagnetic clutches are most suitable for remote operation since no mechanical linkages are required to control their engagement, providing fast, smooth operation. However, because the activation energy dissipates as heat in the electromagnetic actuator when the clutch is engaged, there is a risk of overheating. Consequently, the maximum operating temperature of the clutch is limited by the temperature rating of the insulation of the electromagnet. This is a major limitation. Another disadvantage is higher initial cost.
Friction-plate clutch
A friction-plate clutch uses a single plate friction surface to engage the input and output members of the clutch.
How it works
Engagement
When the clutch is actuated, current flows through the electromagnet producing a magnetic field. The rotor portion of the clutch becomes magnetized and sets up a magnetic loop that attracts the armature. The armature is pulled against the rotor and a frictional force is generated at contact. Within a relatively short time, the load is accelerated to match the speed of the rotor, thereby engaging the armature and the output hub of the clutch. In most instances, the rotor rotates with the input at all times.
Disengagement
When current is removed from the clutch, the armature is free to turn with the shaft. In most designs, springs hold the armature away from the rotor surface when power is released, creating a small air gap.
Cycling
Cycling is achieved by interrupting the current through the electromagnet. Slippage normally occurs only during acceleration. When the clutch is fully engaged, there is no relative slip, assuming the clutch is sized properly, and thus torque transfer is 100% efficient.
Applications
Machinery
This type of clutch is used in some lawnmowers, copy machines, and conveyor drives. Other applications include packaging machinery, printing machinery, food processing machinery, and factory automation.
Vehicles
When the electromagnetic clutch is used in automobiles, there may be a clutch release switch inside the gear lever. The driver operates the switch by holding the gear lever to change the gear, thus cutting off current to the electromagnet and disengaging the clutch. With this mechanism, there is no need to depress the clutch pedal. Alternatively, the switch may be replaced by a touch sensor or proximity sensor which senses the presence of the hand near the lever and cuts off the current. The advantages of using this type of clutch for automobiles are that complicated linkages are not required to actuate the clutch, and the driver needs to apply a considerably reduced force to operate the clutch. It is a type of semi-automatic transmission.
Electromagnetic clutches are also often found in AWD systems, and are used to vary the amount of power sent to individual wheels or axles.
Most, but not all, automobile air conditioning systems are switched on and off using an electromagnetic clutch. Engaging the clutch connects the air conditioning compressor's shaft end to a pulley driven by the engine's crankshaft through a belt.
Electromagnetic clutches have been used on diesel locomotives, e.g. by Hohenzollern Locomotive Works.
Other types of electromagnetic clutches
Multiple disk clutches
Introduction – Multiple disk clutches are used to deliver extremely high torque in a relatively small space. These clutches can be used dry or wet (oil bath). Running the clutches in an oil bath also greatly increases the heat dissipation capability, which makes them ideally suited for multiple speed gear boxes and machine tool applications.
How it works – Multiple disk clutches operate via an electrical actuation but transmit torque mechanically. When current is applied through the clutch coil, the coil becomes an electromagnet and produces magnetic lines of flux. These lines of flux are transferred through the small air gap between the field and the rotor. The rotor portion of the clutch becomes magnetized and sets up a magnetic loop, which attracts both the armature and friction disks. The attraction of the armature compresses (squeezes) the friction disks, transferring the torque from the inner driver to the output disks. The output disks are connected to a gear, coupling, or pulley via a drive cup. The clutch slips until the input and output RPMs are matched; this happens relatively quickly (typically 0.2–2 seconds).
When the current is removed from the clutch, the armature is free to turn with the shaft. Springs hold the friction disks away from each other, so there is no contact when the clutch is not engaged, creating a minimal amount of drag.
Electromagnetic tooth clutches
Introduction – Of all the electromagnetic clutches, the tooth clutches provide the greatest amount of torque in the smallest overall size. Because torque is transmitted without any slippage, clutches are ideal for multi stage machines where timing is critical such as multi-stage printing presses. Sometimes, exact timing needs to be kept, so tooth clutches can be made with a single position option which means that they will only engage at a specific degree mark. They can be used in dry or wet (oil bath) applications, so they are very well suited for gear box type drives.
They should not be used in high speed applications or in applications with engagement speeds over 50 rpm, as the clutch teeth would otherwise be damaged when trying to engage the clutch.
How it works – Electromagnetic tooth clutches operate via an electric actuation but transmit torque mechanically. When current flows through the clutch coil, the coil becomes an electromagnet and produces magnetic lines of flux. This flux is then transferred through the small gap between the field and the rotor. The rotor portion of the clutch becomes magnetized and sets up a magnetic loop, which attracts the armature teeth to the rotor teeth. In most instances, the rotor is constantly rotating with the input (driver). As soon as the clutch armature and rotor are engaged, lock up is 100%.
When current is removed from the clutch field, the armature is free to turn with the shaft. Springs hold the armature away from the rotor surface when power is released, creating a small air gap and providing complete disengagement from input to output.
Electromagnetic particle clutches
Introduction – Magnetic particle clutches are unique in their design among electro-mechanical clutches because of the wide operating torque range available. As in a standard single-face clutch, torque versus voltage is almost linear; in a magnetic particle clutch, however, torque can be controlled very accurately. This makes these units ideally suited for tension control applications, such as wire winding, foil, film, and tape tension control. Because of their fast response, they can also be used in high cycle applications, such as card readers, sorting machines, and labeling equipment.
How it works – Magnetic particles (very similar to iron filings) are located in the powder cavity. When current flows through the coil, the magnetic flux that is created tries to bind the particles together, almost like a magnetic particle slush. As the current is increased, the magnetic field builds, strengthening the binding of the particles. The clutch rotor passes through the bound particles, causing drag between the input and the output during rotation. Depending upon the output torque requirement, the output and input may lock at 100% transfer.
When current is removed from the clutch, the input is almost free to turn with the shaft. Because the magnetic particles remain in the cavity, all magnetic particle clutches have some minimum drag.
Hysteresis-powered clutch
Electrical hysteresis units have an extremely high torque range. Since these units can be controlled remotely, they are ideal for testing applications where varying torque is required. Since drag torque is minimal, these units offer the widest available torque range of any electromagnetic product. Most applications involving powered hysteresis units are in test stand requirements. Since all torque is transmitted magnetically, there is no contact, so no wear occurs to any of the torque transfer components providing for extremely long life.
When the current is applied, it creates magnetic flux. This passes into the rotor portion of the field. The hysteresis disk physically passes through the rotor, without touching it. These disks have the ability to become magnetized depending upon the strength of the flux (this dissipates as flux is removed). This means, as the rotor rotates, magnetic drag between the rotor and the hysteresis disk takes place causing rotation. In a sense, the hysteresis disk is pulled after the rotor. Depending upon the output torque required, this pull eventually can match the input speed, giving a 100% lockup.
When current is removed from the clutch, the armature is free to turn and no relative force is transmitted between either member. Therefore, the only torque seen between the input and the output is bearing drag.
See also
Electromagnetic brake
Magnetic coupling
References
W. Pelczewski: Sprzęgła elektromagnetyczne (Polish original edition); German edition: Elektromagnetische Kupplung, chapter: Elektromagnetische Induktionskupplung; Vieweg, 1971.
External links
Clutches
Auto parts
Electromagnetism
Electromagnetic brakes and clutches | Electromagnetic clutch | [
"Physics",
"Engineering"
] | 1,944 | [
"Electromagnetism",
"Physical phenomena",
"Electromagnetic brakes and clutches",
"Fundamental interactions",
"Mechanical engineering"
] |
9,017,148 | https://en.wikipedia.org/wiki/Spherulite%20%28polymer%20physics%29 | In polymer physics, spherulites (from Greek sphaira = ball and lithos = stone) are spherical semicrystalline regions inside non-branched linear polymers. Their formation is associated with crystallization of polymers from the melt and is controlled by several parameters such as the number of nucleation sites, structure of the polymer molecules, cooling rate, etc. Depending on those parameters, spherulite diameter may vary in a wide range from a few micrometers to millimeters. Spherulites are composed of highly ordered lamellae, which result in higher density, hardness, but also brittleness when compared to disordered regions in a polymer. The lamellae are connected by amorphous regions which provide elasticity and impact resistance. Alignment of the polymer molecules within the lamellae results in birefringence producing a variety of colored patterns, including a Maltese cross, when spherulites are viewed between crossed polarizers in an optical microscope.
Formation
If a molten linear polymer (such as polyethylene) is cooled down rapidly, then the orientation of its molecules, which are randomly aligned, curved and entangled, remains frozen and the solid has a disordered structure. However, upon slow cooling, some polymer chains take on a certain orderly configuration: they align themselves in plates called crystalline lamellae.
Growth from the melt would follow the temperature gradient (see figure). For example, if the gradient is directed normal to the direction of molecular alignment then the lamella grows sideways into a planar crystallite. However, in the absence of a thermal gradient, growth occurs radially, in all directions, resulting in spherical aggregates, that is, spherulites. The largest surfaces of the lamellae are terminated by molecular bends and kinks, and growth in this direction results in disordered regions. Therefore, spherulites have a semicrystalline structure where highly ordered lamellae plates are interrupted by amorphous regions.
The size of spherulites varies in a wide range, from micrometers up to 8 centimeters, and is controlled by the nucleation. Strong supercooling or intentional addition of crystallization seeds results in a relatively large number of nucleation sites; then spherulites are numerous and small and interact with each other upon growth. In case of fewer nucleation sites and slow cooling, a few larger spherulites are created.
The seeds can be induced by impurities, plasticizers, fillers, dyes and other substances added to improve other properties of the polymer. This effect is poorly understood and irregular, so that the same additive can promote nucleation in one polymer, but not in another. Many of the good nucleating agents are metal salts of organic acids, which are themselves crystalline at the solidification temperature of the polymer.
Properties
Mechanical
Formation of spherulites affects many properties of the polymer material; in particular, crystallinity, density, tensile strength and Young's modulus of polymers increase during spherulization. This increase is due to the lamellae fraction within the spherulites, where the molecules are more densely packed than in the amorphous phase. Stronger intermolecular interaction within the lamellae accounts for increased hardness, but also for higher brittleness. On the other hand, the amorphous regions between the lamellae within the spherulites give the material certain elasticity and impact resistance.
Changes in mechanical properties of polymers upon formation of spherulites however strongly depend on the size and density of the spherulites. A representative example is shown in the figure demonstrating that the strain at failure rapidly decreases with the increase in the spherulite size and thus with the decrease in their number in isotactic polypropylene. Similar trends are observed for tensile strength, yield stress and toughness. Increase in the total volume of the spherulites results in their interaction as well as shrinkage of the polymer, which becomes brittle and easily cracks under load along the boundaries between the spherulites.
Optical
Alignment of the polymer molecules within the lamellae results in birefringence producing a variety of colored patterns when spherulites are viewed between crossed polarizers in an optical microscope. In particular, the so-called "Maltese cross" is often present which consists of four dark perpendicular cones diverging from the origin (see right picture), sometimes with a bright center (front picture). Its formation can be explained as follows. Linear polymer chains can be regarded as linear polarizers. If their direction coincides with that of one of the crossed polarizers then little light is transmitted; the transmission is increased when the chains make a non-zero angle with both polarizers, and the induced transmittance is dependent on the wavelength, partly because of the absorption properties of the polymer.
This effect results in the dark perpendicular cones (Maltese cross) and colored brighter regions in between them in the front and right pictures. It reveals that the molecular axis of the polymer molecules in the spherulites is either parallel or perpendicular to the radius vector, i.e. the molecular orientation is uniform when going along a line from the spherulite center to its edge along its radius. However, this orientation changes with rotation angle. The pattern may be different (bright or dark) for the center of the spherulites, indicating misorientation of the molecules in the nucleation seeds of individual spherulites. Any dark or light spots are dependent on the angle made with the polarizer, which results in a symmetrical image due to the spherical shape.
When spherulites were rotated in their plane, the corresponding Maltese cross patterns did not change, indicating that the molecular arrangement is homogeneous versus the polar angle. From the birefringence point of view, spherulites can be positive or negative. This distinction depends not on the orientation of the molecules (parallel or perpendicular to the radial direction) but to the orientation of the major refractive index of the molecule relative to the radial vector. The spherulite polarity depends on the constituent molecules, but it can also change with temperature.
See also
Crystallization of polymers
References
Bibliography
Polymers | Spherulite (polymer physics) | [
"Chemistry",
"Materials_science"
] | 1,281 | [
"Polymers",
"Polymer chemistry"
] |
9,017,838 | https://en.wikipedia.org/wiki/Virtual%20fixture | A virtual fixture is an overlay of augmented sensory information upon a user's perception of a real environment in order to improve human performance in both direct and remotely manipulated tasks. Developed in the early 1990s by Louis Rosenberg at the U.S. Air Force Research Laboratory (AFRL), Virtual Fixtures was a pioneering platform in virtual reality and augmented reality technologies.
History
Virtual Fixtures was first developed by Louis Rosenberg in 1992 at the USAF Armstrong Labs, resulting in the first immersive augmented reality system ever built. Because 3D graphics were too slow in the early 1990s to present a photorealistic and spatially-registered augmented reality, Virtual Fixtures used two real physical robots, controlled by a full upper-body exoskeleton worn by the user. To create the immersive experience for the user, a unique optics configuration was employed that involved a pair of binocular magnifiers aligned so that the user's view of the robot arms was brought forward so as to appear registered in the exact location of the user's real physical arms. The result was a spatially-registered immersive experience in which the user moved his or her arms, while seeing robot arms in the place where his or her arms should be. The system also employed computer-generated virtual overlays in the form of simulated physical barriers, fields, and guides, designed to assist the user while performing real physical tasks.
Fitts Law performance testing was conducted on batteries of human test subjects, demonstrating for the first time, that a significant enhancement in human performance of real-world dexterous tasks could be achieved by providing immersive augmented reality overlays to users.
Concept
The concept of virtual fixtures was first introduced as an overlay of virtual sensory information on a workspace in order to improve human performance in direct and remotely manipulated tasks. The virtual sensory overlays can be presented as physically realistic structures, registered in space such that they are perceived by the user to be fully present in the real workspace environment. The virtual sensory overlays can also be abstractions that have properties not possible of real physical structures. The concept of sensory overlays is difficult to visualize and talk about, as a consequence the virtual fixture metaphor was introduced. To understand what a virtual fixture is an analogy with a real physical fixture such as a ruler is often used. A simple task such as drawing a straight line on a piece of paper free-hand is a task that most humans are unable to perform with good accuracy and high speed. However, the use of a simple device such as a ruler allows the task to be carried out quickly and with good accuracy. The use of a ruler helps the user by guiding the pen along the ruler reducing the tremor and mental load of the user, thus increasing the quality of the results.
When the Virtual Fixture concept was proposed to the U.S. Air Force in 1991, augmented surgery was an example use case, expanding the idea from a virtual ruler guiding a real pencil, to a virtual medical fixture guiding a real physical scalpel manipulated by a real surgeon. The objective was to overlay virtual content upon the surgeon's direct perception of the real workspace with sufficient realism that it would be perceived as authentic additions to the surgical environment and thereby enhance surgical skill, dexterity, and performance. A proposed benefit of virtual medical fixtures as compared to real hardware was that because they were virtual additions to the ambient reality, they could be partially submerged within real patients, providing guidance and/or barriers within unexposed tissues.
The definition of virtual fixtures is much broader than simply providing guidance of the end-effector. For example, auditory virtual fixtures are used to increase the user's awareness by providing multi-modal audio cues that help the user localize the end-effector. However, in the context of human-machine collaborative systems, the term virtual fixtures is often used to refer to a task-dependent virtual aid that is overlaid upon a real environment and guides the user's motion along desired directions while preventing motion in undesired directions or regions of the workspace.
Virtual fixtures can be either guiding virtual fixtures or forbidden regions virtual fixtures. A forbidden regions virtual fixture could be used, for example, in a teleoperated setting where the operator has to drive a vehicle at a remote site to accomplish an objective. If there are pits at the remote site which would be harmful for the vehicle to fall into forbidden regions could be defined at the various pits locations, thus preventing the operator from issuing commands that would result in the vehicle ending up in such a pit.
Such illegal commands could easily be sent by an operator because of, for instance, delays in the teleoperation loop, poor telepresence or a number of other reasons.
An example of a guiding virtual fixture could be when the vehicle must follow a certain trajectory. The operator is then able to control the progress along the preferred direction while motion along the non-preferred direction is constrained.
With both forbidden regions and guiding virtual fixtures the stiffness, or its inverse the compliance, of the fixture can be adjusted. If the compliance is high (low stiffness) the fixture is soft. On the other hand, when the compliance is zero (maximum stiffness) the fixture is hard.
Virtual fixture control law
This section describes how a control law that implements virtual fixtures can be derived. It is assumed that the robot is a purely kinematic device with end-effector position $\mathbf{p}$ and end-effector orientation $\mathbf{r}$ expressed in the robot's base frame. The input control signal to the robot is assumed to be a desired end-effector velocity $\mathbf{v} \in \mathbb{R}^6$. In a tele-operated system it is often useful to scale the input velocity from the operator, $\mathbf{v}_\mathrm{op}$, before feeding it to the robot controller. If the input from the user is of another form such as a force or position it must first be transformed to an input velocity, by for example scaling or differentiating.

Thus the control signal $\mathbf{v}$ would be computed from the operator's input velocity $\mathbf{v}_\mathrm{op}$ as:

$\mathbf{v} = c\,\mathbf{v}_\mathrm{op}$

If $c = 1$ there exists a one-to-one mapping between the operator and the slave robot.

If the constant $c$ is replaced by a diagonal matrix $\mathbf{C}$ it is possible to adjust the compliance independently for different dimensions of $\mathbf{v}_\mathrm{op}$. For example, setting the first three elements on the diagonal of $\mathbf{C}$ to $c$ and all other elements to zero would result in a system that only permits translational motion and not rotation. This would be an example of a hard virtual fixture that constrains the motion from $\mathbb{R}^6$ to $\mathbb{R}^3$. If the rest of the elements on the diagonal were set to a small value, instead of zero, the fixture would be soft, allowing some motion in the rotational directions.

To express more general constraints, assume a time-varying matrix $\mathbf{D}(t) \in \mathbb{R}^{6 \times n}$, $1 \le n \le 6$, which represents the preferred direction at time $t$. Thus if $n = 1$ the preferred direction is along a curve in $\mathbb{R}^6$. Likewise, $n = 2$ would give preferred directions that span a surface. From $\mathbf{D}$ two projection operators can be defined, the span and the kernel of the column space:

$\mathrm{Span}(\mathbf{D}) \equiv [\mathbf{D}] = \mathbf{D}(\mathbf{D}^{\mathrm{T}}\mathbf{D})^{-1}\mathbf{D}^{\mathrm{T}} \qquad \mathrm{Kernel}(\mathbf{D}) \equiv \langle\mathbf{D}\rangle = \mathbf{I} - [\mathbf{D}]$

If $\mathbf{D}$ does not have full column rank the span cannot be computed; consequently it is better to compute the span by using the pseudo-inverse, thus in practice the span is computed as:

$[\mathbf{D}] = \mathbf{D}\mathbf{D}^{\dagger}$

where $\mathbf{D}^{\dagger}$ denotes the pseudo-inverse of $\mathbf{D}$.

If the input velocity is split into two components as:

$\mathbf{v}_\mathrm{op} = \mathbf{v}_D + \mathbf{v}_\tau \equiv [\mathbf{D}]\,\mathbf{v}_\mathrm{op} + \langle\mathbf{D}\rangle\,\mathbf{v}_\mathrm{op}$

it is possible to rewrite the control law as:

$\mathbf{v} = c\,(\mathbf{v}_D + \mathbf{v}_\tau)$

Next, introduce a new compliance $c_\tau \in [0, 1]$ that affects only the non-preferred component of the velocity input, and write the final control law as:

$\mathbf{v} = c\,(\mathbf{v}_D + c_\tau\,\mathbf{v}_\tau)$
References
Perception
Robotic sensing
Virtual reality
Control theory
Cybernetics
Robot control
Augmented reality
Telepresence robots
Remote sensing | Virtual fixture | [
"Mathematics",
"Engineering"
] | 1,519 | [
"Robotics engineering",
"Applied mathematics",
"Control theory",
"Robot control",
"Dynamical systems"
] |
9,018,986 | https://en.wikipedia.org/wiki/Mass%20segregation%20%28astronomy%29 | In astronomy, dynamical mass segregation is the process by which heavier members of a gravitationally bound system, such as a star cluster, tend to move toward the center, while lighter members tend to move farther away from the center.
Equipartition of kinetic energy
During a close encounter of two members of the cluster, the members exchange both energy and momentum. Although energy can be exchanged in either direction, there is a statistical tendency for the kinetic energy of the two members to equalize during an encounter; this statistical phenomenon is called equipartition, and is similar to the fact that the expected kinetic energies of the molecules of a gas are all the same at a given temperature.
Since kinetic energy is proportional to mass times the square of the speed, equipartition requires the less massive members of a cluster to be moving faster. The more massive members will thus tend to sink into lower orbits (that is, orbits closer to the center of the cluster), while the less massive members will tend to rise to higher orbits.
The time it takes for the kinetic energies of the cluster members to roughly equalize is called the relaxation time of the cluster. A relaxation time-scale assuming energy is exchanged through two-body interactions was approximated in the textbook by Binney & Tremaine as

$t_\mathrm{relax} \approx \dfrac{N}{8 \ln N}\, t_\mathrm{cross}$
where $N$ is the number of stars in the cluster and $t_\mathrm{cross}$ is the typical time it takes for a star to cross the cluster. This is on the order of 100 million years for a typical globular cluster with radius 10 parsecs consisting of 100 thousand stars. The most massive stars in a cluster can segregate more rapidly than the less massive stars. This time-scale can be approximated using a toy model developed by Lyman Spitzer of a cluster where stars only have two possible masses ($m_1$ and $m_2$, with $m_2 > m_1$). In this case, the more massive stars (mass $m_2$) will segregate in the time

$t_\mathrm{seg} \approx \dfrac{\bar{m}}{m_2}\, t_\mathrm{relax}$

where $\bar{m}$ is the mean stellar mass of the cluster.
Outward segregation of white dwarfs was observed in the globular cluster 47 Tucanae in an HST study of the region.
Primordial mass segregation
Primordial mass segregation is non-uniform distribution of masses present at the formation of a cluster. The argument that a star cluster is primordially mass segregated is typically based on a comparison of virialization timescales and the cluster's age. However, several dynamical mechanisms to accelerate virialization compared to two-body interactions have been examined. In star-forming regions, it is often observed that O-type stars are preferentially located in the center of a young cluster.
Evaporation
After relaxation, the speed of some low mass members can be greater than the escape velocity of the cluster, which results in these members being lost to the cluster. This process is called evaporation. (A similar phenomenon explains the loss of lighter gases from a planet, such as hydrogen and helium from the Earth—after equipartition, some molecules of sufficiently light gases at the top of the atmosphere will exceed the escape velocity of the planet and be lost.)
Through evaporation, most open clusters eventually dissipate, as indicated by the fact that most existing open clusters are quite young. Globular clusters, being more tightly bound, appear to be more durable.
In the Galaxy
The relaxation time of the Milky Way galaxy is approximately 10 trillion years, on the order of a thousand times the age of the galaxy itself. Thus, any observed mass segregation in our galaxy must be almost entirely primordial.
References
Sources
Astrophysics
Effects of gravity | Mass segregation (astronomy) | [
"Physics",
"Astronomy"
] | 715 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
9,019,008 | https://en.wikipedia.org/wiki/Echelle%20grating | An echelle grating (from French échelle, meaning "ladder") is a type of diffraction grating characterised by a relatively low groove density, but a groove shape which is optimized for use at high incidence angles and therefore in high diffraction orders. Higher diffraction orders allow for increased dispersion (spacing) of spectral features at the detector, enabling increased differentiation of these features. Echelle gratings are, like other types of diffraction gratings, used in spectrometers and similar instruments. They are most useful in cross-dispersed high resolution spectrographs, such as HARPS, PARAS, and numerous other astronomical instruments.
History
The concept of a coarsely-ruled grating used at grazing angles was introduced by Albert Michelson in 1898, who referred to it as an "echelon". However, it was not until 1923 that echelle spectrometers began to take on their characteristic form, in which the high-resolution grating is used in tandem with a crossed low-dispersion grating. This configuration was invented by Nagaoka and Mishima and has been used in a similar layout ever since.
Principle
As with other diffraction gratings, the echelle grating conceptually consists of a number of slits with widths close to the wavelength of the diffracted light. In a standard grating at normal incidence, light of a single wavelength is diffracted into the central zero order and into successive higher orders at specific angles, defined by the grating density/wavelength ratio and the selected order. The angular spacing between higher orders monotonically decreases, so higher orders can lie very close to each other, while lower ones are well separated. The distribution of intensity among the diffraction orders can be altered by tilting the grating. In reflective gratings (where the slits are replaced by narrow, highly reflective facets), the facets can be tilted ("blazed") to direct the majority of the light into a preferred direction of interest (and into a specific diffraction order).
For multiple wavelengths the same is true; however, in that case the longer wavelengths of one order can overlap with the shorter wavelengths of the next higher order(s), which is usually an unwanted side effect.
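The overlap follows directly from the grating equation, d(sin θᵢ + sin θₘ) = mλ: any two wavelengths with the same product mλ leave the grating at the same angle. The short Python sketch below makes this concrete; the 79 grooves/mm density and 63.5° incidence angle are assumed, echelle-like parameters chosen for illustration.

```python
import math

def diffraction_angle(wavelength_nm, order, grooves_per_mm, incidence_deg):
    """Solve d*(sin(theta_i) + sin(theta_m)) = m*lambda for theta_m (degrees).
    Returns None when the requested order is evanescent (no real angle)."""
    d_nm = 1e6 / grooves_per_mm                       # groove spacing in nm
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s)) if abs(s) <= 1 else None

# 500 nm in order 30 and ~483.9 nm in order 31 share the same m*lambda,
# so they diffract to exactly the same angle and would land on the same
# detector position without a cross-disperser.
print(diffraction_angle(500.0, 30, 79.0, 63.5))
print(diffraction_angle(500.0 * 30 / 31, 31, 79.0, 63.5))
```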
In echelle gratings, however, this behavior is deliberately used and the blaze is optimized for multiple overlapping higher orders. Since this overlap is not directly useful, a second, perpendicularly mounted dispersive element (grating or prism) is inserted as an "order separator" or "cross disperser" into the beam path. Hence the spectrum consists of stripes with different, but slightly overlapping, wavelength ranges that run across the imaging plane in an oblique pattern.
It is exactly this behavior that helps to overcome the imaging problems of broadband, high-resolution spectroscopic devices, which would otherwise require extremely long, linear detection arrays or suffer strong defocus and other aberrations. It makes the use of readily available 2D detection arrays feasible, which reduces measurement times and improves efficiency.
See also
Diffraction grating
Grism
Literature
Thomas Eversberg, Klaus Vollmann: Spectroscopic Instrumentation - Fundamentals and Guidelines for Astronomers. Springer, Heidelberg 2014.
References
Echelle Gratings
High Resolution Echelle Spectrometer Introduction at HIRES website
Spectroscopy on Small Telescopes: The Echelle Spectrograph, by Martin J. Porter
Diffraction gratings
Diffraction
es:Red de difracción#Tipo Echelle | Echelle grating | [
"Physics",
"Chemistry",
"Materials_science"
] | 743 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
9,019,987 | https://en.wikipedia.org/wiki/Nuclear%20flask | A nuclear flask is a shipping container that is used to transport active nuclear materials between nuclear power station and spent fuel reprocessing facilities.
Each shipping container is designed to maintain its integrity under normal transportation conditions and during hypothetical accident conditions. They must protect their contents against damage from the outside world, such as impact or fire. They must also contain their contents, both by preventing physical leakage and by providing radiological shielding.
Spent nuclear fuel shipping casks are used to transport spent nuclear fuel used in nuclear power plants and research reactors to disposal sites such as the nuclear reprocessing center at COGEMA La Hague site.
International
United Kingdom
Railway-carried flasks are used to transport spent fuel from nuclear power stations in the UK to the Sellafield spent nuclear fuel reprocessing facility. Each flask weighs more than 50 tonnes and usually transports no more than 2.5 tonnes of spent nuclear fuel.
Over the past 35 years, British Nuclear Fuels plc (BNFL) and its subsidiary PNTL have conducted over 14,000 cask shipments of spent nuclear fuel (SNF) worldwide, transporting more than 9,000 tonnes of SNF over 16 million miles via road, rail, and sea without a radiological release. BNFL designed, licensed, and currently owns and operates a fleet of approximately 170 casks of the Excellox design, and has maintained a fleet of transport casks to ship SNF for the United Kingdom, continental Europe, and Japan for reprocessing.
In the UK a series of public demonstrations was conducted in which spent fuel flasks (loaded with steel bars) were subjected to simulated accident conditions. A randomly selected flask (never used for holding spent fuel) from the production line was first dropped from a tower, oriented so that its weakest part would hit the ground first. The lid of the flask was slightly damaged but very little material escaped. A little water escaped from the flask, but it was thought that in a real accident the escape of radioactivity associated with this water would not be a threat to humans or their environment.
For a second test the same flask was fitted with a new lid, filled again with steel bars and water before a train was driven into it at high speed. The flask survived with only cosmetic damage while the train was destroyed. Although referred to as a test, the actual stresses the flask underwent were well below what they are designed to withstand, as much of the energy from the collision was absorbed by the train and in moving the flask some distance.
This flask is on display at the training centre at Heysham 1 Power Station.
Description
Introduced in the early 1960s, Magnox flasks consist of four layers: an internal skip containing the waste; guides and protectors surrounding the skip; the steel main body of the flask itself, with characteristic cooling fins; and (since the early 1990s) a transport cabin of panels that provides an external housing. Flasks for waste from the later advanced gas-cooled reactor power stations are similar, but have thinner steel main walls to allow room for extensive internal lead shielding. The flask is protected by a bolted hasp which prevents the contents from being accessed during transit.
Transport
All the flasks are owned by the Nuclear Decommissioning Authority, the owners of Direct Rail Services. A train conveying flasks would be hauled by two locomotives, either Class 20 or Class 37, but Class 66 and Class 68 locomotives are increasingly being used; locomotives are used in pairs as a precaution in case one fails en route. Greenpeace protest that flasks in rail transit pose a hazard to passengers standing on platforms, although many tests performed by the Health and Safety Executive have proved that it is safe for passengers to stand on the platform while a flask passes by.
Safety
The crashworthiness of the flask was demonstrated publicly in the 1984 train-crash test, when a British Rail Class 46 locomotive was driven at 100 mph (160 km/h) into a derailed flask (containing water and steel rods in place of radioactive material); the flask sustained minimal superficial damage without compromising its integrity, while both the flatbed wagon carrying it and the locomotive were more or less destroyed. Additionally, flasks were heated to temperatures of over 800 °C to prove safety in a fire. However, critics consider the testing flawed for various reasons: the heat test is claimed to be considerably below that of theoretical worst-case fires in a tunnel, and a worst-case impact today would involve a higher closing speed than was tested. Nevertheless, there have been several accidents involving flasks, including derailments, collisions, and even a flask being dropped during transfer from train to road, with no leakage having occurred.
Problems have been found where flasks "sweat", when small amounts of radioactive material absorbed into paint migrate to the surface, causing contamination risks. Studies identified that 10–15% of flasks in the United Kingdom were suffering from this problem, but none exceeded the international recommended safety limits. Similar flasks in mainland Europe were found to marginally exceed the contamination limits during testing, and additional monitoring procedures were put into place. In order to reduce the risk, current UK flask wagons are fitted with a lockable cover to ensure any surface contamination remains within the container, and all containers are tested before shipment, with those exceeding the safety level being cleaned until they are within the limit. A report in 2001 identified potential risks, and actions to be taken to ensure safety.
United States
In the United States, the acceptability of the design of each cask is judged against Title 10, Part 71, of the Code of Federal Regulations (other nations' shipping casks, possibly excluding Russia's, are designed and tested to similar standards (International Atomic Energy Agency "Regulations for the Safe Transport of Radioactive Material" No. TS-R-1)). The designs must demonstrate (possibly by computer modelling) protection against radiological release to the environment under all four of the following hypothetical accident conditions, designed to encompass 99% of all accidents:
A 9-meter (30 ft) free fall onto an unyielding surface
A puncture test allowing the container to free-fall 1 meter (about 39 inches) onto a steel rod 15 centimeters (about 6 inches) in diameter
A 30-minute, all-engulfing fire at 800 degrees Celsius (1475 degrees Fahrenheit)
An 8-hour immersion under 0.9 meter (3 ft) of water.
Further, an undamaged package must be subjected to a one-hour immersion under 200 meters (655 ft) of water.
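To put the 9-meter drop condition in perspective, the implied impact speed follows from simple free-fall kinematics. This is only a back-of-the-envelope check, not part of the regulation.

```python
import math

g = 9.81    # gravitational acceleration, m/s^2
h = 9.0     # regulatory free-fall height, m

v = math.sqrt(2 * g * h)   # impact speed onto an unyielding surface
print(f"Impact speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
# ~13.3 m/s (~48 km/h). The unyielding surface makes this harsher than it
# sounds: in a real road or rail accident, the ground and the vehicle
# absorb much of the collision energy.
```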
In addition, between 1975 and 1977 Sandia National Laboratories conducted full-scale crash tests on spent nuclear fuel shipping casks. Although the casks were damaged, none would have leaked.
Although the U.S. Department of Transportation (DOT) has the primary responsibility for regulating the safe transport of radioactive materials in the United States, the Nuclear Regulatory Commission (NRC) requires that licensees and carriers involved in spent fuel shipments:
Follow only approved routes;
Provide armed escorts for heavily populated areas;
Use immobilization devices;
Provide monitoring and redundant communications;
Coordinate with law enforcement agencies before shipments; and
Notify in advance the NRC and States through which the shipments will pass.
Since 1965, approximately 3,000 shipments of spent nuclear fuel have been transported safely over the U.S.'s highways, waterways, and railroads.
Baltimore train tunnel fire
On July 18, 2001, a freight train carrying hazardous (non-nuclear) materials derailed and caught fire while passing through the Howard Street railroad tunnel in downtown Baltimore, Maryland, United States. The fire burned for 3 days, with temperatures as high as 1000 °C (1800 °F). Since the casks are designed for a 30-minute fire at 800 °C (1475 °F), several reports have been made regarding the inability of the casks to survive a fire similar to the Baltimore one. However, nuclear waste would never be transported together with hazardous (flammable or explosive) materials on the same train or track.
State of Nevada
The State of Nevada, USA, released a report entitled, "Implications of the Baltimore Rail Tunnel Fire for Full-Scale Testing of Shipping Casks" on February 25, 2003. In the report, they said a hypothetical spent nuclear fuel accident based on the Baltimore fire:
"Concluded steel-lead-steel cask would have failed after 6.3 hours; monolithic steel cask would have failed after 11-12.5 hours."
"Contaminated Area: 32 square miles (82 km2)"
"Latent cancer fatalities: 4,000-28,000 over 50 years (200-1,400 during first year)"
"Cleanup cost: $13.7 Billion (2001 Dollars)"
National Academy of Sciences
The National Academy of Sciences, at the request of the State of Nevada, produced a report on July 25, 2003. The report concluded that the following should be done:
"Need to 3-D model (bolts, seals, etc) more than HI-STAR cask for extreme fire environments."
"For safety and risk analysis, casks should be physically tested to destruction."
"NRC should release all thermal calculations; Holtec is withholding allegedly proprietary information."
NRC
The U.S. Nuclear Regulatory Commission released a report in November 2006. It concluded:
The results of this evaluation also strongly indicate that neither spent nuclear fuel (SNF) particles nor fission products would be released from a spent fuel transportation package carrying intact spent fuel involved in a severe tunnel fire such as the Baltimore tunnel fire. None of the three package designs analyzed for the Baltimore tunnel fire scenario (TN-68, HI-STAR 100, and NAC LWT) experienced internal temperatures that would result in rupture of the fuel cladding. Therefore, radioactive material (i.e., SNF particles or fission products) would be retained within the fuel rods.
There would be no release from the HI-STAR 100, because the inner welded canister remains leak tight. While a release is unlikely, the potential releases calculated for the TN-68 rail package and the NAC LWT truck package indicate that any release of CRUD from either package would be very small - less than an A2 quantity.
Canada
By comparison there has been limited spent nuclear fuel transport in Canada. Transportation casks have been designed for truck and rail transport and Canada's regulatory body, the Canadian Nuclear Safety Commission, granted approval for casks, which may be used for barge shipments as well. The commission's regulations prohibit the disclosure of location, routing and timing of shipments of nuclear materials, such as spent fuel.
International maritime transport
Nuclear flasks containing spent nuclear fuel are sometimes transported by sea for the purposes of reprocessing or relocation to a storage facility. Vessels receiving these cargoes are variously classified INF-1, INF-2 or INF-3 by the International Maritime Organisation. The code was introduced as a voluntary system in 1993 and became mandatory in 2001. The "INF" acronym stands for "Irradiated Nuclear Fuel" though the classification also covers "plutonium and high-level waste" cargoes. In order to receive these classifications, vessels must meet a range of structural and safety standards. Vessels used for the transportation of spent nuclear fuel are typically purpose built and are commonly referred to as Nuclear Fuel Carriers. The global fleet includes vessels under flags of the United Kingdom, Japan, Russian Federation, China and Sweden.
See also
Dry cask storage
Nuclear reprocessing
Radioactive waste
Safeguards Transporter
Spent fuel pool
References
External links
Nuclear Transports in Britain (1999), Cumbrians Opposed to a Radioactive Environment
Crash! Pictures of the 1984 train-crash test
Train test crash 1984 BBC News footage of the 1984 train-crash test.
Operation Smash Hit Publicity film made for the CEGB (subsequently Magnox Electric Ltd) about the train-crash test
Risks of transporting of irradiated fuel and nuclear materials in the UK, Large & Associates, for Greenpeace (2007)
Background Report on the Basic Issues to be Addressed, QuantSci Limited, for the Greater London Authority (2001)
U.S. Transport Council
Nuclear Regulatory Commission "Safety of Spent Fuel Transportation" (NUREG/BR-0292)
Nuclear Regulatory Commission Backgrounder on Transportation of Spent Fuel and Radioactive Materials
Pro-nuclear group's summary
Fuel Solutions Inc. (BFS), the world’s largest shippers of nuclear materials
World Nuclear Transport Institute
Radioactive waste
Shipping containers
Hazardous materials | Nuclear flask | [
"Physics",
"Chemistry",
"Technology"
] | 2,574 | [
"Matter",
"Materials",
"Hazardous waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Hazardous materials",
"Radioactive waste"
] |
7,489,050 | https://en.wikipedia.org/wiki/Powdered%20activated%20carbon%20treatment | Powdered Activated Carbon Treatment (PACT) is a wastewater technology in which powdered activated carbon is added to an anaerobic or aerobic treatment system. The carbon in the biological treatment process adsorbs recalcitrant compounds that are not readily biodegradable, thereby reducing the chemical oxygen demand of the wastewater and removing toxins. The carbon also acts as a "buffer" against the effects of toxic organics in the wastewater.
In such a system, biological treatment and carbon adsorption are combined into a single, synergistic treatment step. The result is a system which offers significant cost reduction compared to activated sludge and granular carbon treatment options. The addition of the powdered activated carbon stabilizes biological systems against upsets and shock loading, controls color and odor, and may reduce disposal costs while removing soluble organics.
System Description
In an aerobic treatment system, influent first enters an aeration tank where the powdered carbon is added, making up a portion of the mixed liquor suspended solids. Once aeration is completed, the treated wastewater and the carbon-biomass slurry are allowed to settle in a clarifier. In cases where complete solids separation is needed, such as for reuse, a membrane bioreactor (MBR) may be used in place of the clarifier.
Following treatment, a portion of the carbon and biomass slurry is wasted to solids handling. Due to the presence of the powdered activated carbon, the sludge settles, thickens, and dewaters better compared to conventional activated sludge processes. These solids can be wasted and disposed of as a slurry, dewatered to a compact, stable cake, or pumped as a slurry to a wet air oxidation unit for further processing to regenerate the carbon and destroy the biological solids.
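Carbon dosing in such systems is often sized from adsorption isotherm data. The sketch below estimates a powdered-activated-carbon dose from a Freundlich isotherm; the isotherm constants are illustrative placeholders, since real values are specific to the carbon product and the target contaminant.

```python
def pac_dose_mg_per_l(c0: float, c_target: float, k_f: float, one_over_n: float) -> float:
    """Estimate the powdered-activated-carbon dose (mg carbon per litre)
    needed to lower a contaminant from c0 to c_target (both in mg/L),
    using a Freundlich isotherm q = k_f * C**(1/n), where q is mg adsorbed
    per mg of carbon at equilibrium concentration C."""
    q = k_f * c_target ** one_over_n    # carbon loading at the target residual
    return (c0 - c_target) / q

# Illustrative (assumed) values for a trace organic contaminant:
dose = pac_dose_mg_per_l(c0=0.10, c_target=0.01, k_f=0.8, one_over_n=0.5)
print(f"Required PAC dose: {dose:.1f} mg/L")   # ~1.1 mg/L with these inputs
```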
Commercial Applications
The system as it exists today was developed in the 1970s under a collaborative effort between DuPont and Zimpro. PACT systems can be used to treat very difficult-to-treat wastewaters, including:
refinery, petrochemical, pharmaceutical, chemical, textile/dye and other industrial wastewaters
landfill leachate
highly contaminated surface water or groundwater.
Powdered activated carbon is also used in the processing of drinking water at treatment facilities, primarily on a seasonal basis in order to deal with aesthetic problems with the water such as odor and taste issues associated with Geosmin and 2-MIB.
Today there are over 100 PACT systems worldwide. Some of these installations have been retrofits to existing conventional activated sludge processes to meet more stringent effluent requirements. Several PACT systems have been built in China in recent years as a result of higher treatment regulations for difficult-to-treat industrial wastewaters. PACT systems have also been used to meet bioassay and toxicity requirements as well as allow for water reuse.
See also
Wet oxidation
Fred Zimmermann
Water purification
Activated sludge
List of waste water treatment technologies
References
Waste treatment technology
Powders | Powdered activated carbon treatment | [
"Physics",
"Chemistry",
"Engineering"
] | 591 | [
"Water treatment",
"Materials",
"Powders",
"Environmental engineering",
"Waste treatment technology",
"Matter"
] |