id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
20,051,748 | https://en.wikipedia.org/wiki/Smoked%20glass | Smoked glass is glass held in the smoke of a candle flame (or other inefficiently burning hydrocarbon) such that one surface of the sheet of glass is covered in a layer of smoke residue. The glass is used as a medium for recording pen traces in scientific instruments, and is also used to track pheromone deposition in ants.
The advantages of using the glass are that the recording medium is easily renewable (just re-smoke the glass), and that the trace obtained can easily be magnified by projection onto a suitable surface. A variation on this scheme is the use of smoked paper in early seismographs.
The effect of smoked glass can be incorporated into glass manufacture by adding darkening materials, such that light passing through the glass is decreased in brightness. It can be used aesthetically, for example, in the manufacture of coffee tables with smoked glass tops. It can also be used in scientific instruments as a filter, as in the use of smoked glass in cross-staves and sextants, allowing operators to make sun sightings without damaging their eyesight.
See also
Window film
References
Glass coating and surface modification
Smoke | Smoked glass | Chemistry | 233 |
8,633,062 | https://en.wikipedia.org/wiki/Lake%20capture | In geology, lake capture is the process of capture (see Stream capture) of the waters collected in a lake by a neighbor river basin.
The occurrence of a lake capture is mainly controlled by the water balance at the lake's basin and by changes in topography due to erosion, sedimentation, and tectonism. If evaporation at the surface of a lake, plus the water losses through underground infiltration and plant evapotranspiration, is high enough to account for all the precipitation collected by the lake, then the lake becomes endorheic (closed, or internally drained). This situation prevails until the water balance changes again and the lake overflows the limits of its basin, or until a lake capture occurs. Opening the drainage of an endorheic lacustrine basin by fluvial erosion generally implies a lake capture.
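The balance just described can be written as a toy calculation. The sketch below is a minimal illustration with hypothetical figures (all terms in the same volume-per-year units); it simply tests whether the losses can absorb all inputs:

```python
def lake_is_endorheic(precipitation, inflow, evaporation, infiltration, evapotranspiration):
    """Toy annual water balance for a lake basin.

    The lake stays closed (endorheic) while evaporation, underground
    infiltration and plant evapotranspiration can account for all the
    water collected; otherwise the level rises until the basin overflows.
    """
    inputs = precipitation + inflow
    losses = evaporation + infiltration + evapotranspiration
    return inputs <= losses

# Hypothetical basin, in km^3/yr: balanced (closed) vs. a wetter climate (overflow).
print(lake_is_endorheic(2.0, 1.0, 2.5, 0.3, 0.2))  # True  -> endorheic
print(lake_is_endorheic(3.0, 1.5, 2.5, 0.3, 0.2))  # False -> overflow likely
```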
Lake captures are therefore very sensitive to the preexisting topography as well as to climatic and lithological factors. A climatic change towards more humid conditions can result in a higher water level in the internally drained basin, eventually causing it to overflow. On a longer time scale, sediment infilling (colmatation) of the lacustrine basin can also lead to overflow. Both processes can diminish the relative importance of the capture process carried out by erosion.
Examples include the Late Neogene capture of the endorheic Ebro Basin and the Pleistocene Lake Bonneville.
See also
River capture
Regressive erosion
References
Endorheic lakes
Geomorphology | Lake capture | Environmental_science | 311 |
528,847 | https://en.wikipedia.org/wiki/Hairpin%20turn | A hairpin turn (also hairpin bend or hairpin corner) is a bend in a road with a very acute inner angle, making it necessary for an oncoming vehicle to turn about 180° to continue on the road. It is named for its resemblance to a bent metal hairpin. Such turns in ramps and trails may be called switchbacks in American English, by analogy with switchback railways.
Description
Hairpin turns are often built when a route climbs up or down a steep slope, so that it can travel mostly across the slope with only moderate steepness, and are often arrayed in a zigzag pattern. Highways with repeating hairpin turns allow easier, safer ascents and descents of mountainous terrain than a direct, steep climb and descent, at the price of greater distances of travel and usually lower speed limits, due to the sharpness of the turn. Highways of this style are also generally less costly to build and maintain than highways with tunnels.
On occasion, the road may loop completely, using a tunnel or bridge to cross itself at a different elevation (examples exist on Reunion Island and near Ashland, Oregon). When this routing geometry is used for a rail line, it is called a spiral, or spiral loop.
In trail building, an alternative to switchbacks is the stairway.
Notable hairpin turns
Fairmont Hairpin – hairpin bend at the Fairmont Monte Carlo on the Circuit de Monaco
Railways
If a railway curves back on itself like a hairpin turn, it is called a horseshoe curve. The Pennsylvania Railroad built one in Blair County, Pennsylvania, which ascends the Eastern Continental Divide from the east. However, the radius of curvature is much larger than that of a typical road hairpin. See the example at Zlatoust, or Hillclimbing, for other railway ascent methods.
Skiing
Sections known as hairpins are also found in the slalom discipline of alpine skiing. A hairpin consists of two consecutive vertical or "closed gates", which must be negotiated very quickly. Three or more consecutive closed gates are known as a flush.
See also
Dead Man's Curve
Spiral bridge
Zig zag (railway)
U-turn
References
External links
Motorsport terminology
Road transport
Road hazards | Hairpin turn | Technology | 448 |
41,625 | https://en.wikipedia.org/wiki/Radiometry | Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry gives the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting.
The use of radiometers to determine the temperature of objects and gases by measuring radiation flux is called pyrometry. Handheld pyrometer devices are often marketed as infrared thermometers.
Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term.
Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength.
Radiometric quantities
Integral and spectral radiometric quantities
Integral quantities (like radiant flux) describe the total effect of radiation of all wavelengths or frequencies, while spectral quantities (like spectral power) describe the effect of radiation of a single wavelength or frequency. To each integral quantity there correspond spectral quantities, defined as the quotient of the integrated quantity by the range of frequency or wavelength considered. For example, the radiant flux Φe corresponds to the spectral flux by wavelength Φe,λ and the spectral flux by frequency Φe,ν.
Getting an integral quantity's spectral counterpart requires a limit transition. This comes from the idea that the probability of a photon existing at one precisely specified wavelength is zero. Let us show the relation between them using the radiant flux as an example:

Integral flux, whose unit is W:
$$\Phi_\mathrm{e} = \int_0^\infty \Phi_{\mathrm{e},\lambda}\,\mathrm{d}\lambda.$$

Spectral flux by wavelength, whose unit is W/m:
$$\Phi_{\mathrm{e},\lambda} = \frac{\mathrm{d}\Phi_\mathrm{e}}{\mathrm{d}\lambda},$$
where $\mathrm{d}\Phi_\mathrm{e}$ is the radiant flux of the radiation in a small wavelength interval $[\lambda, \lambda+\mathrm{d}\lambda]$.
The area under a plot of spectral flux against wavelength equals the total radiant flux.

Spectral flux by frequency, whose unit is W/Hz:
$$\Phi_{\mathrm{e},\nu} = \frac{\mathrm{d}\Phi_\mathrm{e}}{\mathrm{d}\nu},$$
where $\mathrm{d}\Phi_\mathrm{e}$ is the radiant flux of the radiation in a small frequency interval $[\nu, \nu+\mathrm{d}\nu]$.
The area under a plot of spectral flux against frequency equals the total radiant flux.

The spectral quantities by wavelength $\lambda$ and frequency $\nu$ are related to each other, since the product of the two variables is the speed of light ($\lambda \cdot \nu = c$):
$$\Phi_{\mathrm{e},\lambda} = \frac{c}{\lambda^2}\,\Phi_{\mathrm{e},\nu} \quad\text{or}\quad \Phi_{\mathrm{e},\nu} = \frac{c}{\nu^2}\,\Phi_{\mathrm{e},\lambda} \quad\text{or}\quad \lambda\,\Phi_{\mathrm{e},\lambda} = \nu\,\Phi_{\mathrm{e},\nu}.$$

The integral quantity can be obtained by integrating the spectral quantity:
$$\Phi_\mathrm{e} = \int_0^\infty \Phi_{\mathrm{e},\lambda}\,\mathrm{d}\lambda = \int_0^\infty \Phi_{\mathrm{e},\nu}\,\mathrm{d}\nu.$$
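As a quick numerical check of these relations, here is a short Python sketch; the Gaussian spectrum is made up purely for illustration:

```python
import numpy as np

c = 2.998e8  # speed of light, m/s

# Hypothetical spectral flux per wavelength (W/m) on a visible-range grid (m).
wavelength = np.linspace(400e-9, 700e-9, 2000)
flux_per_wavelength = 1e3 * np.exp(-(((wavelength - 550e-9) / 50e-9) ** 2))

# Convert to spectral flux per frequency using  Phi_e,nu = (lambda^2 / c) Phi_e,lambda.
frequency = c / wavelength
flux_per_frequency = wavelength**2 / c * flux_per_wavelength

# Both representations must integrate to the same total radiant flux (W).
total_by_wavelength = np.trapz(flux_per_wavelength, wavelength)
total_by_frequency = -np.trapz(flux_per_frequency, frequency)  # nu runs downward here
print(total_by_wavelength, total_by_frequency)  # equal up to numerical error
```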
See also
Reflectivity
Microwave radiometer
Measurement of ionizing radiation
Radiometric calibration
Radiometric resolution
References
External links
Radiometry and photometry FAQ Professor Jim Palmer's Radiometry FAQ page (The University of Arizona College of Optical Sciences).
Measurement
Optical metrology
Telecommunications engineering
Observational astronomy
Electromagnetic radiation | Radiometry | Physics,Astronomy,Mathematics,Engineering | 541 |
56,509 | https://en.wikipedia.org/wiki/Potash | Potash includes various mined and manufactured salts that contain potassium in water-soluble form. The name derives from pot ash, plant ashes or wood ash soaked in water in a pot, the primary means of manufacturing potash before the Industrial Era. The word potassium is derived from potash.
Potash is produced worldwide in amounts exceeding 71.9 million tonnes (~45.4 million tonnes K2O equivalent) per year as of 2021, with Canada being the largest producer, mostly for use in fertilizer. Various kinds of fertilizer-potash constitute the single greatest industrial use of the element potassium in the world. Potassium was first derived in 1807 by electrolysis of caustic potash (potassium hydroxide).
Terminology
Potash refers to potassium compounds and potassium-bearing materials, most commonly potassium carbonate. The word "potash" originates from a Middle Dutch word denoting "pot ashes", attested in 1477.
The old method of making potassium carbonate (K2CO3) was by collecting or producing wood ash (the occupation of ash burners), leaching the ashes, and then evaporating the resulting solution in large iron pots, which left a white residue denominated "pot ash". Approximately 10% by weight of common wood ash can be recovered as potash. Later, "potash" became widely applied to naturally occurring minerals that contained potassium salts and the commercial product derived from them.
The following table lists a number of potassium compounds that have "potash" in their traditional names:
History
Origin of potash ore
Most of the world reserves of potassium (K) were deposited as sea water in ancient inland oceans. After the water evaporated, the potassium salts crystallized into beds of potash ore. These are the locations where potash is being mined today. The deposits are a naturally occurring mixture of potassium chloride (KCl) and sodium chloride (NaCl), more commonly known as table salt. Over time, as the surface of the earth changed, these deposits were covered by thousands of feet of earth.
Bronze Age
Potash (especially potassium carbonate) has been used in bleaching textiles, making glass, ceramic, and making soap, since the Bronze Age. Potash was principally obtained by leaching the ashes of land plants.
14th–17th century
Potash mining
Beginning in the 14th century, potash was mined in Ethiopia. One of the world's largest deposits, 140 to 150 million tons, is located in the Dallol area of the Afar Region.
Wood-derived potash
Potash was one of the most important industrial chemicals. It was refined from the ashes of broadleaved trees and produced primarily in the forested areas of Europe, Russia, and North America. Although methods for producing artificial alkalis were invented in the late 18th century, these did not become economical until the late 19th century and so the dependence on organic sources of potash remained.
Potash became an important international trade commodity in Europe from at least the early 14th century. It is estimated that European imports of potash required 6 million or more cubic metres each year from the early 17th century. Between 1420 and 1620, the primary exporting cities for wood-derived potash were Gdańsk, Königsberg and Riga. In the late 15th century, London was the leading importer due to its position as the centre of soft soap making, while the Dutch dominated as suppliers and consumers in the 16th century. From the 1640s, geopolitical disruptions (i.e. the Russo-Polish War (1654–1667)) meant that the centres of export moved from the Baltic to Archangelsk, Russia. In 1700, Russian ash was dominant, though Gdańsk remained notable for the quality of its potash.
18th century
Kelp ash
On the Orkney islands, kelp ash provided potash and soda ash, production starting "possibly as early as 1719" and lasting for a century. The products were "eagerly sought after by the glass and soap industries of the time."
North America
By the 18th century, higher quality American potash was increasingly exported to Britain. In the late 18th and early 19th centuries, potash production provided settlers in North America badly needed cash and credit as they cleared wooded land for crops. To make full use of their land, settlers needed to dispose of excess wood. The easiest way to accomplish this was to burn any wood not needed for fuel or construction. Ashes from hardwood trees could then be used to make lye, which could either be used to make soap or boiled down to produce valuable potash. Hardwood could generate ashes at the rate of 60 to 100 bushels per acre (500 to 900 m3/km2). In 1790, the sale of ashes could generate $3.25 to $6.25 per acre ($800 to $1,500/km2) in rural New York State – nearly the same rate as hiring a laborer to clear the same area. Potash making became a major industry in British North America. Great Britain was always the most important market. The American potash industry followed the woodsman's ax across the country.
The first US patent
The first US patent of any kind was issued in 1790 to Samuel Hopkins for an improvement "in the making of Pot ash and Pearl ash by a new Apparatus and Process". Pearl ash was a purer quality made by calcination of potash in a reverberatory furnace or kiln. Potash pits were once used in England to produce potash that was used in making soap for the preparation of wool for yarn production.
19th century
After about 1820, New York replaced New England as the most important source; by 1840 the center was in Ohio. Potash production was always a by-product industry, following from the need to clear land for agriculture.
Canada
From 1767, potash from wood ashes was exported from Canada. By 1811, 70% of the total 19.6 million lbs of potash imports to Britain came from Canada. Exports of potash and pearl ash reached 43,958 barrels in 1865. There were 519 asheries in operation in 1871.
20th century industrialization
The wood-ash industry declined in the late 19th century when large-scale production of potash from mineral salts was established in Germany. In the early 20th century, the potash industry was dominated by a cartel in which Germany had the dominant role. WWI saw a brief resurgence of American asheries, with their product typically consisting of 66% hydroxide, 17% carbonate, 16% sulfate and other impurities. Later in the century, the cartel ended as new potash producers emerged in the USSR and Canada.
In 1943, potash was discovered in Saskatchewan, Canada, during oil drilling. Active exploration began in 1951. In 1958, the Potash Company of America became the first potash producer in Canada with the commissioning of an underground potash mine at Patience Lake. As numerous potash producers in Canada developed, the Saskatchewan government became increasingly involved in the industry, leading to the creation of Canpotex in the 1970s.
In 1964 the Canadian company Kalium Chemicals established the first potash mine using the solution process. The discovery was made during oil reserve exploration. The mine was developed near Regina, Saskatchewan. The mine reached depths greater than 1500 meters. It is now the Mosaic Corporation's Belle Plaine unit.
The USSR's potash production had largely been for domestic use and use in the Council for Mutual Economic Assistance countries. After the dissolution of the USSR, Russian and Belarusian potash producers entered into direct competition with producers elsewhere in the world for the first time.
In the beginning of the 20th century, potash deposits were found in the Dallol Depression in the Musely and Crescent localities near the Ethiopian–Eritrean border. The estimated reserves at Musely and Crescent are 173 and 12 million tonnes respectively. The latter is particularly suitable for surface mining. It was explored in the 1960s, but the works stopped due to flooding in 1967. Attempts to continue mining in the 1990s were halted by the Eritrean–Ethiopian War and had not resumed as of 2009.
Mining
Shaft mining and strip mining
All commercial potash deposits come originally from evaporite deposits and are often buried deep below the earth's surface. Potash ores are typically rich in potassium chloride (KCl), sodium chloride (NaCl) and other salts and clays, and are typically obtained by conventional shaft mining with the extracted ore ground into a powder. Most potash mines today are deep shaft mines as much as 4,400 feet (1,400 m) underground. Others are mined as strip mines, having been laid down in horizontal layers as sedimentary rock. In above-ground processing plants, the KCl is separated from the mixture to produce a high-analysis potassium fertilizer. Other potassium salts can be separated by various procedures, resulting in potassium sulfate and potassium-magnesium sulfate.
Dissolution mining and evaporation methods
Other methods include dissolution mining and evaporation methods from brines. In the evaporation method, hot water is injected into the potash, which is dissolved and then pumped to the surface where it is concentrated by solar induced evaporation. Amine reagents are then added to either the mined or evaporated solutions. The amine coats the KCl but not NaCl. Air bubbles cling to the amine + KCl and float it to the surface while the NaCl and clay sink to the bottom. The surface is skimmed for the amine + KCl, which is then dried and packaged for use as a K rich fertilizer—KCl dissolves readily in water and is available quickly for plant nutrition.
Recovery of potassium fertilizer salts from sea water has been studied in India. During extraction of salt from seawater by evaporation, potassium salts get concentrated in bittern, an effluent from the salt industry.
Production
Potash deposits are distributed unevenly throughout the world. Deposits are currently being mined in Canada, Russia, China, Belarus, Israel, Germany, Chile, the United States, Jordan, Spain, the United Kingdom, Uzbekistan and Brazil, with the most significant deposits present at great depth in the Prairie Evaporite Formation in Saskatchewan, Canada. Canada and Russia are the countries where the bulk of potash is produced; Belarus is also a major producer.
The Permian Basin deposit extends from the major mines outside of Carlsbad, New Mexico, to the world's purest potash deposit in Lea County, New Mexico (near the Carlsbad deposits), which is believed to be roughly 80% pure. (Osceola County, Michigan, has deposits 90+% pure; the only mine there was converted to salt production, however.) Canada is the largest producer, followed by Russia and Belarus. The most significant reserve of Canada's potash is located in the province of Saskatchewan and is mined by The Mosaic Company, Nutrien and K+S.
In China, most potash deposits are concentrated in the deserts and salt flats of the endorheic basins of its western provinces, particularly Qinghai. Geological expeditions discovered the reserves in the 1950s, but commercial exploitation lagged until Deng Xiaoping's Reform and Opening Up Policy in the 1980s. The 1989 opening of the Qinghai Potash Fertilizer Factory in the remote Qarhan Playa increased China's production of potassium chloride sixfold over the earlier output from Haixi and Tanggu.
In 2013, almost 70% of potash production was controlled by Canpotex, an exporting and marketing firm, and the Belarusian Potash Company. The latter was a joint venture between Belaruskali and Uralkali, but on July 30, 2013, Uralkali announced that it had ended the venture.
Potash is water soluble and transporting it requires special transportation infrastructure.
Occupational hazards
Excessive respiratory disease due to environmental hazards, such as radon and asbestos, has been a concern for potash miners throughout history. Potash miners are liable to develop silicosis. Based on a study conducted between 1977 and 1987 of cardiovascular disease among potash workers, the overall mortality rates were low, but a noticeable difference in above-ground workers was documented.
Consumption
Fertilizers
Potassium is the third major plant and crop nutrient after nitrogen and phosphorus. It has been used since antiquity as a soil fertilizer (about 90% of current use). Fertilizer use is the main driver behind potash consumption, especially for crops that contribute to high-protein diets. As of at least 2010, more than 95% of potash was mined for agricultural purposes.
Elemental potassium does not occur in nature because it reacts violently with water. As part of various compounds, potassium makes up about 2.6% of the Earth's crust by mass and is the seventh most abundant element, similar in abundance to sodium at approximately 1.8% of the crust. Potash is important for agriculture because it improves water retention, yield, nutrient value, taste, color, texture and disease resistance of food crops. It has wide application to fruit and vegetables, rice, wheat and other grains, sugar, corn, soybeans, palm oil and cotton, all of which benefit from the nutrient's quality-enhancing properties.
Demand for food and animal feed has been on the rise since 2000. The United States Department of Agriculture's Economic Research Service (ERS) attributes the trend to average annual population increases of 75 million people around the world. Geographically, economic growth in Asia and Latin America greatly contributed to the increased use of potash-based fertilizer. Rising incomes in developing countries also were a factor in the growing potash and fertilizer use. With more money in the household budget, consumers added more meat and dairy products to their diets. This shift in eating patterns required more acres to be planted, more fertilizer to be applied and more animals to be fed—all requiring more potash.
After years of trending upward, fertilizer use slowed in 2008. The worldwide economic downturn was the primary reason for the declining fertilizer use, dropping prices, and mounting inventories.
The world's largest consumers of potash are China, the United States, Brazil, and India. Brazil imports 90% of the potash it needs. Potash consumption for fertilizers is expected to increase to about 37.8 million tonnes by 2022.
Potash imports and exports are often reported in K2O equivalent, although fertilizer never contains potassium oxide, per se, because potassium oxide is caustic and hygroscopic.
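The conversion behind that convention is easy to make explicit. The sketch below derives the standard KCl-to-K2O factor from molar masses (standard atomic masses; the tonnage is only an example):

```python
# Standard atomic masses, g/mol.
M_K, M_O, M_Cl = 39.098, 15.999, 35.453

M_K2O = 2 * M_K + M_O   # 94.195 g/mol
M_KCl = M_K + M_Cl      # 74.551 g/mol

# 2 mol of KCl carries the same potassium as 1 mol of K2O,
# so the K2O-equivalent mass fraction of KCl is:
k2o_per_kcl = M_K2O / (2 * M_KCl)

def kcl_to_k2o_equivalent(tonnes_kcl):
    """Re-express a tonnage of muriate of potash (KCl) as K2O equivalent."""
    return tonnes_kcl * k2o_per_kcl

print(round(k2o_per_kcl, 4))         # 0.6317 -- the familiar "60% K2O" fertilizer grade
print(kcl_to_k2o_equivalent(1000))   # ~631.7 tonnes K2O equivalent
```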
Pricing
At the beginning of 2008, potash prices started a meteoric climb from less than US$200 a tonne to a high of US$875 in February 2009. These subsequently dropped dramatically to an April 2010 low of US$310, before recovering in 2011–12 and relapsing again in 2013. For reference, prices in November 2011 were about US$470 per tonne, but as of May 2013 were stable at US$393. After the surprise breakup of the world's largest potash cartel at the end of July 2013, potash prices were poised to drop some 20 percent. At the end of December 2015, potash traded for US$295 a tonne. In April 2016 its price was US$269. In May 2017, prices had stabilised at around US$216 a tonne, down 18% from the previous year. By January 2018, prices had recovered to around US$225 a tonne. World potash demand tends to be price inelastic in the short run and even in the long run.
Other uses
In addition to its use as a fertilizer, potassium chloride is important in many industrialized economies, where it is used in aluminium recycling, by the chloralkali industry to produce potassium hydroxide, in metal electroplating, oil-well drilling fluid, snow and ice melting, steel heat-treating, in medicine as a treatment for hypokalemia, and water softening. Potassium hydroxide is used for industrial water treatment and is the precursor of potassium carbonate, several forms of potassium phosphate, many other potassic chemicals, and soap manufacturing. Potassium carbonate is used to produce animal feed supplements, cement, fire extinguishers, food products, photographic chemicals, and textiles. It is also used in brewing beer, pharmaceutical preparations, and as a catalyst for synthetic rubber manufacturing. Potassium carbonate is also combined with silica sand to produce potassium silicate, sometimes known as waterglass, for use in paints and arc welding electrodes. These non-fertilizer uses have accounted for about 15% of annual potash consumption in the United States.
Substitutes
No substitutes exist for potassium as an essential plant nutrient and as an essential nutritional requirement for animals and humans. Manure and glauconite (greensand) are low-potassium-content sources that can be profitably transported only short distances to crop fields.
See also
Bone ash
Saltpeter
Saltwater soap
Sodium hydroxide
References
Further reading
Seaver, Frederick J. (1918) "Historical Sketches of Franklin County And Its Several Towns", J.B Lyons Company, Albany, NY, Section "Making Potash" pp. 27–29
External links
They Burned the Woods and Sold the Ashes
Henry M. Paynter, The First Patent, Invention & Technology, Fall 1990
The First U.S. Patent , issued for a method of potash production
World Agriculture and Fertilizer Markets Map
Russia reaps rich harvest with potash
Agricultural chemicals
Fertilizers
Industrial minerals
Potassium
Salts
Types of ash | Potash | Chemistry | 3,602 |
1,033,084 | https://en.wikipedia.org/wiki/Gaseous%20diffusion | Gaseous diffusion is a technology that was used to produce enriched uranium by forcing gaseous uranium hexafluoride (UF6) through microporous membranes. This produces a slight separation (enrichment factor 1.0043) between the molecules containing uranium-235 (235U) and uranium-238 (238U). By use of a large cascade of many stages, high separations can be achieved. It was the first process to be developed that was capable of producing enriched uranium in industrially useful quantities, but is nowadays considered obsolete, having been superseded by the more-efficient gas centrifuge process (enrichment factor 1.05 to 1.2).
Gaseous diffusion was devised by Francis Simon and Nicholas Kurti at the Clarendon Laboratory in 1940, tasked by the MAUD Committee with finding a method for separating uranium-235 from uranium-238 in order to produce a bomb for the British Tube Alloys project. The prototype gaseous diffusion equipment itself was manufactured by Metropolitan-Vickers (MetroVick) at Trafford Park, Manchester, at a cost of £150,000 for four units, for the M. S. Factory, Valley. This work was later transferred to the United States when the Tube Alloys project became subsumed by the later Manhattan Project.
Background
Of the 33 known radioactive primordial nuclides, two (235U and 238U) are isotopes of uranium. These two isotopes are similar in many ways, except that only 235U is fissile (capable of sustaining a nuclear chain reaction of nuclear fission with thermal neutrons). In fact, 235U is the only naturally occurring fissile nucleus. Because natural uranium is only about 0.72% 235U by mass, it must be enriched to a concentration of 2–5% to be able to support a continuous nuclear chain reaction when normal water is used as the moderator. The product of this enrichment process is called enriched uranium.
Technology
Scientific basis
Gaseous diffusion is based on Graham's law, which states that the rate of effusion of a gas is inversely proportional to the square root of its molecular mass. For example, in a box with a microporous membrane containing a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules, if the pore diameter is smaller than the mean free path length (molecular flow). The gas leaving the container is somewhat enriched in the lighter molecules, while the residual gas is somewhat depleted. A single container wherein the enrichment process takes place through gaseous diffusion is called a diffuser.
Uranium hexafluoride
UF6 is the only compound of uranium sufficiently volatile to be used in the gaseous diffusion process. Fortunately, fluorine consists of only a single isotope 19F, so that the 1% difference in molecular weights between 235UF6 and 238UF6 is due only to the difference in weights of the uranium isotopes. For these reasons, UF6 is the only choice as a feedstock for the gaseous diffusion process. UF6, a solid at room temperature, sublimes at 56.4 °C (133 °F) at 1 atmosphere. The triple point is at 64.05 °C and 1.5 bar. Applying Graham's law gives:
$$\frac{\text{Rate}_1}{\text{Rate}_2} = \sqrt{\frac{M_2}{M_1}} = \sqrt{\frac{352.04}{349.03}} \approx 1.0043$$
where:
Rate1 is the rate of effusion of 235UF6.
Rate2 is the rate of effusion of 238UF6.
M1 is the molar mass of 235UF6 = 235.043930 + 6 × 18.998403 = 349.034348 g·mol−1
M2 is the molar mass of 238UF6 = 238.050788 + 6 × 18.998403 = 352.041206 g·mol−1
This explains the 0.4% difference in the average velocities of 235UF6 molecules over that of 238UF6 molecules.
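The same ratio can be checked in a couple of lines of Python; the molar masses are the ones given above:

```python
import math

# Molar masses of the two uranium hexafluorides, g/mol (from above).
M_235 = 235.043930 + 6 * 18.998403  # 349.034348
M_238 = 238.050788 + 6 * 18.998403  # 352.041206

# Graham's law: effusion rate is proportional to 1/sqrt(molar mass).
alpha = math.sqrt(M_238 / M_235)
print(f"ideal single-stage separation factor: {alpha:.4f}")  # 1.0043
```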
UF6 is a highly corrosive substance. It is an oxidant and a Lewis acid which is able to bind to fluoride, for instance the reaction of copper(II) fluoride with uranium hexafluoride in acetonitrile is reported to form copper(II) heptafluorouranate(VI), Cu(UF7)2. It reacts with water to form a solid compound, and is very difficult to handle on an industrial scale. As a consequence, internal gaseous pathways must be fabricated from austenitic stainless steel and other heat-stabilized metals. Non-reactive fluoropolymers such as Teflon must be applied as a coating to all valves and seals in the system.
Barrier materials
Gaseous diffusion plants typically use aggregate barriers (porous membranes) constructed of sintered nickel or aluminum, with a pore size of 10–25 nanometers (this is less than one-tenth the mean free path of the UF6 molecule). They may also use film-type barriers, which are made by boring pores through an initially nonporous medium. One way this can be done is by removing one constituent in an alloy, for instance using hydrogen chloride to remove the zinc from silver-zinc (Ag-Zn) or sodium hydroxide to remove aluminum from Ni-Al alloy.
Energy requirements
Because the molecular weights of 235UF6 and 238UF6 are nearly equal, very little separation of the 235U and 238U occurs in a single pass through a barrier, that is, in one diffuser. It is therefore necessary to connect a great many diffusers together in a sequence of stages, using the outputs of the preceding stage as the inputs for the next stage. Such a sequence of stages is called a cascade. In practice, diffusion cascades require thousands of stages, depending on the desired level of enrichment.
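To get a feel for the stage counts involved, one can use the ideal-cascade relation: each ideal stage multiplies the 235U/238U abundance ratio by the Graham factor α, so the minimum number of enriching stages is ln(R_product/R_feed)/ln α. The sketch below is a lower-bound estimate only; real barriers operate well below the ideal α, which is why actual plants needed thousands of stages:

```python
import math

ALPHA = 1.0043  # ideal single-stage enrichment factor from Graham's law

def min_enriching_stages(x_feed, x_product, alpha=ALPHA):
    """Lower bound on the number of enriching stages in an ideal cascade.

    x_feed and x_product are 235U mole fractions; R = x / (1 - x) is the
    abundance ratio, multiplied by alpha at each ideal stage.
    """
    r_feed = x_feed / (1 - x_feed)
    r_product = x_product / (1 - x_product)
    return math.log(r_product / r_feed) / math.log(alpha)

# Natural uranium (0.72% 235U) to light-water-reactor grade (~4%):
print(min_enriching_stages(0.0072, 0.04))  # ~407 ideal stages; real plants used far more
```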
All components of a diffusion plant must be maintained at an appropriate temperature and pressure to assure that the UF6 remains in the gaseous phase. The gas must be compressed at each stage to make up for a loss in pressure across the diffuser. This leads to compression heating of the gas, which then must be cooled before entering the diffuser. The requirements for pumping and cooling make diffusion plants enormous consumers of electric power. Because of this, gaseous diffusion was the most expensive method used until recently for producing enriched uranium.
History
Workers on the Manhattan Project in Oak Ridge, Tennessee, developed several different methods for the separation of isotopes of uranium. Three of these methods were used sequentially at three different plants in Oak Ridge to produce the 235U for "Little Boy" and other early nuclear weapons. In the first step, the S-50 uranium enrichment facility used the thermal diffusion process to enrich the uranium from 0.7% up to nearly 2% 235U. This product was then fed into the gaseous diffusion process at the K-25 plant, the product of which was around 23% 235U. Finally, this material was fed into calutrons at the Y-12 plant. These machines (a type of mass spectrometer) employed electromagnetic isotope separation to boost the final 235U concentration to about 84%.
The preparation of UF6 feedstock for the K-25 gaseous diffusion plant was the first ever application for commercially produced fluorine, and significant obstacles were encountered in the handling of both fluorine and UF6. For example, before the K-25 gaseous diffusion plant could be built, it was first necessary to develop non-reactive chemical compounds that could be used as coatings, lubricants and gaskets for the surfaces that would come into contact with the UF6 gas (a highly reactive and corrosive substance). Scientists of the Manhattan Project recruited William T. Miller, a professor of organic chemistry at Cornell University, to synthesize and develop such materials, because of his expertise in organofluorine chemistry. Miller and his team developed several novel non-reactive chlorofluorocarbon polymers that were used in this application.
Calutrons were inefficient and expensive to build and operate. As soon as the engineering obstacles posed by the gaseous diffusion process had been overcome and the gaseous diffusion cascades began operating at Oak Ridge in 1945, all of the calutrons were shut down. The gaseous diffusion technique then became the preferred technique for producing enriched uranium.
At the time of their construction in the early 1940s, the gaseous diffusion plants were some of the largest buildings ever constructed. Large gaseous diffusion plants were constructed by the United States, the Soviet Union (including a plant that is now in Kazakhstan), the United Kingdom, France, and China. Most of these have now closed or are expected to close, unable to compete economically with newer enrichment techniques. Some of the technology used in pumps and membranes remains top secret. Some of the materials that were used remain subject to export controls, as a part of the continuing effort to control nuclear proliferation.
Current status
In 2008, gaseous diffusion plants in the United States and France still generated 33% of the world's enriched uranium. However, the French plant (Eurodif's Georges-Besse plant) definitively closed in June 2012, and the Paducah Gaseous Diffusion Plant in Kentucky operated by the United States Enrichment Corporation (USEC) (the last fully functioning uranium enrichment facility in the United States to employ the gaseous diffusion process) ceased enrichment in 2013. The only other such facility in the United States, the Portsmouth Gaseous Diffusion Plant in Ohio, ceased enrichment activities in 2001. Since 2010, the Ohio site has been used mainly by AREVA, a French conglomerate, for the conversion of depleted UF6 to uranium oxide.
As existing gaseous diffusion plants became obsolete, they were replaced by second generation gas centrifuge technology, which requires far less electric power to produce equivalent amounts of separated uranium. AREVA replaced its Georges Besse gaseous diffusion plant with the Georges Besse II centrifuge plant.
See also
Capenhurst
Fick's laws of diffusion
K-25
Lanzhou
Marcoule
Molecular diffusion
Nuclear fuel cycle
Thomas Graham (chemist)
Tomsk
References
External links
Annotated references on gaseous diffusion from the Alsos Library
Isotope separation
Uranium
Membrane technology | Gaseous diffusion | Chemistry | 2,093 |
28,380,302 | https://en.wikipedia.org/wiki/LdrD-RdlD%20toxin-antitoxin%20system | RdlD RNA (regulator detected in LDR-D) is a family of small non-coding RNAs which repress the protein LdrD in a type I toxin-antitoxin system. It was discovered in Escherichia coli strain K-12 in a long direct repeat (LDR) named LDR-D. This locus encodes two products: a 35 amino acid peptide toxin (ldrD) and a 60 nucleotide RNA antitoxin. The 374 nt toxin mRNA has a half-life of around 30 minutes while rdlD RNA has a half-life of only 2 minutes. This is in keeping with other type I toxin-antitoxin systems.
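That half-life asymmetry is what arms the system: if transcription of the locus stops, the short-lived antitoxin decays away long before the toxin mRNA does. A toy exponential-decay comparison, a sketch using only the half-lives quoted above, illustrates the point:

```python
def fraction_remaining(t_minutes, half_life_minutes):
    """Fraction of an RNA pool left after t minutes of first-order decay."""
    return 0.5 ** (t_minutes / half_life_minutes)

t = 10  # minutes after transcription of the locus ceases
print(f"ldrD toxin mRNA remaining: {fraction_remaining(t, 30.0):.2f}")  # ~0.79
print(f"RdlD antitoxin remaining:  {fraction_remaining(t, 2.0):.3f}")   # ~0.031
```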
Northern blots showed that ldrD and rdlD are both transcribed and primer extension analysis showed the rdlD transcript is not translated.
Homologues exist in related Enterobacteriaceae such as Salmonella enterica and Shigella boydii. The Ldr peptide genes that have been discovered are thought to have evolved from a common ancestor.
LDR sequences
Four long direct repeat (LDR) sequences were identified during genetic sequencing of a 718kb segment of the E. coli genome. One of these, LDR-D was studied further in order to determine the physiological function of these regions. The genes encoded by the other three LDRs, ldrA, ldrB and ldrC were confirmed to have the same activity as ldrD.
Physiological effects of LdrD
The LdrD protein causes growth inhibition, loss of cell viability, nucleoid condensation and alteration in purine metabolism when overexpressed. Once growth arrest has been achieved, it is irreversible. Another potential effect of elevated LdrD could be reduced levels of cAMP in the cell. It inhibits both translation and transcription, which contributes significantly to reducing the cell's viability.
Suspected mechanism of inhibition
The precise mechanism by which RdlD inhibits LdrD is unknown; however, it has been shown that RdlD seems to regulate LdrD expression at the post-transcriptional level. The RdlD antisense RNA does not overlap with the translational initiation region of ldrD, as is common in type I toxin-antitoxin systems.
See also
Toxin-antitoxin system
Hok/sok system
RatA
References
Further reading
External links
Antisense RNA
RNA antitoxins
Toxins | LdrD-RdlD toxin-antitoxin system | Environmental_science | 487 |
37,768,872 | https://en.wikipedia.org/wiki/United%20States%20v.%20Solon | United States v. Solon, 596 F.3d 1206 (10th Cir. 2010), was a case in which Nathaniel Solon, a resident of Casper, Wyoming, was convicted for possession of child pornography. The case became known in the media for irregularities in the process, and suspicions (affirmed by the defendant) that the material was introduced by malware on the computer. There were other people accused of similar crimes, who were later acquitted, but Solon was never exonerated.
Background
Nathaniel Solon was charged by indictment on January 18, 2007, with possessing child pornography. On October 2, 2007, he pleaded guilty. When he came back for another hearing on January 8, 2008, however, Solon stated that he was an innocent man, and that the only reason he had pleaded guilty in the first place was that he believed he did not have the financial resources to hire an expert witness to investigate his defense. In light of his explanation, the court appointed a private attorney to represent him. With the court's approval, Solon was able to request an expert witness, with an investigation budget of $20,000 supplied by the court.
See also
Frameup
References
External links
Framed for Child Porn, a website collecting similar cases, inspired by this case.
American people convicted of child pornography offenses
United States computer case law
United States Court of Appeals for the Tenth Circuit cases
United States due process case law
United States Internet case law
2010 in United States case law
Child pornography law
Casper, Wyoming
Malware | United States v. Solon | Technology | 312 |
51,653 | https://en.wikipedia.org/wiki/Burali-Forti%20paradox | In set theory, a field of mathematics, the Burali-Forti paradox demonstrates that constructing "the set of all ordinal numbers" leads to a contradiction and therefore shows an antinomy in a system that allows its construction. It is named after Cesare Burali-Forti, who, in 1897, published a paper proving a theorem which, unknown to him, contradicted a previously proved result by Georg Cantor. Bertrand Russell subsequently noticed the contradiction, and when he published it in his 1903 book Principles of Mathematics, he stated that it had been suggested to him by Burali-Forti's paper, with the result that it came to be known by Burali-Forti's name.
Stated in terms of von Neumann ordinals
We will prove this by contradiction.
1. Let Ω be a set consisting of all ordinal numbers.
2. Ω is transitive because for every element x of Ω (which is an ordinal number and can be any ordinal number) and every element y of x (i.e. under the definition of Von Neumann ordinals, for every ordinal number y < x), we have that y is an element of Ω, because any ordinal number contains only ordinal numbers, by the definition of this ordinal construction.
3. Ω is well ordered by the membership relation because all its elements are also well ordered by this relation.
4. So, by steps 2 and 3, we have that Ω is an ordinal class and also, by step 1, an ordinal number, because all ordinal classes that are sets are also ordinal numbers.
5. This implies that Ω is an element of Ω.
6. Under the definition of Von Neumann ordinals, Ω < Ω is the same as Ω being an element of Ω. This latter statement is proven by step 5.
7. But no ordinal class is less than itself, including Ω, because of step 4 (Ω is an ordinal class); i.e. Ω ≮ Ω.
8. We have deduced two contradictory propositions (Ω < Ω and Ω ≮ Ω) from the sethood of Ω and, therefore, disproved that Ω is a set.
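The steps above compress into a single chain of implications, using the von Neumann convention that α < β means α ∈ β:

$$\Omega := \{\alpha \mid \alpha \text{ is an ordinal}\}\ \text{is a set} \;\Longrightarrow\; \Omega\ \text{is an ordinal} \;\Longrightarrow\; \Omega \in \Omega \;\Longrightarrow\; \Omega < \Omega,$$

contradicting the irreflexivity of the ordering of the ordinals.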
Stated more generally
The version of the paradox above is anachronistic, because it presupposes the definition of the ordinals due to John von Neumann, under which each ordinal is the set of all preceding ordinals, which was not known at the time the paradox was framed by Burali-Forti.
Here is an account with fewer presuppositions: suppose that we associate with each well-ordering an object called its order type in an unspecified way (the order types are the ordinal numbers). The order types (ordinal numbers) themselves are well-ordered in a natural way, and this well-ordering must have an order type Ω. It is easily shown in naïve set theory (and it remains true in ZFC but not in New Foundations) that the order type of all ordinal numbers less than a fixed α is α itself. So the order type of all ordinal numbers less than Ω is Ω itself. But this means that Ω, being the order type of a proper initial segment of the ordinals, is strictly less than the order type of all the ordinals; but the latter is Ω itself by definition. This is a contradiction.
If we use the von Neumann definition, under which each ordinal is identified as the set of all preceding ordinals, the paradox is unavoidable: the offending proposition that the order type of all ordinal numbers less than a fixed α is α itself must be true. The collection of von Neumann ordinals, like the collection in the Russell paradox, cannot be a set in any set theory with classical logic. But the collection of order types in New Foundations (defined as equivalence classes of well-orderings under similarity) is actually a set, and the paradox is avoided because the order type of the ordinals less than Ω turns out not to be Ω.
Resolutions of the paradox
Modern axioms for formal set theory such as ZF and ZFC circumvent this antinomy by not allowing the construction of sets using terms like "all sets with the property P", as is possible in naive set theory and as is possible with Gottlob Frege's axioms (specifically Basic Law V) in the "Grundgesetze der Arithmetik". Quine's system New Foundations (NF) uses a different solution. Rosser showed that in the original version of Quine's system "Mathematical Logic" (ML), an extension of New Foundations, it is possible to derive the Burali-Forti paradox, showing that this system was contradictory. Quine's revision of ML following Rosser's discovery does not suffer from this defect, and indeed was subsequently proved equiconsistent with NF by Hao Wang.
See also
Absolute infinite
References
Irving Copi (1958) "The Burali-Forti Paradox", Philosophy of Science 25(4): 281–286.
External links
Stanford Encyclopedia of Philosophy: "Paradoxes and Contemporary Logic"—by Andrea Cantini.
Ordinal numbers
Paradoxes of naive set theory | Burali-Forti paradox | Mathematics | 1,051 |
32,763,780 | https://en.wikipedia.org/wiki/Harish-Chandra%27s%20c-function | In mathematics, Harish-Chandra's c-function is a function related to the intertwining operator between two principal series representations, that appears in the Plancherel measure for semisimple Lie groups. Harish-Chandra introduced a special case of it, defined in terms of the asymptotic behavior of a zonal spherical function of a Lie group, and later introduced a more general c-function called Harish-Chandra's (generalized) C-function. Gindikin and Karpelevich introduced the Gindikin–Karpelevich formula, a product formula for Harish-Chandra's c-function.
Gindikin–Karpelevich formula
The c-function has a generalization cw(λ) depending on an element w of the Weyl group.
The unique element s0 of greatest length is the unique element that carries the Weyl chamber $\mathfrak{a}^+$ onto $-\mathfrak{a}^+$. By Harish-Chandra's integral formula, cs0 is Harish-Chandra's c-function.
The c-functions are in general defined by the equation
$$A(s,\lambda)\,\xi_0 = c_s(\lambda)\,\xi_0,$$
where ξ0 is the constant function 1 in L2(K/M). The cocycle property of the intertwining operators implies a similar multiplicative property for the c-functions:
$$c_{s_1 s_2}(\lambda) = c_{s_1}(s_2\lambda)\,c_{s_2}(\lambda),$$
provided
$$\ell(s_1 s_2) = \ell(s_1) + \ell(s_2),$$
where ℓ denotes the length of an element of the Weyl group.
This reduces the computation of cs to the case when s = sα, the reflection in a (simple) root α, the so-called "rank-one reduction". In fact the integral involves only the closed connected subgroup Gα corresponding to the Lie subalgebra generated by the root spaces $\mathfrak{g}_{\pm\alpha}$, where α lies in Σ0+. Then Gα is a real semisimple Lie group with real rank one, i.e. dim Aα = 1, and cs is just the Harish-Chandra c-function of Gα. In this case the c-function can be computed directly and is given by a ratio of Gamma functions evaluated at i(λ, α0), where α0 = α/⟨α,α⟩ and the multiplicities of the roots α and 2α enter as parameters.
The general Gindikin–Karpelevich formula for c(λ) is an immediate consequence of this formula and the multiplicative properties of cs(λ): it expresses c(λ) as a product, over the positive indivisible roots, of such ratios of Gamma functions, with the constant c0 chosen so that c(–iρ) = 1.
Plancherel measure
The c-function appears in the Plancherel theorem for spherical functions, and the Plancherel measure is |c(λ)|−2 times Lebesgue measure.
p-adic Lie groups
There is a similar c-function for p-adic Lie groups, for which an analogous product formula was later found.
References
Lie groups | Harish-Chandra's c-function | Mathematics | 522 |
73,792,270 | https://en.wikipedia.org/wiki/Global%20Digital%20Compact | The Global Digital Compact is an initiative proposed in the United Nations Secretary-General António Guterres's Common Agenda. The objective of this compact is to ensure that digital technologies are used responsibly and for the benefit of all, while addressing the digital divide and fostering a safe and inclusive digital environment. The Global Digital Compact is part of the Pact for the Future, which was discussed and adopted at the UN Summit of the Future in September 2024.
Background and Process
Following consultations with over 1 million voices from around the world, the UN Member States adopted a declaration that emphasized the importance of improving digital cooperation. In response, the Secretary-General's report, "Our Common Agenda," proposed a Summit of the Future, with a technology track leading to the Global Digital Compact.
On 17 January 2023, the President of the UN General Assembly appointed Rwanda and Sweden as Co-facilitators to lead the intergovernmental process on the Global Digital Compact. A road map for the process was published on 16 January 2023.
As part of the consultative process, the United Nations invites input from individuals, groups, associations, organizations, and entities to help shape the Global Digital Compact. The input provided will inform deliberations of the Global Digital Compact, which will take place in 2024 as part of the Summit of the Future.
Key Aspects
The Global Digital Compact aims to bring together governments, private sector entities, civil society organizations, and other stakeholders to work collaboratively on a set of shared principles and commitments. Some key aspects of the Global Digital Compact include:
Connectivity: Ensuring that all people, including schools, have access to the internet and digital tools for connectivity and socio-economic prosperity.
Internet Fragmentation: Preventing the division and fragmentation of the internet to maintain a unified global digital space.
Data Protection: Providing individuals with options for how their data is used and ensuring their privacy is respected.
Human Rights Online: Applying human rights principles in the digital sphere, including freedom of expression, privacy, and protection from discrimination and misleading content.
Artificial Intelligence Regulation: Promoting the ethical development and use of artificial intelligence in alignment with shared global values.
Digital Commons: Recognizing digital technologies as a global public good and encouraging their development and use for the benefit of all.
Relation to Other Initiatives
The Global Digital Compact is related to various other international efforts, such as the Sustainable Development Goals (SDGs), the UN Secretary-General's Roadmap on Digital Cooperation, and the Partner2Connect Digital Coalition.
External links
UNGA President letter: designation of co-facilitators
Background Note
References
United Nations
Digital technology | Global Digital Compact | Technology | 530 |
69,496,724 | https://en.wikipedia.org/wiki/Lutetium%20phosphide | Lutetium phosphide is an inorganic compound of lutetium and phosphorus with the chemical formula LuP. The compound forms dark crystals and does not dissolve in water.
Synthesis
Heating powdered lutetium and red phosphorus in an inert atmosphere or vacuum:
4Lu + P4 -> 4LuP
It can also be formed in the reaction of lutetium and phosphine.
Physical properties
Lutetium phosphide forms dark cubic crystals of the rock-salt (NaCl) type, space group Fm3m, cell parameters a = 0.5533 nm, Z = 4.
The compound is stable in air, insoluble in water, and reacts readily with nitric acid.
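As an aside, the quoted cell parameters determine the X-ray density. A short sketch (standard atomic masses assumed; the result is a calculated, not a measured, value):

```python
# X-ray density of LuP from the cubic cell data above.
N_A = 6.02214e23           # Avogadro constant, 1/mol
a_cm = 0.5533e-7           # cell edge: 0.5533 nm expressed in cm
Z = 4                      # formula units per cell (rock-salt structure)
M_LuP = 174.967 + 30.974   # molar mass of LuP, g/mol

density = Z * M_LuP / (N_A * a_cm**3)
print(f"{density:.2f} g/cm^3")  # ~8.08 g/cm^3
```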
Uses
The compound is a semiconductor used in high power, high-frequency applications, and in laser diodes.
Also used in gamma radiation detectors due to its ability to absorb radiation.
References
Phosphides
Lutetium compounds
Semiconductors
Rock salt crystal structure | Lutetium phosphide | Physics,Chemistry,Materials_science,Engineering | 182 |
13,152,539 | https://en.wikipedia.org/wiki/SAPO%20%28computer%29 | The SAPO (short for Samočinný počítač, “automatic computer”) was the first Czechoslovak computer. It operated in the years 1957–1960 in Výzkumný ústav matematických strojů, part of the Czechoslovak Academy of Sciences. The computer was the first fault-tolerant computer – it had three parallel arithmetic logic units, which decided on the correct result by voting, an example of triple modular redundancy (if all three results were different, the operation was repeated).
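The voting scheme is classic triple modular redundancy with a retry on total disagreement. The following Python sketch only illustrates the logic described above; the flaky "ALU" function is hypothetical:

```python
import random

def tmr_execute(op, max_attempts=10):
    """Run `op` on three redundant units and vote on the result.

    A result returned by at least two of the three "units" wins.
    If all three disagree, the whole operation is repeated, which is
    the behaviour described for SAPO.
    """
    for _ in range(max_attempts):
        results = [op(), op(), op()]
        for r in results:
            if results.count(r) >= 2:  # 2-of-3 majority vote
                return r
        # all three results differ -> retry the operation
    raise RuntimeError("no majority after repeated attempts")

def flaky_add():
    """Stands in for one ALU computation that occasionally misfires."""
    return 2 + 3 if random.random() > 0.2 else random.randint(0, 100)

print(tmr_execute(flaky_add))  # prints 5, barring extreme bad luck
```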
SAPO was designed between 1950 and 1956 by a team led by Czechoslovak cybernetics pioneer Antonín Svoboda. Svoboda had gained experience building computing machinery in the United States, where he worked at MIT until 1946. It was an electromechanical design with 7,000 relays and 400 vacuum tubes, and a magnetic drum memory with a capacity of 1024 32-bit words. Each instruction had 5 operands (addresses): 2 for the arithmetic operands, one for the result, and the addresses of the next instruction in case of a positive or negative result. It operated on binary floating point numbers.
In 1960, after a spark from one of the relays ignited the greasing oil and the whole relay unit burnt down, it was decided not to repair the computer because of its obsolescence.
See also
EPOS (computer)
References
Further reading
External links
Beginnings of computer design in Czechoslovakia (in Czech), Google translation
Google translation
Electro-mechanical computers
One-of-a-kind computers | SAPO (computer) | Technology | 308 |
11,784,110 | https://en.wikipedia.org/wiki/Linphone | Linphone (contraction of Linux phone) is a free voice over IP softphone, SIP client and service. It may be used for audio and video direct calls and calls through any VoIP softswitch or IP-PBX. Linphone also provides the possibility to exchange instant messages. It has a simple multilingual interface based on Qt for its GUI, and can also be run as a console-mode application on Linux.
The SIP service and the software can be used together, but also independently: it is possible to connect the Linphone service to any SIP client (software or hardware), and to use the Linphone software with any SIP service.
The softphone is currently developed by Belledonne Communications in France. Linphone was initially developed for Linux but now supports many additional platforms including Microsoft Windows, macOS, and mobile phones running Windows Phone, iOS or Android. It supports ZRTP for end-to-end encrypted voice and video communication.
Linphone is licensed under the GNU GPL-3.0-or-later and supports IPv6. Linphone can also be used behind network address translator (NAT), meaning it can run behind home routers. It is compatible with telephony by using an Internet telephony service provider (ITSP).
Features
Linphone hosts a free SIP service on its website.
The Linphone client provides access to following functionalities:
Multi-account work
Registration on any SIP-service and line status management
Contact list with status of other users
Conference call initiation
Combination of message history and call details
DTMF signal sending (SIP INFO / RFC 2833)
File sharing
Additional plugins
Open standards support
Protocols
SIP according to RFC 3261 (UDP, TCP and TLS)
SIP SIMPLE
NAT traversal by TURN and ICE
RTP/RTCP
Media-security: SRTP and ZRTP
Audio codecs
Audio codec support: Speex (narrow band and wideband), G.711 (μ-law, A-law), GSM, Opus, and iLBC (through an optional plugin)
Video codecs
Video codec support: MPEG-4, Theora, VP8 and H.264 (with a plugin based on x264), with resolutions from QCIF (176×144) to SVGA (800×600) provided that network bandwidth and CPU power are sufficient.
Gallery
See also
Comparison of VoIP software
List of SIP software
Opportunistic encryption
References
External links
Cross-platform software
Android (operating system) software
Free and open-source Android software
Communication software
Free VoIP software
Instant messaging clients
Instant messaging clients for Linux
IOS software
MacOS instant messaging clients
Videotelephony
VoIP software
Windows instant messaging clients
BlackBerry software | Linphone | Technology | 565 |
65,575,944 | https://en.wikipedia.org/wiki/Ryegrass%20mosaic%20virus | Ryegrass mosaic virus (RMV) is a virus in the genus Rymovirus. As the name suggests its hosts include ryegrass, but also other relatives in the family Poaceae. RMV's genome was sequenced in 2015.
References
Potyviridae
Viral plant pathogens and diseases
Monocot diseases | Ryegrass mosaic virus | Biology | 65 |
7,131,295 | https://en.wikipedia.org/wiki/Knockdown%20texture | Knockdown texture is a drywall finishing style. It is a mottled texture: it shows more variation than a simple flat finish, but less than orange peel or popcorn texture.
Knockdown texture is created by watering down joint compound to a soupy consistency. A trowel is then used to apply the joint compound. The joint compound will begin to form stalactites as it dries. The trowel is then run over the surface of the drywall, knocking off the stalactites and leaving the mottled finish.
A much more common, and faster technique is to apply the texture mud (which is slightly different from joint compound, in that it has less shrinkage upon drying) with a texture machine – a compressor and a texture spray hopper which sprays mud instead of paint. This applies what is referred to as a splatter coat. The use of a compressor allows this to be applied to walls as well as ceilings. When knocking this down, the mud is allowed to dry for a short period, then skimmed with a knockdown knife – a large, usually plastic (to reduce noticeable edges) knife.
Knockdown texture reduces construction costs because it conceals imperfections in the drywall that would otherwise require additional, more expensive sanding and priming stages by drywall installers.
Construction | Knockdown texture | Engineering | 273 |
165,384 | https://en.wikipedia.org/wiki/Curie%20temperature | In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism is lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom that originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction.
Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law.
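As a numerical illustration of the Curie–Weiss law just mentioned, χ = C/(T − TC) for T > TC. In the sketch below, the Curie constant is a placeholder value, and TC defaults to roughly 1043 K, the approximate Curie temperature of iron:

```python
def curie_weiss_susceptibility(T, C=1.0, T_c=1043.0):
    """Magnetic susceptibility chi = C / (T - T_c), valid only above T_c.

    C is the material's Curie constant (placeholder value here);
    T_c defaults to roughly the Curie temperature of iron, in kelvin.
    """
    if T <= T_c:
        raise ValueError("Curie-Weiss law applies only above the Curie temperature")
    return C / (T - T_c)

# chi diverges as T approaches T_c from above and fades at high temperature:
for T in (1100.0, 1200.0, 1500.0, 2000.0):
    print(T, round(curie_weiss_susceptibility(T), 5))
```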
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature.
Curie temperatures of materials
History
That heating destroys magnetism was already described in De Magnete (1600): "Iron filings, after being heated for a long time, are attracted by a loadstone, yet not so strongly or from so great a distance as when not heated. A loadstone loses some of its virtue by too great a heat; for its humour is set free, whence its peculiar nature is marred." (Book 2, Chapter 23)
Magnetic moments
At the atomic level, there are two contributors to the magnetic moment, the electron magnetic moment and the nuclear magnetic moment. Of these two terms, the electron magnetic moment dominates, and the nuclear magnetic moment is insignificant. At higher temperatures, electrons have higher thermal energy. This has a randomizing effect on aligned magnetic domains, leading to the disruption of order, and the phenomena of the Curie point.
Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic materials have different intrinsic magnetic moment structures. At a material's specific Curie temperature (TC), these properties change. The transition from antiferromagnetic to paramagnetic (or vice versa) occurs at the Néel temperature (TN), which is analogous to the Curie temperature.
Materials with magnetic moments that change properties at the Curie temperature
Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic structures are made up of intrinsic magnetic moments. If all the electrons within the structure are paired, these moments cancel out due to their opposite spins and angular momenta. Thus, even with an applied magnetic field, such materials (diamagnets) have different properties and no Curie temperature.
Paramagnetic
A material is paramagnetic only above its Curie temperature. Paramagnetic materials are non-magnetic when a magnetic field is absent and magnetic when a magnetic field is applied. When a magnetic field is absent, the material has disordered magnetic moments; that is, the magnetic moments are asymmetrical and not aligned. When a magnetic field is present, the magnetic moments are temporarily realigned parallel to the applied field; the magnetic moments are symmetrical and aligned. The magnetic moments being aligned in the same direction are what causes an induced magnetic field.
For paramagnetism, this response to an applied magnetic field is positive and is known as magnetic susceptibility. The magnetic susceptibility only applies above the Curie temperature for disordered states.
Sources of paramagnetism (materials which have Curie temperatures) include:
All atoms that have unpaired electrons;
Atoms that have inner shells that are incomplete in electrons;
Free radicals;
Metals.
Above the Curie temperature, the atoms are excited, and the spin orientations become randomized but can be realigned by an applied field, i.e., the material becomes paramagnetic. Below the Curie temperature, the intrinsic structure has undergone a phase transition, the atoms are ordered, and the material is ferromagnetic. The paramagnetic materials' induced magnetic fields are very weak compared with ferromagnetic materials' magnetic fields.
Ferromagnetic
Materials are only ferromagnetic below their corresponding Curie temperatures. Ferromagnetic materials are magnetic in the absence of an applied magnetic field.
When a magnetic field is absent the material has spontaneous magnetization which is a result of the ordered magnetic moments; that is, for ferromagnetism, the atoms are symmetrical and aligned in the same direction creating a permanent magnetic field.
The magnetic interactions are held together by exchange interactions; otherwise thermal disorder would overcome the weak interactions of magnetic moments. The exchange interaction has a zero probability of parallel electrons occupying the same point in time, implying a preferred parallel alignment in the material. The Boltzmann factor contributes heavily as it prefers interacting particles to be aligned in the same direction. This causes ferromagnets to have strong magnetic fields and high Curie temperatures of around 1,000 K.
Below the Curie temperature, the atoms are aligned and parallel, causing spontaneous magnetism; the material is ferromagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.
Ferrimagnetic
Materials are only ferrimagnetic below their corresponding Curie temperature. Ferrimagnetic materials are magnetic in the absence of an applied magnetic field and are made up of two different ions.
When a magnetic field is absent the material has a spontaneous magnetism which is the result of ordered magnetic moments; that is, for ferrimagnetism one ion's magnetic moments are aligned facing in one direction with certain magnitude and the other ion's magnetic moments are aligned facing in the opposite direction with a different magnitude. As the magnetic moments are of different magnitudes in opposite directions there is still a spontaneous magnetism and a magnetic field is present.
Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions. The orientations of the moments, however, are anti-parallel, which results in a net magnetic moment given by the difference between the opposed moments.
Below the Curie temperature the atoms of each ion are aligned anti-parallel with different magnitudes of magnetic moment, causing a spontaneous magnetism; the material is ferrimagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.
Antiferromagnetic and the Néel temperature
Materials are only antiferromagnetic below their corresponding Néel temperature or magnetic ordering temperature, TN. This is similar to the Curie temperature as above the Néel Temperature the material undergoes a phase transition and becomes paramagnetic. That is, the thermal energy becomes large enough to destroy the microscopic magnetic ordering within the material. It is named after Louis Néel (1904–2000), who received the 1970 Nobel Prize in Physics for his work in the area.
The material has equal magnetic moments aligned in opposite directions resulting in a zero magnetic moment and a net magnetism of zero at all temperatures below the Néel temperature. Antiferromagnetic materials are weakly magnetic in the absence or presence of an applied magnetic field.
Similar to ferromagnetic materials the magnetic interactions are held together by exchange interactions preventing thermal disorder from overcoming the weak interactions of magnetic moments. When disorder occurs it is at the Néel temperature.
Curie–Weiss law
The Curie–Weiss law is an adapted version of Curie's law.
The Curie–Weiss law is a simple model derived from a mean-field approximation; this means it works well for material temperatures T much greater than the corresponding Curie temperature TC, i.e. T ≫ TC. It fails, however, to describe the magnetic susceptibility χ in the immediate vicinity of the Curie point because of correlations in the fluctuations of neighboring magnetic moments.
Neither Curie's law nor the Curie–Weiss law holds for T < TC.
Curie's law for a paramagnetic material:

χ = C / T

The Curie constant C is defined as

C = μ0 μB² N g² J(J + 1) / (3 kB)

where N is the number of magnetic atoms (or molecules) per unit volume, g is the Landé g-factor, μB is the Bohr magneton, J is the angular momentum quantum number, and kB is the Boltzmann constant. The Curie–Weiss law is then derived from Curie's law to be:

χ = C / (T − TC)

where:

TC = Cλ, and λ is the Weiss molecular field constant.
For full derivation see Curie–Weiss law.
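As a minimal numerical sketch of the Curie–Weiss law in Python (the Curie constant C = 1 is an illustrative choice, TC = 1043 K is chosen near iron's Curie temperature, and the function name is illustrative):

# Minimal sketch of the Curie-Weiss law, chi = C / (T - Tc), valid only for T >> Tc.
# C = 1 and Tc = 1043 K (near iron's Curie temperature) are illustrative choices.
def curie_weiss_susceptibility(T, C=1.0, Tc=1043.0):
    if T <= Tc:
        raise ValueError("Curie-Weiss law applies only above the Curie temperature")
    return C / (T - Tc)

for T in (1100.0, 1500.0, 2000.0):
    print("T = %6.0f K  ->  chi = %.5f" % (T, curie_weiss_susceptibility(T)))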
Physics
Approaching Curie temperature from above
As the Curie–Weiss law is an approximation, a more accurate model is needed when the temperature T approaches the material's Curie temperature TC.
Magnetic susceptibility occurs above the Curie temperature.
An accurate model of critical behaviour for magnetic susceptibility with critical exponent γ:

χ ∝ 1 / (T − TC)^γ

The critical exponent differs between materials, and for the mean-field model it is taken as γ = 1.
As temperature is inversely proportional to magnetic susceptibility, when T approaches TC the denominator tends to zero and the magnetic susceptibility approaches infinity, allowing magnetism to occur. This is a spontaneous magnetism which is a property of ferromagnetic and ferrimagnetic materials.
Approaching Curie temperature from below
Magnetism depends on temperature and spontaneous magnetism occurs below the Curie temperature. An accurate model of critical behaviour for spontaneous magnetism with critical exponent β:

M ∝ (TC − T)^β

The critical exponent differs between materials; for the mean-field model it is taken as β = 1/2, valid for T < TC.
The spontaneous magnetism approaches zero as the temperature increases towards the material's Curie temperature.
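The two power laws above can be sketched numerically as follows (a Python sketch with prefactors set to 1 for illustration; real materials have material-specific amplitudes):

# Mean-field critical behaviour near Tc (illustrative prefactors of 1):
#   above Tc: chi ~ (T - Tc)**(-gamma), gamma = 1
#   below Tc: M   ~ (Tc - T)**(+beta),  beta = 1/2
def susceptibility_above(T, Tc=1043.0, gamma=1.0):
    return (T - Tc) ** (-gamma)          # diverges as T -> Tc from above

def magnetization_below(T, Tc=1043.0, beta=0.5):
    return (Tc - T) ** beta              # vanishes as T -> Tc from below

print(susceptibility_above(1100.0))
print(magnetization_below(1000.0))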
Approaching absolute zero (0 kelvin)
The spontaneous magnetism, occurring in ferromagnetic, ferrimagnetic, and antiferromagnetic materials, approaches zero as the temperature increases towards the material's Curie temperature. Spontaneous magnetism is at its maximum as the temperature approaches 0 K. That is, the magnetic moments are completely aligned and at their strongest magnitude of magnetism due to lack of thermal disturbance.
In paramagnetic materials thermal energy is sufficient to overcome the ordered alignments. As the temperature approaches 0 K, the entropy decreases to zero, that is, the disorder decreases and the material becomes ordered. This occurs without the presence of an applied magnetic field and obeys the third law of thermodynamics.
Both Curie's law and the Curie–Weiss law fail as the temperature approaches 0 K. This is because they depend on the magnetic susceptibility, which only applies when the state is disordered.
Gadolinium sulfate continues to satisfy Curie's law at 1 K. Between 0 and 1 K the law fails to hold and a sudden change in the intrinsic structure occurs at the Curie temperature.
Ising model of phase transitions
The Ising model is mathematically based and can analyse the critical points of phase transitions in ferromagnetic order due to spins of electrons having magnitudes of ±1/2. The spins interact with their neighbouring dipole electrons in the structure, and here the Ising model can predict their behaviour with each other.
This model is important for solving and understanding the concepts of phase transitions and hence solving the Curie temperature. As a result, many different dependencies that affect the Curie temperature can be analysed.
For example, the surface and bulk properties depend on the alignment and magnitude of spins and the Ising model can determine the effects of magnetism in this system.
One should note that in 1D the Curie (critical) temperature for a magnetic order phase transition is found to be at zero temperature, i.e. the magnetic order takes over only at T = 0. In 2D a finite critical temperature exists; for the isotropic square-lattice Ising model with coupling J, Onsager's exact solution gives it as the solution of sinh(2J / (kB TC)) = 1, i.e. kB TC = 2J / ln(1 + √2) ≈ 2.269 J.
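A minimal Metropolis Monte Carlo sketch of the 2D Ising model illustrates this behaviour in Python (J = 1 and kB = 1; the lattice size, sweep count, seed, and temperatures are illustrative choices, not part of the model itself):

import math
import random

# Metropolis sketch of the 2D Ising model on an L x L square lattice with
# periodic boundaries. Below Tc = 2 / ln(1 + sqrt(2)) ~ 2.269 the mean
# magnetization stays near 1; above Tc it drops toward 0.
def simulate(L=16, T=2.0, sweeps=1000, seed=0):
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]      # start from the fully ordered state
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb            # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

Tc = 2 / math.log(1 + math.sqrt(2))
for T in (1.5, Tc, 3.0):
    print("T = %.3f  |m| ~ %.3f" % (T, simulate(T=T)))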
Weiss domains and surface and bulk Curie temperatures
Material structures consist of intrinsic magnetic moments which are separated into domains called Weiss domains. This can result in ferromagnetic materials having no spontaneous magnetism, as domains could potentially balance each other out. The position of particles can therefore have different orientations around the surface than the main part (bulk) of the material. This property directly affects the Curie temperature, as there can be a bulk Curie temperature and a different surface Curie temperature for a material.
This allows for the surface Curie temperature to be ferromagnetic above the bulk Curie temperature when the main state is disordered, i.e. ordered and disordered states occur simultaneously.
The surface and bulk properties can be predicted by the Ising model and electron capture spectroscopy can be used to detect the electron spins and hence the magnetic moments on the surface of the material. An average total magnetism is taken from the bulk and surface temperatures to calculate the Curie temperature from the material, noting the bulk contributes more.
The angular momentum of an electron is either +ħ/2 or −ħ/2 due to it having a spin of 1/2, which gives a specific size of magnetic moment to the electron: the Bohr magneton. Electrons orbiting around the nucleus in a current loop create a magnetic field which depends on the Bohr magneton and the magnetic quantum number. Therefore, the spin and orbital angular momenta both contribute to the magnetic moment and affect each other. Spin angular momentum contributes about twice as much to the magnetic moment as orbital angular momentum.
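As a small numerical illustration of these quantities (CODATA constant values; the g-factors shown are standard textbook values, not taken from this article):

# The Bohr magneton mu_B = e * hbar / (2 * m_e), the natural unit of the
# electron magnetic moment (SI units, CODATA values).
e = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
mu_B = e * hbar / (2 * m_e)
print("Bohr magneton: %.4e J/T" % mu_B)      # ~9.274e-24 J/T
# Spin contributes roughly twice the moment per unit angular momentum (g_s ~ 2)
# compared with orbital motion (g_l = 1):
g_s, g_l = 2.002319, 1.0
print("spin moment   :", g_s * mu_B * 0.5)   # |m_s| = 1/2
print("orbital moment:", g_l * mu_B * 1.0)   # example: m_l = 1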
For terbium, which is a rare-earth metal with a high orbital angular momentum, the magnetic moment is strong enough to affect the order above its bulk temperatures. It is said to have a high anisotropy on the surface, that is, it is highly directed in one orientation. It remains ferromagnetic on its surface above its Curie temperature (219 K) while its bulk becomes antiferromagnetic, and at higher temperatures its surface remains antiferromagnetic above its bulk Néel temperature (230 K) before becoming completely disordered and paramagnetic with increasing temperature. The anisotropy in the bulk is different from its surface anisotropy just above these phase changes, as the magnetic moments will be ordered differently or disordered as in paramagnetic materials.
Changing a material's Curie temperature
Composite materials
Composite materials, that is, materials composed of other materials with different properties, can change the Curie temperature. For example, a composite which has silver in it can create spaces for oxygen molecules in bonding, which decreases the Curie temperature, as the crystal lattice will not be as compact.
The alignment of magnetic moments in the composite material affects the Curie temperature. If the material's moments are parallel with each other, the Curie temperature will increase and if perpendicular the Curie temperature will decrease as either more or less thermal energy will be needed to destroy the alignments.
Preparing composite materials through different temperatures can result in different final compositions which will have different Curie temperatures. Doping a material can also affect its Curie temperature.
The density of nanocomposite materials changes the Curie temperature. Nanocomposites are compact structures on a nano-scale. The structure is built up of regions of high and low bulk Curie temperature; however, it will have only one mean-field Curie temperature. A higher density of lower bulk temperatures results in a lower mean-field Curie temperature, and a higher density of higher bulk temperatures significantly increases the mean-field Curie temperature. In more than one dimension the Curie temperature begins to increase, as the magnetic moments will need more thermal energy to overcome the ordered structure.
Particle size
The size of particles in a material's crystal lattice changes the Curie temperature. Due to the small size of particles (nanoparticles) the fluctuations of electron spins become more prominent, which results in the Curie temperature drastically decreasing when the size of particles decreases, as the fluctuations cause disorder. The size of a particle also affects the anisotropy causing alignment to become less stable and thus lead to disorder in magnetic moments.
The extreme of this is superparamagnetism which only occurs in small ferromagnetic particles. In this phenomenon, fluctuations are very influential causing magnetic moments to change direction randomly and thus create disorder.
The Curie temperature of nanoparticles is also affected by the crystal lattice structure: body-centred cubic (bcc), face-centred cubic (fcc), and hexagonal (hcp) structures all have different Curie temperatures due to magnetic moments reacting to their neighbouring electron spins. fcc and hcp have tighter structures and as a result have higher Curie temperatures than bcc, as the magnetic moments have stronger effects when closer together. This is described by the coordination number, which is the number of nearest neighbouring particles in a structure. The coordination number is lower at the surface of a material than in the bulk, which leads to the surface becoming less significant as the temperature approaches the Curie temperature. In smaller systems the coordination number for the surface is more significant and the magnetic moments have a stronger effect on the system.
Although fluctuations in particles can be minuscule, they are heavily dependent on the structure of crystal lattices, as particles react with their nearest neighbours. Fluctuations are also affected by the exchange interaction, as parallel-facing magnetic moments are favoured and therefore suffer less disturbance and disorder; a tighter structure thus favours stronger magnetism and a higher Curie temperature.
Pressure
Pressure changes a material's Curie temperature. Increasing pressure on the crystal lattice decreases the volume of the system. Pressure directly affects the kinetic energy in particles as movement increases causing the vibrations to disrupt the order of magnetic moments. This is similar to temperature as it also increases the kinetic energy of particles and destroys the order of magnetic moments and magnetism.
Pressure also affects the density of states (DOS). Here the DOS decreases causing the number of electrons available to the system to decrease. This leads to the number of magnetic moments decreasing as they depend on electron spins. It would be expected because of this that the Curie temperature would decrease; however, it increases. This is the result of the exchange interaction. The exchange interaction favours the aligned parallel magnetic moments due to electrons being unable to occupy the same space in time and as this is increased due to the volume decreasing the Curie temperature increases with pressure. The Curie temperature is made up of a combination of dependencies on kinetic energy and the DOS.
The concentration of particles also affects the Curie temperature when pressure is being applied and can result in a decrease in Curie temperature when the concentration is above a certain percent.
Orbital ordering
Orbital ordering changes the Curie temperature of a material. Orbital ordering can be controlled through applied strains. The orbital ordering relates to the wavefunction of a single electron or of paired electrons inside the material. Having control over the probability of where the electron will be allows the Curie temperature to be altered. For example, the delocalised electrons can be moved onto the same plane by applied strains within the crystal lattice.
The Curie temperature is seen to increase greatly due to electrons being packed together in the same plane; they are forced to align due to the exchange interaction, which increases the strength of the magnetic moments and prevents thermal disorder at lower temperatures.
Curie temperature in ferroelectric materials
In analogy to ferromagnetic and paramagnetic materials, the term Curie temperature (TC) is also applied to the temperature at which a ferroelectric material transitions to being paraelectric. Hence, TC is the temperature where ferroelectric materials lose their spontaneous polarisation as a first- or second-order phase change occurs. In the case of a second-order transition, the Curie–Weiss temperature T0, which defines the maximum of the dielectric constant, is equal to the Curie temperature. However, the Curie temperature can be 10 K higher than T0 in the case of a first-order transition.
Ferroelectric and dielectric
Materials are only ferroelectric below their corresponding transition temperature TC. Ferroelectric materials are all pyroelectric and therefore have a spontaneous electric polarisation as the structures are unsymmetrical.
Ferroelectric materials' polarization is subject to hysteresis (Figure 4); that is, they are dependent on their past state as well as their current state. As an electric field is applied the dipoles are forced to align and polarisation is created; when the electric field is removed polarisation remains. The hysteresis loop depends on temperature, and as a result, as the temperature is increased and reaches TC, the two curves become one curve, as shown in the dielectric polarisation (Figure 5).
Relative permittivity
A modified version of the Curie–Weiss law applies to the dielectric constant, also known as the relative permittivity:

ε = ε0 + C / (T − T0)
Applications
A heat-induced ferromagnetic-paramagnetic transition is used in magneto-optical storage media for erasing and writing of new data. Famous examples include the Sony Minidisc format as well as the now-obsolete CD-MO format. Curie point electro-magnets have been proposed and tested for actuation mechanisms in passive safety systems of fast breeder reactors, where control rods are dropped into the reactor core if the actuation mechanism heats up beyond the material's Curie point. Other uses include temperature control in soldering irons and stabilizing the magnetic field of tachometer generators against temperature variation.
See also
Notes
References
External links
Ferromagnetic Curie Point. Video by Walter Lewin, M.I.T.
Critical phenomena
Phase transitions
Temperature
Pierre Curie | Curie temperature | Physics,Chemistry,Materials_science,Mathematics | 4,447 |
69,816,853 | https://en.wikipedia.org/wiki/CoRoT-24c | CoRoT-24c is a transiting exoplanet found by the CoRoT space telescope in 2011 and announced in 2014. Along with CoRoT-24b, it is one of two exoplanets orbiting CoRoT-24, making it the first multiple transiting system detected by the telescope. It is a hot Neptune orbiting at a distance of 0.098 AU from its host star.
References
Transiting exoplanets
Exoplanets discovered in 2011
24b
Hot Neptunes | CoRoT-24c | Astronomy | 104 |
23,643 | https://en.wikipedia.org/wiki/Propane | Propane () is a three-carbon alkane with the molecular formula . It is a gas at standard temperature and pressure, but compressible to a transportable liquid. A by-product of natural gas processing and petroleum refining, it is often a constituent of liquefied petroleum gas (LPG), which is commonly used as a fuel in domestic and industrial applications and in low-emissions public transportation; other constituents of LPG may include propylene, butane, butylene, butadiene, and isobutylene. Discovered in 1857 by the French chemist Marcellin Berthelot, it became commercially available in the US by 1911. Propane has lower volumetric energy density than gasoline or coal, but has higher gravimetric energy density than them and burns more cleanly.
Propane gas has become a popular choice for barbecues and portable stoves because its low −42 °C boiling point makes it vaporise inside pressurised liquid containers (it exists in two phases, vapor above liquid). It retains its ability to vaporise even in cold weather, making it better-suited for outdoor use in cold climates than alternatives with higher boiling points like butane. LPG powers buses, forklifts, automobiles, outboard boat motors, and ice resurfacing machines, and is used for heat and cooking in recreational vehicles and campers. Propane is also becoming popular as a replacement refrigerant (R-290) for heat pumps, as it offers greater efficiency than the current refrigerants R-410A and R-32, higher-temperature heat output, and less damage to the atmosphere from escaped gas, at the expense of high flammability.
History
Propane was first synthesized by the French chemist Marcellin Berthelot in 1857 during his researches on hydrogenation. Berthelot made propane by heating propylene dibromide (C3H6Br2) with potassium iodide and water. Propane was found dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864. Walter O. Snelling of the U.S. Bureau of Mines highlighted it as a volatile component in gasoline in 1910, which marked the "birth of the propane industry" in the United States. The volatility of these lighter hydrocarbons caused them to be known as "wild" because of the high vapor pressures of unrefined gasoline. On March 31, 1912, The New York Times reported on Snelling's work with liquefied gas, saying "a steel bottle will carry enough gas to light an ordinary home for three weeks".
It was during this time that Snelling—in cooperation with Frank P. Peterson, Chester Kerr, and Arthur Kerr—developed ways to liquefy the LP gases during the refining of gasoline. Together, they established American Gasol Co., the first commercial marketer of propane. Snelling had produced relatively pure propane by 1911, and on March 25, 1913, his method of processing and producing LP gases was issued patent #1,056,845. A separate method of producing LP gas through compression was developed by Frank Peterson and its patent was granted on July 2, 1912.
The 1920s saw increased production of LP gases, with the first year of recorded production totaling in 1922. In 1927, annual marketed LP gas production reached , and by 1935, the annual sales of LP gas had reached . Major industry developments in the 1930s included the introduction of railroad tank car transport, gas odorization, and the construction of local bottle-filling plants. The year 1945 marked the first year that annual LP gas sales reached a billion gallons. By 1947, 62% of all U.S. homes had been equipped with either natural gas or propane for cooking.
In 1950, 1,000 propane-fueled buses were ordered by the Chicago Transit Authority, and by 1958, sales in the U.S. had reached annually. In 2004, it was reported to be a growing $8-billion to $10-billion industry with over of propane being used annually in the U.S.
During the COVID-19 pandemic, propane shortages were reported in the United States due to increased demand.
Etymology
The "prop-" root found in "propane" and names of other compounds with three-carbon chains was derived from "propionic acid", which in turn was named after the Greek words protos (meaning first) and pion (fat), as it was the "first" member of the series of fatty acids.
Properties and reactions
Propane is a colorless, odorless gas. Ethyl mercaptan is added as an odorant as a safety precaution; its smell is commonly described as "rotten eggs". At normal pressure propane liquefies below its boiling point at −42 °C and solidifies below its melting point at −187.7 °C. Propane crystallizes in the space group P21/n. The low space-filling of 58.5% (at 90 K), due to the bad stacking properties of the molecule, is the reason for the particularly low melting point.
Propane undergoes combustion reactions in a similar fashion to other alkanes. In the presence of excess oxygen, propane burns to form water and carbon dioxide.
C3H8 + 5 O2 -> 3 CO2 + 4 H2O + heat
When insufficient oxygen is present for complete combustion, carbon monoxide, soot (carbon), or both, are formed as well:
C3H8 + 9/2 O2 -> 2 CO2 + CO + 4 H2O + heat
C3H8 + 2 O2 -> 3 C + 4 H2O + heat
The complete combustion of propane produces about 50 MJ/kg of heat.
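A minimal bookkeeping sketch of the complete-combustion equation above, using standard approximate molar masses (g/mol):

# Mass balance for C3H8 + 5 O2 -> 3 CO2 + 4 H2O, per 1 kg of propane.
M = {"C3H8": 44.10, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}
mol_propane = 1000.0 / M["C3H8"]                                      # moles in 1 kg
print("O2 required : %.2f kg" % (5 * mol_propane * M["O2"] / 1000))   # ~3.63 kg
print("CO2 produced: %.2f kg" % (3 * mol_propane * M["CO2"] / 1000))  # ~2.99 kg
print("H2O produced: %.2f kg" % (4 * mol_propane * M["H2O"] / 1000))  # ~1.63 kg
print("Heat released: ~50 MJ (from the figure quoted above)")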
Propane combustion is much cleaner than that of coal or unleaded gasoline. Propane's per-BTU production of CO2 is almost as low as that of natural gas. Propane burns hotter than home heating oil or diesel fuel because of the very high hydrogen content. The presence of C–C bonds, plus the multiple bonds of propylene and butylene, produce organic exhausts besides carbon dioxide and water vapor during typical combustion. These bonds also cause propane to burn with a visible flame.
Energy content
The enthalpy of combustion of propane gas where all products return to standard state, for example where water returns to its liquid state at standard temperature (known as higher heating value), is (2,219.2 ± 0.5) kJ/mol, or (50.33 ± 0.01) MJ/kg.
The enthalpy of combustion of propane gas where products do not return to standard state, for example where the hot gases including water vapor exit a chimney, (known as lower heating value) is −2043.455 kJ/mol. The lower heat value is the amount of heat available from burning the substance where the combustion products are vented to the atmosphere; for example, the heat from a fireplace when the flue is open.
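A short sketch converting the molar heating values quoted above into mass units, assuming propane's molar mass of about 44.10 g/mol:

# Convert heats of combustion from kJ/mol to MJ/kg using the molar mass.
molar_mass = 44.10e-3     # kg/mol
hhv = 2219.2              # kJ/mol, higher heating value (water condensed)
lhv = 2043.455            # kJ/mol, lower heating value (water stays vapour)
print("HHV: %.2f MJ/kg" % (hhv / molar_mass / 1000))   # ~50.3 MJ/kg
print("LHV: %.2f MJ/kg" % (lhv / molar_mass / 1000))   # ~46.3 MJ/kg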
Density
The density of propane gas at 25 °C (77 °F) is 1.808 kg/m3, about 1.5× the density of air at the same temperature. The density of liquid propane at 25 °C (77 °F) is 0.493 g/cm3, which is equivalent to 4.11 pounds per U.S. liquid gallon or 493 g/L. Propane expands at 1.5% per 10 °F. Thus, liquid propane has a density of approximately 4.2 pounds per gallon (504 g/L) at 60 °F (15.6 °C).
As the density of propane changes with temperature, this must be taken into account whenever the application involves safety or custody-transfer operations.
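The rule of thumb quoted above can be turned into a rough density correction (a linearized approximation, not an equation of state; the reference values are those given in this section, and the function name is illustrative):

# Approximate liquid propane density versus temperature, anchored at the
# reference value of 4.11 lb per US gallon at 77 F quoted above, with the
# liquid expanding ~1.5% in volume per 10 degF rise.
def liquid_density_lb_per_gal(temp_F, ref_density=4.11, ref_temp_F=77.0):
    volume_factor = 1 + 0.015 * (temp_F - ref_temp_F) / 10.0
    return ref_density / volume_factor

print("%.2f lb/gal at 77 F" % liquid_density_lb_per_gal(77))   # 4.11 (reference)
print("%.2f lb/gal at 60 F" % liquid_density_lb_per_gal(60))   # ~4.2, matching the text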
Uses
Portable stoves
Propane is a popular choice for barbecues and portable stoves because its low boiling point of −42 °C makes it vaporize as soon as it is released from its pressurized container. Therefore, no carburetor or other vaporizing device is required; a simple metering nozzle suffices.
Refrigerant
Blends of pure, dry "isopropane" [mixtures of propane (R-290) and isobutane (R-600a)] can be used as the circulating refrigerant in suitably constructed compressor-based refrigeration. Compared to fluorocarbons, propane has a negligible ozone depletion potential and very low global warming potential (having a GWP value of 0.072, 13.9 times lower than the GWP of carbon dioxide) and can serve as a functional replacement for R-12, R-22, R-134a, and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. Because its global warming effect is far less than that of current refrigerants, propane was chosen as one of five replacement refrigerants approved by the EPA in 2015, for use in systems specially designed to handle its flammability.
Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion.
Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons.
Propane is also instrumental in providing off-the-grid refrigeration, as the energy source for a gas absorption refrigerator and is commonly used for camping and recreational vehicles.
It has also been proposed to use propane as a refrigerant in heat pumps.
Domestic and industrial fuel
Since it can be transported easily, it is a popular fuel for home heat and backup electrical generation in sparsely populated areas that do not have natural gas pipelines. In June 2023, Stanford researchers found propane combustion emitted detectable and repeatable levels of benzene that in some homes raised indoor benzene concentrations above well-established health benchmarks. The research also shows that gas and propane fuels appear to be the dominant source of benzene produced by cooking.
In rural areas of North America, as well as northern Australia, propane is used to heat livestock facilities, in grain dryers, and other heat-producing appliances. When used for heating or grain drying it is usually stored in a large, permanently-placed cylinder which is refilled by a propane-delivery truck. Some 6.2 million American households use propane as their primary heating fuel.
In North America, local delivery trucks with an average cylinder size of , fill up large cylinders that are permanently installed on the property, or other service trucks exchange empty cylinders of propane with filled cylinders. Large tractor-trailer trucks, with an average cylinder size of , transport propane from the pipeline or refinery to the local bulk plant. The bobtail tank truck is not unique to the North American market, though the practice is not as common elsewhere, and the vehicles are generally called tankers. In many countries, propane is delivered to end-users via small or medium-sized individual cylinders, while empty cylinders are removed for refilling at a central location.
There are also community propane systems, with a central cylinder feeding individual homes.
Motor fuel
In the U.S., over 190,000 on-road vehicles use propane, and over 450,000 forklifts use it for power. It is the third most popular vehicle fuel in the world, behind gasoline and diesel fuel. In other parts of the world, propane used in vehicles is known as autogas. In 2007, approximately 13 million vehicles worldwide used autogas.
The advantage of propane in cars is its liquid state at a moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and price ranges typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling, and in combustion), results in less engine wear (due to carbon deposits) without diluting engine oil (often extending oil-change intervals), and until recently was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets, and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels.
Propane is also used as fuel for small engines, especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, there have been lawn-care products like string trimmers, lawn mowers and leaf blowers intended for outdoor use, but fueled by propane in order to reduce air pollution.
Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger, to mix with diesel fuel droplets. Propane's very high hydrogen content helps the diesel fuel to burn hotter and therefore more completely. This provides more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. It also lowers fuel costs, since propane is much cheaper than diesel fuel. The longer distance a cross-country trucker can travel on a full load of combined diesel and propane fuel means they can maintain federal hours-of-work rules with two fewer fuel stops in a cross-country trip. Truckers, tractor pulling competitions, and farmers have been using a propane boost system for over forty years in North America.
Other uses
Propane is the primary flammable gas in blowtorches for soldering.
Propane is used in oxy-fuel welding and cutting. Propane does not burn as hot as acetylene in its inner cone, and so it is rarely used for welding. Propane, however, has a very high number of BTUs per cubic foot in its outer cone, and so with the right torch (injector style) it can make a faster and cleaner cut than acetylene, and is much more useful for heating and bending than acetylene.
Propane is used as a feedstock for the production of base petrochemicals in steam cracking.
Propane is the primary fuel for hot-air balloons.
It is used in semiconductor manufacture to deposit silicon carbide.
Propane is commonly used in theme parks and in movie production as an inexpensive, high-energy fuel for explosions and other special effects.
Propane is used as a propellant, relying on the expansion of the gas to fire the projectile. It does not ignite the gas. The use of a liquefied gas gives more shots per cylinder, compared to a compressed gas.
Propane is also used as a cooking fuel.
Propane is used as a propellant for many household aerosol sprays, including shaving creams and air fresheners.
Propane is a promising feedstock for the production of propylene.
Liquified propane is used in the extraction of animal fats and vegetable oils.
Purity
The North American standard grade of automotive-use propane is rated HD-5 (Heavy Duty 5%). HD-5 grade has a maximum of 5 percent butane, but propane sold in Europe has a maximum allowable amount of butane of 30 percent, meaning it is not the same fuel as HD-5. The LPG used as auto fuel and cooking gas in Asia and Australia also has very high butane content.
Propylene (also called propene) can be a contaminant of commercial propane. Propane containing too much propene is not suited for most vehicle fuels. HD-5 is a specification that establishes a maximum concentration of 5% propene in propane. Propane and other LP gas specifications are established in ASTM D-1835. All propane fuels include an odorant, almost always ethanethiol, so that the gas can be smelled easily in case of a leak. Propane as HD-5 was originally intended for use as vehicle fuel. HD-5 is currently being used in all propane applications.
Typically in the United States and Canada, LPG is primarily propane (at least 90%), while the rest is mostly ethane, propylene, butane, and odorants including ethyl mercaptan. This is the HD-5 standard, (maximum allowable propylene content, and no more than 5% butanes and ethane) defined by the American Society for Testing and Materials by its Standard 1835 for internal combustion engines. Not all products labeled "LPG" conform to this standard, however. In Mexico, for example, gas labeled "LPG" may consist of 60% propane and 40% butane. "The exact proportion of this combination varies by country, depending on international prices, on the availability of components and, especially, on the climatic conditions that favor LPG with higher butane content in warmer regions and propane in cold areas".
Comparison with natural gas
Propane is bought and stored in a liquid form, LPG. It can easily be stored in a relatively small space.
By comparison, compressed natural gas (CNG) cannot be liquefied by compression at normal temperatures, as these are well above its critical temperature. As a gas, very high pressure is required to store useful quantities. This poses the hazard that, in an accident, just as with any compressed gas cylinder (such as a CO2 cylinder used for a soda concession) a CNG cylinder may burst with great force, or leak rapidly enough to become a self-propelled missile. Therefore, CNG is much less efficient to store than propane, due to the large cylinder volume required. An alternative means of storing natural gas is as a cryogenic liquid in an insulated container as liquefied natural gas (LNG). This form of storage is at low pressure and is around 3.5 times as efficient as storing it as CNG.
Unlike propane, if a spill occurs, CNG will evaporate and dissipate because it is lighter than air.
Propane is much more commonly used to fuel vehicles than is natural gas, because that equipment costs less. Propane requires just of pressure to keep it liquid at .
Hazards
Propane is a simple asphyxiant. Unlike natural gas, it is denser than air. It may accumulate in low spaces and near the floor. When abused as an inhalant, it may cause hypoxia (lack of oxygen), pneumonia, cardiac failure or cardiac arrest. Propane has low toxicity since it is not readily absorbed and is not biologically active. Commonly stored under pressure at room temperature, propane and its mixtures will flash evaporate at atmospheric pressure and cool well below the freezing point of water. The cold gas, which appears white due to moisture condensing from the air, may cause frostbite.
Propane is denser than air. If a leak in a propane fuel system occurs, the vaporized gas will have a tendency to sink into any enclosed area and thus poses a risk of explosion and fire. The typical scenario is a leaking cylinder stored in a basement; the propane leak drifts across the floor to the pilot light on the furnace or water heater, and results in an explosion or fire. This property makes propane generally unsuitable as a fuel for boats. In 2007, a heavily investigated vapor-related explosion occurred in Ghent, West Virginia, U.S., killing four people and completely destroying the Little General convenience store on Flat Top Road, causing several injuries.
Another hazard associated with propane storage and transport is known as a BLEVE or boiling liquid expanding vapor explosion. The Kingman Explosion involved a railroad tank car in Kingman, Arizona, U.S., in 1973 during a propane transfer. The fire and subsequent explosions resulted in twelve fatalities and numerous injuries.
Production
Propane is produced as a by-product of two other processes, natural gas processing and petroleum refining. The processing of natural gas involves removal of butane, propane, and large amounts of ethane from the raw gas, to prevent condensation of these volatiles in natural gas pipelines. Additionally, oil refineries produce some propane as a by-product of cracking petroleum into gasoline or heating oil.
The supply of propane cannot easily be adjusted to meet increased demand, because of the by-product nature of propane production. About 90% of U.S. propane is domestically produced. The United States imports about 10% of the propane consumed each year, with about 70% of that coming from Canada via pipeline and rail. The remaining 30% of imported propane comes to the United States from other sources via ocean transport.
After it is separated from the crude oil, North American propane is stored in huge salt caverns. Examples of these are Fort Saskatchewan, Alberta; Mont Belvieu, Texas; and Conway, Kansas. These salt caverns can store of propane.
Retail cost
United States
The retail cost of propane was approximately $2.37 per gallon, or roughly $25.95 per 1 million BTUs. This means that filling a 500-gallon propane tank, which is what households that use propane as their main source of energy usually require, cost $948 (80% of 500 gallons, or 400 gallons), a 7.5% increase on the 2012–2013 winter season average US price. However, propane costs per gallon change significantly from one state to another: the Energy Information Administration (EIA) quotes a $2.995 per gallon average on the East Coast for October 2013, while the figure for the Midwest was $1.860 for the same period.
The propane retail cost later fell to approximately $1.97 per gallon, which meant filling a 500-gallon propane tank to 80% capacity cost $788, a 16.9% decrease, or $160 less, from November 2013. Similar regional differences in prices are present, with the December 2015 EIA figure for the East Coast at $2.67 per gallon and the Midwest at $1.43 per gallon.
The average US propane retail cost was approximately $2.48 per gallon in 2018. The wholesale price of propane in the U.S. always drops in the summer as most homes do not require it for home heating. The wholesale price of propane in the summer of 2018 was between 86 cents to 96 cents per U.S. gallon, based on a truckload or railway car load. The price for home heating was exactly double that price; at 95 cents per gallon wholesale, a home-delivered price was $1.90 per gallon if ordered 500 gallons at a time. Prices in the Midwest are always less than in California. Prices for home delivery always go up near the end of August or the first few days of September when people start ordering their home tanks to be filled.
See also
Blau gas
National Propane Gas Association
Hank Hill
References
External links
Canadian Propane Association
(syngas)
International Chemical Safety Card 0319
National Propane Gas Association (U.S.)
NIOSH Pocket Guide to Chemical Hazards
Propane Education & Research Council (U.S.)
Propane Properties Explained Descriptive Breakdown of Propane Characteristics
UKLPG: Propane and Butane in the UK
US Energy Information Administration
World LP Gas Association (WLPGA)
Aerosol propellants
Alkanes
Camping equipment
E-number additives
Fossil fuels
Fuel gas
GABAA receptor positive allosteric modulators
Industrial gases
Natural gas
Refrigerants | Propane | Chemistry | 4,975 |
347,136 | https://en.wikipedia.org/wiki/List%20of%20TCP%20and%20UDP%20port%20numbers | This is a list of TCP and UDP port numbers used by protocols for operation of network applications. The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) only need one port for bidirectional traffic. TCP usually uses port numbers that match the services of the corresponding UDP implementations, if they exist, and vice versa.
The Internet Assigned Numbers Authority (IANA) is responsible for maintaining the official assignments of port numbers for specific uses. However, many unofficial uses of both well-known and registered port numbers occur in practice. Similarly, many of the official assignments refer to protocols that were never or are no longer in common use. This article lists port numbers and their associated protocols that have experienced significant uptake.
Table legend
Well-known ports
The port numbers in the range from 0 to 1023 (0 to 2^10 − 1) are the well-known ports or system ports. They are used by system processes that provide widely used types of network services. On Unix-like operating systems, a process must execute with superuser privileges to be able to bind a network socket to an IP address using one of the well-known ports.
Registered ports
The range of port numbers from 1024 to 49151 (2^10 to 2^15 + 2^14 − 1) are the registered ports. They are assigned by IANA for a specific service upon application by a requesting entity. On most systems, registered ports can be used without superuser privileges.
Dynamic, private or ephemeral ports
The range 49152–65535 (2^15 + 2^14 to 2^16 − 1), comprising 16,384 ports, contains dynamic or private ports that cannot be registered with IANA. This range is used for private or customized services, for temporary purposes, and for automatic allocation of ephemeral ports.
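The three ranges described above can be summarized in a small sketch (boundaries as defined by IANA; function name illustrative):

# Classify a TCP/UDP port number into the three IANA ranges described above.
def classify_port(port):
    if not 0 <= port <= 65535:
        raise ValueError("port must be in 0..65535")
    if port <= 1023:
        return "well-known (system)"
    if port <= 49151:
        return "registered"
    return "dynamic/private (ephemeral)"

for p in (22, 80, 8080, 49152, 65535):
    print(p, "->", classify_port(p))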
Note
See also
Comparison of file transfer protocols
Internet protocol suite
Port (computer networking)
List of IP protocol numbers
Lists of network protocols
References and notes
Further reading
External links
Computing-related lists
Internet-related lists
Lists of network protocols
Transmission Control Protocol | List of TCP and UDP port numbers | Technology | 413 |
3,839,438 | https://en.wikipedia.org/wiki/Experimental%20finance | The goals of experimental finance are to understand human and market behavior in settings relevant to finance. Experiments are synthetic economic environments created by researchers specifically to answer research questions. This might involve, for example, establishing different market settings and environments to observe experimentally and analyze agents' behavior and the resulting characteristics of trading flows, information diffusion and aggregation, price setting mechanism and returns processes.
Fields to which experimental methods have been applied include corporate finance, asset pricing, financial econometrics, international finance, personal financial decision-making, macro-finance, banking and financial intermediation, capital markets, risk management and insurance, derivatives, quantitative finance, corporate governance and compensation, investments, market mechanisms, SME and microfinance and entrepreneurial finance.
Researchers in experimental finance can study to what extent existing financial economics theory makes valid predictions and attempt to discover new principles on which theory can be extended.
Experimental finance is a branch of experimental economics and its most common use lies in the field of behavioral finance.
History
In 1948, Chamberlin reported results of the first market experiment. Since then the acceptability, recognition, role, and methods of experimental economics have evolved.
From the early 1980s on a similar pattern emerged in experimental finance.
The foundational work in experimental finance was the work of Forsythe, Palfrey and Plott (1980), Plott and Sunder (1982), and Smith, Suchanek and Williams (1988).
Scientific value
Financial economics has one of the most detailed and updated observational data available of all branches of economics. Consequently, finance is characterized by strong empirical traditions. Much analysis is done on data from international markets including bids, asks, transaction prices, volume, etc. There is also data available from information services on actions and events that may influence markets. Data from these sources is not able to report on expectations, on which theory of financial markets is built. In experimental markets the researcher is able to know expectations, and control fundamental values, trading institutions, and market parameters such as available liquidity and the total stock of the asset. This gives the researcher the ability to know the price and other predictions of alternative theories. This creates the opportunity to do powerful tests on the robustness of theories which were not possible from field data, since there is little knowledge on the parameters and expectations from field data.
Advantages
Financial data analysis is based on data drawn from settings created for a purpose other than answering a specific research question. This results in the situation where any interpretation of the results may be challenged since it ignores other variables that have changed. Traditional data analysis issues include omitted-variables biases, self-selection biases, unobservable independent variables, and unobservable dependent variables.
Properly designed experiments are able to avoid several problems:
Omitted-variables bias: Multiple experiments can be created with settings that differ from one another in exactly one independent variable. This way all other variables of the setting are controlled, which eliminates alternative explanations for observed differences in the dependent variable.
Self-selection: By randomly assigning subjects to different treatment groups, the experimenters avoid issues caused by self-selection and are able to directly observe the changes in the dependent variable by altering certain independent variables.
Unobservable independent variables: Experimentalists can create experimental settings themselves. This makes them able to observe all variables. Traditional data analysis may not be able to observe some variables, but sometimes experimenters cannot directly elicit certain information from subjects either. Without directly knowing a certain independent variable, good experimental design can create measures that to a large extent reflects the unobservable independent variable and the problem is therefore avoided.
Unobservable dependent variables: In traditional data studies, extracting the cause for the dependent variable to change may prove to be difficult. Experimentalists have the ability to create certain tasks that elicit the dependent variable.
Types of experiments
Laboratory experiments
Laboratory experiments are the most common form of experimentation. Here the idea is to construct a highly controlled setting in a laboratory. The use of lab experiments increased due to growing interest in issues such as economic cooperation, trust, and neuroeconomics. In this type of experiments, treatment is assigned randomly to a group of individuals in order to compare their economic actions and behavior to an untreated control group within the artificial laboratory environment. The ability to control the variables in the experiment provides for more accurate assessment of causality.
Controlled field studies or randomized field experiments
Controlled field experiments also randomize treatments but do so in real world applications. Average effects on people's behavior can then be consistently estimated by comparing behavior before and after the allocation.
Natural experiments
A natural experiment happens when some feature of the real world is randomly changed, which allows using the exogenous variation due to this change to study the causal effects of an otherwise endogenous explanatory variable. Natural experiments are popular in economics and finance research since they offer an intuitive interpretation of the underlying identifying assumptions and enable a broader audience to check their consistency, compared with purely statistical identification.
Main findings
Experimental methods in finance offer complementary methodologies that have allowed for the observation and manipulation of underlying determinants of prices, such as fundamental values or insider information. Experimental studies complement empirical work, particularly in the area of theory testing and development. Exploiting this experimental methodology has revealed some important findings over the past years. These findings could not have been reached by traditional field data analysis alone and are therefore experimental finance’s main contributions to the field of finance:
Security markets can aggregate and disseminate information (there are efficient markets), but this process is less effective as the information becomes less widely held and the number of information components that must be aggregated increases.
But this is not always the case (some of them are inefficient).
When information dissemination occurs, it is rarely perfect or instantaneous. Learning takes time.
More information is not always better from the point of view of the individual trader. Only those insiders who are much better informed than others can outperform other traders.
Markets for longer-lived assets have a strong tendency to generate price bubbles and crashes, prolonged deviations from fundamental values.
Emotions of traders play a role in generating bubbles in experimental asset markets.
Asset mispricing has been largely associated with trader overconfidence.
Prices as well as bids, offers, timing, etc., convey information. There are many channels for information flow.
Well-functioning derivative markets can help to improve primary markets’ efficiency.
Statistical efficiency or inability to make money using past data does not mean informational efficiency. Not being able to earn abnormal returns from the market does not mean that the price is right.
See also
Experimental economics
Behavioral economics
Game theory
Replication crisis § In economics
References
External links
Society for Experimental Finance (SEF)
Behavioral finance
Experimental economics | Experimental finance | Biology | 1,369 |
34,807,944 | https://en.wikipedia.org/wiki/Commission%20of%20Railway%20Safety | The Commission of Railway Safety is a government commission of India. Subordinate to the Ministry of Civil Aviation, the commission is the rail safety authority in India, as directed by The Railways Act, 1989.
The agency investigates serious rail accidents. Its head office was formerly in the North-East Railway Compound in Lucknow and has since been relocated to New Delhi. As of 2023, Shri Janak Kumar Garg (IRSEE: 1987) is the Chief Commissioner of Railway Safety (CCRS).
Organisational structure and jurisdiction
The commission is headed by a Chief Commissioner of Railway Safety (CCRS), at Lucknow, who also acts as Principal Technical Advisor to the Central Government in all matters pertaining to railway safety. Working under the administrative control of CCRS are 9 Commissioners of Railway Safety (CRS), each one exercising jurisdiction over one or more of the 17 Zonal Railways. In addition, Metro Railway (Kolkata), DMRC (Delhi), MRTS (Chennai) and Konkan Railway also fall under their jurisdiction. There are 5 deputy commissioners of railway safety posted in the headquarters at Lucknow for assisting the CCRS as and when required. In addition, there are 2 field deputy commissioners, one each in Mumbai and Kolkata, to assist the commissioners of railway safety in matters concerning the signalling and telecommunication disciplines.
See also
Aircraft Accident Investigation Bureau – Indian air accident investigation agency
References
External links
Official website of The Commission of Railway Safety
Indian commissions and inquiries
Ministry of Railways (India)
Rail accident investigators
Railway safety
Government agencies established in 1961
1961 establishments in Uttar Pradesh
Organisations based in Uttar Pradesh | Commission of Railway Safety | Technology | 317 |
14,922,915 | https://en.wikipedia.org/wiki/Rec.%20709 | ITU-R Recommendation 709, usually abbreviated Rec. 709, BT.709, or ITU-R 709, is a standard developed by the Radiocommunication Sector of the International Telecommunication Union (ITU-R) for image encoding and signal characteristics of high-definition television (HDTV). The standard specifies a scheme for digital encoding of colors as triplets of small integers, a widescreen format with 1080 active lines per picture and 1920 square pixels per line (a 16:9 aspect ratio), as well as several details of signal capture, transmission, and display. While directed to HDTV, some of its specifications (such as the color encoding) have also been adopted for other uses.
Technical details
The standard is freely available at the ITU website, and that document should be used as the authoritative reference. The essentials are summarized below.
Image format and definition
Recommendation ITU-R BT.709-6 defines a common image format (CIF) where picture characteristics are independent of the frame rate. The image is 1920x1080 pixels, for a total pixel count of 2,073,600 and a 16:9 aspect ratio.
Frame rates
BT.709-6 specifies the following possible frame rates and pixel scanning orders. The options for the latter are progressively scanned frames (P), progressive segmented frames (PsF), and interlaced (I):
24/P, 24/PsF, 23.976/P, 23.976/PsF
These combinations match the frame rate used for theatrical motion pictures. The fractional rates are included for compatibility with the "pull-down" rates used with NTSC.
50/P, 25/P, 25/PsF, 50/I (25 fps)
These combinations are provided for compatibility with earlier "50 Hz" TV standards, such as PAL or SECAM. There are no fractional rates as PAL and SECAM did not have the pull-down issue of NTSC.
60/P, 59.94/P, 30/P, 30/PsF, 29.97/P, 29.97/PsF, 60/I (30 fps), 59.94/I (29.97 fps)
These combinations offer compatibility with earlier "60 Hz" TV standards, such as NTSC. Here again, the fractional rates are for compatibility with legacy NTSC pull-down rates.
Cameras and monitors may use any of these modes. Video captured in progressive mode can be recorded, broadcast, or streamed in progressive or progressive segmented frame modes. Video captured using an interlaced mode must be distributed as interlaced video unless a de-interlacing process is applied in post-production.
In cases where a progressive captured image is distributed in segmented frame mode, segment/field frequency must be twice the frame rate. Thus 30/PsF has the same field rate as 60/I.
The RGB color space
Colors in the BT.709 standard are basically described according to the RGB color model, namely as mixtures of three primaries, "red" (R), "green" (G) and "blue" (B). For BT.709, their coordinates in the CIE 1931 chromaticity diagram are:

 Red: x = 0.640, y = 0.330
 Green: x = 0.300, y = 0.600
 Blue: x = 0.150, y = 0.060
 White point (D65): x = 0.3127, y = 0.3290
In the BT.709 standard, a color value is conceptually represented by three numbers between 0 and 1, where 0 means the absence of the corresponding primary color and 1 means the maximum intensity that the device can represent. If these numbers are interpreted as Cartesian coordinates in a three-dimensional space, the representable colors correspond to points in an axis-aligned cube of side 1, with corner (0, 0, 0) representing the color black and (1, 1, 1) representing the maximum-brightness white. More generally, points along the cube's diagonal represent shades of grey. The white point coordinates above define this white color as being CIE illuminant D65 for 2° standard observing conditions.
Non-linear encoding
The coordinates are supposed to be proportional to the physical intensity of each primary, namely emitted or received light power per unit of area. For efficiency reasons, the standard specifies a non-linear transformation of each component signal, resulting in the non-linear values R', G', and B'. This optical-electrical transfer function (OETF) is defined as

 V = 4.5 L for 0 ≤ L < 0.018
 V = 1.099 L^0.45 − 0.099 for 0.018 ≤ L ≤ 1

where L is the linear coordinate (R, G, or B), and V is the corresponding non-linear value (R', G', or B'), both in the range [0, 1].
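As an illustration in code (a minimal sketch of the formula above, not a reference implementation):

```python
def bt709_oetf(L: float) -> float:
    """BT.709 optical-electrical transfer function: maps a linear
    component L in [0, 1] to its non-linear value V in [0, 1]."""
    if L < 0.018:
        return 4.5 * L              # linear segment near black
    return 1.099 * L**0.45 - 0.099  # shifted power law elsewhere

# Example: a mid-grey linear intensity of 0.18 encodes to about 0.41.
print(bt709_oetf(0.18))
```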
Non-linear decoding
In order to display the colors on a device, such as a HDTV monitor, the encoded values should be converted back to physical intensities of the primaries. Mathematically, the inverse of the non-linear encoding above would be

 L = V / 4.5 for 0 ≤ V < 0.081
 L = ((V + 0.099) / 1.099)^(1/0.45) for 0.081 ≤ V ≤ 1
However, the BT.709 standard does not specify this conversion (sometimes referred to as the "display gamma"). In practice, it depends on various factors such as the capabilities of the monitor, the viewing conditions, and desired visual effects (such as contrast or saturation stretching). The standard response for HDTV monitors is covered in standards ITU-R BT.1886 and EBU Tech 3320.
The Y'C'BC'R color space
The BT.709 standard also defines an alternative representation of colors by three coordinates Y', C'B, and C'R, which are linear combinations of the (non-linear) R'G'B' coordinates. Namely,

 Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'
 C'B = (B' − Y') / 1.8556
 C'R = (R' − Y') / 1.5748

The value Y' is called "luminance" in the standard, and is roughly an approximation of the CIE Y coordinate (which is presumed to measure the perceptual brightness of the color) modified by the non-linear function above. However, since Y' is computed from the non-linear RGB components, this equivalence is correct only for shades of gray. The other two coordinates indicate the "blueness" and "redness" of the color's hue.
According to these formulas, as R', G', and B' vary between 0 and 1, the luminance Y' will vary between 0 and 1, while C'B and C'R will vary between −0.5 and +0.5.
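An illustrative conversion using the coefficients above (a sketch; the function name is ours, and inputs are assumed to be non-linear R'G'B' values in [0, 1]):

```python
def rgb_to_ycbcr(r: float, g: float, b: float):
    """Convert non-linear BT.709 R'G'B' components to Y'C'BC'R."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556  # scales B' - Y' into [-0.5, +0.5]
    cr = (r - y) / 1.5748  # scales R' - Y' into [-0.5, +0.5]
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 1.0, 1.0))  # white -> (1.0, 0.0, 0.0)
print(rgb_to_ycbcr(1.0, 0.0, 0.0))  # red   -> (0.2126, -0.1146..., 0.5)
```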
Quantization
For digital storage, transmission, and processing, the BT.709 standard specifies that the non-linear color coordinates R', G', B', Y', C'B, and C'R shall be converted into integers DR', DG', DB', DY', DC'B, and DC'R with a fixed number n of bits, either 8 or 10. This quantization shall be performed by simple scaling and rounding, so as to yield integers that span a proper subset of the n-bit integers. Specifically,

 DY' = round((219 Y' + 16) × 2^(n−8))

and similarly for DR', DG', and DB'; whereas

 DC'B = round((224 C'B + 128) × 2^(n−8))

and similarly for DC'R. The function round should round the argument to the nearest integer, with ties rounded up (that is, round(0.5) = 1 and round(−0.5) = 0).
These quantization formulas are the same as those defined in ITU-R BT.601.
As implied by these formulas, the signals R', G', B', and Y' are mapped from the range [0, 1] to 8-bit integers in [16..235]; while C'B and C'R are mapped from the range [−0.5, +0.5] to integers in [16..240], with 0 mapped to 128. For n = 10 bits, the quantized values range in [64..940] and [64..960], respectively.
It follows that in 8-bit R'G'B' the color black is represented as (16,16,16) while white is (235, 235, 235). In 8-bit Y'C'BC'R, black is (16, 128, 128) and white is (235, 128, 128).
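A sketch of this quantization in code (the helper names are ours; round-half-up is implemented as floor(x + 0.5)):

```python
import math

def quantize_luma(v: float, n: int = 8) -> int:
    """Map Y' (or R', G', B') in [0, 1] to an n-bit integer code."""
    return math.floor((219 * v + 16) * 2 ** (n - 8) + 0.5)

def quantize_chroma(c: float, n: int = 8) -> int:
    """Map C'B or C'R in [-0.5, +0.5] to an n-bit integer code."""
    return math.floor((224 * c + 128) * 2 ** (n - 8) + 0.5)

print(quantize_luma(0.0), quantize_luma(1.0))       # 16 235 (black, white)
print(quantize_chroma(-0.5), quantize_chroma(0.5))  # 16 240
print(quantize_luma(1.0, n=10))                     # 940 for 10-bit coding
```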
Quantized color coordinates outside the nominal ranges above are allowed, but typically they would be clamped for broadcast or for display (except for Superwhite and xvYCC). However, the 8-bit values 0 and 255 and the 10 bit values 0..3 and 1020..1023 are reserved for timing marks (SAV and EAV) and may not appear in color data.
History
The creation of a worldwide HDTV standard was approved in 1989 by the Comité consultatif international pour la radio (CCIR) as "Recommendation XA/11 MOD F". The first official version of the standard was approved in 1990 by the CCIR, under the name "Recommendation 709". The CCIR became the ITU-R in 1992, and released a new version of the standard (BT.709-1) in November 1993.
These early versions still left many unanswered questions, and the lack of consensus toward a worldwide HDTV standard was evident. So much so, some early HDTV systems such as 1035i30 and 1152i25 were still a part of the standard as late as 2002 in BT.709-5.
The most recent version is BT.709-6 released in 2015.
The standard strictly determined the picture size but offered several options for the pixel scanning order and frame rate. This flexibility allowed BT.709 to become the worldwide standard for HDTV, and it allows manufacturers to create a single television set or display for all markets world-wide.
Justification for the non-linear encoding
The BT.709 standard calls the non-linear encoding of L to V the optical-electrical transfer function because it was meant to resemble the conversion of light intensity into analog electrical signals implemented by older non-digital cameras. It had long been known that a non-linear encoding of colors was more efficient than a linear one because human vision is more sensitive to brightness changes at low light levels. That conversion was commonly described as a power law with exponent near 0.5 (hence the common names "gamma correction" or "camera gamma" for the encoding function). Indeed, the BT.709 encoding function is close to a power law with exponent near 1/2.35.
The BT.709 encoding function is not a simple power law because the latter has infinite slope at the origin, which emphasizes camera noise and is problematic for analog-to-digital converters. Thus the standard opted for a piecewise function that combines a simple linear function for low light levels and a shifted power law for larger values. Writing the encoding as V = (1 + α) L^0.45 − α for L ≥ β and V = 4.5 L for L < β, and having chosen 0.45 as the exponent and 4.5 as the slope of the linear part, the conditions for the function to be continuous (without sudden jumps) and smooth (without sudden changes of slope) at the break point L = β are

 4.5 β = (1 + α) β^0.45 − α (continuity)
 4.5 = 0.45 (1 + α) β^(−0.55) (smoothness)

The solution of these equations is α ≈ 0.0993 and β ≈ 0.0181. These values were rounded to 0.099 and 0.018, respectively.
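A short numerical check of this derivation (a sketch; the substitution and variable names are ours, not from the standard): the smoothness condition gives 1 + α = 10 β^0.55, and substituting into the continuity condition yields α = 5.5 β, leaving the single equation 10 β^0.55 = 1 + 5.5 β, which can be solved by bisection:

```python
def g(beta: float) -> float:
    # Root of g gives the break point beta; g is increasing on (0, 1).
    return 10 * beta**0.55 - (1 + 5.5 * beta)

lo, hi = 0.01, 0.03        # bracket: g(0.01) < 0 < g(0.03)
for _ in range(60):        # bisection to high precision
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

beta = (lo + hi) / 2
alpha = 5.5 * beta
print(beta)   # ~0.0181, rounded to 0.018 in the standard
print(alpha)  # ~0.0993, rounded to 0.099
```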
Standards conversion
Conversion between different standards of video frame rates and color encoding has always been a challenge for content producers distributing across regions with different standards and requirements. While BT.709 has eased the compatibility issue for the consumer and the television set manufacturer, broadcast facilities still use a particular frame rate based on region, such as 29.97 in North America or 25 in Europe, meaning that broadcast content still requires at least frame rate conversion.
Color gamuts
The BT.709 red and blue primaries are the same as the EBU Tech 3213 (PAL) primaries. The yG coordinate too is the same, while xG is halfway between EBU Tech 3213's xG and SMPTE C's xG.
The resulting BT.709 color space is almost identical to that of the BT.601-6 used by PAL and NTSC, and covers 35.9% of it. It also covers 33.24% of the CIE 1976 u’v’ space and 33.5% of the CIE 1931 x y diagram.
Converting standard definition
The vast legacy library of standard-definition programs and content presents further challenges. NTSC, PAL, and SECAM are all interlaced formats in a 4:3 aspect ratio, and at a relatively low resolution. Scaling them up to HD resolution with a 16:9 aspect ratio presents a number of challenges.
First is the potential for distracting motion artifacts due to interlaced video content. The solution is to either up-convert only to an interlaced BT.709 format at the same field rate, and scale the fields independently, or use motion processing to remove the inter-field motion and deinterlace, creating progressive frames. In the latter case, motion processing can introduce artifacts and can be slow to process.
Second is the issue of accommodating the SD 4:3 aspect ratio into the HD 16:9 frame. Cropping the top and/or bottom of the standard-definition frame may or may not work, depending on if the composition allows it and if there are graphics or titles that would be cut off. Alternately, pillar-boxing can show the entire 4:3 image by leaving black borders on the left and right. Sometimes this black is filled with a stretched and blurred form of the image.
In addition, the SMPTE C RGB primaries used in North American standard definition are different from those of BT.709 (SMPTE C is commonly referred to as NTSC; however, it is a different set of primaries and a different white point than the 1953 NTSC). The red and blue primaries for PAL and SECAM are the same as BT.709, with a change in the green primary. Converting the image precisely requires a LUT (lookup table) or a color-managed workflow to convert the colors to the new colorspace. In practice, however, this conversion is often ignored: most players (including VLC) are not color managed, and even a color-managed player may recognize only BT.709 or BT.2020 primaries; mpv is a notable exception.
Luma coefficients
When encoding Y’CBCR video, BT.709 creates gamma-encoded luma (Y’) using matrix coefficients 0.2126, 0.7152, and 0.0722 (together they add to 1). BT.709-1 used slightly different 0.2125, 0.7154, 0.0721 (changed to standard ones in BT.709-2). Although worldwide agreement on a single R’G’B’ system was achieved with Rec. 709, adoption of different luma coefficients (as those are derived from primaries and white point) for Y’CBCR requires the use of different luma-chroma decoding for standard definition and high definition.
Conversion software and hardware
These problems can be handled with video processing software which can be slow, or hardware solutions which allow for realtime conversion, and often with quality improvements.
Film retransfer
A more ideal solution is to go back to original film elements for projects that originated on film. Due to the legacy issues of international distribution, many television programs that shot on film used a traditional negative cutting process, and then had a single film master that could be telecined for different formats. These projects can re-telecine their cut negative masters to a BT.709 master at a reasonable cost, and gain the benefit of the full resolution of film.
On the other hand, projects that originated on film but completed their online master using video methods would need to re-telecine the individual film takes and then re-assemble them; a significantly greater amount of labor and machine time is required in this case than for a telecine of a conformed negative. Here, enjoying the benefits of the film original would entail much higher costs to conform the film originals to a new HD master.
Comparison to sRGB
sRGB was created after the early development of Rec.709. The creators of sRGB chose to use the same primaries and white point as Rec.709, but changed the tone response curve (sometimes referred to as gamma) to better suit the intended use in offices and brighter conditions than television viewing in a dark living room.
Rec. 709 and sRGB share the same primary chromaticities and white point chromaticity; however, sRGB is explicitly output (display) referred with an equivalent gamma of 2.2 (the actual function is also piecewise to avoid near-black issues). Display P3 uses the sRGB EOTF with its linear segment; encoding that segment differently from Rec. 709 requires either the parametric curve encoding of ICC v4 or a slope limit.
See also
Rec. 601, a comparable standard for standard-definition television (SDTV)
Rec. 2020, a standard for ultra-high-definition television (UHDTV) with Wide Color Gamut (WCG)
Rec. 2100, a standard for high-dynamic-range television (HDR-TV) with FHD and UHD resolution
sRGB, a standard color space for web/computer graphics, based on the Rec. 709 primaries and white point
References
External links
ITU-R BT.709-6: Parameter values for the HDTV standards for production and international programme exchange. June, 2015.
Note that the -6 is the current version; previous versions were -1 through to -5.
Poynton, Charles, Perceptual uniformity, picture rendering, image state, and Rec. 709. May, 2008.
ATSC
High-definition television
Film and video technology
Digital television
ITU-R recommendations
Color space
1990 in television | Rec. 709 | Mathematics | 3,463 |
1,862,137 | https://en.wikipedia.org/wiki/Digital%20paper | Digital paper, also known as interactive paper, is patterned paper used in conjunction with a digital pen to create handwritten digital documents. The printed dot pattern identifies the position coordinates on the paper. The digital pen uses this pattern to store handwriting and upload it to a computer.
The paper
The dot pattern is a two-dimensional barcode; the most common is the proprietary Anoto dot pattern. In the Anoto dot pattern, the paper is divided into a grid with a spacing of about 0.3 mm, a dot is printed near each intersection offset slightly in one of four directions, and a camera in the pen typically records a 6 x 6 group of dots. The full pattern is claimed to consist of 669,845,157,115,773,458,169 dots, and to encompass an area exceeding 4.6 million km² (this corresponds to 73 trillion unique sheets of letter-size paper).
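As a toy illustration of how such a pattern can encode position (this is not the proprietary Anoto algorithm; the pattern, window size, and lookup strategy here are assumptions made for the sketch), each dot's offset direction can be read as a base-4 symbol, and a location can be recovered by finding where the observed 6 x 6 window occurs in the known pattern:

```python
import random

random.seed(42)
W, H, K = 200, 200, 6  # pattern dimensions and camera window size
pattern = [[random.randrange(4) for _ in range(W)] for _ in range(H)]

def window(x: int, y: int):
    """The K x K block of offset symbols with top-left corner (x, y)."""
    return tuple(tuple(pattern[y + j][x + i] for i in range(K))
                 for j in range(K))

# Precompute a lookup table from window contents to coordinates.  With
# random base-4 symbols, a 6 x 6 window (72 bits) is almost surely
# unique within a pattern of this size.
lookup = {window(x, y): (x, y)
          for y in range(H - K + 1) for x in range(W - K + 1)}

observed = window(57, 123)  # what the pen's camera might see
print(lookup[observed])     # -> (57, 123)
```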
The complete pattern space is divided into various domains. These domains can be used to define paper types, or to indicate the paper's purpose (for example, memo formatting, personal planners, notebook paper, Post-it notes, et cetera).
The Anoto dot pattern can be printed onto almost any paper using a standard printing process of at least 600 dpi resolution (some claim a required resolution of 1,000 dpi) and carbon-based black ink. The paper can be any shape or size greater than 2 mm to a side. The ink absorbs infrared light transmitted from the digital pen; the pen contains a receiver that interprets the pattern of light reflected from the paper. Other colors of ink, including non-carbon-based black, can be used to print information that will be visible to the user, and invisible to the pen.
Standard black and white laser printers or color laser printers with a resolution of 600 dpi can be used to print the Anoto dot pattern.
With a typical CMYK color laser printer, it is possible to use full-color text and graphics that cover the entire page by avoiding black (i.e., under color removal is turned off) and instead using only cyan, magenta, yellow, or any combination of these (which are ignored by the pen), reserving the black (K) component for the Anoto pattern.
Standard black and white laser printers or color laser printers with a resolution of 600 dpi can be used to print the DataGlyph address carpet pattern.
Redundant glyph marks support recovering the correct 2D location and angular orientation, even in the presence of overprinted text and line art.
Further reading
Signer, Beat: Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces, May 2008, Hardcover, 276 pages
Signer, Beat and Norrie, Moira C.: Interactive Paper: Past, Present and Future, In Proceedings of PaperComp 2010, 1st International Workshop on Paper Computing, Copenhagen Denmark, September 2010 (Presentation)
References
Computing input devices
Paper
Display technology | Digital paper | Engineering | 619 |
3,664,073 | https://en.wikipedia.org/wiki/Model%20selection | Model selection is the task of selecting a model from among various candidates on the basis of performance criterion to choose the best one.
In the context of machine learning and more generally statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).
state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.
In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.
Introduction
In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model.
Of the countless number of possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially. Burnham and Anderson emphasize throughout their book the importance of choosing models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.
Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.
A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g. points are a result of i.i.d. samples), we must select a curve that describes the function that generated the points.
Two directions of model selection
There are two main objectives in inference and learning from data. One is for scientific discovery, also called statistical inference, understanding of the underlying data-generating mechanism and interpretation of the nature of the data. Another objective of learning from data is for predicting future or unseen observations, also called Statistical Prediction. In the second objective, the data scientist does not necessarily concern an accurate probabilistic description of the data. Of course, one may also be interested in both directions.
In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction. The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is significantly important that the selected model is not too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is the selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.
The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading. Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.
Methods to assist in choosing the set of candidate models
Data transformation (statistics)
Exploratory data analysis
Model specification
Scientific method
Criteria
Below is a list of criteria for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); reviews of these criteria are available in the statistical literature.
Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
Bayes factor
Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
Bridge criterion (BC), a statistical criterion that can attain the better performance of AIC and BIC regardless of whether the model specification is appropriate.
Cross-validation
Deviance information criterion (DIC), another Bayesian oriented model selection criterion
False discovery rate
Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
Hannan–Quinn information criterion, an alternative to the Akaike and Bayesian criteria
Kashyap information criterion (KIC) is a powerful alternative to AIC and BIC, because KIC uses Fisher information matrix
Likelihood-ratio test
Mallows's Cp
Minimum description length
Minimum message length (MML)
PRESS statistic, also known as the PRESS criterion
Structural risk minimization
Stepwise regression
Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
Extended Bayesian Information Criterion (EBIC) is an extension of ordinary Bayesian information criterion (BIC) for models with high parameter spaces.
Extended Fisher Information Criterion (EFIC) is a model selection criterion for linear regression models.
Constrained Minimum Criterion (CMC) is a frequentist criterion for selecting regression models with a geometric underpinning.
Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.
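As an illustration of information criteria at work (a sketch on simulated data, using the Gaussian least-squares forms of AIC and BIC; the data and candidate models here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, n)  # data generated by a line

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 1                         # number of fitted parameters
    aic = n * np.log(rss / n) + 2 * k      # least-squares AIC (up to a constant)
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"degree {degree}: AIC = {aic:.1f}, BIC = {bic:.1f}")

# Higher-degree fits reduce the residual sum of squares slightly, but
# the penalty terms typically make the degree-1 model the winner here.
```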
See also
All models are wrong
Analysis of competing hypotheses
Automated machine learning (AutoML)
Bias-variance dilemma
Feature selection
Freedman's paradox
Grid search
Identifiability Analysis
Log-linear analysis
Model identification
Occam's razor
Optimal design
Parameter identification problem
Scientific modelling
Statistical model validation
Stein's paradox
Notes
References
Regression variable selection
Mathematical and quantitative methods (economics)
Management science | Model selection | Biology | 1,492 |
8,063,851 | https://en.wikipedia.org/wiki/Six%20nines%20in%20pi | A sequence of six consecutive nines occurs in the decimal representation of the number pi (), starting at the 762nd decimal place. It has become famous because of the mathematical coincidence, and because of the idea that one could memorize the digits of up to that point, and then suggest that is rational. The earliest known mention of this idea occurs in Douglas Hofstadter's 1985 book Metamagical Themas, where Hofstadter states
This sequence of six nines is colloquially known as the "Feynman point", after physicist Richard Feynman, who allegedly stated this same idea in a lecture. However, it is not clear when, or even if, Feynman ever made such a statement. It is not mentioned in his memoirs and was unknown to his biographer James Gleick.
Related statistics
π is conjectured, but not known, to be a normal number. For a number whose digits behave as if sampled uniformly at random, the probability of a specific sequence of six digits occurring this early in the decimal representation is about 0.08%.
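That figure follows from a simple counting argument (a back-of-the-envelope calculation, treating the digits as independent uniform random digits): a run of six specified digits can begin at any of the first $762 - 6 + 1 = 757$ decimal places, so

$$P \approx 1 - \left(1 - 10^{-6}\right)^{757} \approx 757 \times 10^{-6} \approx 0.08\%.$$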
The early string of six 9s is also the first occurrence of four and five consecutive identical digits. The next sequence of six consecutive identical digits is again composed of 9s, starting at position 193,034. The next distinct sequence of six consecutive identical digits after that starts with the digit 8 at position 222,299.
The positions of the first occurrence of a string of 1, 2, 3, 4, 5, 6, 7, 8, and 9 consecutive 9s in the decimal expansion are 5; 44; 762; 762; 762; 762; 1,722,776; 36,356,642; and 564,665,206, respectively.
Decimal expansion
The first 1,001 digits of π (1,000 decimal places), in which the run of six consecutive 9s begins at the 762nd decimal place, are as follows:
3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196 4428810975 6659334461 2847564823 3786783165 2712019091 4564856692 3460348610 4543266482 1339360726 0249141273 7245870066 0631558817 4881520920 9628292540 9171536436 7892590360 0113305305 4882046652 1384146951 9415116094 3305727036 5759591953 0921861173 8193261179 3105118548 0744623799 6274956735 1885752724 8912279381 8301194912 9833673362 4406566430 8602139494 6395224737 1907021798 6094370277 0539217176 2931767523 8467481846 7669405132 0005681271 4526356082 7785771342 7577896091 7363717872 1468440901 2249534301 4654958537 1050792279 6892589235 4201995611 2129021960 8640344181 5981362977 4771309960 5187072113 4999999837 2978049951 0597317328 1609631859 5024459455 3469083026 4252230825 3344685035 2619311881 7101000313 7838752886 5875332083 8142061717 7669147303 5982534904 2875546873 1159562863 8823537875 9375195778 1857780532 1712268066 1300192787 6611195909 2164201989
See also
0.999...
9 (number)
Mathematical coincidence
Repdigit
Ramanujan's constant
References
External links
"Feynman Point"—MathWorld article
Pi
Recreational mathematics
Richard Feynman | Six nines in pi | Mathematics | 1,025 |
850,697 | https://en.wikipedia.org/wiki/J-B%20Weld | The J-B Weld Company is an international company that produces epoxy products. The home office is based in Sulphur Springs, Texas. J-B Weld (stylized as J-B WELD) is the name of their flagship product: a specialized, high-temperature epoxy adhesive for use in bonding materials together. The company has run advertisements showing engine block repair with J-B Weld.
The J-B Weld Company, founded in 1969 by Sam Bonham in Sulphur Springs, Texas, specializes in epoxy products. Initially, the company sold to automotive shops in Texas, but now distributes its products across the United States and in 27 other countries through various retail channels. After being purchased by private investors in 2008, the company expanded its product line, which originally included J-B Weld, J-B Kwik, J-B Stik, and Waterweld.
J-B Weld epoxy is a two-part adhesive that can bond various surfaces and withstand high temperatures up to 500 °F (260 °C) constantly and 600 °F (316 °C) for short periods. It is water-resistant, petroleum/chemical-resistant, acid-resistant, and resists shock, vibration, and temperature fluctuations. The product consists of a resin and a hardener that need to be mixed before application. The mixture sets in 4-6 hours and fully cures in up to 15 hours. It can be used as an adhesive, laminate, plug, filler, sealant, or electrical insulator and can be drilled, ground, tapped, machined, sanded, and painted when cured.
J-B Kwik is a faster-curing two-part epoxy with medium-temperature resistance up to 300 °F (149 °C). Although not as strong or heat-resistant as J-B Weld, it has the same adhesion and does not shrink when hardening. J-B Kwik is waterproof, petroleum/chemical-resistant, acid-resistant, and resists shock, vibration, and extreme temperature fluctuations.
History
The company had its beginnings in 1969 in Sulphur Springs, Texas. Sam Bonham, at the time running a machine shop, discovered a way to create what he called a "tougher than steel" epoxy. In 1968, Sam's future wife Mary persuaded him to sell his invention and he founded the J-B Weld Company. Sam died suddenly in 1989. He had commented before his death, "My life's dream is for J-B Weld to be all the way around the world, and for me to see an 18-wheeler load out of here with nothing but J-B Weld." Within a year of his death, Mary had opened a European hub in London, internationalizing the J-B Weld Company and the distribution of the product.
Initially, the company sold to automotive shops and jobbers in Texas. Now, the J-B Weld Company distributes its products through multiple retail channels - including automotive chains, home improvement centers, hardware stores, and farm stores - and does business in all states in the United States, as well as in 27 other countries. In 2008 the company was purchased by a group of private investors. Led by CEO Chip Hanson, they have expanded the product lines through innovation.
Products
The J-B Weld Company's original product line focused on a small number of products: J-B Weld (the original 2-tube epoxy), J-B Kwik (4-hour epoxy), J-B Stik (epoxy putty), Waterweld (underwater adhesive/filler), and a few others.
Since 2008, the company has broadened the product line to add J-B SteelStik, KwikWood, PlasticWeld, MarineWeld, Perm-O-Seal, WoodWeld and ClearWeld.
J-B Weld epoxy
J-B Weld is a two-part epoxy adhesive (or filler) that can withstand high-temperature environments.
J-B Weld can be used to bond surfaces made from metal, porcelain, ceramic, glass, marble, PVC, ABS, concrete, fiberglass, wood, fabric, or paper. Alcohol should be avoided when cleaning surfaces, as it can degrade the bond.
J-B Weld is water-resistant, petroleum/chemical-resistant (when hardened), and acid-resistant. It also resists shock, vibration, and extreme temperature fluctuations. J-B Weld can withstand a constant temperature of 500 °F (260 °C), and the maximum temperature threshold is approximately 600 °F (316 °C) for 10 minutes. J-B Weld can also be used inside a microwave oven, exposed to microwave radiation instead of infrared radiation (heat).
The product is contained in 2 separate tubes: the "steel" (black tube of resin) and the "hardener" (red tube). Equal amounts are squeezed from both tubes and mixed. For the best bond, surfaces should be roughened (or scratched) with fine or coarse sandpaper.
When first mixed, J-B Weld is subject to sagging or running (slow dripping); even more so at warmer temperatures. After about 20 minutes the mixture begins to thicken into a putty that can be shaped with a putty knife or wooden spatula.
The mixture will set enough for the glued parts to be handled within 4–6 hours, but requires up to 15 hours at cool temperatures to fully cure and harden. Higher temperatures shorten all these times.
After the initial setting period of a few hours, heat (e.g. from a heat lamp or incandescent light bulb placed near the bond) will speed the curing time.
J-B Weld can be used as an adhesive, laminate, plug, filler, sealant, or electrical insulator. When fully cured, J-B Weld can be drilled, formed, ground, tapped, machined, sanded, and painted.
While J-B Weld Original epoxy dries to a dark grey color, J-B Weld's ClearWeld epoxy dries clear. Although its bond is not quite as strong as the Original's (3900 psi vs. 5020 psi), ClearWeld is often preferred when appearance is an important consideration.
J-B Kwik epoxy
J-B Kwik (stylized as J-B KWIK) is a two-part epoxy, intended as an adhesive or filler, that can withstand medium-temperature environments (up to 300 °F (149 °C)).
J-B Kwik cures much more quickly, but it is not as strong or as heat-resistant as the original J-B Weld. However, J-B Kwik has the same adhesion as J-B Weld, and also does not shrink when hardening.
J-B Kwik can be used to bond surfaces made from any combination of iron, steel, copper, aluminum, brass, bronze, pewter, plus porcelain, wood, ceramic, glass, marble, PVC, ABS, concrete, fiberglass, fabric, or paper. J-B Kwik is waterproof, petroleum/chemical-resistant (when cured), acid-resistant; plus resists shock, vibration, and extreme temperature fluctuations.
See also
Araldite
References
External links
Adhesives
Privately held companies based in Texas
Polymers
American brands | J-B Weld | Chemistry,Materials_science | 1,537 |
31,037,820 | https://en.wikipedia.org/wiki/Hutchinson%27s%20rule | In ecological theory, the Hutchinson's ratio is the ratio of the size differences between similar species when they are living together as compared to when they are isolated. It is named after G. Evelyn Hutchinson who concluded that various key attributes in species varied according to the ratio of 1:1.1 to 1:1.4. The mean ratio 1.3 can be interpreted as the amount of separation necessary to obtain coexistence of species at the same trophic level.
The variation in trophic structures of sympatric congeneric species is presumed to lead to niche differentiation, allowing multiple similar species to coexist in the same habitat by partitioning food resources. Hutchinson concluded that this size ratio could be used as an indicator of the kind of difference necessary to permit two species to co-occur in different niches but at the same level of the food web. The rule's legitimacy has been questioned, as other categories of objects also exhibit size ratios of roughly 1.3.
Studies of interspecific competition and niche changes in tits (Parus spp.) show that when multiple species of similar size (size ratio 1-1.2) occur in the same community, there is an expected change in foraging; no change was found among less similar species. The authors interpreted this as strong evidence of niche differentiation under interspecific competition, and it is also a good argument for Hutchinson's rule.
The simplest and perhaps the most effective way to differentiate the ecological niches of coexisting species is their morphological differentiation (in particular, size differentiation).
Hutchinson showed that the average body size ratio in species of the same genus that belong to the same community and use the same resource is about 1.3 (ranging from 1.1 to 1.4), and that the respective body weight ratio is about 2. This empirical pattern does not apply to all organisms and ecological situations, however, and it is therefore of particular interest to study the size differentiation of closely related species in different communities and reveal cases meeting Hutchinson's rule.
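The two figures are mutually consistent under simple geometric scaling (a rough check, assuming body weight scales with the cube of linear size):

$$\left(\frac{L_1}{L_2}\right)^3 = 1.3^3 \approx 2.2 \approx 2.$$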
Evidence against Hutchinson's rule
M. Eadie, however, presents evidence that Hutchinson's constant is an artifact of the distribution of the sizes of animate, as well as inanimate, objects in nature. Such a distribution of ratios would simply represent a log-normal distribution, and the variances of these distributions are small. The authors argue that the size ratio Hutchinson suggests does not reveal much about the actual structuring of animal communities.
References
Speciation
Ecology | Hutchinson's rule | Biology | 528 |
47,460,396 | https://en.wikipedia.org/wiki/Agaricus%20bresadolanus | Agaricus bresadolanus (parkland mushroom) is a toxic species of fungus in the genus Agaricus. Its spores are ellipsoid and lack a germ pore, with dimensions of 5.5–7.5 by 4.0–5.0 μm.
It was described by Hungarian mycologist Gábor Bohus in 1969. A rare species, it has been recorded in Asia and southern Europe, where it fruits singly or in groups along paths and in grassy areas of deciduous woodland.
See also
List of Agaricus species
References
External links
bresadolanus
Fungi described in 1969
Fungi of Europe
Fungus species | Agaricus bresadolanus | Biology | 136 |
44,961 | https://en.wikipedia.org/wiki/U.S.%20National%20Ice%20Center | The U.S. National Ice Center (USNIC) is a tri-agency operational center and echelon V command of the Naval Oceanographic Office (NAVOCEANO), whose mission is to provide worldwide navigational ice analyses for the armed forces of the United States, allied nations, and U.S. government agencies.
It is represented by the United States Navy (Department of Defense); the National Oceanic and Atmospheric Administration (Department of Commerce); and the United States Coast Guard (Department of Homeland Security). The U.S. National Ice Center is a subordinate command of the Naval Oceanographic Office (NAVOCEANO). Originally known as the Navy/NOAA Joint Ice Center, which was established on December 15, 1976 in a memorandum of agreement between the U.S. Navy and NOAA, the National Ice Center was formed in 1995 when the U.S. Coast Guard became a partner. The U.S. National Ice Center produces global sea ice charts and various cryospheric GIS products. It also names and tracks Antarctic icebergs, which must be a minimum of 19 kilometers (about 10 nautical miles) along their longest axis to be tracked by the USNIC.
See also
International Ice Charting Working Group
International Ice Patrol
References
External links
United States Coast Guard
National Oceanic and Atmospheric Administration
United States Navy
Government agencies established in 1995
Ice in transportation
1995 establishments in the United States | U.S. National Ice Center | Physics | 289 |
69,412,864 | https://en.wikipedia.org/wiki/Bottom%20simulating%20reflector | Bottom simulating reflectors (BSRs) are, on seismic reflection profiles, shallow seismic reflection events, characterized by their reflection geometry similar to seafloor bathymetry.
. They have, however, the opposite reflection polarity to the seabed reflection, and frequently intersect the primary reflections.
Cause of Reflection
A seismic reflection is a sound wave bounced back from the subsurface at an interface between media with different acoustic properties (density and wave velocity). In geology, reflections normally occur at the contacts between different rocks, for example, between layers of sedimentary rocks (stratification). The acoustic properties of sedimentary rocks are influenced by their rock materials, pore space, and fluid content. Reflections are generally parallel to sedimentary layering or bedding surfaces. The fluid content of the pore space, however, sometimes becomes the dominant influence on the acoustic properties, so that reflections in such cases may not be parallel to bedding surfaces. BSRs are such a case, cross-cutting bedding surfaces.
Drilling results show BSRs approximately marking the base of gas-hydrate-bearing sediments below the seafloor, with the reflection primarily caused by the free gas contained in sediments below the gas-hydrated section. The presence of gas in sediments is well known to drastically lower the sediment's acoustic impedance and hence generate a high-amplitude reflection at the interface with a gas-bearing formation. Formation of gas hydrate in deep sea sediments depends on the ambient pressure and temperature, both of which are largely influenced by the depth below the seafloor. This is the primary reason why BSRs are grossly parallel to the seafloor reflection on seismic profiles.
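A sketch of why a small amount of free gas reverses the reflection polarity (the sediment properties below are illustrative values only, not measurements). At normal incidence, the reflection coefficient between two layers is R = (Z2 − Z1)/(Z2 + Z1), where Z = density x velocity is the acoustic impedance:

```python
def reflection_coefficient(rho1: float, v1: float,
                           rho2: float, v2: float) -> float:
    """Normal-incidence reflection coefficient between an upper layer 1
    and a lower layer 2, with acoustic impedance Z = rho * v."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Seawater over seafloor sediment: impedance increases, positive polarity.
print(reflection_coefficient(1.03, 1500, 1.90, 1700))   # ~ +0.35
# Hydrate-bearing sediment over gas-charged sediment: free gas sharply
# lowers velocity and hence impedance, giving a negative polarity.
print(reflection_coefficient(2.00, 1900, 1.90, 1300))   # ~ -0.21
```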
Formation and Occurrence
Gas hydrates are made of molecules of natural gas, mostly biogenic or thermogenic methane, contained in a solid lattice of water molecules. They are formed by combining methane with water under elevated pressures and at relatively low temperatures. Hence BSRs are widespread in Arctic permafrost regions and in shallow sedimentary columns below the seabed on deepwater continental margins.
Application
Geological hazard studies
Identification of natural gas hydrate in deep sea sediments is crucial for offshore petroleum exploration. Without adequate equipment installed prior to drilling, blowout may occur if penetrating the gas hydrate sediments. Furthermore, presence of gas hydrates in marine sediments may alter sea floor stability, and induce submarine slumping.
Alternative energy resource
Although current production technology has not been proven to be commercially viable, the global occurrence of gas hydrates in deep sea sediments has still been considered a potential alternative energy resource. It should be pointed out that the areal distribution of BSRs alone is not adequate to properly estimate the potential reserve, since other techniques are needed to determine the thickness of the sedimentary columns which contain the hydrates. In addition, seismic acquisition parameters and the acoustic properties of sediments with free gas in their pores may all influence the acoustic impedance contrast, which inevitably affects the reflection amplitude. This causes uncertainty in the relationship between BSRs and the presence of gas hydrate.
Climatic impact
Because gas hydrates are only stable in a range of low temperatures and moderate pressures, atmospheric and ocean warming may destabilize the hydrates and release significant amounts of methane from both permafrost and marine sediments. This could aggravate the greenhouse effect on Earth's climate.
References
Geophysics | Bottom simulating reflector | Physics | 648 |
25,865,458 | https://en.wikipedia.org/wiki/GSA%20Advantage | GSA Advantage is an online government purchasing service run by the General Services Administration (GSA).
GSA Advantage is an online shopping and ordering service created within the GSA for use by government agencies to buy commercial products and services. Its mission is to provide a streamlined purchasing portal for federal agencies to acquire goods and services.
The service is intended to benefit any federal agency that has access to the GSA Advantage program; however, two federal acts have also allowed state and local governments to access and purchase from this service. Section 833 of the John Warner National Defense Authorization Act authorizes the Administrator of General Services to provide state and local governments the use of GSA's Federal Supply Schedules for the purchase of products and services to be used to facilitate recovery from a major disaster, terrorism or nuclear, biological, chemical, or radiological attack. Section 211 of the E-Government Act of 2002 authorized GSA sales of IT products and services to state and local governments through the introduction of cooperative purchasing.
In 2021, The Intercept and video surveillance industry publication IPVN reported that GSA Advantage listed products in non-compliance with the John S. McCain National Defense Authorization Act for Fiscal Year 2019, including re-branded products manufactured by sanctioned entities such as Dahua Technology and Hikvision.
References
External links
Database of NSN on GSA Advantage
General Services Administration
E-commerce | GSA Advantage | Technology | 276 |
9,575,078 | https://en.wikipedia.org/wiki/Hollow%20Moon | The Hollow Moon and the closely related Spaceship Moon are pseudoscientific hypotheses that propose that Earth's Moon is either wholly hollow or otherwise contains a substantial interior space. No scientific evidence exists to support the idea; seismic observations and other data collected since spacecraft began to orbit or land on the Moon indicate that it has a solid, differentiated interior, with a thin crust, extensive mantle, and a dense core which is significantly smaller (in relative terms) than Earth's.
While Hollow Moon hypotheses usually propose the hollow space as the result of natural processes, the related Spaceship Moon hypothesis holds that the Moon is an artifact created by an alien civilization; this belief usually coincides with beliefs in UFOs or ancient astronauts. This idea dates from 1970, when two Soviet authors published a short piece in the popular press speculating that the Moon might be "the creation of alien intelligence"; since then, it has occasionally been endorsed by conspiracy theorists like Jim Marrs and David Icke.
An at least partially hollow Moon has made many appearances in science fiction, the earliest being H. G. Wells' 1901 novel The First Men in the Moon, which borrowed from earlier works set in a Hollow Earth, such as Ludvig Holberg's 1741 novel Niels Klim's Underground Travels.
Both the Hollow Moon and Hollow Earth theories are now universally considered to be fringe or conspiracy theories.
Claims and rebuttals
Density
The fact that the Moon is less dense than the Earth is advanced by conspiracy theorists as support for claims of a hollow Moon. The Moon's mean density is 3.3 g/cm³, whereas the Earth's is 5.5 g/cm³. Mainstream science attributes this difference to the Earth's heavy iron core: the rocks of Earth's upper mantle and crust are far less dense than the core, the Moon's density is comparable to that of such rocky material, and the Moon lacks a proportionally large iron core.
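The figure for the Moon follows directly from its measured mass and radius (standard values, shown here as a consistency check):

$$\rho = \frac{M}{\tfrac{4}{3}\pi R^{3}} \approx \frac{7.35 \times 10^{22}\ \text{kg}}{\tfrac{4}{3}\pi \left(1.737 \times 10^{6}\ \text{m}\right)^{3}} \approx 3.3\ \text{g/cm}^{3}.$$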
The Moon rang like a bell
Between 1969 and 1977, seismometers installed on the Moon by the Apollo missions recorded moonquakes. The Moon was described as "ringing like a bell" during some of those quakes, specifically the shallow ones. This phrase was brought to popular attention in March 1970 in an article in Popular Science.
On November 20, 1969, Apollo 12 deliberately crashed the Ascent Stage of its Lunar Module onto the Moon's surface; NASA reported that the Moon rang 'like a bell' for almost an hour, leading to arguments that it must be hollow like a bell. Lunar seismology experiments since then have shown that the lunar body has shallow moonquakes that act differently from quakes on Earth, due to differences in texture, type and density of the planetary strata, but there is no evidence of any large empty space inside the body.
Vasin-Shcherbakov "spaceship" conjecture
In 1970, Michael Vasin and Alexander Shcherbakov, of the Soviet Academy of Sciences, advanced a hypothesis that the Moon is a spaceship created by unknown beings. The article was entitled "Is the Moon the Creation of Alien Intelligence?" and was published in Sputnik, the Soviet equivalent of Reader's Digest. The Vasin-Shcherbakov hypothesis was reported in the West that same year.
The authors reference earlier speculation by astrophysicist Iosif Shklovsky, who suggested that the Martian moon Phobos was an artificial satellite and hollow; this has since been shown not to be the case. Skeptical author Jason Colavito points out that all of their evidence is circumstantial, and that, in the 1960s, the atheistic Soviet Union promoted the ancient astronaut concept in an attempt to undermine the West's faith in religion.
"Perfect" solar eclipses
In 1965, author Isaac Asimov observed: "What makes a total eclipse so remarkable is the sheer astronomical accident that the Moon fits so snugly over the Sun. The Moon is just large enough to cover the Sun completely (at times) so that a temporary night falls and the stars spring out. [...] The Sun's greater distance makes up for its greater size and the result is that the Moon and the Sun appear to be equal in size. [...] There is no astronomical reason why Moon and Sun should fit so well. It is the sheerest of coincidence, and only the Earth among all the planets is blessed in this fashion."
Since the 1970s, conspiracy theorists have cited Asimov's observations on solar eclipses as evidence of the Moon's artificiality. Mainstream astronomers reject this interpretation. They note that the angular diameters of Sun and Moon vary by several percent over time and do not actually "perfectly" match during eclipses. Nor is Earth the only planet with such a satellite: Saturn's moon Prometheus has roughly the same angular diameter as the Sun when viewed from Saturn.
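The near match, and its imperfection, are easy to quantify (approximate mean values; both distances vary over the elliptical orbits, so the apparent sizes change by several percent). For a body of diameter $D$ at distance $d$, the angular diameter is $\theta \approx D/d$, giving

$$\theta_{\text{Moon}} \approx \frac{3474\ \text{km}}{384{,}400\ \text{km}} \approx 0.52^{\circ}, \qquad \theta_{\text{Sun}} \approx \frac{1.39 \times 10^{6}\ \text{km}}{1.496 \times 10^{8}\ \text{km}} \approx 0.53^{\circ}.$$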
Some scholars have claimed that "the conditions required for perfect solar eclipses are the same conditions generally acknowledged to be necessary for intelligent life to emerge". If so, the Moon's size and orbit might be best explained by the weak anthropic principle.
Scientific perspective
Multiple lines of evidence demonstrate that the Moon is a solid body which formed from an impact between Earth and a planetoid.
Origin of the Moon
Historically, it was theorized that the Moon originated when a rapidly-spinning Earth expelled a piece of its mass. This was proposed by George Darwin (son of the famous biologist Charles Darwin) in 1879 and retained some popularity until Apollo. The Austrian geologist Otto Ampferer in 1925 also suggested the emergence of the Moon as a cause of continental drift. A second hypothesis argued that the Earth and the Moon formed together as a double system from the primordial accretion disk of the Solar System. Finally, a third hypothesis suggested that the Moon may have been a planetoid captured by Earth's gravity.
The modern explanation for the origin of the Moon is usually the giant-impact hypothesis, which argues a Mars-sized body struck the Earth, making a debris ring that eventually collected into a single natural satellite, the Moon. The giant-impact hypothesis is currently the favored scientific hypothesis for the formation of the Moon.
Internal structure
Multiple lines of evidence disprove that the Moon is hollow. One involves moment of inertia parameters; the other involves seismic observations. The moment of inertia parameters indicate that the core of the Moon is both dense and small, with the rest of the Moon consisting of material with nearly-constant density. Seismic observations have been made, constraining the thickness of the Moon's crust, mantle and core, demonstrating it could not be hollow.
Mainstream scientific opinion on the internal structure of the Moon overwhelmingly supports a solid internal structure with a thin crust, an extensive mantle and a small denser core.
Moment of inertia factor
The moment of inertia factor is a number, ranging from 0 to 0.67, that represents the distribution of mass in a spherical body. A moment of inertia factor of 0 represents a body with all its mass concentrated at its central core, while a factor of 0.67 represents a perfectly hollow sphere. A moment of inertia factor of 0.4 corresponds to a sphere of uniform density, while factors less than 0.4 represent bodies with cores that are denser than their surfaces. The Earth, with its dense inner core, has a moment of inertia factor of 0.3307.
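The two benchmark values follow from standard formulas for the moment of inertia $I$ of a sphere of mass $M$ and radius $R$ (a quick sketch):

$$I_{\text{uniform solid}} = \tfrac{2}{5} M R^{2} \;\Rightarrow\; \frac{I}{M R^{2}} = 0.4, \qquad I_{\text{thin hollow shell}} = \tfrac{2}{3} M R^{2} \;\Rightarrow\; \frac{I}{M R^{2}} \approx 0.67.$$

The Moon's measured factor of about 0.394 (see below) is thus very close to the uniform solid case and far from the hollow one.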
In 1965, astronomer Wallace John Eckert attempted to calculate the lunar moment of inertia factor using a novel analysis of the Moon's perigee and node. His calculations suggested the Moon might be hollow, a result Eckert rejected as absurd. By 1968, other methods had allowed the Moon's moment of inertia factor to be accurately calculated at its accepted value.
From 1969 to 1973, five retroreflectors were installed on the Moon during the Apollo program (11, 14, and 15) and Lunokhod 1 and 2 missions. These reflectors made it possible to measure the distance between the surfaces of the Earth and the Moon using extremely precise laser ranging. True (physical) libration of the Moon measured via Lunar laser ranging constrains the moment of inertia factor to 0.394 ± 0.002. This is very close to the value for a solid object with radially constant density, which would be 0.4.
Seismic activity
From 1969 through 1972, Apollo astronauts installed several seismographic measuring systems on the Moon, and their data were made available to scientists (such as those from the Apollo Lunar Surface Experiments Package). The Apollo 11 instrument functioned through August of the landing year. The instruments placed by the Apollo 12, 14, 15, and 16 missions were functional until they were switched off in 1977.
The existence of moonquakes was an unexpected discovery from seismometers. Analysis of lunar seismic data has helped constrain the thickness of the crust (~45 km) and mantle, as well as the core radius (~330 km).
Doppler Gravity Experiment
In 1998, the United States launched the Lunar Prospector, which hosted the Doppler Gravity Experiment (DGE), the first polar, low-altitude mapping of the lunar gravity field. The data obtained by the Prospector DGE constituted the "first truly operational gravity map of the Moon". The purpose of the Lunar Prospector DGE was to learn about the surface and internal mass distribution of the Moon. This was accomplished by measuring the Doppler shift in the S-band tracking signal as it reaches Earth, which can be converted to spacecraft accelerations. The accelerations can be processed to provide estimates of the lunar gravity field. Estimates of the surface and internal mass distribution give information on the crust, lithosphere, and internal structure of the Moon.
In popular culture
Fiction
H.G. Wells, The First Men in The Moon (1901). Wells describes fictional insectoids who live inside a hollow Moon.
Edgar Rice Burroughs, The Moon Maid (1926). A fantasy story set in the interior of a postulated hollow Moon which had an atmosphere and was inhabited.
Nikolay Nosov, Dunno on the Moon (1965). A Russian fairytale novel with a hollow Moon.
Isaac Asimov, Foundation and Earth (1986). Science fiction in which robot R. Daneel Olivaw is depicted living inside a partially hollow Moon.
David Weber, Mutineers' Moon (1991). Science fiction in which the Moon is a giant spaceship, which arrived 50,000 years ago.
Moonfall (2022). Science fiction film portraying the Moon as a Dyson sphere enclosing a white dwarf.
Conspiracy theory
Don Wilson, Our Mysterious Spaceship Moon (1975) and Secrets of Our Spaceship Moon (1979), inspired by Vasin-Shcherbakov, Wilson popularized the Spaceship Moon hypothesis.
George H. Leonard, Somebody Else Is On The Moon (1976) Argues the Moon is inhabited by an Alien race, but NASA has covered up this fact.
Fred Steckling, We Discovered Alien Bases on the Moon (1981)
Jim Marrs, Alien Agenda (1997). Long-time JFK conspiracy theorist Marrs embraced the Spaceship Moon conspiracy theory.
Christopher Knight & Alan Butler, Who Built the Moon? (2005). They suggest humans from the future traveled into the past to build the Moon in order to safeguard human evolution.
David Icke, Human Race Get off Your Knees – The Lion Sleeps No More (2010). Icke suggests that the Moon is in fact a space station from which Reptilians manipulate human thought.
References
Moon myths
Obsolete scientific theories
Pseudoscience
Science fiction themes
Moon
Space and astronomy conspiracy theories | Hollow Moon | Astronomy,Technology | 2,365 |
21,980,575 | https://en.wikipedia.org/wiki/Tibetan%20silver | Tibetan silver (Chinese Zangyin) in modern usage refers to a variety of white non-precious metal alloys used primarily in jewelry components, with an appearance similar to aged silver.
Description
Silver in Tibet
In ancient times silver was imported from regions near modern Iran (Bactria, Khorasan), and an association of silverwork with Iran appears to have developed. Silver was imported from China (as ingots), India (tankas), and from Mongolia and Siberia. Some silver was mined in Tibet, but imports were required to satisfy the country's requirements for minting.
In addition to coinage, silver was used in Tibet for repoussé work, and as an inlay in brass and copper statues.
Historically 'Tibetan Silver' did contain silver, and some old items may be predominantly silver.
Modern usage
'Tibetan Silver' includes copper-tin and copper-nickel alloys; zinc alloys; and other alloy compositions, as well as base metals such as iron plated with a silver alloy. An X-ray fluorescence analysis showed that six of seven items acquired online and described as 'Tibetan silver' were alloys containing primarily copper, nickel, and zinc.
There are potential health hazards associated with Tibetan silver because the composition of the alloy is undefined or uncertain. These hazards include allergies due to nickel, but could also include other serious risks such as the presence of lead or arsenic in the alloy.
Zangyin
Zangyin is a Chinese term for 'Tibetan silver'; it seems to originate from a scholar's term for the inferior silver, adulterated with a high proportion of copper, that was used for Tibetan coinage in the late Qing dynasty period.
References
Sources
Copper alloys
Jewellery making | Tibetan silver | Chemistry | 337 |
57,759 | https://en.wikipedia.org/wiki/Biophoton | Biophotons (from the Greek βίος meaning "life" and φῶς meaning "light") are photons of light in the ultraviolet and low visible light range that are produced by a biological system. They are non-thermal in origin, and the emission of biophotons is technically a type of bioluminescence, though the term "bioluminescence" is generally reserved for higher luminance systems (typically with emitted light visible to the naked eye, using biochemical means such as luciferin/luciferase). The term biophoton used in this narrow sense should not be confused with the broader field of biophotonics, which studies the general interaction of light with biological systems.
Biological tissues typically produce an observed radiant emittance in the visible and ultraviolet frequencies ranging from 10⁻¹⁷ to 10⁻²³ W/cm² (approximately 1–1000 photons/cm²/second). This low level of light has a much weaker intensity than the visible light produced by bioluminescence, but biophotons are detectable above the background of thermal radiation that is emitted by tissues at their normal temperature.
While detection of biophotons has been reported by several groups, hypotheses that such biophotons indicate the state of biological tissues and facilitate a form of cellular communication are still under investigation. Alexander Gurwitsch, who discovered the existence of biophotons, was awarded the Stalin Prize in 1941 for his work.
Detection and measurement
Biophotons may be detected with photomultipliers or by means of an ultra low noise CCD camera to produce an image, using an exposure time of typically 15 minutes for plant materials. Photomultiplier tubes have been used to measure biophoton emissions from fish eggs, and some applications have measured biophotons from animals and humans. Electron Multiplying CCD (EM-CCD) optimized for the detection of ultraweak light have also been used to detect the bioluminescence produced by yeast cells at the onset of their growth.
The typical observed radiant emittance of biological tissues in the visible and ultraviolet frequencies ranges from 10⁻¹⁷ to 10⁻²³ W/cm² with a photon count from a few to nearly 1000 photons per cm² in the range of 200 nm to 800 nm.
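The two unit systems quoted above are linked by the energy of a single photon, E = hc/λ. A minimal sketch of the conversion, assuming 500 nm photons for concreteness (the correspondence shifts with wavelength across the 200–800 nm band):

    # Convert a radiant emittance in W/cm^2 to a photon flux, assuming 500 nm.
    H = 6.626e-34               # Planck constant, J*s
    C = 2.998e8                 # speed of light, m/s
    e_photon = H * C / 500e-9   # ~4e-19 J per 500 nm photon

    for emittance in (1e-17, 1e-20, 1e-23):   # W/cm^2
        print(f"{emittance:g} W/cm^2 -> {emittance / e_photon:.2e} photons/cm^2/s")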
Proposed physical mechanisms
Chemi-excitation via oxidative stress by reactive oxygen species or catalysis by enzymes (i.e., peroxidase, lipoxygenase) is a common event in the biomolecular milieu. Such reactions can lead to the formation of triplet excited species, which release photons upon returning to a lower energy level in a process analogous to phosphorescence. That this process is a contributing factor to spontaneous biophoton emission has been indicated by studies demonstrating that biophoton emission can be increased by depleting assayed tissue of antioxidants or by addition of carbonyl derivatizing agents. Further support is provided by studies indicating that emission can be increased by addition of reactive oxygen species.
Plants
Imaging of biophotons from leaves has been used as a method for assaying R gene responses. These genes and their associated proteins are responsible for pathogen recognition and activation of defense signaling networks leading to the hypersensitive response, which is one of the mechanisms of the resistance of plants to pathogen infection. It involves the generation of reactive oxygen species (ROS), which have crucial roles in signal transduction or as toxic agents leading to cell death.
Biophotons have been also observed in the roots of stressed plants. In healthy cells, the concentration of ROS is minimized by a system of biological antioxidants. However, heat shock and other stresses changes the equilibrium between oxidative stress and antioxidant activity, for example, the rapid rise in temperature induces biophoton emission by ROS.
Hypothesized involvement in cellular communication
In the 1920s, the Russian embryologist Alexander Gurwitsch reported "ultraweak" photon emissions from living tissues in the UV-range of the spectrum. He named them "mitogenetic rays" because his experiments convinced him that they had a stimulating effect on cell division.
In the 1970s Fritz-Albert Popp and his research group at the University of Marburg (Germany) showed that the spectral distribution of the emission fell over a wide range of wavelengths, from 200 to 750 nm. Popp's work on the biophoton emission's statistical properties, namely the claims on its coherence, was criticised for lack of scientific rigour.
One biophoton mechanism focuses on injured cells that are under higher levels of oxidative stress, which is one source of light, and can be deemed to constitute a "distress signal" or background chemical process, but this mechanism is yet to be demonstrated. The difficulty of teasing out the effects of any supposed biophotons amid the other numerous chemical interactions between cells makes it difficult to devise a testable hypothesis. A 2010 review article discusses various published theories on this kind of signaling.
The hypothesis of cellular communication by biophotons was highly criticised for failing to explain how cells could detect photonic signals several orders of magnitude weaker than the natural background illumination.
See also
Chemiluminescence
Luminophore
Phosphorescence
References
Further reading
External links
Bioluminescence
Photons | Biophoton | Chemistry,Biology | 1,118 |
39,082,282 | https://en.wikipedia.org/wiki/Bony%E2%80%93Brezis%20theorem | In mathematics, the Bony–Brezis theorem, due to the French mathematicians Jean-Michel Bony and Haïm Brezis, gives necessary and sufficient conditions for a closed subset of a manifold to be invariant under the flow defined by a vector field, namely at each point of the closed set the vector field must have non-positive inner product with any exterior normal vector to the set. A vector is an exterior normal at a point of the closed set if there is a real-valued continuously differentiable function maximized locally at the point with that vector as its derivative at the point. If the closed subset is a smooth submanifold with boundary, the condition states that the vector field should not point outside the subset at boundary points. The generalization to non-smooth subsets is important in the theory of partial differential equations.
The theorem had in fact been previously discovered by Mitio Nagumo in 1942 and is also known as the Nagumo theorem.
Statement
Let F be a closed subset of a C² manifold M and let X be a vector field on M which is Lipschitz continuous. The following conditions are equivalent:
Every integral curve of X starting in F remains in F.
(X(m),v) ≤ 0 for every exterior normal vector v at a point m in F.
Proof
To prove that the first condition implies the second, let c(t) be an integral curve with c(0) = x in F and dc/dt = X(c). Let g have a local maximum on F at x. Then g(c(t)) ≤ g(c(0)) for t small and positive. Differentiating, this implies that g′(x)⋅X(x) ≤ 0.
To prove the reverse implication, since the result is local, it is enough to check it in Rⁿ. In that case X locally satisfies a Lipschitz condition

|X(x) − X(y)| ≤ C⋅|x − y|.
If F is closed, the distance function D(x) = d(x, F)² has the following differentiability property:

D(x + h) = D(x) + min 2(x − z, h) + o(|h|),

where the minimum is taken over the closest points z to x in F.
To check this, let

fε(h) = min 2(x − z, h),

where the minimum is taken over z in F such that d(x, z) ≤ d(x, F) + ε.
Since fε is homogeneous in h and increases uniformly to f0 on any sphere,

fε(h) ≥ f0(h) − C(ε)|h|,

with a constant C(ε) tending to 0 as ε tends to 0.
This differentiability property follows because, taking z to be a closest point to x in F,

D(x + h) ≤ |x + h − z|² = D(x) + 2(x − z, h) + |h|²,

and similarly, if |h| ≤ ε, taking z to be a closest point to x + h in F (so that d(x, z) ≤ d(x, F) + 2|h|),

D(x + h) = |x + h − z|² ≥ D(x) + 2(x − z, h) ≥ D(x) + f₂ε(h).
The differentiability property implies that the derivative from the right of D(c(t)) satisfies

(d/dt)₊ D(c(t)) = min 2(c(t) − z, X(c(t))),

minimized over closest points z to c(t). For any such z,

2(c(t) − z, X(c(t))) = 2(c(t) − z, X(z)) + 2(c(t) − z, X(c(t)) − X(z)).
Since −|y − c(t)|² has a local maximum on F at y = z, c(t) − z is an exterior normal vector at z, so the first term on the right hand side is non-positive. The Lipschitz condition for X implies the second term is bounded above by 2C⋅D(c(t)). Thus the derivative from the right of

e^(−2Ct) D(c(t))

is non-positive, so it is a non-increasing function of t. Thus if c(0) lies in F, then D(c(0)) = 0 and hence D(c(t)) = 0 for t > 0, i.e. c(t) lies in F for t > 0.
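The theorem is easy to illustrate numerically. A minimal Python sketch, assuming F is the closed unit disk in R² and X is the rotation field X(x, y) = (−y, x); every exterior normal at a boundary point is radial there, so (X(m), v) = 0 ≤ 0 and integral curves starting in F should remain in F:

    import math

    def X(p):                     # rotation field, tangent to circles
        x, y = p
        return (-y, x)

    def D(p):                     # D(p) = d(p, F)^2 for F the closed unit disk
        return max(0.0, math.hypot(*p) - 1.0) ** 2

    p, dt = (1.0, 0.0), 1e-3      # start on the boundary of F
    for _ in range(10_000):       # classical Runge-Kutta (RK4) integration
        k1 = X(p)
        k2 = X((p[0] + dt * k1[0] / 2, p[1] + dt * k1[1] / 2))
        k3 = X((p[0] + dt * k2[0] / 2, p[1] + dt * k2[1] / 2))
        k4 = X((p[0] + dt * k3[0], p[1] + dt * k3[1]))
        p = (p[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             p[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

    print(D(p))                   # stays (numerically) 0: the curve stays in F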
References
Literature
, Theorem 8.5.11
See also
Barrier certificate
Ordinary differential equations
Dynamical systems
Manifolds | Bony–Brezis theorem | Physics,Mathematics | 709 |
1,781,446 | https://en.wikipedia.org/wiki/Don%20Box | Don Box is a former Microsoft Technical Fellow.
Before joining Microsoft in 2002, Box was a contributing editor and columnist at Microsoft Systems Journal, which later became MSDN Magazine, and was one of the founders of DevelopMentor, a software training company; he left DevelopMentor to join Microsoft.
Box led the Xbox One platform development team from its inception through launch on November 22, 2013. Prior to that, Box led the development team that produced the Xbox SmartGlass platform (which was the basis for what is now known as Project Rome), worked on Brad Lovering's team on model-driven runtime and tool support at Microsoft, including "Oslo", and was an architect on Windows Communication Foundation (formerly known as "Indigo") and related technologies.
In 2017–2021, Box was Vice President of Engineering for Mixed Reality, where he led the engineering team that builds HoloLens, Windows Mixed Reality, Windows Hello, and other initiatives that live at the edge of the physical and digital worlds.
In 2014–2017, he led the Silicon, Graphics and Media development team in the Windows and Devices Group, and was responsible for driving silicon/hardware/software co-engineering across the Windows product line. Leading up to the launch of Windows 10, Box drove the initiative to converge Windows, Windows Phone, Xbox, and HoloLens onto a common set of components known as OneCore.
On March 31, 2021, Box announced that he would be leaving Microsoft in May.
On May 3, 2021, Box joined Meta as VP of Engineering, AR Glasses.
Along with Bob Atkinson, Mohsen Al-Ghosein, and Dave Winer, Box was one of the original four designers of SOAP, a basic messaging layer for web services. At 2001 TechEd Europe, Box performed a talk about XML and SOAP from a bathtub.
Box was a conspicuous figure in the Component Object Model (COM) community, where he coined the phrase "COM is love". He is also a series editor for Addison Wesley where he launched two successful series targeting the Microsoft developer audience.
Books
Essential .NET, Volume I: The Common Language Runtime, with Chris Sells
Essential COM
Essential XML: Beyond MarkUp
Effective COM: 50 Ways to Improve Your COM and MTS-based Applications, with Keith Brown, Tim Ewald and Chris Sells
References
External links
Biographical interview with Don Box
NET Rocks! interview, Don Box and Chris Sells talk to Carl and Richard
Microsoft employees
Microsoft technical fellows
Microsoft evangelists
Living people
Year of birth missing (living people)
Place of birth missing (living people) | Don Box | Technology | 522 |
31,926,273 | https://en.wikipedia.org/wiki/Instituto%20Bioclon | The Instituto Bioclon S.A. de C.V. (Bioclon Institute) was formed in 1990 to research and develop F(ab′)2 antivenoms. On May 6, 2015, it received approval from the FDA to commercialize Anavip, becoming its second drug approved by the FDA after ANASCORP. Both are commercialized in the US by Rare Disease Therapeutics, Inc. The company is performing clinical trials to get approval for a third drug, ANALATRO, designed to treat black widow spider envenomation.
Operations
The Instituto Bioclon is located in Mexico City, Mexico and has a Certificación Internacional de Buenas Prácticas de Manufactura (International Certificate for Good Manufacturing Practices), which was granted to Bioclon by the Instituto Nacional de Vigilancia de Medicamentos y Alimentos (INVIMA, the National Food and Drug Monitoring Institute) of the Ministry of Health and Social Protection of Colombia, as well as by COFEPRIS in Mexico.
Products
Anavip – timber rattlesnake antivenom
Coralmyn – coral snake antivenom
Antivipmyn – pit viper antivenom
Antivipmyn TRI – Central and South American snakes
Antivipmyn Africa – African snakes
Alacramyn – scorpion antivenom
Reclusmyn – brown recluse antivenom
Aracmyn – black widow antivenom
The Bioclon Institute is the only Mexican company that has obtained an "orphan drug" status from the Food and Drug Administration (FDA) of the United States for its products.
See also
Alejandro Alagón Cano
Lourival Possani Postay
References
External links
Instituto Bioclon
Food and Drug Administration
Laboratorios Silanes
Rare Disease Therapeutics, Inc
Toxicology organizations
Organizations based in Mexico City
Mexican companies established in 1990
Health care companies established in 1990 | Instituto Bioclon | Environmental_science | 379 |
12,570,355 | https://en.wikipedia.org/wiki/9%20Vulpeculae | 9 Vulpeculae is a star in the northern constellation of Vulpecula, located about 560 light years away based on parallax. It is visible to the naked eye as a faint, blue-white hued star with a baseline apparent visual magnitude of 5.01. The star is moving further from the Earth with a heliocentric radial velocity of +5 km/s.
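The quoted distance follows from the parallax by d [pc] = 1/p [arcsec]. A minimal sketch of the arithmetic, assuming a parallax of about 5.8 milliarcseconds (an illustrative value consistent with the quoted distance, not the catalog figure):

    # Parallax-to-distance conversion; the 5.8 mas input is an assumption.
    PC_TO_LY = 3.2616                 # light years per parsec

    parallax_mas = 5.8
    distance_pc = 1000.0 / parallax_mas
    print(f"{distance_pc * PC_TO_LY:.0f} ly")   # ~560 ly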
This is a B-type star with a stellar classification of B8 IIIn, where the 'n' notation indicates "nebulous" lines due to rapid rotation. It has a high rate of spin with a projected rotational velocity of 185 km/s. The star is radiating 216 times the Sun's luminosity from its photosphere at an effective temperature of . This is a suspected variable star of unknown type, ranging in magnitude from 4.99 down to 5.08.
9 Vulpeculae has two reported companions: component B, with a separation of 9.3" and magnitude 13.4, and C, with a separation of 108" and a magnitude of 12.5. Both are unrelated background objects.
References
B-type giants
Suspected variables
Vulpecula
Durchmusterung objects
Vulpeculae, 9
184606
096275
7437 | 9 Vulpeculae | Astronomy | 267 |
4,786,318 | https://en.wikipedia.org/wiki/Linear%20dynamical%20system | Linear dynamical systems are dynamical systems whose evolution functions are linear. While dynamical systems, in general, do not have closed-form solutions, linear dynamical systems can be solved exactly, and they have a rich set of mathematical properties. Linear systems can also be used to understand the qualitative behavior of general dynamical systems, by calculating the equilibrium points of the system and approximating it as a linear system around each such point.
Introduction
In a linear dynamical system, the variation of a state vector x (an N-dimensional vector) equals a constant matrix A multiplied by x. This variation can take two forms: either as a flow, in which x varies continuously with time t,

dx/dt = A x(t),

or as a mapping, in which x varies in discrete steps,

xₘ₊₁ = A xₘ.

These equations are linear in the following sense: if x(t) and y(t) are two valid solutions, then so is any linear combination of the two solutions, e.g., z(t) = αx(t) + βy(t), where α and β are any two scalars. The matrix A need not be symmetric.
Linear dynamical systems can be solved exactly, in contrast to most nonlinear ones. Occasionally, a nonlinear system can be solved exactly by a change of variables to a linear system. Moreover, the solutions of (almost) any nonlinear system can be well-approximated by an equivalent linear system near its fixed points. Hence, understanding linear systems and their solutions is a crucial first step to understanding the more complex nonlinear systems.
Solution of linear dynamical systems
If the initial vector x(0) is aligned with a right eigenvector rk of the matrix A, the dynamics are simple:

dx/dt = A x = λk x,

where λk is the corresponding eigenvalue; the solution of this equation is

x(t) = x(0) e^(λk t),

as may be confirmed by substitution.

If A is diagonalizable, then any vector in an N-dimensional space can be represented by a linear combination of the right and left eigenvectors (denoted rk and lk) of the matrix A. Therefore, the general solution for x(t) is a linear combination of the individual solutions for the right eigenvectors:

x(t) = Σk (lk ⋅ x(0)) e^(λk t) rk.
Similar considerations apply to the discrete mappings.
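A minimal numerical sketch of this eigenvector solution, using an illustrative diagonalizable matrix and checking the result against the matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])           # example system matrix
    x0 = np.array([1.0, 0.0])
    t = 1.5

    lam, R = np.linalg.eig(A)              # columns of R are right eigenvectors
    c = np.linalg.solve(R, x0)             # coefficients of x0 in the eigenbasis
    x_t = (R * np.exp(lam * t)) @ c        # sum_k c_k e^(lambda_k t) r_k

    print(np.allclose(x_t, expm(A * t) @ x0))   # True

Here solving R c = x(0) plays the role of the left-eigenvector inner products lk ⋅ x(0).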
Classification in two dimensions
The roots of the characteristic polynomial det(A − λI) are the eigenvalues of A. The sign and relation of these roots, λ1 and λ2, to each other may be used to determine the stability of the dynamical system.

For a 2-dimensional system, the characteristic polynomial is of the form λ² − τλ + Δ = 0, where τ is the trace and Δ is the determinant of A. Thus the two roots are of the form:

λ1 = (τ + √(τ² − 4Δ)) / 2,
λ2 = (τ − √(τ² − 4Δ)) / 2,

and λ1 + λ2 = τ and λ1 λ2 = Δ. Thus if Δ < 0 then the eigenvalues are of opposite sign, and the fixed point is a saddle. If Δ > 0 then the eigenvalues are of the same sign. Therefore, if τ > 0 both are positive and the point is unstable, and if τ < 0 then both are negative and the point is stable. The discriminant τ² − 4Δ tells you whether the point is nodal or spiral (i.e. whether the eigenvalues are real or complex).
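A minimal sketch of this trace-determinant classification in Python, with illustrative matrices (the borderline case τ = 0, Δ > 0 is labeled a center):

    import numpy as np

    def classify(A):
        tau, delta = np.trace(A), np.linalg.det(A)
        if delta < 0:
            return "saddle"
        kind = "node" if tau ** 2 - 4 * delta >= 0 else "spiral"
        if tau < 0:
            return "stable " + kind
        if tau > 0:
            return "unstable " + kind
        return "center (purely imaginary eigenvalues)"

    print(classify(np.array([[0, 1], [-2, -3]])))   # stable node
    print(classify(np.array([[1, -5], [2, -1]])))   # tau = 0, delta = 9: center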
See also
Linear system
Dynamical system
List of dynamical system topics
Matrix differential equation
Dynamical systems | Linear dynamical system | Physics,Mathematics | 597 |
2,849,297 | https://en.wikipedia.org/wiki/Oppositional%20defiant%20disorder | Oppositional defiant disorder (ODD) is listed in the DSM-5 under Disruptive, impulse-control, and conduct disorders and defined as "a pattern of angry/irritable mood, argumentative/defiant behavior, or vindictiveness." This behavior is usually targeted toward peers, parents, teachers, and other authority figures, including law enforcement officials. Unlike conduct disorder (CD), those with ODD do not generally show patterns of aggression towards random people, violence against animals, destruction of property, theft, or deceit. One-half of children with ODD also fulfill the diagnostic criteria for ADHD.
History
Oppositional defiant disorder was first defined in the DSM-III (1980). Since the introduction of ODD as an independent disorder, the field trials to inform its definition have included predominantly male subjects. Some clinicians have debated whether the diagnostic criteria would be clinically relevant for use with women, and furthermore, some have questioned whether gender-specific criteria and thresholds should be included. Additionally, some clinicians have questioned the preclusion of ODD when conduct disorder is present. According to Dickstein, the DSM-5 attempts to:
Epidemiology
ODD is a pattern of negative, defiant, disobedient, and hostile behavior, and it is one of the most prevalent disorders from preschool age to adulthood. This can include frequent temper tantrums, excessive arguing with adults, refusing to follow rules, purposefully upsetting others, getting easily irked, having an angry attitude, and vindictive acts. Children with ODD usually begin showing symptoms around age 6 to 8, although the disorder can emerge in younger children too. Symptoms can last throughout teenage years. The pooled prevalence is 3.6% up to age 18.
Oppositional defiant disorder has a prevalence of 1–11%. The average prevalence is approximately 3%. Gender and age play an important role in the rate of the disorder. ODD gradually develops and becomes apparent in preschool years, often before the age of eight years old. However, it is very unlikely to emerge following early adolescence.
There is a difference in prevalence between boys and girls, with a ratio of 1.4 to 1 before adolescence. Other research suggests a 2:1 ratio. Prevalence in girls tends to increase after puberty. Researchers have found that the general prevalence of ODD throughout cultures remains constant. However, the gendered disparities in diagnoses are only seen in Western cultures. It is unknown whether this reflects underlying differences in incidence or under-diagnosis of girls. Physical abuse at home is a significant predictor of diagnosis for girls only, and emotional responsiveness of parents is a significant predictor of diagnosis for boys only, which may have implications for how gendered socialization and received gender roles affect ODD symptoms and outcomes.
Children from lower-income backgrounds are more likely to be diagnosed with ODD. The correlative link between low income and ODD diagnosis is direct in boys, but in girls, the link is more complex; the diagnosis is associated with specific parental techniques such as corporal punishment which are in turn linked to lower income households. This disparity may be linked to a more general tendency of boys and men to display more externalized psychiatric symptoms, and girls to display more internalized ones (such as self-harm or anorexia nervosa).
In the United States, African Americans and Latinos are more likely to receive diagnoses of ODD or other conduct disorders compared to non-Hispanic White youth with the same symptoms, who are more likely to be diagnosed with ADHD. This has wide-ranging implications about the role of racial bias in how certain behaviors are perceived and categorized as either defiant or inattentive/hyperactive.
Prevalence of ODD and conduct disorder are significantly higher among children in foster care. One survey in Norway found that 14 percent met the criteria, and other studies have found a prevalence of up to 17 or even 29 percent. Low parental attachment and parenting style are strong predictors of ODD symptoms.
Earlier conceptions of ODD had higher rates of diagnosis. When the disorder was first included in the DSM-III, the prevalence was 25% higher than when the DSM-IV revised the criteria of diagnosis. The DSM-V made more changes to the criteria, grouping certain characteristics together in order to demonstrate that people with ODD display both emotional and behavioral symptoms. In addition, criteria were added to help guide clinicians in diagnosis because of the difficulty found in identifying whether the behaviors or other symptoms are directly related to the disorder or simply a phase in a child's life. Consequently, future studies may find that there was also a decline in prevalence between the DSM-IV and the DSM-V.
Signs and symptoms
The fourth revision of the Diagnostic and Statistical Manual (DSM-IV-TR) (now replaced by DSM-5) states that a person must exhibit four out of the eight signs and symptoms to meet the diagnostic threshold for ODD. These symptoms include:
Often loses temper
Is often touchy or easily annoyed
Is often angry and resentful
Often argues with authority figures or, for children and adolescents, with adults
Often actively defies or refuses to comply with requests from authority figures or with rules
Often deliberately annoys others
Often blames others for their own mistakes or misbehavior
Has been spiteful or vindictive at least twice within the past six months
These behaviors are mostly directed towards an authority figure such as a teacher or a parent. Although these behaviors can be typical among siblings, they must be observed with individuals other than siblings for an ODD diagnosis. Children with ODD can be verbally aggressive. However, they do not display physical aggressiveness, a behavior observed in conduct disorder. Furthermore, they must be perpetuated for longer than six months and must be considered beyond a normal child's age, gender and culture to fit the diagnosis. For children under five years of age, they must occur on most days over a period of six months. For children over five years of age, they must occur at least once a week for at least six months. If symptoms are confined to only one setting, most commonly home, it is considered mild in severity. If it is observed in two settings, it is characterized as moderate, and if the symptoms are observed in three or more settings, it is considered severe.
These patterns of behavior result in impairment at school or other social venues.
Etiology
There is no specific element that has yet been identified as directly causing ODD. Research looking precisely at the etiological factors linked with ODD is limited. The literature often examines common risk factors linked with all disruptive behaviors, rather than ODD specifically. Symptoms of ODD are also often believed to be the same as CD, even though the disorders have their own respective set of symptoms. When looking at disruptive behaviors such as ODD, research has shown that the causes of behaviors are multi-factorial. However, disruptive behaviors have been identified as being mostly due either to biological or environmental factors.
Genetic influences
Research indicates that parents pass on a tendency for externalizing disorders to their children that may be displayed in multiple ways, such as inattention, hyperactivity, or oppositional and conduct problems. Research has also shown that there is a genetic overlap between ODD and other externalizing disorders. Heritability can vary by age, age of onset, and other factors. Adoption and twin studies indicate that 50% or more of the variance causing antisocial behavior is attributable to heredity for both males and females. ODD also tends to occur in families with a history of ADHD, substance use disorders, or mood disorders, suggesting that a vulnerability to develop ODD may be inherited. A difficult temperament, impulsivity, and a tendency to seek rewards can also increase the risk of developing ODD. New studies into gene variants have also identified possible gene-environment (G x E) interactions, specifically in the development of conduct problems. A variant of the gene that encodes the neurotransmitter metabolizing enzyme monoamine oxidase-A (MAOA), which relates to neural systems involved in aggression, plays a key role in regulating behavior following threatening events. Brain imaging studies show patterns of arousal in areas of the brain that are associated with aggression in response to emotion-provoking stimuli.
Prenatal factors and birth complications
Many pregnancy and birth problems are related to the development of conduct problems. Malnutrition, specifically protein deficiency, lead poisoning or exposure to lead, and mother's use of alcohol or other substances during pregnancy may increase the risk of developing ODD. In numerous research, substance use prior to birth has also been associated with developing disruptive behaviors such as ODD. Although pregnancy and birth factors are correlated with ODD, strong evidence of direct biological causation is lacking.
Neurobiological factors
Deficits and injuries to certain areas of the brain can lead to serious behavioral problems in children. Brain imaging studies have suggested that children with ODD may have hypofunction in the part of the brain responsible for reasoning, judgment, and impulse control. Children with ODD are thought to have an overactive behavioral activation system (BAS), and an underactive behavioral inhibition system (BIS). The BAS stimulates behavior in response to signals of reward or non-punishment. The BIS produces anxiety and inhibits ongoing behavior in the presence of novel events, innate fear stimuli, and signals of non-reward or punishment. Neuroimaging studies have also identified structural and functional brain abnormalities in several brain regions in youths with conduct disorders. These brain regions are the amygdala, prefrontal cortex, anterior cingulate, and insula, as well as interconnected regions.
Social-cognitive factors
As many as 40 percent of boys and 25 percent of girls with persistent conduct problems display significant social-cognitive impairments. Some of these deficits include immature forms of thinking (such as egocentrism), failure to use verbal mediators to regulate their behavior, and cognitive distortions, such as interpreting a neutral event as an intentional hostile act.
Children with ODD have difficulty controlling their emotions or behaviors. In fact, students with ODD have limited social knowledge that is based only on individual experiences, which shapes how they process information and solve problems cognitively.
This information can be linked with the social information processing model (SIP) that describes how children process information to respond appropriately or inappropriately in social settings. This model explains that children will go through five stages before displaying behaviors: encoding, mental representations, response accessing, evaluation, and enactment.
However, children with ODD have cognitive distortions and impaired cognitive processes.
This will therefore directly impact their interactions and relationship negatively. It has been shown that social and cognitive impairments result in negative peer relationships, loss of friendship, and an interruption in socially engaging in activities.
Children learn through observational learning and social learning. Therefore, observations of models have a direct impact and greatly influence children's behaviors and decision-making processes. Children often learn through modeling behavior. Modeling can act as a powerful tool to modify children's cognition and behaviors.
Environmental factors
Negative parenting practices and parent–child conflict may lead to antisocial behavior, but they may also be a reaction to the oppositional and aggressive behaviors of children. Factors such as a family history of mental illnesses and/or substance use disorders as well as a dysfunctional family and inconsistent discipline by a parent or guardian can lead to the development of behavior disorders. Parenting practices not providing adequate or appropriate adjustment to situations as well as a high ratio of conflicting events within a family are causal factors of risk for developing ODD.
Insecure parent–child attachments can also contribute to ODD. Often little internalization of parent and societal standards exists in children with conduct problems. These weak bonds with their parents may lead children to associate with delinquency and substance use. Family instability and stress can also contribute to the development of ODD. Although the association between family factors and conduct problems is well established, the nature of this association and the possible causal role of family factors continues to be debated.
School is also a significant environmental context besides family that strongly influences a child's maladaptive behaviors. Studies indicate that child and adolescent externalizing disorders like ODD are strongly linked to peer network and teacher response. Children with ODD present hostile and defiant behavior toward authority including teachers which makes teachers less tolerant toward deviant children. The way in which a teacher handles disruptive behavior has a significant influence on the behavior of children with ODD. Negative relationships from the socializing influences and support network of teachers and peers increases the risk of deviant behavior. This is because the child consequently gets affiliated with deviant peers that reinforce antisocial behavior and delinquency. Due to the significant influence of teachers in managing disruptive behaviors, teacher training is a recommended intervention to change the disruptive behavior of ODD children.
In a number of studies, low socioeconomic status has also been associated with disruptive behaviors such as ODD.
Other social factors such as neglect, abuse, parents that are not involved, and lack of supervision can also contribute to ODD.
Externalizing problems are reported to be more frequent among minority-status youth, a finding that is likely related to economic hardship, limited employment opportunities, and living in high-risk urban neighborhoods. Studies have also found that the state of being exposed to violence was a contribution factor for externalizing behaviors to occur.
Diagnosis
For a child or adolescent to qualify for a diagnosis of ODD, behaviors must cause considerable distress for the family or interfere significantly with academic or social functioning. Such interference might manifest as challenges in learning at school, making friends, or placing the individual in harmful situations. These behaviors must also persist for at least six months. It is crucial to consider the bio-socio complexity in the expression and management of ODD. Biological factors such as genetics and neurodevelopmental variations interact with social factors like family dynamics, educational practices, and societal norms to influence the manifestation and recognition of ODD symptoms. The effects of ODD can be amplified by other disorders in comorbidity such as ADHD, depression, and substance use disorders. This intricate interplay between biological predispositions and social factors can lead to diverse clinical presentations, affecting the approaches to treatment and support.
Additionally, it has been observed that adults who were diagnosed with ODD as children tend to have a higher chance of being diagnosed with other mental illnesses in their lifetime, as well as being at a higher risk of developing social and emotional problems. This suggests that longitudinal support and intervention, taking into account the individual's biological makeup and social context, are vital for improving long-term outcomes for those with ODD.
Management
Approaches to the treatment of ODD include parent management training, individual psychotherapy, family therapy, cognitive behavioral therapy, and social skills training. According to the American Academy of Child and Adolescent Psychiatry, treatments for ODD are tailored specifically to the individual child, and different treatment techniques are applied for pre-schoolers and adolescents.
Children with oppositional defiant disorder tend to exhibit problematic behavior that can be very difficult to control. An occupational therapist can recommend family-based education, referred to as parent management training (PMT), in order to encourage positive parent–child relationships and reduce the child's tantrums and other disruptive behaviors. Since ODD is a neurological disorder that has biological correlates, an occupational therapist can also provide problem-solving training to encourage positive coping skills when difficult situations arise, as well as offer cognitive behavioral therapy.
Psychopharmacological treatment
Psychopharmacological treatment is the use of prescribed medication in managing oppositional defiant disorder. Prescribed medications to control ODD include mood stabilizers, anti-psychotics, and stimulants. Two randomized controlled trials found that, compared with placebo, administering lithium safely decreased aggression in children with conduct disorder. However, a third study found the treatment of lithium over a period of two weeks ineffective. Other drugs seen in studies include haloperidol, thioridazine, and methylphenidate, which is also effective in treating ADHD, a common comorbidity.
The effectiveness of drug and medication treatment is not well established. Effects that can result from taking these medications include hypotension, extrapyramidal symptoms, tardive dyskinesia, obesity, and increase in weight. Psychopharmacological treatment is found to be most effective when paired with another treatment plan, such as individual intervention or multimodal intervention.
Individual interventions
Individual interventions are focused on child-specific individualized plans. These interventions include anger control/stress inoculation, assertiveness training, a child-focused problem-solving skills training program, and self-monitoring skills.
Anger control and stress inoculation help prepare the child for possible upsetting situations or events that may cause anger and stress. They include a process of steps the child may go through.
Assertiveness training educates individuals in keeping a balance between passivity and aggression. It aims to help the child respond in a controlled and fair manner.
A child-focused problem-solving skills training program aims to teach the child new skills and cognitive processes that teach how to deal with negative thoughts, feelings, and actions.
Parent and family treatment
According to randomized trials, evidence shows that parent management training is most effective. It has strong influences over a long period of time and in various environments.
Parent-child interaction training is intended to coach the parents while involving the child. This training has two phases; the first phase is child-directed interaction, where the focus is to teach the child non-directive play skills. The second phase is parent-directed interaction, where the parents are coached on aspects including clear instruction, praise for compliance, and time-out for noncompliance. The parent-child interaction training is best suited for elementary-aged children.
Parent and family treatment has a low financial cost, which can yield an increase in beneficial results.
Multimodal intervention
Multimodal intervention is an effective treatment that looks at different levels including family, peers, school, and neighborhood. It is an intervention that concentrates on multiple risk factors. The focus is on parent training, classroom social skills, and playground behavior programs. The intervention is intensive and addresses barriers to individuals' improvement such as parental substance use or parental marital conflict.
An impediment to treatment includes the nature of the disorder itself, whereby treatment is often not complied with and is not continued or adhered to for adequate periods of time.
Comorbidity
Oppositional defiant disorder can be described as a term or disorder with a variety of pathways in regard to comorbidity. High importance must be given to the representation of ODD as a distinct psychiatric disorder independent of conduct disorder.
In the context of oppositional defiant disorder and comorbidity with other disorders, researchers often conclude that ODD co-occurs with attention deficit hyperactivity disorder (ADHD), anxiety disorders, emotional disorders, and mood disorders. Those mood disorders can be linked to major depression or bipolar disorder. Indirect consequences of ODD can also be related or associated with a later mental disorder. For instance, conduct disorder is often studied in connection with ODD. Strong comorbidity can be observed between those two disorders, but an even higher connection with ADHD in relation to ODD can be seen. For instance, children or adolescents who have ODD with coexisting ADHD will usually be more aggressive and have more of the negative behavioral symptoms of ODD, which can inhibit them from having a successful academic life. This will be reflected in their academic path as students.
Other conditions that can be predicted in children or people with ODD are learning disorders in which the person has significant impairments with academics and language disorders, in which problems can be observed related to language production and/or comprehension.
Criticism
Oppositional defiant disorder's validity as a diagnosis has been criticized since its inclusion in the DSM III in 1980. ODD was considered to produce minor impairment insufficient to qualify as a medical diagnosis, and was difficult to separate from conduct disorder, with some estimates that over 50% of those diagnosed with conduct disorder would also meet criteria for ODD. The diagnosis of ODD was also criticized for medicalizing normal developmental behavior. To address these problems, the DSM-III-R dropped the criterion of swearing and changed the cutoff from five of nine criteria to four of eight. Most evidence indicated a dose–response relationship between the severity of symptoms and level of functional impairment, suggesting that the diagnostic threshold was arbitrary. Early field trials of ODD used subjects who were over 75% male.
Recent criticisms of ODD suggest that the use of ODD as a diagnosis exacerbates the stigma surrounding reactive behavior and frames normal reactions to trauma as personal issues of self-control. Anti-psychiatry scholars have extensively criticized this diagnosis through a Foucauldian framework, characterizing it as a tool of the psy apparatus which pathologizes resistance to injustice. Oppositional defiant disorder has been compared to drapetomania, a now-obsolete disorder proposed by Samuel A. Cartwright which characterized slaves in the Antebellum South who repeatedly tried to escape as being mentally ill.
Race and gender bias in the US
Research has shown that African Americans and Latino Americans are disproportionately likely to be diagnosed with ODD compared to White counterparts displaying the same symptoms, who are more likely to be diagnosed with ADHD. Assessment, diagnosis and treatment of ODD may not account for contextual problems experienced by the patient, and can be influenced by cultural and personal racial bias on the part of counselors and therapists. Many children diagnosed with ODD were, upon reassessment, found to better fit diagnoses of obsessive–compulsive disorder, bipolar disorder, attention deficit hyperactivity disorder, or anxiety disorder. Diagnoses of ODD or conduct disorder are not eligible for disability accommodation at school under the Individuals with Disabilities Education Act. When parents request accommodation for a diagnosed disorder which is eligible, such as ADHD, the request can be denied on the basis that such conditions are co-morbid with ODD. This bias in perception and diagnosis leads to defiant behaviors being medicalized and rehabilitated in White children, but criminalized for Latino and African American ones. Counselors working with children diagnosed with ODD reported that it was common for them to face stigma around the diagnosis in educational and justice systems, and that the diagnosis affected patients' self image. In one study over a quarter of children placed in the foster care system in the United States were found to have been diagnosed with ODD. Over half of children in the juvenile justice system have been diagnosed with ODD.
African American males are known to be more likely to be suspended or expelled from school, receive harsher sentences for the same offenses as defendants of different races, or be searched, assaulted or killed by police officers. The disproportionately high diagnosis of ODD in African American males may be used to rationalize these outcomes. In this manner, ODD diagnoses can serve as a mechanism of the school-to-prison pipeline. From this viewpoint, the ODD diagnosis frames expected reactions to injustice or trauma as defiant or criminal.
See also
Anti-Authoritarianism
Antisocial personality disorder
Attachment disorder
Attention deficit hyperactivity disorder (ADHD)
Borderline personality disorder
Conduct disorder
Contrarian
Disruptive mood dysregulation disorder (DMDD)
Pathological demand avoidance
References
Further reading
External links
ODD Resource Center – American Academy of Child and Adolescent Psychiatry
Mental disorders diagnosed in childhood
Aggression
Disruptive behaviour or dissocial disorders | Oppositional defiant disorder | Biology | 4,800 |
200,463 | https://en.wikipedia.org/wiki/Transitive%20relation | In mathematics, a binary relation R on a set X is transitive if, for all elements a, b, c in X, whenever R relates a to b and b to c, then R also relates a to c.
Every partial order and every equivalence relation is transitive. For example, less than and equality among real numbers are both transitive: If a < b and b < c then a < c; and if x = y and y = z then x = z.
Definition
A homogeneous relation R on the set X is a transitive relation if,
for all a, b, c ∈ X, if a R b and b R c, then a R c.
Or in terms of first-order logic:
∀a, b, c ∈ X: (a R b ∧ b R c) ⇒ a R c,
where a R b is the infix notation for (a, b) ∈ R.
Examples
As a non-mathematical example, the relation "is an ancestor of" is transitive. For example, if Amy is an ancestor of Becky, and Becky is an ancestor of Carrie, then Amy is also an ancestor of Carrie.
On the other hand, "is the birth mother of" is not a transitive relation, because if Alice is the birth mother of Brenda, and Brenda is the birth mother of Claire, then it does not follow that Alice is the birth mother of Claire. In fact, this relation is antitransitive: Alice can never be the birth mother of Claire.
Non-transitive, non-antitransitive relations include sports fixtures (playoff schedules), 'knows' and 'talks to'.
The examples "is greater than", "is at least as great as", and "is equal to" (equality) are transitive relations on various sets.
As are the set of real numbers or the set of natural numbers:
whenever x > y and y > z, then also x > z
whenever x ≥ y and y ≥ z, then also x ≥ z
whenever x = y and y = z, then also x = z.
More examples of transitive relations:
"is a subset of" (set inclusion, a relation on sets)
"divides" (divisibility, a relation on natural numbers)
"implies" (implication, symbolized by "⇒", a relation on propositions)
Examples of non-transitive relations:
"is the successor of" (a relation on natural numbers)
"is a member of the set" (symbolized as "∈")
"is perpendicular to" (a relation on lines in Euclidean geometry)
The empty relation on any set X is transitive because there are no elements a, b, c such that a R b and b R c, and hence the transitivity condition is vacuously true. A relation R containing only one ordered pair is also transitive: if the ordered pair is of the form (x, x) for some x ∈ X, the only such elements are a = b = c = x, and indeed in this case a R c, while if the ordered pair is not of the form (x, x) then there are no such elements and hence R is vacuously transitive.
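On a finite set the definition can be checked directly by enumeration. A minimal Python sketch, representing a relation as a set of ordered pairs; the last two calls mirror the vacuous cases just described:

    def is_transitive(R):
        return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

    print(is_transitive({(1, 2), (2, 3), (1, 3)}))   # True
    print(is_transitive({(1, 2), (2, 3)}))           # False: (1, 3) is missing
    print(is_transitive(set()))                      # True (vacuously)
    print(is_transitive({(1, 1)}))                   # True (single pair (x, x))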
Properties
Closure properties
The converse (inverse) of a transitive relation is always transitive. For instance, knowing that "is a subset of" is transitive and "is a superset of" is its converse, one can conclude that the latter is transitive as well.
The intersection of two transitive relations is always transitive. For instance, knowing that "was born before" and "has the same first name as" are transitive, one can conclude that "was born before and also has the same first name as" is also transitive.
The union of two transitive relations need not be transitive. For instance, "was born before or has the same first name as" is not a transitive relation, since e.g. Herbert Hoover is related to Franklin D. Roosevelt, who is in turn related to Franklin Pierce, while Hoover is not related to Franklin Pierce.
The complement of a transitive relation need not be transitive. For instance, while "equal to" is transitive, "not equal to" is only transitive on sets with at most one element.
Other properties
A transitive relation is asymmetric if and only if it is irreflexive.
A transitive relation need not be reflexive. When it is, it is called a preorder. For example, on set X = {1,2,3}:
R = {(1,1), (2,2), (3,3), (1,3), (3,2)} is reflexive, but not transitive, as the pair (1,2) is absent,
R = {(1,1), (2,2), (3,3), (1,3)} is reflexive as well as transitive, so it is a preorder,
R = {(1,1), (2,2), (3,3)} is reflexive as well as transitive, another preorder.
As a counterexample, the relation "less than" on the real numbers is transitive, but not reflexive.
Transitive extensions and transitive closure
Let R be a binary relation on the set X. The transitive extension of R, denoted R1, is the smallest binary relation on X such that R1 contains R, and if (a, b) ∈ R and (b, c) ∈ R then (a, c) ∈ R1. For example, suppose X is a set of towns, some of which are connected by roads. Let R be the relation on towns where (A, B) ∈ R if there is a road directly linking town A and town B. This relation need not be transitive. The transitive extension of this relation can be defined by (A, C) ∈ R1 if you can travel between towns A and C by using at most two roads.
If a relation is transitive then its transitive extension is itself, that is, if R is a transitive relation then R1 = R.
The transitive extension of R1 would be denoted by R2, and continuing in this way, in general, the transitive extension of Ri would be Ri+1. The transitive closure of R, denoted by R* or R∞, is the set union of R, R1, R2, ... .
The transitive closure of a relation is a transitive relation.
The relation "is the birth parent of" on a set of people is not a transitive relation. However, in biology the need often arises to consider birth parenthood over an arbitrary number of generations: the relation "is a birth ancestor of" is a transitive relation and it is the transitive closure of the relation "is the birth parent of".
For the example of towns and roads above, (A, B) ∈ R* provided you can travel between towns A and B using any number of roads.
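A minimal Python sketch of this construction: iterating the transitive extension until nothing new is added yields the transitive closure (the town names are illustrative):

    def transitive_extension(R):
        return R | {(a, d) for (a, b) in R for (c, d) in R if b == c}

    def transitive_closure(R):
        while True:
            R_next = transitive_extension(R)
            if R_next == R:
                return R
            R = R_next

    roads = {("A", "B"), ("B", "C"), ("C", "D")}   # direct roads between towns
    print(sorted(transitive_closure(roads)))
    # [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]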
Relation types that require transitivity
Preorder – a reflexive and transitive relation
Partial order – an antisymmetric preorder
Total preorder – a connected (formerly called total) preorder
Equivalence relation – a symmetric preorder
Strict weak ordering – a strict partial order in which incomparability is an equivalence relation
Total ordering – a connected (total), antisymmetric, and transitive relation
Counting transitive relations
No general formula that counts the number of transitive relations on a finite set is known. However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – as well as those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult. See also Brinkmann and McKay (2005).
Since the reflexivization of any transitive relation is a preorder, the number of transitive relations on an n-element set is at most 2ⁿ times the number of preorders; thus it is asymptotically 2^(n²/4 + o(n²)) by results of Kleitman and Rothschild.
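For very small n the count can be obtained by brute force, testing each of the 2^(n²) relations. A minimal Python sketch (the printed values agree with the known counts 1, 2, 13, 171 for n ≤ 3):

    from itertools import product

    def count_transitive(n):
        pairs = [(i, j) for i in range(n) for j in range(n)]
        count = 0
        for bits in product((False, True), repeat=n * n):
            R = {p for p, keep in zip(pairs, bits) if keep}
            if all((a, d) in R for (a, b) in R for (c, d) in R if b == c):
                count += 1
        return count

    print([count_transitive(n) for n in range(4)])   # [1, 2, 13, 171]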
Related properties
A relation R is called intransitive if it is not transitive, that is, if xRy and yRz, but not xRz, for some x, y, z.
In contrast, a relation R is called antitransitive if xRy and yRz always implies that xRz does not hold.
For example, the relation defined by xRy if the product xy is an even number is intransitive, but not antitransitive. The relation defined by xRy if x is even and y is odd is both transitive and antitransitive.
The relation defined by xRy if x is the successor number of y is both intransitive and antitransitive. Unexpected examples of intransitivity arise in situations such as political questions or group preferences.
Generalized to stochastic versions (stochastic transitivity), the study of transitivity finds applications in decision theory, psychometrics and utility models.
A quasitransitive relation is another generalization; it is required to be transitive only on its non-symmetric part. Such relations are used in social choice theory or microeconomics.
Proposition: If R is univalent, then R;Rᵀ is transitive.
proof: Suppose x(R;Rᵀ)y and y(R;Rᵀ)z. Then there are a and b such that xRaRᵀy and yRbRᵀz. Since R is univalent, yRb and aRᵀy imply a = b. Therefore xRaRᵀz, hence x(R;Rᵀ)z and R;Rᵀ is transitive.
Corollary: If R is univalent, then R;Rᵀ is an equivalence relation on the domain of R.
proof: R;Rᵀ is symmetric and reflexive on its domain. With univalence of R, the transitive requirement for equivalence is fulfilled.
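A minimal Python sketch spot-checking the proposition, with R;Rᵀ computed as the composition of an illustrative univalent relation with its converse:

    def compose(R, S):
        return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

    def converse(R):
        return {(b, a) for (a, b) in R}

    R = {(1, 10), (2, 10), (3, 20)}   # univalent: each element maps at most once
    E = compose(R, converse(R))       # R;R^T
    print(sorted(E))                  # {(1,1),(1,2),(2,1),(2,2),(3,3)}
    print(all((a, d) in E for (a, b) in E for (c, d) in E if b == c))   # True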
See also
Transitive reduction
Intransitive dice
Rational choice theory
Hypothetical syllogism — transitivity of the material conditional
Notes
References
Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, .
Pfeiffer, G. (2004). Counting transitive relations. Journal of Integer Sequences, 7(2), 3.
External links
Transitivity in Action at cut-the-knot
Elementary algebra | Transitive relation | Mathematics | 1,984 |
3,342,929 | https://en.wikipedia.org/wiki/Rotten%20stone | Rotten stone, sometimes spelled as rottenstone, also known as tripoli, is fine powdered porous rock used as a polishing abrasive for metal smithing, historically for the grinding of optical lenses and in woodworking. It is usually weathered limestone mixed with diatomaceous, amorphous, or crystalline silica. It has similar applications to pumice, but it is generally sold as a finer powder and used for a more glossy polish after an initial treatment with coarser pumice powder. Tripoli particles are rounded rather than sharp, making it a milder abrasive.
It is usually mixed with oil, sometimes water, and rubbed on the surface of varnished or lacquered wood with a felt pad or cloth. Rotten stone is sometimes used to buff stains out of wood. Some polishing waxes contain powdered rotten stone in a paste substrate. For larger polishing jobs, rotten stone mixed with a binder is applied to polishing wheels.
It has also been used to polish brass, such as that found on military uniforms, as well as steel and other metals. Plates used in daguerreotypes were polished using rotten stone, the finest abrasive available at the time.
It is also used to polish jewelry and in toothpastes. Its more common use is as a filler, as used in plastics, paint and rubber.
Sources
Rottenstone has been extensively worked in South Wales along the outcrop of the Carboniferous Limestone, particularly within the Brecon Beacons National Park. It occurs at the top of the sequence where the Upper Limestone Shales have been weathered. Innumerable workings were initiated and later abandoned during the course of the nineteenth century, leaving a characteristic terrain of humps and hollows. A notable example is that on the flanks of Cribarth exploited by industrial entrepreneur John Christie.
In the United States it is mainly produced in Arkansas, Illinois, and Oklahoma.
See also
Metal polishing
Pumice
Wood finishing
References
External links
On Rotten-Stone and Emery. Penny Magazine of the Society for the Diffusion of Useful Knowledge. July 15, 1843 Volume 12, p. 270
Abrasives
Metalworking
Industrial minerals | Rotten stone | Physics | 442 |
73,354,598 | https://en.wikipedia.org/wiki/Aliasing%20%28factorial%20experiments%29 | In the statistical theory of factorial experiments, aliasing is the property of fractional factorial designs that makes some effects "aliased" with each other – that is, indistinguishable from each other. A primary goal of the theory of such designs is the control of aliasing so that important effects are not aliased with each other.
In a "full" factorial experiment, the number of treatment combinations or cells (see below) can be very large. This necessitates limiting observations to a fraction (subset) of the treatment combinations.
Aliasing is an automatic and unavoidable result of observing such a fraction.
The aliasing properties of a design are often summarized by giving its
resolution. This measures the degree to which the design avoids aliasing between main effects and important interactions.
Fractional factorial experiments have long been a basic tool in
agriculture, food technology, industry, medicine and public health, and the social and behavioral sciences.
They are widely used in exploratory research, particularly in screening experiments, which have applications in industry, drug design and genetics. In all such cases, a crucial step in designing such an experiment is deciding on the desired aliasing pattern, or at least the desired resolution.
As noted below, the concept of aliasing may have influenced the identification of an analogous phenomenon in signal processing theory.
Overview
Associated with a factorial experiment is a collection of effects. Each factor determines a main effect, and each set of two or more factors determines an interaction effect (or simply an interaction) between those factors. Each effect is defined by a set of relations between cell means, as described below. In a fractional factorial design, effects are defined by restricting these relations to the cells in the fraction. It is when the restricted relations for two different effects turn out to be the same that the effects are said to be aliased.
The presence or absence of a given effect in a given data set is tested by statistical methods, most commonly analysis of variance. While aliasing has significant implications for estimation and hypothesis testing, it is fundamentally a combinatorial and algebraic phenomenon. Construction and analysis of fractional designs thus rely heavily on algebraic methods.
The definition of a fractional design is sometimes broadened to allow multiple observations of some or all treatment combinations – a multisubset of all treatment combinations. A fraction that is a subset (that is, where treatment combinations are not repeated) is called simple. The theory described below applies to simple fractions.
Contrasts and effects
In any design, full or fractional, the expected value of an observation in a given treatment combination is called a cell mean, usually denoted using the Greek letter μ. (The term cell is borrowed from its use in tables of data.)
A contrast in cell means is a linear combination of cell means in which the coefficients sum to 0. In the 2 × 3 experiment illustrated here, the expression
μ11 − μ12
is a contrast that compares the mean responses of the treatment combinations 11 and 12. (The coefficients here are 1 and –1.)
The effects in a factorial experiment are expressed in terms of contrasts. In the above example, the contrast
(μ11 + μ12 + μ13) − (μ21 + μ22 + μ23)
is said to belong to the main effect of factor A as it contrasts the responses to the "1" level of factor A with those for the "2" level. The main effect of A is said to be absent if this expression equals 0. Similarly,
(μ11 + μ21) − (μ12 + μ22)
and
(μ12 + μ22) − (μ13 + μ23)
are contrasts belonging to the main effect of factor B. On the other hand, the contrasts
(μ11 − μ21) − (μ12 − μ22)
and
(μ12 − μ22) − (μ13 − μ23)
belong to the interaction of A and B; setting them equal to 0 expresses the lack of interaction. These designations, which extend to arbitrary factorial experiments having three or more factors, depend on the pattern of coefficients, as explained elsewhere.
Since it is the coefficients of these contrasts that carry the essential information, they are often displayed as column vectors. For the example above, such a table might look like this:
The columns of such a table are called contrast vectors: their components add up to 0. While there are in general many possible choices of columns to represent a given effect, the number of such columns — the degrees of freedom of the effect — is fixed and is given by a well-known formula. In the 2 × 3 example above, the degrees of freedom for A, B and the A × B interaction are 1, 2 and 2, respectively.
In a fractional factorial experiment, the contrast vectors belonging to a given effect are restricted to the treatment combinations in the fraction. Thus, in the half-fraction {11, 12, 13} in the 2 × 3 example, the three effects may be represented by the column vectors in the following table:
The consequence of this truncation — aliasing — is described below.
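The restriction step can be checked numerically. The following sketch (illustrative Python, using one standard choice of contrast vectors; none of it is from the sources cited) rebuilds the 2 × 3 vectors and restricts them to the fraction {11, 12, 13}:

```python
import numpy as np

# Cells in the order 11, 12, 13, 21, 22, 23; one standard choice of
# contrast vectors for A, B and the A x B interaction (any basis works).
A   = np.array([ 1,  1,  1, -1, -1, -1])
B1  = np.array([ 1, -1,  0,  1, -1,  0])
B2  = np.array([ 0,  1, -1,  0,  1, -1])
AB1 = A * B1                      # interaction vectors: cellwise products
AB2 = A * B2

frac = [0, 1, 2]                  # rows of the half-fraction {11, 12, 13}
for name, v in [("A", A), ("B1", B1), ("B2", B2), ("AB1", AB1), ("AB2", AB2)]:
    r = v[frac]
    kind = "contrast" if r.sum() == 0 and r.any() else "constant (lost)"
    print(name, r, kind)

# B and A x B are completely aliased: their restrictions coincide.
print(np.array_equal(B1[frac], AB1[frac]), np.array_equal(B2[frac], AB2[frac]))
```

Running this shows the A column becoming constant (completely lost) while the restricted B and A × B columns are identical (completely aliased).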
Definitions
The factors in the design are allowed to have different numbers of levels, as in a factorial experiment (an asymmetric or mixed-level experiment).
Fix a fraction of a full factorial design. Let C be a set of contrast vectors representing an effect (in particular, a main effect or interaction) in the full factorial design, and let C′ consist of the restrictions of those vectors to the fraction. One says that the effect is
preserved in the fraction if C′ consists of contrast vectors;
completely lost in the fraction if C′ consists of constant vectors, that is, vectors whose components are equal; and
partly lost otherwise.
Similarly, let C and D represent two effects and let C′ and D′ be their restrictions to the fraction. The two effects are said to be
unaliased in the fraction if each vector in C′ is orthogonal (perpendicular) to all the vectors in D′, and vice versa;
completely aliased in the fraction if each vector in C′ is a linear combination of vectors in D′, and vice versa; and
partly aliased otherwise.
Finney and Bush introduced the terms "lost" and "preserved" in the sense used here. Despite the relatively long history of this topic, though, its terminology is not entirely standardized. The literature often describes lost effects as "not estimable" in a fraction, although estimation is not the only issue at stake. Rao referred to preserved effects as "measurable from" the fraction.
Resolution
The extent of aliasing in a given fractional design is measured by the resolution of the fraction, a concept first defined by Box and Hunter:
A fractional factorial design is said to have resolution R if every p-factor effect is unaliased with every effect having fewer than R − p factors.
For example, a design has resolution 3 if main effects are unaliased with each other (taking p = 1 and R = 3), though it allows main effects to be aliased with two-factor interactions. This is typically the lowest resolution desired for a fraction. It is not hard to see that a fraction of resolution R also has resolution R − 1, R − 2, etc., so one usually speaks of the maximum resolution of a fraction.
The number R in the definition of resolution is usually understood to be a positive integer, but one may consider the effect of the grand mean to be the (unique) effect with no factors (i.e., with p = 0). This effect sometimes appears in analysis of variance tables. It has one degree of freedom, and is represented by a single vector, a column of 1's. With this understanding, an effect is
preserved in a fraction if it is unaliased with the grand mean, and
completely lost in a fraction if it is completely aliased with the grand mean.
A fraction then has resolution 2 if all main effects are preserved in the fraction. If it has resolution 3 then two-factor interactions are also preserved.
Computation
The definitions above require some computations with vectors, illustrated in the examples that follow. For certain fractional designs (the regular ones), a simple algebraic technique can be used that bypasses these procedures and gives a simple way to determine resolution. This is discussed below.
Examples
The 2 × 3 experiment
The fraction {11, 12, 13} of this experiment was described above along with its restricted vectors. It is repeated here along with the complementary fraction {21, 22, 23}:
In both fractions, the A effect is completely lost (the column is constant) while the B and A × B interaction effects are preserved (each 3 × 1 column is a contrast vector as its components sum to 0). In addition, the B and A × B interaction effects are completely aliased in each fraction: in the first fraction, each vector representing A × B is a linear combination of those representing B, and in the reverse direction the vectors for B can be written similarly in terms of those representing A × B. The argument in the second fraction is analogous.
These fractions have maximum resolution 1. The fact that the main effect of A is lost makes both of these fractions undesirable in practice. It turns out that in a 2 × 3 experiment (or in any a × b experiment in which a and b are relatively prime) there is no fraction that preserves both main effects; that is, no fraction has resolution 2.
The 2 × 2 × 2 (or 2³) experiment
This is a "two-level" experiment with factors and . In such experiments the factor levels are often denoted by 0 and 1, for reasons explained below. A treatment combination is then denoted by an ordered triple such as 101 (more formally, (1, 0, 1), denoting the cell in which and are at level "1" and is at level "0"). The following table lists the eight cells of the full 2 × 2 × 2 factorial experiment, along with a contrast vector representing each effect, including a three-factor interaction:
Suppose that only the fraction consisting of the cells 000, 011, 101, and 110 is observed. The original contrast vectors, when restricted to these cells, are now 4 × 1, and can be seen by looking at just those four rows of the table. (Sorting the table on the ABC column will bring these rows together and make the restricted contrast vectors easier to see. Sorting twice puts them at the top.) The following can be observed concerning these restricted vectors:
The ABC column consists just of the constant 1 repeated four times.
The other columns are contrast vectors, having two 1's and two −1's.
The columns for A and BC are equal. The same holds for B and AC, and for C and AB.
All other pairs of columns are orthogonal. For example, the column for A is orthogonal to that for B, for C, for AB, and for AC, as one can see by computing dot products.
Thus
the ABC interaction is completely lost in the fraction;
the other effects are preserved in the fraction;
the effects A and BC are completely aliased with each other, as are B and AC, and C and AB;
all other pairs of effects are unaliased. For example, A is unaliased with both B and C and with the AB and AC interactions.
Now suppose instead that the complementary fraction {001, 010, 100, 111} is observed. The same effects as before are lost or preserved, and the same pairs of effects as before are mutually unaliased. Moreover, A and BC are still aliased in this fraction since the A and BC vectors are negatives of each other, and similarly for B and AC and for C and AB. Both of these fractions thus have maximum resolution 3.
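The same bookkeeping can be verified by computing the restricted columns and their dot products (an illustrative sketch; the ±1 coding convention is an assumption):

```python
import numpy as np
from itertools import product

runs = list(product([0, 1], repeat=3))      # all 8 cells of the 2x2x2 design
code = lambda level: 1 - 2 * level          # level 0 -> +1, level 1 -> -1

cols = {}
for name, idx in [("A", (0,)), ("B", (1,)), ("C", (2,)), ("AB", (0, 1)),
                  ("AC", (0, 2)), ("BC", (1, 2)), ("ABC", (0, 1, 2))]:
    cols[name] = np.array([np.prod([code(r[i]) for i in idx]) for r in runs])

frac = [runs.index(t) for t in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]]

print("ABC:", cols["ABC"][frac])            # constant: completely lost
for x, y in [("A", "BC"), ("B", "AC"), ("C", "AB")]:
    print(x, "=", y, np.array_equal(cols[x][frac], cols[y][frac]))  # True
print("A.B =", int(cols["A"][frac] @ cols["B"][frac]))              # 0
```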
Aliasing in regular fractions
The two half-fractions of a 2 × 2 × 2 factorial experiment described above are of a special kind: Each is the solution set of a linear equation using modular arithmetic. More exactly:
The fraction {000, 011, 101, 110} is the solution set of the equation t1 + t2 + t3 = 0 (mod 2). For example, 011 is a solution because 0 + 1 + 1 = 2 ≡ 0 (mod 2).
Similarly, the fraction {001, 010, 100, 111} is the solution set to t1 + t2 + t3 = 1 (mod 2).
Such fractions are said to be regular. This idea applies to fractions of "classical" designs, that is, s^k (or "symmetric") factorial designs in which the number of levels, s, of each of the k factors is a prime or the power of a prime.
A fractional factorial design is regular if it is the solution set of a system of one or more equations of the form
a1 t1 + ⋯ + ak tk = b,
where the equation is modulo s if s is prime, and is in the finite field GF(s) if s is a power of a prime. Such equations are called defining equations of the fraction. When the defining equation or equations are homogeneous, the fraction is said to be principal.
One defining equation yields a fraction of size s^(k−1), two independent equations a fraction of size s^(k−2), and so on. Such fractions are generally denoted as s^(k−p) designs. The half-fractions described above are 2^(3−1) designs. The notation often includes the resolution as a subscript, in Roman numerals; the above fractions are thus 2^(3−1)_III designs.
Associated to each expression a1 t1 + ⋯ + ak tk is another, namely A1^a1 ⋯ Ak^ak, which rewrites the coefficients as exponents. Such expressions are called "words", a term borrowed from group theory. (In a particular example where k is a specific number, the letters A, B, C, … are used, rather than A1, A2, A3, ….) These words can be multiplied and raised to powers, where the word I acts as a multiplicative identity, and they thus form an abelian group G, known as the effects group. When s is prime, one has W^s = I for every element (word) W of G; something similar holds in the prime-power case.
In s^k factorial experiments, each element of G represents a main effect or interaction. In experiments with s = 2, each one-letter word represents the main effect of that factor, while longer words represent components of interaction. An example below illustrates this with s = 3.
To each defining expression (the left-hand side of a defining equation) corresponds a defining word. The defining words generate a subgroup G0 of G that is variously called the alias subgroup, the defining contrast subgroup, or simply the defining subgroup of the fraction. Each element of G0 is a defining word since it corresponds to a defining equation, as one can show. The effects represented by the defining words are completely lost in the fraction while all other effects are preserved. If G0 = {I, W1, …, Wr}, say, then the equation
I = W1 = ⋯ = Wr
is called the defining relation of the fraction. This relation is used to determine the aliasing structure of the fraction: If a given effect is represented by the word W, then its aliases are computed by multiplying the defining relation by W, viz.,
W = WW1 = ⋯ = WWr,
where the products are then simplified. This relation indicates complete (not partial) aliasing, and W is unaliased with all other effects.
Example 1
In either of the fractions described above, the defining word is ABC, since the exponents on these letters are the coefficients of t1 + t2 + t3. The effect ABC is completely lost in the fraction, and the defining subgroup is simply {I, ABC}, since squaring does not generate new elements ((ABC)^2 = A^2B^2C^2 = I). The defining relation is thus
I = ABC,
and multiplying both sides by A gives A = A^2BC, which simplifies to
A = BC,
the alias relation seen earlier. Similarly, B = AC and C = AB. Note that multiplying both sides of the defining relation by AB, by AC, and by BC does not give any new alias relations.
For comparison, the fraction with defining equation t1 + t2 = 0 has the defining word AB (i.e., A^1B^1C^0). The effect AB is completely lost, and the defining relation is I = AB. Multiplying this by A, by C, and by AC gives the alias relations A = B, C = ABC, and AC = BC among the six remaining effects. This fraction only has resolution 2 since all effects (except AB) are preserved but two main effects are aliased. Finally, solving the defining equation yields the fraction {000, 001, 110, 111}. One may verify all of this by sorting the table above on column AB.
The use of arithmetic modulo 2 explains why the factor levels in such designs are labeled 0 and 1.
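The alias sets of Example 1 can be reproduced mechanically. A minimal sketch (illustrative code; words are modeled as sets of letters, which is valid only for two-level designs, where every letter squares to I):

```python
from itertools import combinations

# Words as frozensets of letters; multiplication is symmetric difference,
# which is valid only when s = 2 (every letter squares to I).
def subgroup(gen_words):
    g = {frozenset()}                         # the identity word I
    for w in map(frozenset, gen_words):
        g |= {x ^ w for x in g}               # adjoin the coset of w
    return g

show = lambda w: "".join(sorted(w)) or "I"

G0 = subgroup(["ABC"])                        # defining subgroup {I, ABC}
effects = [frozenset(c) for n in (1, 2, 3) for c in combinations("ABC", n)]
for e in effects:
    print(show(e), "=", " = ".join(show(e ^ w) for w in G0 if w))
```

This prints A = BC, B = AC, C = AB, and ABC = I (the lost effect), matching the alias relations derived above.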
Example 2
In a 3-level design, factor levels are denoted 0, 1 and 2, and arithmetic is modulo 3. If there are four factors, say A, B, C and D, the effects group G will have the relations
A^3 = B^3 = C^3 = D^3 = I.
From these it follows, for example, that A^4 = A and A^2 = A^(−1).
A defining equation such as
t1 + t2 + t3 + 2t4 = 0 (mod 3)
would produce a regular 1/3-fraction of the 81 (= 3^4) treatment combinations, and the corresponding defining word would be ABCD^2. Since its powers are
(ABCD^2)^2 = A^2B^2C^2D and (ABCD^2)^3 = I,
the defining subgroup would be {I, ABCD^2, A^2B^2C^2D}, and so the fraction would have defining relation
I = ABCD^2 = A^2B^2C^2D.
Multiplying by A, for example, yields the aliases
A = A^2BCD^2 = B^2C^2D.
For reasons explained elsewhere, though, all powers of a defining word represent the same effect, and the convention is to choose that power whose leading exponent is 1. Squaring the latter two expressions does the trick and gives the alias relations
A = AB^2C^2D = BCD^2.
Twelve other sets of three aliased effects are given by Wu and Hamada. Examining all of these reveals that, like A, main effects are unaliased with each other and with two-factor effects, although some two-factor effects are aliased with each other. This means that this fraction has maximum resolution 4, and so is of type 3^(4−1)_IV.
The effect BCD^2 is one of 4 components of the BCD interaction, while AB^2C^2D is one of 8 components of the ABCD interaction. In a 3-level design, each component of interaction carries 2 degrees of freedom.
Example 3
A 2^(5−2) design (1/4 of a 2^5 design) may be created by solving two equations in 5 unknowns, say
t1 + t2 + t4 = 1 and t1 + t3 + t5 = 1,
modulo 2. The fraction has eight treatment combinations, such as 10000, 00110 and 11111, and is displayed in the article on fractional factorial designs. Here the coefficients in the two defining equations give defining words ABD and ACE. Setting I = ABD and multiplying through by D gives the alias relation D = AB. The second defining word similarly gives E = AC. The article uses these two aliases to describe an alternate method of construction of the fraction.
The defining subgroup has one more element, namely the product (ABD)(ACE) = BCDE, making use of the fact that A^2 = I. The extra defining word BCDE is known as the generalized interaction of ABD and ACE, and corresponds to the equation t2 + t3 + t4 + t5 = 0, which is also satisfied by the fraction. With this word included, the full defining relation is
I = ABD = ACE = BCDE
(these are the four elements of the defining subgroup), from which all the alias relations of this fraction can be derived – for example, multiplying through by A yields
A = BD = CE = ABCDE.
Continuing this process yields six more alias sets, each containing four effects. An examination of these sets reveals that main effects are not aliased with each other, but are aliased with two-factor interactions. This means that this fraction has maximum resolution 3. A quicker way to determine the resolution of a regular fraction is given below.
It is notable that the alias relations of the fraction depend only on the left-hand side of the defining equations, not on their constant terms. For this reason, some authors will restrict attention to principal fractions "without loss of generality", although the reduction to the principal case often requires verification.
Determining the resolution of a regular fraction
The length of a word in the effects group is defined to be the number of letters in its name, not counting repetition. For example, the length of the word A^2BC is 3. A basic theorem states that the maximum resolution of a regular fraction equals the minimum length among the nonidentity words in its defining subgroup.
Using this result, one immediately gets the resolution of the preceding examples without computing alias relations:
In the 2^(3−1) fraction with defining word ABC, the maximum resolution is 3 (the length of that word), while the fraction with defining word AB has maximum resolution 2.
The defining words of the 3^(4−1) fraction were ABCD^2 and A^2B^2C^2D, both of length 4, so that the fraction has maximum resolution 4, as indicated.
In the 2^(5−2) fraction with defining words ABD and ACE, the maximum resolution is 3, which is the shortest "wordlength" (the generalized interaction BCDE has length 4).
One could also construct a 2^(5−2) fraction from the defining words ABCD and ABCE, but the defining subgroup will also include DE, their product, and so the fraction will only have resolution 2 (the length of DE). This is true starting with any two words of length 4. Thus resolution 3 is the best one can hope for in a fraction of type 2^(5−2).
As these examples indicate, one must consider all the elements of the defining subgroup in applying the theorem above. This theorem is often taken to be a definition of resolution, but the Box-Hunter definition given earlier applies to arbitrary fractional designs and so is more general.
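For two-level regular fractions, the theorem above reduces resolution to a short computation over the defining subgroup (an illustrative sketch, using the defining words from the examples above):

```python
def resolution(defining_words):
    """Minimum length of a nonidentity word in the defining subgroup
    (two-level designs only; words as sets of letters)."""
    g = {frozenset()}
    for w in map(frozenset, defining_words):
        g |= {x ^ w for x in g}               # close up the defining subgroup
    return min(len(w) for w in g if w)

print(resolution(["ABC"]))            # 3: the 2^(3-1) fraction above
print(resolution(["AB"]))             # 2
print(resolution(["ABD", "ACE"]))     # 3: the 2^(5-2) fraction; BCDE arises too
print(resolution(["ABCD", "ABCE"]))   # 2: their product DE has length 2
```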
Aliasing in general fractions
Nonregular fractions are common, and have certain advantages. For example, they are not restricted to having size a power of s, where s is a prime or prime power. While some methods have been developed to deal with aliasing in particular nonregular designs, no overall algebraic scheme has emerged.
There is a universal combinatorial approach, however, going back to Rao. If the treatment combinations of the fraction are written as rows of a table, that table is an orthogonal array. These rows are often referred to as "runs". The columns will correspond to the factors, and the entries of the table will simply be the symbols used for factor levels, and need not be numbers. The number of levels need not be prime or prime-powered, and they may vary from factor to factor, so that the table may be a mixed-level array. In this section fractional designs are allowed to be mixed-level unless explicitly restricted.
A key parameter of an orthogonal array is its strength, the definition of which is given in the article on orthogonal arrays. One may thus refer to the strength of a fractional design. Two important facts flow immediately from its definition:
If an array (or fraction) has strength t, then it also has strength t′ for every positive t′ < t. The array's maximum strength is of particular importance.
In a fixed-level array, all factors having s levels, the number of runs is a multiple of s^t, where t is the strength. Here s need not be a prime or prime power. (A brute-force strength check is sketched below.)
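The check referenced above simply tests whether every set of t columns contains all level combinations equally often (an illustrative sketch; it assumes every level of a factor actually occurs in its column):

```python
import numpy as np
from itertools import combinations, product

def strength(array):
    """Largest t such that every choice of t columns contains all level
    combinations equally often (brute force)."""
    n, k = array.shape
    t = 0
    for size in range(1, k + 1):
        for cols in combinations(range(k), size):
            sub = array[:, cols]
            levels = [sorted(set(sub[:, j])) for j in range(size)]
            counts = {c: 0 for c in product(*levels)}
            for row in sub:
                counts[tuple(row)] += 1
            if len(set(counts.values())) != 1:
                return t
        t = size
    return t

half = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(strength(half))                 # 2: the 2^(3-1) fraction has strength 2
```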
To state the next result, it is convenient to enumerate the factors of the experiment by 1 through k, and to let each nonempty subset of {1, …, k} correspond to a main effect or interaction in the following way: {1} corresponds to the main effect of factor 1, {1, 2} corresponds to the interaction of factors 1 and 2, and so on. The Fundamental Theorem in this setting states that, in a fraction of strength t, the effects corresponding to subsets I and J are unaliased whenever the combined set I ∪ J has at most t elements, and the effect corresponding to I is preserved whenever I has at most t elements.
Example: Consider a fractional factorial design with at least four factors and maximum strength 3. Then:
All effects up to three-factor interactions are preserved in the fraction.
Main effects are unaliased with each other and with two-factor interactions.
Two-factor interactions are unaliased with each other if they share a factor. For example, the {1, 2} and {1, 3} interactions are unaliased, but the {1, 2} and {3, 4} interactions may be at least partly aliased, as the set {1, 2, 3, 4} contains 4 elements but the strength of the fraction is only 3.
The Fundamental Theorem has a number of important consequences. In particular, it follows almost immediately that if a fraction has strength t then it has resolution t + 1. With additional assumptions, a stronger conclusion is possible:
If a fraction has maximum strength t, then it has maximum resolution t + 1.
This result replaces the group-theoretic condition (minimum wordlength) in regular fractions with a combinatorial condition (maximum strength) in arbitrary ones.
Example. An important class of nonregular two-level designs are Plackett-Burman designs. As with all fractions constructed from Hadamard matrices, they have strength 2, and therefore resolution 3. The smallest such design has 11 factors and 12 runs (treatment combinations), and is displayed in the article on such designs. Since 2 is its maximum strength, 3 is its maximum resolution. Some detail about its aliasing pattern is given in the next section.
Partial aliasing
In regular fractions there is no partial aliasing: Each effect is either preserved or completely lost, and effects are either unaliased or completely aliased. The same holds in regular experiments with s > 2 if one considers only main effects and components of interaction. However, a limited form of partial aliasing occurs in the latter. For example, in the 3^(4−1) design described above the overall ABCD interaction is partly lost, since its component ABCD^2 is completely lost in the fraction while its other components (such as ABCD itself) are preserved. Similarly, the main effect of A is partly aliased with the BCD interaction since A is completely aliased with its component BCD^2 and unaliased with the others.
In contrast, partial aliasing is uncontrolled and pervasive in nonregular fractions. In the 12-run Plackett–Burman design described in the previous section, for example, with factors labeled 1 through 11, the only complete aliasing is between "complementary effects" such as {1} and {2, 3, …, 11}, or {1, 2} and {3, 4, …, 11}. Here the main effect of factor 1 is unaliased with the other main effects and with any two-factor interaction involving factor 1 (such as {1, 2}), but it is partly aliased with 45 of the 55 two-factor interactions, 120 of the 165 three-factor interactions, and 150 of the 330 four-factor interactions. This phenomenon is generally described as complex aliasing. Similarly, 924 effects are preserved in the fraction, 1122 effects are partly lost, and only one (the top-level interaction {1, 2, …, 11}) is completely lost.
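The partial aliasing can be probed directly on the 12-run design. In the sketch below, the first row is the commonly cited Plackett–Burman generator (an assumption here; any valid 12-run construction behaves the same way); the dot products show exact orthogonality between main effects but correlation ±1/3 with disjoint two-factor interactions:

```python
import numpy as np

# First row: a commonly cited Plackett-Burman 12-run generator; rows 2-11
# are its cyclic shifts, and row 12 is all -1.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
D = np.array([np.roll(gen, i) for i in range(11)]
             + [-np.ones(11, dtype=int)])          # 12 runs x 11 factors

a  = D[:, 0]                    # main effect of factor 1
b  = D[:, 1]                    # main effect of factor 2
cd = D[:, 2] * D[:, 3]          # interaction of factors 3 and 4 (disjoint)
print(int(a @ b))               # 0: main effects mutually unaliased
print(int(a @ cd))              # +-4: partial aliasing, correlation +-1/3
```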
Analysis of variance (ANOVA)
Wu and Hamada analyze a data set collected on the 3^(4−1) fractional design described above. Significance testing in the analysis of variance (ANOVA) requires that the error sum of squares and the degrees of freedom for error be nonzero. In order to ensure this, two design decisions have been made:
Interactions of three or four factors have been assumed absent. This decision is consistent with the effect hierarchy principle.
Replication (inclusion of repeated observations) is necessary. In this case, three observations were made on each of the 27 treatment combinations in the fraction, for a total of 81 observations.
The accompanying table shows just two columns of an ANOVA table for this experiment. Only main effects and components of two-factor interactions are listed, including three pairs of aliases. Aliasing between some two-factor interactions is expected, since the maximum resolution of this design is 4.
This experiment studied two response variables. In both cases, some aliased interactions were statistically significant. This poses a challenge of interpretation, since without more information or further assumptions it is impossible to determine which interaction is responsible for significance. In some instances there may be a theoretical basis to make this determination.
This example shows one advantage of fractional designs. The full factorial experiment has 81 treatment combinations, but taking one observation on each of these would leave no degrees of freedom for error. The fractional design also uses 81 observations, but on just 27 treatment combinations, in such a way that one can make inferences on main effects and on (most) two-factor interactions. This may be sufficient for practical purposes.
History
The first statistical use of the term "aliasing" in print is the 1945 paper by Finney, which dealt with regular fractions with 2 or 3 levels. The term was imported into signal processing theory a few years later, possibly influenced by its use in factorial experiments; the history of that usage is described in the article on aliasing in signal processing.
The 1961 paper in which Box and Hunter introduced the concept of "resolution" dealt with regular two-level designs, but their initial definition makes no reference to lengths of defining words and so can be understood rather generally. Rao actually makes implicit use of resolution in his 1947 paper introducing orthogonal arrays, reflected in an important parameter inequality that he develops. He distinguishes effects in full and fractional designs by using different symbols, but makes no mention of aliasing.
The term confounded is often used as a synonym for aliased, and so one must read the literature carefully. The former term "is generally reserved for the indistinguishability of a treatment contrast and a block contrast", that is, for confounding with blocks. Kempthorne has shown how confounding with blocks in an n-factor experiment may be viewed as aliasing in a fractional design with n + 1 factors, but it is unclear whether one can do the reverse.
See also
The article on fractional factorial designs discusses examples in two-level experiments.
Notes
Citations
References
Design of experiments
Statistical process control | Aliasing (factorial experiments) | Engineering | 5,499 |
53,404,706 | https://en.wikipedia.org/wiki/Cost%20of%20drug%20development | The cost of drug development is the full cost of bringing a new drug (i.e., new chemical entity) to market from drug discovery through clinical trials to approval. Typically, companies spend tens to hundreds of millions of U.S. dollars on drug development. One element of the complexity is that the much-publicized final numbers often not only include the out-of-pocket expenses for conducting a series of Phase I-III clinical trials, but also the capital costs of the long period (10 or more years) during which the company must cover out-of-pocket costs for preclinical drug discovery. Additionally, companies often do not report whether a given figure includes capitalized costs, only out-of-pocket expenses, or both.
One study assessed both capitalized and out-of-pocket costs as about US$1.8 billion and $870 million, respectively.
In an analysis of the drug development costs for 98 companies over a decade, the average cost per drug developed and approved by a single-drug company was $350 million. But for companies that approved between eight and 13 drugs over 10 years, the cost per drug went as high as $5.5 billion.
A 2020 study estimated that the median cost of bringing a new drug to market was $985 million and the average cost was $1.3 billion, much lower than earlier studies, which had placed the average cost of drug development at $2.8 billion.
Alternatives to conventional drug development aim for universities, governments, and the pharmaceutical industry to collaborate and optimize resources.
Research and development
Severin Schwan, the CEO of the Swiss company Roche, reported that Roche's research and development costs amounted to $12.3 billion in 2018, a quarter of the entire National Institutes of Health budget. Given the profit-driven nature of pharmaceutical companies and their research and development expenses, companies use their research and development expenses as a starting point to determine appropriate yet profitable prices.
Pharmaceutical companies spend a large amount on research and development before a drug is released to the market, and these costs can be divided into three major areas: discovery research in the drug's specific medical field, clinical trials, and failed drugs.
Discovery
Drug discovery is the area of research and development that amounts to the most time and money. The process can involve scientists determining the germs, viruses, and bacteria that cause a specific disease or illness. The time frame can range from 3 to 20 years, and costs can range from several billion to tens of billions of dollars. Research teams attempt to break down disease components to find abnormal events/processes taking place in the body. Only then do scientists work on developing chemical compounds to treat these abnormalities, with the aid of computer models.
After "discovery" and a creation of a chemical compound, pharmaceutical companies move forward with the Investigational New Drug (IND) Application from the FDA. After the investigation into the drug and given approval, pharmaceutical companies can move into pre-clinical trials and clinical trials.
Trials
Drug development and pre-clinical trials focus on non-human subjects and work on animals such as rats. This is the most inexpensive phase of testing.
The Food and Drug Administration mandates a three-phase clinical trial program that tests for side effects and for the effectiveness of the drug, with a single phase of clinical trials costing upwards of $100 million.
After a drug has passed through all three phases, the pharmaceutical company can move forward with a New Drug Application from the FDA. In 2014, the FDA charged between $1 million and $2 million for an NDA.
Failed drugs
The processes of "discovery" and clinical trials amounts to approximately 12 years from research lab to the patient, in which about 10% of all drugs that start pre-clinical trials ever make it to actual human testing.> Each pharmaceutical company (which have hundreds of drugs moving in and out of these phases) will never recuperate the costs of "failed drugs". Thus, profits made from one drug need to cover the costs of previous "failed drugs".
Financial risk
Overall, research and development expenses relating to developing drugs amount to billions of dollars. A 2012 study found that research and development of a drug is riskier than product development in other industries because it is lengthy, costly, and highly uncertain, particularly due to unpredictable human physiological responses to drugs. As an example, in 2018, Roche spent $11 billion for research and developmental expenses, and had two failed Phase III trials for an Alzheimer's drug candidate.
Research on costs
Tufts Center for the Study of Drug Development has published numerous studies estimating the cost of developing new pharmaceutical drugs. In 2001, researchers from the Center estimated that the cost of doing so was $802 million, and in 2014, they released a study estimating that this amount had risen to nearly $2.6 billion. The 2014 study was criticized by Médecins Sans Frontières, which said it was unreliable because the industry's research and development spending is not made public. Aaron Carroll of the New York Times also criticized the study, saying it "contains a lot of assumptions that tend to favor the pharmaceutical industry." The Center's 2016 estimate, published in the Journal of Health Economics, found the cost to have averaged $2.87 billion (in 2013 dollars).
A 2022 study invalidated the common argument for high medication costs that research and development investments are reflected in and necessitate the treatment costs, finding no correlation for investments in drugs (for cases where transparency was sufficient) and their costs.
References
Further reading
Drug pricing
Drug discovery | Cost of drug development | Chemistry,Biology | 1,131 |
34,040,794 | https://en.wikipedia.org/wiki/Acceleration%20voltage | In accelerator physics, the term acceleration voltage means the effective voltage experienced by a charged particle passing along a defined straight line. If not specified further, the term is likely to refer to the longitudinal effective acceleration voltage.
The acceleration voltage is an important quantity for the design of microwave cavities for particle accelerators. See also shunt impedance.
For the special case of an electrostatic field that a particle passes through, the acceleration voltage is directly given by integrating the electric field along its path. The following considerations generalize to time-dependent fields.
Longitudinal voltage
The longitudinal effective acceleration voltage is given by the kinetic energy gain experienced by a particle with velocity βc along a defined straight path (path integral of the longitudinal Lorentz forces) divided by its charge,
V∥ = (1/q) ∫ F∥(s, t = s/(βc)) ds.
For resonant structures, e.g. SRF cavities, this may be expressed as a Fourier integral, because the fields E, B, and the resulting Lorentz force F∥, are proportional to exp(iωt) (eigenmodes),
V∥ = (1/q) ∫ F∥(s) exp(iωs/(βc)) ds,
with the wave number k = ω/(βc).
Since the particle's kinetic energy can only be changed by electric fields, this reduces to
V∥ = ∫ E∥(s) exp(iks) ds.
Particle Phase considerations
Note that by the given definition, V∥ is a complex quantity. This is advantageous, since the relative phase between particle and the experienced field was fixed in the previous considerations (the particle passing s = 0 at t = 0 experienced maximum electric force).
To account for this degree of freedom, an additional phase factor exp(iφ) is included in the eigenmode field definition,
E∥(s, t) = E∥(s) exp(i(ωt + φ)),
which leads to a modified expression
V∥ = exp(iφ) ∫ E∥(s) exp(iks) ds
for the voltage. In comparison to the former expression, only a phase factor with unit length occurs. Thus, the absolute value |V∥| of the complex quantity is independent of the particle-to-eigenmode phase φ. It represents the maximum achievable voltage that is experienced by a particle with optimal phase to the applied field, and is the relevant physical quantity.
Transit time factor
A quantity named transit time factor
T = |V∥| / V0
is often defined which relates the effective acceleration voltage |V∥| to the time-independent acceleration voltage
V0 = ∫ E∥(s) ds.
In this notation, the effective acceleration voltage |V∥| is often expressed as V0 T.
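As a numerical sketch of these definitions, consider a uniform axial field over a gap, for which the transit time factor has the familiar sinc-type closed form; all parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad

# Uniform axial field E0 over a gap of length L (pillbox-like cavity);
# the particle crosses on-crest with velocity beta*c.
c, beta, f, L, E0 = 299792458.0, 1.0, 1.3e9, 0.1, 1.0e6
k = 2 * np.pi * f / (beta * c)

V0 = E0 * L                                                  # time-independent voltage
Veff, _ = quad(lambda s: E0 * np.cos(k * s), -L / 2, L / 2)  # on-crest integral
T_numeric  = abs(Veff) / V0
T_analytic = np.sin(k * L / 2) / (k * L / 2)                 # sinc-type result
print(T_numeric, T_analytic)                                 # both ~0.72; Veff = T*V0
```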
Transverse voltage
In symbolic analogy to the longitudinal voltage, one can define effective voltages V_x and V_y in two orthogonal directions x, y that are transversal to the particle trajectory,
V_x = (1/q) ∫ F_x(s) exp(iks) ds and V_y = (1/q) ∫ F_y(s) exp(iks) ds,
which describe the integrated forces that deflect the particle from its design path. Since the modes that deflect particles may have arbitrary polarizations, the transverse effective voltage may be defined using polar notation by
|V⊥|² = Ṽ_x² + Ṽ_y²,
with the polarization angle
α = arctan(Ṽ_y / Ṽ_x).
The tilde-marked variables are not absolute values, as one might expect, but can have positive or negative sign, to enable a full range for α. For example, if α ∈ (−π/2, π/2] is defined, then Ṽ_x ≥ 0 must hold.
Note that this transverse voltage does not necessarily relate to a real change in the particle's energy, since magnetic fields are also able to deflect particles. Also, this is an approximation for small-angle deflection of the particle, where the particle's trajectory through the field can still be approximated by a straight line.
References
Accelerator physics | Acceleration voltage | Physics | 585 |
3,877,901 | https://en.wikipedia.org/wiki/V%28D%29J%20recombination | V(D)J recombination (variable–diversity–joining rearrangement) is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system.
V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens including bacteria, viruses, parasites, and worms as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity.
In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity".
Background
Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci:
The immunoglobulin heavy locus (IGH@) on chromosome 14, containing the gene segments for the immunoglobulin heavy chain.
The immunoglobulin kappa (κ) locus (IGK@) on chromosome 2, containing the gene segments for one type (κ) of immunoglobulin light chain.
The immunoglobulin lambda (λ) locus (IGL@) on chromosome 22, containing the gene segments for another type (λ) of immunoglobulin light chain.
Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy chain region contains 2 Constant (Cμ and Cδ) gene segments and 44 Variable (V) gene segments, plus 27 Diversity (D) gene segments and 6 Joining (J) gene segments. The light chain genes possess either a single (Cκ) or four (Cλ) Constant gene segments with numerous V and J gene segments but do not have D gene segments. DNA rearrangement causes one copy of each type of gene segment to go into any given lymphocyte, generating an enormous antibody repertoire; roughly 3×10^11 combinations are possible, although some are removed due to self reactivity.
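A rough count of the combinatorial diversity follows from the segment numbers above (the light-chain counts used here are illustrative assumptions, since the text gives exact numbers only for the heavy chain):

```python
# Heavy-chain counts are from the text; the light-chain counts below are
# rough illustrative assumptions.
heavy = 44 * 27 * 6            # V x D x J = 7,128 heavy-chain combinations
kappa = 40 * 5                 # assumed functional V-kappa x J-kappa
lam   = 30 * 4                 # assumed functional V-lambda x J-lambda
pairs = heavy * (kappa + lam)
print(f"{pairs:,} combinations before junctional diversity")  # ~2.3 million
# Junctional diversity (P/N nucleotides, trimming) multiplies this by
# several orders of magnitude, toward the ~3x10^11 figure quoted above.
```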
Most T cell receptors are composed of a variable alpha chain and a beta chain. The T cell receptor genes are similar to immunoglobulin genes in that they too contain multiple V, D, and J gene segments in their beta chains (and V and J gene segments in their alpha chains) that are rearranged during the development of the lymphocyte to provide that cell with a unique antigen receptor. The T cell receptor in this sense is the topological equivalent to an antigen-binding fragment of the antibody, both being part of the immunoglobulin superfamily.
An autoimmune response is prevented by eliminating cells that self-react. For T cells, this occurs in the thymus by testing the cell against an array of self antigens expressed through the function of the autoimmune regulator (AIRE). The immunoglobulin lambda light chain locus contains protein-coding genes that can be lost with its rearrangement. This is based on a physiological mechanism and is not pathogenetic for leukemias or lymphomas. A cell persists if it creates a successful product that does not self-react, otherwise it is pruned via apoptosis.
Immunoglobulins
Heavy chain
In the developing B cell, the first recombination event to occur is between one D and one J gene segment of the heavy chain locus. Any DNA between these two gene segments is deleted. This D-J recombination is followed by the joining of one V gene segment, from a region upstream of the newly formed DJ complex, forming a rearranged VDJ gene segment. All other gene segments between V and D segments are now deleted from the cell's genome. Primary transcript (unspliced RNA) is generated containing the VDJ region of the heavy chain and both the constant mu and delta chains (Cμ and Cδ). (i.e. the primary transcript contains the segments: V-D-J-Cμ-Cδ). The primary RNA is processed to add a polyadenylated (poly-A) tail after the Cμ chain and to remove sequence between the VDJ segment and this constant gene segment. Translation of this mRNA leads to the production of the IgM heavy chain protein.
Light chain
The kappa (κ) and lambda (λ) chains of the immunoglobulin light chain loci rearrange in a very similar way, except that the light chains lack a D segment. In other words, the first step of recombination for the light chains involves the joining of the V and J chains to give a VJ complex before the addition of the constant chain gene during primary transcription. Translation of the spliced mRNA for either the kappa or lambda chains results in formation of the Ig κ or Ig λ light chain protein.
Assembly of the Ig μ heavy chain and one of the light chains results in the formation of membrane bound form of the immunoglobulin IgM that is expressed on the surface of the immature B cell.
T cell receptors
During thymocyte development, the T cell receptor (TCR) chains undergo essentially the same sequence of ordered recombination events as that described for immunoglobulins. D-to-J recombination occurs first in the β-chain of the TCR. This process can involve either the joining of the Dβ1 gene segment to one of six Jβ1 segments or the joining of the Dβ2 gene segment to one of six Jβ2 segments. DJ recombination is followed (as above) with Vβ-to-DβJβ rearrangements. All gene segments between the Vβ-Dβ-Jβ gene segments in the newly formed complex are deleted and the primary transcript is synthesized that incorporates the constant domain gene (Vβ-Dβ-Jβ-Cβ). mRNA transcription splices out any intervening sequence and allows translation of the full length protein for the TCR β-chain.
The rearrangement of the alpha (α) chain of the TCR follows β chain rearrangement, and resembles V-to-J rearrangement described for Ig light chains (see above). The assembly of the β- and α- chains results in formation of the αβ-TCR that is expressed on a majority of T cells.
Mechanism
Key enzymes and components
The process of V(D)J recombination is mediated by VDJ recombinase, which is a diverse collection of enzymes. The key enzymes involved are recombination activating genes 1 and 2 (RAG), terminal deoxynucleotidyl transferase (TdT), and Artemis nuclease, a member of the ubiquitous non-homologous end joining (NHEJ) pathway for DNA repair. Several other enzymes are known to be involved in the process and include DNA-dependent protein kinase (DNA-PK), X-ray repair cross-complementing protein 4 (XRCC4), DNA ligase IV, non-homologous end-joining factor 1 (NHEJ1; also known as Cernunnos or XRCC4-like factor [XLF]), the recently discovered Paralog of XRCC4 and XLF (PAXX), and DNA polymerases λ and μ. Some enzymes involved are specific to lymphocytes (e.g., RAG, TdT), while others are found in other cell types and even ubiquitously (e.g., NHEJ components).
To maintain the specificity of recombination, V(D)J recombinase recognizes and binds to recombination signal sequences (RSSs) flanking the variable (V), diversity (D), and joining (J) genes segments. RSSs are composed of three elements: a heptamer of seven conserved nucleotides, a spacer region of 12 or 23 basepairs in length, and a nonamer of nine conserved nucleotides. While the majority of RSSs vary in sequence, the consensus heptamer and nonamer sequences are CACAGTG and ACAAAAACC, respectively; and although the sequence of the spacer region is poorly conserved, the length is highly conserved. The length of the spacer region corresponds to approximately one (12 basepairs) or two turns (23 basepairs) of the DNA helix. Following what is known as the 12/23 Rule, gene segments to be recombined are usually adjacent to RSSs of different spacer lengths (i.e., one has a "12RSS" and one has a "23RSS"). This is an important feature in the regulation of V(D)J recombination.
Process
V(D)J recombination begins when V(D)J recombinase (through the activity of RAG1) binds a RSS flanking a coding gene segment (V, D, or J) and creates a single-strand nick in the DNA between the first base of the RSS (just before the heptamer) and the coding segment. This is essentially energetically neutral (no need for ATP hydrolysis) and results in the formation of a free 3' hydroxyl group and a 5' phosphate group on the same strand. The reactive hydroxyl group is positioned by the recombinase to attack the phosphodiester bond of opposite strand, forming two DNA ends: a hairpin (stem-loop) on the coding segment and a blunt end on the signal segment. The current model is that DNA nicking and hairpin formation occurs on both strands simultaneously (or nearly so) in a complex known as a recombination center.
The blunt signal ends are flush ligated together to form a circular piece of DNA containing all of the intervening sequences between the coding segments known as a signal joint (although circular in nature, this is not to be confused with a plasmid). While originally thought to be lost during successive cell divisions, there is evidence that signal joints may re-enter the genome and lead to pathologies by activating oncogenes or interrupting tumor suppressor gene function(s).
The coding ends are processed further prior to their ligation by several events that ultimately lead to junctional diversity. Processing begins when DNA-PK binds to each broken DNA end and recruits several other proteins including Artemis, XRCC4, DNA ligase IV, Cernunnos, and several DNA polymerases. DNA-PK forms a complex that leads to its autophosphorylation, resulting in activation of Artemis. The coding end hairpins are opened by the activity of Artemis. If they are opened at the center, a blunt DNA end will result; however in many cases, the opening is "off-center" and results in extra bases remaining on one strand (an overhang). These are known as palindromic (P) nucleotides due to the palindromic nature of the sequence produced when DNA repair enzymes resolve the overhang. The process of hairpin opening by Artemis is a crucial step of V(D)J recombination and is defective in the severe combined immunodeficiency (scid) mouse model.
Next, XRCC4, Cernunnos, and DNA-PK align the DNA ends and recruit terminal deoxynucleotidyl transferase (TdT), a template-independent DNA polymerase that adds non-templated (N) nucleotides to the coding end. The addition is mostly random, but TdT does exhibit a preference for G/C nucleotides. As with all known DNA polymerases, the TdT adds nucleotides to one strand in a 5' to 3' direction.
Lastly, exonucleases can remove bases from the coding ends (including any P or N nucleotides that may have formed). DNA polymerases λ and μ then insert additional nucleotides as needed to make the two ends compatible for joining. This is a stochastic process, therefore any combination of the addition of P and N nucleotides and exonucleolytic removal can occur (or none at all). Finally, the processed coding ends are ligated together by DNA ligase IV.
All of these processing events result in a paratope that is highly variable, even when the same gene segments are recombined. V(D)J recombination allows for the generation of immunoglobulins and T cell receptors to antigens that neither the organism nor its ancestor(s) need to have previously encountered, allowing for an adaptive immune response to novel pathogens that develop or to those that frequently change (e.g., seasonal influenza). However, a major caveat to this process is that the DNA sequence must remain in-frame in order to maintain the correct amino acid sequence in the final protein product. If the resulting sequence is out-of-frame, the development of the cell will be arrested, and the cell will not survive to maturity. V(D)J recombination is therefore a very costly process that must be (and is) strictly regulated and controlled.
See also
B cell receptor
T cell receptor
Basel Institute for Immunology
Charles M. Steinberg
NKT cell
Recombination-activating gene
References
Further reading
V(D)J Recombination. Series: Advances in Experimental Medicine and Biology, Vol. 650 Ferrier, Pierre (Ed.) Landes Bioscience 2009, XII, 199 p.
Immune system
Lymphocytes
Immunology | V(D)J recombination | Biology | 3,053 |
64,263,166 | https://en.wikipedia.org/wiki/Pantometrum%20Kircherianum | Pantometrum Kircherianum is a 1660 work by the Jesuit scholars Gaspar Schott and Athanasius Kircher. It was dedicated to Christian Louis I, Duke of Mecklenburg and printed in Würzburg by Johann Gottfried Schönwetter. It was a description, with building instructions, of a measuring device called the pantometer, which Kircher had developed some years before. The first edition included 32 copperplate illustrations.
Description of the pantometer
The name "pantometer" derives from Greek, in which "pan" means "all" and "metron" means "measure" - indicating that this instrument can be used to measure anything. As described in the book, it consisted of a square frame, a dioptra, and a disc that fitted within the square. The disc contained a built-in compass and a space for putting a sheet of paper. The disc could turn freely within the square, or be locked in a fixed position. Mounted on this apparatus was a movable ruler parallel to the edge of the square on which the dioptra was attached. An illustration in the book showed how the device could be used to measure the distance of objects by triangulating from two different points on a baseline.
The introduction to the book emphasised both the accuracy of the device and its ease of use, and stated that it could be used to "measure all, witness latitudes, longitudes, altitudes, depths and surfaces, terrestrial and celestial bodies, and whatever indeed we are accustomed to doing with other instruments."
Kircher's development of the pantometer
Kircher had mentioned the pantometer in his Specula Melitensis Encyclica, noting that it was designed to help the Knights Hospitaller to solve "the most important mathematical and physical problems." It was a surveying tool that resembled a draughts board and could be used to calculate distances, weights and dimensions. In Magnes sive de Arte Magnetica (1643) Kircher described an "Instrumentum, Pantometrum, Ichnographicum Magneticum" which allowed all things to be measured. It was 'magnetic' because it incorporated a compass, and 'ichnographic' because it could be used in map-making.
According to Schott, Kircher had first conceived of it in the company of Father Ziegler, perhaps as early as 1623. Schott had been with Kircher in 1631 when he first assembled the instrument and named it the 'pantometrum', sending an early example to the Holy Roman Emperor Ferdinand III. Kircher had certainly used the pantometer himself to take scientific measurements when he was lowered into the crater of Vesuvius in 1638.
Later editions and references
Pantometrum Kircherianum was reprinted by Cholinus in Frankfurt in 1668 and again in 1669. The work was referenced in books by a number of later writers, including Jacob Leupold's Theatrum Arithmetico-Geometricum (1727) and Christian Wolff's Mathematisches Lexikon (1747).
External links
digital copy of Pantometrum Kircherianum at the Max Planck Institute Library
digital copy of Pantometrum Kircherianum at the Bayerische Staatsbibliothek
See also
Graphometer
References
1660 in science
1660 in the Holy Roman Empire
1660 books
Dimensional instruments
Athanasius Kircher | Pantometrum Kircherianum | Physics,Mathematics | 704 |
3,287,188 | https://en.wikipedia.org/wiki/Xanthene | Xanthene (9H-xanthene, 10H-9-oxaanthracene) is the organic compound with the formula CH2[C6H4]2O. It is a yellow solid that is soluble in common organic solvents. Xanthene itself is an obscure compound, but many of its derivatives are useful dyes.
Xanthene dyes
Dyes that contain a xanthene core include bikaverin, fluorescein, eosins, and rhodamines. Xanthene dyes tend to be fluorescent, brilliant dyes, ranging from yellow to pink to bluish red. Many xanthene dyes can be prepared by condensation of derivatives of phthalic anhydride with derivatives of resorcinol or 3-aminophenol.
Further reading
See also
Xanthone
Xanthydrol
References
Dyes
Fungicides | Xanthene | Biology | 199 |
15,162,845 | https://en.wikipedia.org/wiki/Splicing%20factor | A splicing factor is a protein involved in the removal of introns from strings of messenger RNA, so that the exons can bind together; the process takes place in particles known as spliceosomes. Genes are progressively switched off as people age, and splicing factors can reverse this trend. Splicing factors regulate the binding of the snRNPs U1 and U2 to the 5' and 3' ends of the intron during splicing and can either be splicing promoters or splicing repressors.
One study found that splicing factors were produced upon application of resveratrol analogues, which induced senescent cells to rejuvenate.
Splicing factor 3b
Splicing factor 3b is a protein complex consisting of the following proteins: PHF5A, SF3B1, SF3B2, SF3B3, SF3B4, SF3B5, SF3B6.
Notes
Biochemistry
Spliceosome
Ageing | Splicing factor | Chemistry,Biology | 214 |
38,700,907 | https://en.wikipedia.org/wiki/56%20Ursae%20Majoris | 56 Ursae Majoris (56 UMa) is a star in the constellation Ursa Major. Its apparent magnitude is 5.03. It is a single-lined spectroscopic binary with an orbital period of about 45 years. The companion star is likely a massive neutron star formed in a supernova that exploded around 100,000 years ago.
References
Ursa Major
G-type giants
Barium stars
Spectroscopic binaries
Ursae Majoris, 56
Durchmusterung objects
098839
055560
4392 | 56 Ursae Majoris | Astronomy | 112 |
2,527,048 | https://en.wikipedia.org/wiki/Isotopes%20of%20iron | Natural iron (Fe) consists of four stable isotopes: 5.845% of 54Fe (possibly radioactive with a half-life over 4.4×10^20 years), 91.754% of 56Fe, 2.119% of 57Fe and 0.286% of 58Fe. There are 28 known radioisotopes and 8 nuclear isomers, the most stable of which are 60Fe (half-life 2.6 million years) and 55Fe (half-life 2.7 years).
Much of the past work on measuring the isotopic composition of iron has centered on determining 60Fe variations due to processes accompanying nucleosynthesis (i.e., meteorite studies) and ore formation. In the last decade however, advances in mass spectrometry technology have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work has been driven by the Earth and planetary science communities, though applications to biological and industrial systems are beginning to emerge.
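For reference, the quoted half-lives translate into surviving fractions via the usual decay law N/N0 = 2^(−t/T½) (a minimal sketch):

```python
# Surviving fraction N/N0 = 2 ** (-t / T_half), using the half-lives above.
half_lives_years = {"Fe-60": 2.6e6, "Fe-55": 2.7}
for iso, t_half in half_lives_years.items():
    for t in (1.0, 10.0, 1.0e6):
        print(f"{iso}: {2 ** (-t / t_half):.3g} remaining after {t:g} years")
```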
List of isotopes
|-id=Iron-45
| rowspan=4|45Fe
| rowspan=4 style="text-align:right" | 26
| rowspan=4 style="text-align:right" | 19
| rowspan=4|45.01547(30)#
| rowspan=4|2.5(2) ms
| 2p (70%)
| 43Cr
| rowspan=4|3/2+#
| rowspan=4|
| rowspan=4|
|-
| β+, p (18.9%)
| 44Cr
|-
| β+, 2p (7.8%)
| 43V
|-
| β+ (3.3%)
| 45Mn
|-id=Iron-46
| rowspan=3|46Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 20
| rowspan=3|46.00130(32)#
| rowspan=3|13.0(20) ms
| β+, p (78.7%)
| 45Cr
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β+ (21.3%)
| 46Mn
|-
| β+, 2p?
| 44V
|-id=Iron-47
| rowspan=2|47Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 21
| rowspan=2|46.99235(54)#
| rowspan=2|21.9(2) ms
| β+, p (88.4%)
| 46Cr
| rowspan=2|7/2−#
| rowspan=2|
| rowspan=2|
|-
| β+ (11.6%)
| 47Mn
|-id=Iron-48
| rowspan=2|48Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 22
| rowspan=2|47.980667(99)
| rowspan=2|45.3(6) ms
| β+ (84.7%)
| 48Mn
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β+, p (15.3%)
| 47Cr
|-id=Iron-49
| rowspan=2|49Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 23
| rowspan=2|48.973429(26)
| rowspan=2|64.7(3) ms
| β+, p (56.7%)
| 48Cr
| rowspan=2|(7/2−)
| rowspan=2|
| rowspan=2|
|-
| β+ (43.3%)
| 49Mn
|-id=Iron-50
| rowspan=2|50Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 24
| rowspan=2|49.9629880(90)
| rowspan=2|152.0(6) ms
| β+
| 50Mn
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β+, p?
| 49Cr
|-id=Iron-51
| 51Fe
| style="text-align:right" | 26
| style="text-align:right" | 25
| 50.9568551(15)
| 305.4(23) ms
| β+
| 51Mn
| 5/2−
|
|
|-id=Iron-52
| 52Fe
| style="text-align:right" | 26
| style="text-align:right" | 26
| 51.94811336(19)
| 8.275(8) h
| β+
| 52Mn
| 0+
|
|
|-id=Iron-52m
| rowspan=2 style="text-indent:1em" | 52mFe
| rowspan=2 colspan="3" style="text-indent:2em" | 6960.7(3) keV
| rowspan=2|45.9(6) s
| β+ (99.98%)
| 52Mn
| rowspan=2|12+
| rowspan=2|
| rowspan=2|
|-
| IT (0.021%)
| 52Fe
|-id=Iron-53
| 53Fe
| style="text-align:right" | 26
| style="text-align:right" | 27
| 52.9453056(18)
| 8.51(2) min
| β+
| 53Mn
| 7/2−
|
|
|-id=Iron-53m
| style="text-indent:1em" | 53mFe
| colspan="3" style="text-indent:2em" | 3040.4(3) keV
| 2.54(2) min
| IT
| 53Fe
| 19/2−
|
|
|-
| 54Fe
| style="text-align:right" | 26
| style="text-align:right" | 28
| 53.93960819(37)
| colspan=3 align=center|Observationally Stable
| 0+
| 0.05845(105)
|
|-id=Iron-54m
| style="text-indent:1em" | 54mFe
| colspan="3" style="text-indent:2em" | 6527.1(11) keV
| 364(7) ns
| IT
| 54Fe
| 10+
|
|
|-
| 55Fe
| style="text-align:right" | 26
| style="text-align:right" | 29
| 54.93829116(33)
| 2.7562(4) y
| EC
| 55Mn
| 3/2−
|
|
|-
| 56Fe
| style="text-align:right" | 26
| style="text-align:right" | 30
| 55.93493554(29)
| colspan=3 align=center|Stable
| 0+
| 0.91754(106)
|
|-
| 57Fe
| style="text-align:right" | 26
| style="text-align:right" | 31
| 56.93539195(29)
| colspan=3 align=center|Stable
| 1/2−
| 0.02119(29)
|
|-
| 58Fe
| style="text-align:right" | 26
| style="text-align:right" | 32
| 57.93327358(34)
| colspan=3 align=center|Stable
| 0+
| 0.00282(12)
|
|-id=Iron-59
| 59Fe
| style="text-align:right" | 26
| style="text-align:right" | 33
| 58.93487349(35)
| 44.500(12) d
| β−
| 59Co
| 3/2−
|
|
|-
| 60Fe
| style="text-align:right" | 26
| style="text-align:right" | 34
| 59.9340702(37)
| 2.62(4)×106 y
| β−
| 60Co
| 0+
| trace
|
|-id=Iron-61
| 61Fe
| style="text-align:right" | 26
| style="text-align:right" | 35
| 60.9367462(28)
| 5.98(6) min
| β−
| 61Co
| (3/2−)
|
|
|-id=Iron-61m
| style="text-indent:1em" | 61mFe
| colspan="3" style="text-indent:2em" | 861.67(11) keV
| 238(5) ns
| IT
| 61Fe
| 9/2+
|
|
|-id=Iron-62
| 62Fe
| style="text-align:right" | 26
| style="text-align:right" | 36
| 61.9367918(30)
| 68(2) s
| β−
| 62Co
| 0+
|
|
|-id=Iron-63
| 63Fe
| style="text-align:right" | 26
| style="text-align:right" | 37
| 62.9402727(46)
| 6.1(6) s
| β−
| 63Co
| (5/2−)
|
|
|-id=Iron-64
| 64Fe
| style="text-align:right" | 26
| style="text-align:right" | 38
| 63.9409878(54)
| 2.0(2) s
| β−
| 64Co
| 0+
|
|
|-id=Iron-65
| rowspan=2|65Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 39
| rowspan=2|64.9450153(55)
| rowspan=2|805(10) ms
| β−
| 65Co
| rowspan=2|(1/2−)
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 64Co
|-id=Iron-65m1
| style="text-indent:1em" | 65m1Fe
| colspan="3" style="text-indent:2em" | 393.7(2) keV
| 1.12(15) s
| β−?
| 65Co
| (9/2+)
|
|
|-id=Iron-65m2
| style="text-indent:1em" | 65m2Fe
| colspan="3" style="text-indent:2em" | 397.6(2) keV
| 418(12) ns
| IT
| 65Fe
| (5/2+)
|
|
|-id=Iron-66
| rowspan=2|66Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 40
| rowspan=2|65.9462500(44)
| rowspan=2|467(29) ms
| β−
| 66Co
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 65Co
|-id=Iron-67
| rowspan=2|67Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 41
| rowspan=2|66.9509300(41)
| rowspan=2|394(9) ms
| β−
| 67Co
| rowspan=2|(1/2−)
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 66Co
|-id=Iron-67m1
| style="text-indent:1em" | 67m1Fe
| colspan="3" style="text-indent:2em" | 403(9) keV
| 64(17) μs
| IT
| 67Fe
| (5/2+,7/2+)
|
|
|-id=Iron-67m2
| style="text-indent:1em" | 67m2Fe
| colspan="3" style="text-indent:2em" | 450(100)# keV
| 75(21) μs
| IT
| 67Fe
| (9/2+)
|
|
|-id=Iron-68
| rowspan=2|68Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 42
| rowspan=2|67.95288(21)#
| rowspan=2|188(4) ms
| β−
| 68Co
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 67Co
|-id=Iron-69
| rowspan=3|69Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 43
| rowspan=3|68.95792(22)#
| rowspan=3|162(7) ms
| β−
| 69Co
| rowspan=3|1/2−#
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 68Co
|-
| β−, 2n?
| 67Co
|-id=Iron-70
| rowspan=2|70Fe
| rowspan=2 style="text-align:right" | 26
| rowspan=2 style="text-align:right" | 44
| rowspan=2|69.96040(32)#
| rowspan=2|61.4(7) ms
| β−
| 70Co
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 69Co
|-id=Iron-71
| rowspan=3|71Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 45
| rowspan=3|70.96572(43)#
| rowspan=3|34.3(26) ms
| β−
| 71Co
| rowspan=3|7/2+#
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 70Co
|-
| β−, 2n?
| 69Co
|-id=Iron-72
| rowspan=3|72Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 46
| rowspan=3|71.96860(54)#
| rowspan=3|17.0(10) ms
| β−
| 72Co
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 71Co
|-
| β−, 2n?
| 70Co
|-id=Iron-73
| rowspan=3|73Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 47
| rowspan=3|72.97425(54)#
| rowspan=3|12.9(16) ms
| β−
| 73Co
| rowspan=3|7/2+#
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 72Co
|-
| β−, 2n?
| 71Co
|-id=Iron-74
| rowspan=3|74Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 48
| rowspan=3|73.97782(54)#
| rowspan=3|5(5) ms
| β−
| 74Co
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 73Co
|-
| β−, 2n?
| 72Co
|-id=Iron-75
| rowspan=3|75Fe
| rowspan=3 style="text-align:right" | 26
| rowspan=3 style="text-align:right" | 49
| rowspan=3|74.98422(64)#
| rowspan=3|9# ms[>620 ns]
| β−?
| 75Co
| rowspan=3|9/2+#
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 74Co
|-
| β−, 2n?
| 73Co
|-id=Iron-76
| 76Fe
| style="text-align:right" | 26
| style="text-align:right" | 50
| 75.98863(64)#
| 3# ms[>410 ns]
| β−?
| 76Co
| 0+
|
|
Iron-54
54Fe is observationally stable: it is theoretically able to decay to 54Cr via double electron capture (εε), but this decay has never been observed, and only a lower limit on the half-life, far exceeding the age of the universe, has been established.
Iron-56
56Fe is the most abundant isotope of iron. It is also the isotope with the lowest mass per nucleon, 930.412 MeV/c², though not the isotope with the highest nuclear binding energy per nucleon, which is nickel-62. However, because of the details of how nucleosynthesis works, 56Fe is a more common endpoint of fusion chains inside supernovae, where it is mostly produced as 56Ni, which subsequently decays. Thus, 56Fe is more common in the universe, relative to other metals with a very high binding energy, including 62Ni, 58Fe and 60Ni.
The high nuclear binding energy of 56Fe represents the point where further nuclear reactions become energetically unfavorable. Therefore, it is among the heaviest elements formed in stellar nucleosynthesis reactions in massive stars. These reactions fuse lighter elements like magnesium, silicon, and sulfur to form heavier elements. Among the heavier elements formed is 56Ni, which subsequently decays to 56Co and then 56Fe.
Iron-57
57Fe is widely used in Mössbauer spectroscopy and the related nuclear resonance vibrational spectroscopy due to the low natural variation in energy of its 14.4 keV nuclear transition.
The transition was famously used to make the first definitive measurement of gravitational redshift, in the 1960 Pound–Rebka experiment.
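The magnitude of the effect Pound and Rebka had to resolve follows from the standard result that a photon climbing a height h in the Earth's gravitational field is redshifted by the fraction gh/c². Below is a minimal numerical sketch; the 22.5 m tower height is the commonly quoted value for the Harvard experiment and is assumed here rather than taken from this article:

 # Back-of-envelope estimate of the gravitational redshift probed with
 # the 14.4 keV line of 57Fe in the Pound-Rebka experiment.
 g = 9.81       # gravitational acceleration, m/s^2
 h = 22.5       # assumed height of the Harvard tower, m
 c = 2.998e8    # speed of light, m/s

 shift = g * h / c**2        # fractional frequency shift, gh/c^2
 print(f"predicted fractional shift: {shift:.2e}")   # about 2.5e-15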
Iron-58
Iron-58 can be used to combat anemia and low iron absorption, to metabolically track iron-controlling human genes, and for tracing elements in nature. Iron-58 is also an assisting reagent in the synthesis of superheavy elements.
Iron-60
Iron-60 has a half-life of 2.6 million years, but was thought until 2009 to have a half-life of 1.5 million years. It undergoes beta decay to cobalt-60, which then decays with a half-life of about 5 years to stable nickel-60. Traces of iron-60 have been found in lunar samples.
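Because the cobalt-60 intermediate is so short-lived compared with its iron-60 parent, the chain settles into secular equilibrium within a few decades, after which the 60Co activity simply tracks the slowly decaying 60Fe activity. A minimal sketch of this behaviour using the two-member Bateman solution, with the half-lives quoted above (5.27 years is assumed for 60Co):

 # 60Fe -> 60Co -> 60Ni decay chain via the two-member Bateman solution.
 import math

 YEAR = 3.156e7                            # seconds per year
 lam_fe = math.log(2) / (2.6e6 * YEAR)     # 60Fe decay constant, 1/s
 lam_co = math.log(2) / (5.27 * YEAR)      # 60Co decay constant, 1/s

 def n_co(t, n_fe0=1.0):
     """60Co atoms at time t (s) from an initially pure 60Fe sample."""
     return (n_fe0 * lam_fe / (lam_co - lam_fe)
             * (math.exp(-lam_fe * t) - math.exp(-lam_co * t)))

 # After several 60Co half-lives the activity ratio approaches 1:
 t = 50 * YEAR
 print(lam_co * n_co(t) / (lam_fe * math.exp(-lam_fe * t)))  # ~1.0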
In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter isotope of 60Fe, and the abundance of the stable iron isotopes could be found, which is evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, together with that released by the radionuclide 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni in extraterrestrial material may also provide further insight into the origin of the Solar System and its early history.
Iron-60 found in fossilized bacteria in sea floor sediments suggests there was a supernova near the Solar System about 2 million years ago. Iron-60 is also found in sediments from 8 million years ago. In 2019, researchers found interstellar 60Fe in Antarctica, which they relate to the Local Interstellar Cloud.
The distance to the supernova of origin can be estimated from the amount of iron-60 intercepted as Earth passes through the expanding supernova ejecta. Assuming that the material ejected in a supernova expands uniformly outward from its origin, at a distance r from the source it is spread over a sphere with surface area 4πr². The fraction of the material intercepted by the Earth depends on the planet's cross-sectional area πR², where R is the Earth's radius, so a supernova ejecting a mass M of 60Fe delivers a mass M·πR²/(4πr²) to Earth. Assuming the intercepted material is distributed uniformly across the surface of the Earth, area 4πR², the mass surface density Σ of the supernova ejecta on Earth is:

Σ = M / (16πr²)

The number N of 60Fe atoms per unit area found on Earth can then be estimated if the typical amount of 60Fe ejected from a supernova is known, by dividing the surface mass density Σ by the mass m of a 60Fe atom:

N = Σ / m = M / (16πr²m)

This equation can be rearranged to find the distance to the supernova:

r = √(M / (16πNm))

An example calculation of the distance to the supernova point of origin is given below. Such a calculation uses speculative values for the terrestrial 60Fe atom surface density and a rough estimate of the mass of 60Fe ejected by a supernova. More sophisticated analyses have been reported that take into consideration the flux and deposition of 60Fe as well as possible interfering background sources.
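A minimal numerical sketch of this estimate follows. Both inputs, the 60Fe surface density N and the ejected 60Fe mass M, are placeholder assumptions chosen to illustrate the arithmetic, not the values used in the published analyses:

 # Distance estimate from r = sqrt(M / (16*pi*N*m)), derived above.
 import math

 M_SUN = 1.989e30          # solar mass, kg
 U = 1.66054e-27           # atomic mass unit, kg
 PARSEC = 3.086e16         # metres per parsec

 M_ejecta = 1e-4 * M_SUN   # assumed 60Fe mass ejected by the supernova
 m_atom = 60 * U           # mass of one 60Fe atom, kg
 N = 4e13                  # assumed 60Fe surface density, atoms/m^2

 r = math.sqrt(M_ejecta / (16 * math.pi * N * m_atom))
 print(f"estimated distance: {r / PARSEC:.0f} pc")  # ~30 pc for these inputs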
Cobalt-60, the decay product of iron-60, emits gamma rays of 1.173 MeV and 1.333 MeV as it decays. These gamma-ray lines have long been important targets for gamma-ray astronomy and have been detected by the gamma-ray observatory INTEGRAL. The signal traces the Galactic plane, showing that 60Fe synthesis is ongoing in our Galaxy, and probing element production in massive stars.
References
Isotope masses from:
Isotopic compositions and standard atomic masses from:
Half-life, spin, and isomer data selected from:
Further reading
Iron
Iron | Isotopes of iron | Chemistry | 4,941 |
62,184,291 | https://en.wikipedia.org/wiki/Limnological%20tower | A limnological tower is a structure constructed in a body of water to facilitate the study of aquatic ecosystems (limnology). Such towers play an important role in drinking water infrastructure by allowing the prediction of algal blooms, which can block filters and affect the taste of the water.
Purpose
Limnological towers provide a fixed structure to which sensors and sampling devices can be affixed. The depth of the structure below water level allows for study of the various layers of water in the lake or reservoir. The management of limnological conditions can be important in reservoirs used to supply drinking water treatment plants: in certain conditions algal blooms can occur, which can block filters, change the pH of the water and cause taste and odour problems. If the sensors extend to bed level, the tower can also be used to monitor the hypolimnion (the lowest layer of water), which in some conditions can become anoxic (low in oxygen content) and so affect the lake ecology.
Limnological towers have been constructed in reservoirs used to supply drinking water in the United Kingdom since algal blooms began causing problems with water quality. By providing data on water conditions and algae levels, the towers allow the behaviour of the algae to be predicted and enable managers to make decisions to alter conditions and prevent algal blooms. These decisions may include altering water inflows (particularly where nutrient-rich intakes are concerned), activating water jets to promote the mixing of different layers of water, and altering the depth from which water is abstracted. Such decisions can affect the behaviour of the reservoir over periods from a few hours to a few years.
Examples
North America
Six combined limnological and meteorological observation towers were established in the Great Lakes on the US-Canadian border in 1961. Three were installed in Lake Huron, two in Lake Ontario and one in Lake Erie by the Great Lakes Institute. These were innovative in design and cheap to construct, being built largely from water pipe. The towers provided measurements of wind speed, air temperature and rainfall, as well as water temperature and current flows at different depths. The shorter towers, in shallower water, were attached directly to the bed; towers in greater depths of water were floating units, with a submerged ballast tank, that were anchored to the lake bed by means of cables and weights.
A further two limnological towers were constructed near Douglas Point in Lake Huron in the 1960s: one built offshore in 1961 and a second in 1969. They are poles anchored to the lake bed by means of a gimbal and braced by tensioned cables and anchor guys. They featured a mobile thermistor sensor that could be moved to any depth on the tower, as well as fixed thermometers at various depths, and were intended to monitor the temperatures of different water layers in the lake.
United Kingdom
A concrete limnological tower was installed at Rutland Water, England's largest reservoir by surface area, when it was built in the early 1970s. The design of the tower was influenced by consultation with the Water Research Centre and was intended to provide the best possible tools to monitor the ecological conditions of the reservoir so that it could be best managed by its operator (the Anglian Water Authority). The tower monitors water temperature, dissolved oxygen levels and water fluorescence (a measure of algal content) at 2 m depth intervals. The tower can also draw water samples for further testing from the various depths, and mounts an automatic weather station. The data are continuous and displayed visually in real time at the reservoir control centre, situated at the dam. The site of the tower was chosen to best suit the needs of the operator. The reservoir consists of two arms – northern and southern – and has been designed such that all nutrient-rich water enters the southern arm, the intention being that nutrients will be depleted before the water is abstracted for use at the eastern end of the site. The northern arm is fed by nutrient-poor sources and should be relatively unaffected by algal blooms. A secondary outlet is available that draws solely from the northern arm, in case the southern arm is affected by algal growth. Additionally, the operators are able to draw directly from the River Nene if the reservoir water is unusable.
The Queen Mother Reservoir near London also has a limnological tower.
References
Limnology
Water supply | Limnological tower | Chemistry,Engineering,Environmental_science | 885 |
4,164,148 | https://en.wikipedia.org/wiki/Oliver%20E.%20Buckley%20Prize | The Oliver E. Buckley Condensed Matter Prize is an annual award given by the American Physical Society "to recognize and encourage outstanding theoretical or experimental contributions to condensed matter physics." It was endowed by AT&T Bell Laboratories as a means of recognizing outstanding scientific work. The prize is named in honor of Oliver Ellsworth Buckley, a former president of Bell Labs. Before 1982, it was known as the Oliver E. Buckley Solid State Prize. It is one of the most prestigious awards in the field of condensed matter physics.
The prize is normally awarded to one person but may be shared if multiple recipients contributed to the same accomplishments. Nominations are active for three years. The prize was endowed in 1952 and first awarded in 1953. Since 2012, the prize has been co-sponsored by HTC-VIA Group.
Recipients
See also
List of physics awards
References
External links
APS page on the Buckley Prize
Condensed matter physics awards
Awards of the American Physical Society
Awards established in 1953 | Oliver E. Buckley Prize | Physics,Materials_science | 194 |
48,791,271 | https://en.wikipedia.org/wiki/Maritime%20mobile-satellite%20service | Maritime mobile-satellite service (MMSS, or maritime mobile-satellite radiocommunication service) is – according to Article 1.29 of the International Telecommunication Union's Radio Regulations (RR) – "A mobile-satellite service in which mobile earth stations are located on board ships; survival craft stations and emergency position-indicating radiobeacon stations may also participate in this service", in addition to serving as navigation systems.
Classification
This radiocommunication service is classified in accordance with ITU Radio Regulations (article 1) as follows:
Mobile service
Maritime mobile service (article 1.28)
Maritime mobile-satellite service
Port operations service (article 1.30)
Ship movement service (article 1.31)
Frequency allocation
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).
In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which fall within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared.
primary allocation: is indicated by writing in capital letters
secondary allocation: is indicated by small letters
exclusive or shared utilization: is within the responsibility of administrations
Example of frequency allocation
Selection of MMSS stations
See also
Radio station
Radiocommunication service
References / sources
Mobile services ITU
Maritime communication | Maritime mobile-satellite service | Technology | 288 |
63,035,175 | https://en.wikipedia.org/wiki/Behavioural%20archaeology | Behavioural archaeology is an archaeological theory that expands upon the nature and aims of archaeology in regards to human behaviour and material culture. The theory was first published in 1975 by American archaeologist Michael B. Schiffer and his colleagues J. Jefferson Reid and William L. Rathje. It proposes four strategies for answering questions about past and present cultural behaviour, and provides a means for archaeologists to observe human behaviour and the archaeological consequences that follow.
The theory was developed as a reaction to changes in archaeological thought and expanding archaeological practice during the mid-to-late 20th century. It reacted to the increasing number of sub-disciplines emerging within archaeology, each of which came with its own methodology, and to the processual school of thought that had emerged within the discipline some years prior.
In recent years behavioural archaeology has been regarded as a significant contribution to the archaeological community. The strategies outlined by Schiffer and his colleagues have developed into sub-disciplines or methodologies that are used and well regarded in contemporary archaeological practice. Behavioural archaeology has had a positive effect on the methods archaeologists use to reconstruct human behaviour.
Background
"Behavioural Archaeology" was first published by Michael B. Schiffer, J. Jefferson Reid, and William L. Rathje in 1975 in the American Anthropologist journal. Leading up to the publication, archaeology as a discipline was expanding in its practice and theory due to the specialisation of various areas and new ideas that were being presented to the community.
Archaeology was beginning to break up into various sub-disciplines such as ethnoarchaeology, experimental archaeology, and industrial archaeology. Furthermore, Michael Schiffer challenged notions of processual archaeology (or 'New' archaeology), which had been introduced to the discipline earlier. The paper aimed to address gaps within the processualist tradition and improve on ideas presented in processual archaeology, particularly those of James N. Hill and William A. Longacre. Rather than producing a paradigm shift into a new standard thought process within archaeology, behavioural archaeology became one of many ideas within a vast and expanding theoretical landscape.
Through behavioural archaeology, Michael Schiffer and his colleagues explain the aims and nature of archaeology in relation to the new theories and forms of archaeology that were emerging during this time. They show that the fundamental concepts of archaeology can be represented as the relations between material culture and human behaviour. By examining these relationships and asking questions surrounding them, archaeologists can answer questions about human behavioural change for the past, present, and future.
Theory
The theory of behavioural archaeology outlines four strategies by which human behaviour and material culture can be examined in order to answer questions of archaeological inquiry. Behavioural archaeology also defines archaeology as a discipline that transcends time and space, as it is the study not only of the past but also of the present and future. It distinguishes between systemic and archaeological contexts and examines how the archaeological record can be distorted through cultural and non-cultural transformation processes. Michael Schiffer stresses the importance of analysing the formation processes at various sites, which allows archaeologists to discern the most appropriate line of questioning regarding the material culture and how it relates to human behaviour.
Strategies
Strategy 1
Strategy 1 as outlined by Michael Schiffer and his colleagues examines how material culture from a past society or cultural group can be used to answer questions about past behaviour. These questions can include ones that involve the population of specific peoples, the occupation of a certain site or the resources that were used by humans at a certain location. For example, when studying the changes in technology of past societies, inferences regarding changes in diet of individuals can be made.
Strategy 2
Strategy 2 looks at how present material culture can provide archaeologists with information regarding past human behaviour. Questions within this strategy become experimental in character, as they are not confined to a specific time. Because of this, the strategy relates to the sub-disciplines of experimental archaeology and ethnoarchaeology. At the time the theory was developed, experimental archaeology was still being tested; in the 21st century it has undergone further testing and is seen as a useful means of enquiry about the past within archaeological practice. It is often used to recreate the practices and technologies of past societies in order to understand how they operated and the strategic decisions that were made.
Strategy 3
Strategy 3 concerns itself with studying past material culture in order to answer questions about present human behaviour. Questions include how humans adapt to population changes, for example through storage facilities and societal organisation. The past is often seen as separate from the present; however, Michael Schiffer challenges this by examining how ancient cultures are relevant to modern social problems and issues. This theme of social relevance to contemporary society is inspired by the writings of Paul S. Martin. Most notably, Martin is credited with the 'overkill hypothesis', which theorises that humans led to the rapid extinction of prehistoric animals. Although this theory is considered controversial, it can be seen as an example of how humans adapt to a rising population, a situation that confronts modern society. This strategy can be seen today in the archaeological practice of ethnoarchaeology.
Strategy 4
Strategy 4 examines present-day material culture in order to study contemporary human behaviour. This strategy seeks to ask specific questions about ongoing societies, such as the consumption of goods by certain groups of people. It can be applied to both industrial and non-industrial societies, but is particularly useful for industrial ones. Additionally, by studying present material culture, archaeologists may also be able to look into future human behaviour. Strategy 4 is able to explain many modern behavioural patterns and also promotes the relevance of archaeology in 21st-century society.
Debates
The introduction of a new theory within the archaeological community brings with it a series of debates around how its ideas should be interpreted. Michael Schiffer and his colleagues initially believed that behavioural archaeology would become a unifying principle for archaeological practice; instead it has become one of many theories within archaeology. Behavioural archaeology is often compared to other theories, such as processual and evolutionary archaeology, as it reacts to ideas within these theories and is frequently analysed alongside them in practice.
In this sense, not all archaeologists believe it is a revolutionary approach, and many believe that, like other archaeological theories, it should be used in conjunction with others when practising archaeology.
In 2010 the Society for American Archaeology held a forum entitled 'Assessing Michael B. Schiffer and his Behavioral Archaeology'. At this forum, researchers such as Michael J. O'Brien, Alexander Bentley, Robert L. Kelly, Linda S. Cordell, Stephen Plog, and Diane Gifford-Gonzalez discussed and raised issues about behavioural archaeology. In 2011 Michael Schiffer responded after these issues were published, clarifying and addressing the points raised.
Applications in archaeology
Behavioural archaeology can be applied in many different contexts and situations in archaeological practice. It encourages archaeologists to examine ideas that may not be concrete, such as belief systems, gender relations, or power relations. When these ideas are studied in conjunction with material culture, human behaviour and experience within various societies are revealed. For example, when examining changes in technology in the archaeological record, inferences can be made about diet and about environmental and social factors within human society.
In particular, Strategies 2 and 4 have significant applications within modern archaeology, although Strategies 1 and 3 are also generally applied.
Strategy 2
Strategy 2, also known as experimental archaeology, has developed into a sub-discipline within archaeological practice. Experimental archaeology allows an assumption about what occurred in the past to become an inference about what may actually have occurred. Although this concept is not new in archaeological thought, since Michael Schiffer's 1975 paper experimental archaeology has increasingly become an important sub-discipline within archaeology. Schiffer himself, in 1987 and 1990, conducted research on the properties of ceramics in order to understand the decisions of craftsmen when creating these objects. Experimental archaeology surrounding ceramics can involve recreating furnaces and vessels in order to see how craftsmen made decisions surrounding the manufacture of ceramic products. Experiments such as these give archaeologists a greater understanding of past human behaviour.
Strategy 4
Strategy 4 is being used in practice today, particularly in America by William Rathje, one of the original authors of the theory. In the 1970s Rathje began the Garbage Project in Tucson, Arizona, in which he and his students examined the waste of Tucson locals in order to answer questions regarding human consumption and the decomposition of waste. Through this they were able to examine human behaviour and make comparisons between what people claim their behaviour is and their actual consumption behaviour. For example, individuals claimed they drank less beer than they actually consumed. This analysis of human behaviour and consumption is useful when examining consumption in industrial societies and predicting future consumption behaviours.
Pompeii Premise
The 'Pompeii Premise' is an idea first proposed by Robert Ascher in 1961: that the remains an archaeologist uncovers represent a group of people frozen at a certain point in time, and that inferences can only be made when a site has assemblages like those at Pompeii. Rather than being a 'preserved past', however, the archaeological record is a combination of material culture deposited over various points in time.
Lewis Binford suggests using the methods of behavioural archaeology in order to avoid viewing material culture in this static way. One such method is understanding the formation processes and context surrounding the creation of the archaeological record. In this respect, it is important for the archaeologist to remember the difference between the archaeological context and the systemic context of the record. In this way, cultural and non-cultural transformation processes can be identified, helping the archaeologist determine whether there is any distortion of context within the record. Within cultural transformation processes, human behaviour can be determined, as it directly affects the formation of the material culture at a site.
Behavioural archaeology and memory
The concept of memory is pivotal in archaeology: it is through memory itself that an artifact can be understood. Laurent Olivier wrote that "[t]he subject of archaeology is nothing other than this imprint of the past inscribed in matter." If that is all archaeology is, then the goal is to properly find and later portray this particular "imprint" so that everyone can know about the artifact. In behavioural archaeology, the imprint in question is how one or more humans reacted to, and interacted with, the artifact being analyzed.
Olivier also states that "[f]undamentally, [archaeology] is an investigation into archives of memory, which is what remains are." Behavioural archaeology takes the remains found by individuals and further analyses their meanings and the possible meanings they held for the humans who interacted with them.
For example, in Bonna D. Wescoat's book, lamps found in different archaeological excavation sites "have been taken to confirm nocturnal timing". There was much discussion and deliberation before the academic community as a whole agreed that what was found was a lamp and that its function was to act as a light-bringer during the night. As such, some artifacts hold a singular, clear meaning, while others found in excavation may hold multiple uses, or were used in ways that the excavators cannot fathom because they were not there at the time when the artifact held relevance. Memory should always be used in conjunction with behavioural archaeology, for memory dictates how an object is seen.
Contributions to archaeology
The introduction of behavioural archaeology in 1975, followed by the work of Michael Schiffer and his students, has been seen as a significant contribution to the field of archaeology. All four strategies have been significant in expanding thinking about material culture and human behaviour in various contexts. Furthermore, because of its significance, behavioural archaeology is often used together with other archaeological schools of thought when analysing the archaeological record. The act of looking at the relationships between material culture and human behaviour is in itself a significant thought process. In 2010 the Society for American Archaeology held a forum in which archaeologists significant to the American archaeological community discussed the contributions of Michael Schiffer and behavioural archaeology.
Behavioural archaeology is significant because it explores concepts that allow the archaeological record to be characterised in terms of context and formation processes. This allows archaeologists to understand variations between different contexts in order to answer questions of inquiry.
It has also contributed to archaeology by looking at the creation of the archaeological record over time. This emphasises the fundamental idea of understanding a variety of contexts when examining material culture, an idea overlooked by processual thinking, which did not define specific contexts. Behavioural archaeology fills this gap in order to provide a more thorough understanding of the archaeological record.
Behavioural archaeology supports the idea that the scientific process is a fundamental part of archaeological practice. This comes as a reaction to the introduction of postmodern ideas to archaeology and archaeological thought. As the idea of forming a narrative from the archaeological record became common, behavioural archaeology stressed the importance of using the scientific process in order to construct a sound analysis.
Additionally, it is significant to archaeology because it places importance on establishing principles, or relationships, between human behaviour and material culture. This process is vital to archaeological practice, as it allows archaeologists to identify patterns within material culture and examine the archaeological record across cultures.
Overall, behavioural archaeology challenges archaeologists to reconsider how they conduct archaeological practice and how they think about the nature and aims of archaeology.
References
Archaeological theory
Human behavior | Behavioural archaeology | Biology | 2,777 |
69,583,525 | https://en.wikipedia.org/wiki/Niesslia%20peltigerae | Niesslia peltigerae is a species of lichenicolous fungus in the family Niessliaceae. It was described as a new species in 2020 by lichenologist Sergio Pérez-Ortega. The type specimen was collected in the Hoonah-Angoon Census Area of Glacier Bay National Park, in muskeg and forest. The fungus was growing parasitically on the lichen Peltigera kristinssonii, which itself was growing on mountain hemlock (Tsuga mertensiana). The specific epithet peltigerae alludes to the genus of its host. Infection by the fungus bleaches the thallus of the host lichen.
References
Niessliaceae
Fungi described in 2020
Fungi of the United States
Lichenicolous fungi
Fungi without expected TNC conservation status
Fungus species | Niesslia peltigerae | Biology | 166 |
57,999,059 | https://en.wikipedia.org/wiki/Minna%20Palmroth | Minna Palmroth (born 10 May 1975) is a professor in computational space physics at the University of Helsinki; her particular area of interest is magnetospheric physics and solar wind - magnetosphere interactions.
Life
Palmroth is from Sahalahti, a small village in the former municipality by the same name near the city of Tampere, Finland.
Palmroth graduated from Kangasala High School and completed her master's degree in physics at the University of Helsinki in 1999. She received her doctorate in philosophy, majoring in physics, in 2003. Her doctoral thesis, written in English and titled Solar wind–magnetosphere interaction as determined by observations and a global MHD simulation, dealt with the interaction of the solar wind (the stream of particles originating from the Sun) with the Earth's magnetosphere, based on observational data and a global magnetohydrodynamic simulation.
At the Finnish Meteorological Institute she led the Earth observation research team from 2011 to 2016 and was a space researcher there from 2013 to 2016. Since the beginning of 2017, she has been a professor of space physics at the University of Helsinki. During the years 2018–2025, she directs the Finnish Center of Excellence for Sustainable Space Science and Technology.
She has studied the need to address the tons of "space junk" that currently orbits the Earth.
Honors and distinctions
Palmroth received the Väisälä prize in 2016. Since 2018, she has been a member of the Finnish Academy of Sciences and Letters. Palmroth is also a member of the Academy of Technical Sciences. In 2021, she was also invited to become a member of the Finnish Science Society. In 2022, she was awarded the JV Snellman Award. In 2022, she was invited to become a member of the Academia Europæa. In February 2023, Palmroth was awarded the Copernicus Medal for her pioneering work on the space environment simulator Vlasiator and her achievements in advancing space physics.
References
Living people
Members of Academia Europaea
Academic staff of the University of Helsinki
University of Helsinki alumni
1975 births
Computational physicists
20th-century Finnish physicists
21st-century Finnish physicists
Finnish women scientists
People from Pirkanmaa
Space scientists
Women space scientists | Minna Palmroth | Physics | 464 |
74,785,573 | https://en.wikipedia.org/wiki/Neutristor | A neutristor is a compact neutron generator made using solid-state electronics, invented at Sandia National Laboratories. Its primary purpose is to act as a lightweight, cheaper, and safer alternative to standard neutron generation devices, and its reduced costs benefit industries and processes such as oilfield operations, heavy mechanical production, neutron activation analysis, and medicine. It operates on the standard operational principles of neutron generators. Additionally, Sandia National Laboratories is creating a new generation of neutristors that do not require a vacuum environment to operate.
Advantages
A neutristor is cheaper and smaller than standard accelerator-based neutron generators. Normal neutron generators use a three-inch (7.5 cm) cylinder, too large for implanted neutron capture therapy and for neutron inspection of weld flaws.
References
Electrical components
Neutron sources | Neutristor | Technology,Engineering | 169 |
26,618,821 | https://en.wikipedia.org/wiki/GyrA%20RNA%20motif | The gyrA RNA motif is a conserved RNA structure identified by bioinformatics. The RNAs are present in multiple species of bacteria within the order Pseudomonadales. This order contains the genus Pseudomonas, which includes the opportunistic human pathogen Pseudomonas aeruginosa and Pseudomonas syringae, a plant pathogen.
gyrA RNAs are always found in the presumed 5' untranslated regions of gyrA genes, which encode a protein forming a subunit of DNA gyrase. Resistance to the antibiotic ciprofloxacin in Pseudomonas is often achieved via mutations in the gyrA gene. Because of its positioning, the gyrA RNA motif was hypothesized to be a cis-regulatory element acting on the downstream gyrA genes. However, gyrA was previously regarded as a gene whose level of expression is consistent across a wide variety of growth conditions.
References
External links
Cis-regulatory RNA elements | GyrA RNA motif | Chemistry | 200 |
24,377,796 | https://en.wikipedia.org/wiki/Geological%20history%20of%20oxygen | Although oxygen is the most abundant element in Earth's crust, due to its high reactivity it mostly exists in compound (oxide) forms such as water, carbon dioxide, iron oxides and silicates. Before photosynthesis evolved, Earth's atmosphere had no free diatomic elemental oxygen (O2). Small quantities of oxygen were released by geological and biological processes, but did not build up in the reducing atmosphere due to reactions with then-abundant reducing gases such as atmospheric methane and hydrogen sulfide and surface reductants such as ferrous iron.
Oxygen began building up in the atmosphere at approximately 1.85 Ga during the Neoarchean-Paleoproterozoic boundary, a paleogeological event known as the Great Oxygenation Event (GOE). At current rates of primary production, today's concentration of oxygen could be produced by photosynthetic organisms in 2,000 years. In the absence of plants, the rate of oxygen production by photosynthesis was slower in the Precambrian, and the concentrations of O2 attained were less than 10% of today's and probably fluctuated greatly.
The increase in oxygen concentrations had wide ranging and significant impacts on Earth's biosphere. Most significantly, the rise of oxygen and the oxidative depletion of greenhouse gases (especially atmospheric methane) due to the GOE led to an icehouse Earth that caused a mass extinction of anaerobic microbes, but paved the way for the evolution of eukaryotes and later the rise of complex lifeforms.
Before the Great Oxidation Event
Photosynthetic prokaryotic organisms that produced O2 as a byproduct lived long before the first build-up of free oxygen in the atmosphere, perhaps as early as 3.5 billion years ago. The oxygen cyanobacteria produced would have been rapidly removed from the oceans by weathering of reducing minerals, most notably ferrous iron. This rusting led to the deposition of the oxidized ferric iron oxide on the ocean floor, forming banded iron formations. Thus, the oceans rusted and turned red. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event.
Effects on life
Early fluctuations in oxygen concentration had little direct effect on life, with mass extinctions not observed until around the start of the Cambrian period. The presence of O2 provided life with new opportunities: aerobic metabolism is more efficient than anaerobic pathways, and the presence of oxygen created new possibilities for life to explore. Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume. 430-million-year-old fossilized charcoal produced by wildfires shows that atmospheric oxygen levels in the Silurian must have been equivalent to, or possibly above, present-day levels. The maximum of 35% was reached towards the end of the Carboniferous period (about 300 million years ago), a peak which may have contributed to the large size of various arthropods, including insects, millipedes and scorpions. Whilst human activities, such as the burning of fossil fuels, affect relative carbon dioxide concentrations, their effect on the much larger concentration of oxygen is less significant.
The Great Oxygenation Event had the first major effect on the course of evolution. Due to the rapid buildup of oxygen in the atmosphere, the mostly anaerobic microbial biosphere that existed during the Archean eon was devastated, and only aerobes with antioxidant capabilities to neutralize oxygen thrived in the open. This led to symbiosis between anaerobic and aerobic organisms, which metabolically complemented each other, and eventually to endosymbiosis and symbiogenesis during the Proterozoic eon: the evolution of eukaryotes, which came to rely on aerobic respiration to survive. After the Huronian glaciation came to an end, the Earth entered a long period of geological and climatic stability known as the Boring Billion. However, this long period was noticeably euxinic, meaning oxygen was scarce and the ocean and atmosphere were significantly sulfidic, and evolution during this time was likely comparatively slow and conservative.
The Boring Billion ended during the Neoproterozoic period with a significant increase in photosynthetic activity, causing oxygen levels to rise 10- to 20-fold to about one-tenth of the modern level. This rise in oxygen concentration, known as the Neoproterozoic oxygenation event or "Second Great Oxygenation Event", was likely caused by the evolution of nitrogen fixation in cyanobacteria and the rise of eukaryotic photoautotrophs (green and red algae), and is often cited as a possible contributor to later large-scale evolutionary radiations such as the Avalon explosion and the Cambrian explosion, which trended toward not only larger but also more robust and motile multicellular organisms. The climatic changes associated with rising oxygen also produced cycles of glaciation and extinction events, each of which created disturbances that sped up ecological turnovers. During the Silurian and Devonian periods, the colonization and proliferation on land by early plants (which evolved from freshwater green algae) further increased the atmospheric oxygen concentration, leading to the historic peak during the Carboniferous period.
Data show an increase in biovolume soon after oxygenation events by more than 100-fold and a moderate correlation between atmospheric oxygen and maximum body size later in the geological record. The large size of many arthropods in the Carboniferous period, when the oxygen concentration in the atmosphere reached 35%, has been attributed to the limiting role of diffusion in these organisms' metabolism. But J.B.S. Haldane's essay points out that it would only apply to insects. However, the biological basis for this correlation is not firm, and many lines of evidence show that oxygen concentration is not size-limiting in modern insects. Ecological constraints can better explain the diminutive size of post-Carboniferous dragonflies – for instance, the appearance of flying competitors such as pterosaurs, birds, and bats.
Rising oxygen concentrations have been cited as one of several drivers for evolutionary diversification, although the physiological arguments behind such arguments are questionable, and a consistent pattern between oxygen concentrations and the rate of evolution is not clearly evident. The most celebrated link between oxygen and evolution occurred at the end of the last of the Snowball Earth glaciations, where complex multicellular life is first found in the fossil record. Under low oxygen concentrations and before the evolution of nitrogen fixation, biologically-available nitrogen compounds were in limited supply, and periodic "nitrogen crises" could render the ocean inhospitable to life. Significant concentrations of oxygen were just one of the prerequisites for the evolution of complex life. Models based on uniformitarian principles (i.e. extrapolating present-day ocean dynamics into deep time) suggest that such a concentration was only reached immediately before metazoa first appeared in the fossil record. Further, anoxic or otherwise chemically "inhospitable" oceanic conditions that resemble those supposed to inhibit macroscopic life re-occurred at intervals through the early Cambrian, and also in the late Cretaceous – with no apparent effect on lifeforms at these times. This might suggest that the geochemical signatures found in ocean sediments reflect the atmosphere in a different way before the Cambrian – perhaps as a result of the fundamentally different mode of nutrient cycling in the absence of planktivory.
An oxygen-rich atmosphere can release phosphorus and iron from rock, by weathering, and these elements then become available for sustenance of new species whose metabolisms require these elements as oxides.
See also
Great oxygenation event
Neoproterozoic oxygenation event
Silurian-Devonian Terrestrial Revolution
References
External links
Biogeochemistry
Oxygen
Oxygen | Geological history of oxygen | Chemistry,Environmental_science | 1,634 |
49,249,936 | https://en.wikipedia.org/wiki/Drug%20vectorization | In pharmacology and medicine, vectorization of drugs refers to (intracellular) targeting with plastic, noble metal or silicon nanoparticles or liposomes to which pharmacologically active substances are reversibly bound or attached by adsorption.
CNRS researchers have devised a way to overcome the problem of multidrug resistance using polyalkyl cyanoacrylate (PACA) nanoparticles as "vectors".
As a developing concept, drug nanocarriers are expected to play a major role in delivering multiple drugs to tumor tissues by overcoming semi-permeable membranes and biological barriers such as the blood–brain barrier.
References
See also
Vector (molecular biology)
Cancer treatment
Nanomedicine
Nanobiotechnology
Paul Ehrlich#Magic bullet
Gold nanobeacons
Pharmacology
Nanomedicine | Drug vectorization | Chemistry,Materials_science | 177 |
17,291,981 | https://en.wikipedia.org/wiki/Arsenic%20pentafluoride | Arsenic pentafluoride is a chemical compound of arsenic and fluorine. It is a toxic, colorless gas. The oxidation state of arsenic is +5.
Synthesis
Arsenic pentafluoride can be prepared by direct combination of arsenic and fluorine:
2As + 5F2 → 2AsF5
It can also be prepared by the reaction of arsenic trifluoride and fluorine:
AsF3 + F2 → AsF5
or the addition of fluorine to arsenic pentoxide or arsenic trioxide.
2As2O5 + 10F2 → 4AsF5 + 5O2
2As2O3 + 10F2 → 4AsF5 + 3O2
Properties
Arsenic pentafluoride is a colourless gas and has a trigonal bipyramidal structure. In the solid state the axial As−F bond lengths are 171.9 pm and the equatorial 166.8 pm. Its point group is D3h.
Reactions
Arsenic pentafluoride forms halide complexes and is a powerful fluoride acceptor. An example is the reaction with sulfur tetrafluoride, forming an ionic hexafluoroarsenate complex.
AsF5 + SF4 → SF3+ + AsF6−
See also
List of highly toxic gases
References
Arsenic(V) compounds
Fluorides
Arsenic halides | Arsenic pentafluoride | Chemistry | 283 |
8,008,377 | https://en.wikipedia.org/wiki/Phase-contrast%20microscopy |
Phase-contrast microscopy (PCM) is an optical microscopy technique that converts phase shifts in light passing through a transparent specimen to brightness changes in the image. Phase shifts themselves are invisible, but become visible when shown as brightness variations.
When light waves travel through a medium other than a vacuum, interaction with the medium causes the wave amplitude and phase to change in a manner dependent on properties of the medium. Changes in amplitude (brightness) arise from the scattering and absorption of light, which is often wavelength-dependent and may give rise to colors. Photographic equipment and the human eye are only sensitive to amplitude variations. Without special arrangements, phase changes are therefore invisible. Yet, phase changes often convey important information.
Phase-contrast microscopy is particularly important in biology.
It reveals many cellular structures that are invisible with a bright-field microscope, as exemplified in the figure.
These structures were made visible to earlier microscopists by staining, but this required additional preparation and death of the cells.
The phase-contrast microscope made it possible for biologists to study living cells and how they proliferate through cell division. It is one of the few methods available to quantify cellular structure and components without using fluorescence.
After its invention in the early 1930s, phase-contrast microscopy proved to be such an advancement in microscopy that its inventor Frits Zernike was awarded the Nobel Prize in Physics in 1953. The woman who manufactured this microscope, Caroline Bleeker, often remains uncredited.
Working principle
The basic principle to make phase changes visible in phase-contrast microscopy is to separate the illuminating (background) light from the specimen-scattered light (which makes up the foreground details) and to manipulate these differently.
The ring-shaped illuminating light (green) that passes the condenser annulus is focused on the specimen by the condenser. Some of the illuminating light is scattered by the specimen (yellow). The remaining light is unaffected by the specimen and forms the background light (red). When observing an unstained biological specimen, the scattered light is weak and typically phase-shifted by −90° (due to both the typical thickness of specimens and the refractive index difference between biological tissue and the surrounding medium) relative to the background light. This leads to the foreground (blue vector) and background (red vector) having nearly the same intensity, resulting in low image contrast.
In a phase-contrast microscope, image contrast is increased in two ways: by generating constructive interference between scattered and background light rays in regions of the field of view that contain the specimen, and by reducing the amount of background light that reaches the image plane. First, the background light is phase-shifted by −90° by passing it through a phase-shift ring, which eliminates the phase difference between the background and the scattered light rays.
When the light is then focused on the image plane (where a camera or eyepiece is placed), this phase shift causes background and scattered light rays originating from regions of the field of view that contain the sample (i.e., the foreground) to constructively interfere, resulting in an increase in the brightness of these areas compared to regions that do not contain the sample. Finally, the background is dimmed ~70-90% by a gray filter ring; this method maximizes the amount of scattered light generated by the illumination light, while minimizing the amount of illumination light that reaches the image plane. Some of the scattered light that illuminates the entire surface of the filter will be phase-shifted and dimmed by the rings, but to a much lesser extent than the background light, which only illuminates the phase-shift and gray filter rings.
The above describes negative phase contrast. In its positive form, the background light is instead phase-shifted by +90°. The background light will thus be 180° out of phase relative to the scattered light. The scattered light will then be subtracted from the background light to form an image with a darker foreground and a lighter background, as shown in the first figure.
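The vector picture above can be checked numerically by treating the background and scattered light as complex amplitudes and comparing image-plane intensities. In this sketch, the scattered amplitude of 0.2 and the background attenuation to 0.3 of its amplitude (roughly 90% in intensity) are assumed values for illustration only:

 # Toy model of phase contrast: add background and scattered fields as
 # complex amplitudes, then compare foreground and background intensities.
 import cmath

 bg = 1.0 + 0j                            # background (illumination) field
 sc = 0.2 * cmath.exp(-1j * cmath.pi/2)   # weak scattered field, -90 deg

 def intensity(field):
     return abs(field) ** 2

 # Bright field: the foreground barely differs from the background.
 print(intensity(bg + sc), intensity(bg))          # 1.04 vs 1.00

 # Negative phase contrast: shift the background by -90 deg and dim it.
 bg_neg = 0.3 * cmath.exp(-1j * cmath.pi/2)
 print(intensity(bg_neg + sc), intensity(bg_neg))  # 0.25 vs 0.09, bright foreground

 # Positive phase contrast: a +90 deg shift gives destructive interference.
 bg_pos = 0.3 * cmath.exp(+1j * cmath.pi/2)
 print(intensity(bg_pos + sc), intensity(bg_pos))  # 0.01 vs 0.09, dark foreground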
Related methods
The success of the phase-contrast microscope has led to a number of subsequent phase-imaging methods.
In 1952, Georges Nomarski patented what is today known as differential interference contrast (DIC) microscopy.
It enhances contrast by creating artificial shadows, as if the object were illuminated from the side. DIC microscopy is, however, unsuitable when the object or its container alters polarization. With the growing use of polarizing plastic containers in cell biology, DIC microscopy is increasingly replaced by Hoffman modulation contrast microscopy, invented by Robert Hoffman in 1975.
Traditional phase-contrast methods enhance contrast optically, blending brightness and phase information in a single image. Since the introduction of the digital camera in the mid-1990s, several new digital phase-imaging methods have been developed, collectively known as quantitative phase-contrast microscopy. These methods digitally create two separate images, an ordinary bright-field image and a so-called phase-shift image. In each image point, the phase-shift image displays the quantified phase shift induced by the object, which is proportional to the optical thickness of the object. In this way measurement of the associated optical field can remedy the halo artifacts associated with conventional phase contrast by solving an optical inverse problem to computationally reconstruct the scattering potential of the object.
See also
Live cell imaging
Phase-contrast imaging
Phase-contrast X-ray imaging
References
External links
Optical Microscopy Primer — Phase Contrast Microscopy by Florida State University
Phase contrast and dark field microscopes (Université Paris-Sud)
Dutch inventions
Cell imaging
Laboratory equipment
Optical microscopy techniques
Microscopes | Phase-contrast microscopy | Chemistry,Technology,Engineering,Biology | 1,145 |
34,457,771 | https://en.wikipedia.org/wiki/Center%20for%20Environmental%20Philosophy | The Center for Environmental Philosophy is a non-profit organization that supports a range of scholarly activities exploring philosophical aspects of environmental problems. It publishes the scholarly journal Environmental Ethics, for which it is best known; the journal is widely regarded as having established the field of environmental ethics and is considered the leading scholarly forum for environmental philosophy.
In addition to the publication of its journal, the Center promotes graduate education and postdoctoral research in environmental philosophy, and supports the development of international perspectives on global environmental problems. The Center for Environmental Philosophy is located at the University of North Texas in Denton, Texas.
The center was established in 1989 by Environmental Philosophy, Inc. as a center for its various activities in publishing, research, and education. The center moved to the University of North Texas in 1990 and was given the status of an affiliated organization in 1991.
Books
The Center for Environmental Philosophy publishes books under the title "Environmental Ethics Books" to make these major works in the field of environmental ethics available in print, including the following titles:
After Earth Day, by Max Oelschlaeger
The Beauty of Environment: A General Model for Environmental Aesthetics, by Yrjo Sepanmaa
Beyond Spaceship Earth: Environmental Ethics and the Solar System, edited by Eugene C. Hargrove
Foundations of Environmental Ethics, by Eugene C. Hargrove
Is It Too Late? A Theology of Ecology, by John B. Cobb, Jr.
The Liberation of Life: From the Cell to the Community, by Charles Birch and John B. Cobb, Jr.
See also
List of environmental organizations
References
External links
Environmental ethics
Organizations established in 1989
Ethics organizations
Environmental philosophy | Center for Environmental Philosophy | Environmental_science | 333 |
36,531,196 | https://en.wikipedia.org/wiki/Equivalent%20Concrete%20Performance%20Concept | According to the Equivalent Concrete Performance Concept, a concrete composition deviating from EN 206-1 can still be accepted, provided that certain conditions are fulfilled.
Conditions
A concrete composition that does not comply with the standard EN 206-1 can be accepted only if the new concrete shows performance equal to that of standardized concrete for the relevant environmental classes. Cement content and water-cement ratio are important elements here.
The comparison with standardized concrete is tested according to the following properties:
Compressive strength
Resistance to carbonation
Chloride migration
Freeze-thaw resistance
Other possible requirements
If the new concrete scores equally or better, a certificate of utilization can be obtained from certification bodies; a schematic version of such a comparison is sketched below.
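A minimal sketch of the equivalence check, assuming hypothetical normalized test values and the simplified rule that higher is better on every metric; the normative EN 206-1 procedure is more involved than this:

# Hypothetical test results; property names mirror the list above.
reference = {"compressive_strength_MPa": 45.0, "carbonation_resistance": 1.0,
             "chloride_migration_resistance": 1.0, "freeze_thaw_resistance": 1.0}
candidate = {"compressive_strength_MPa": 47.5, "carbonation_resistance": 1.1,
             "chloride_migration_resistance": 0.95, "freeze_thaw_resistance": 1.2}

def performs_equivalently(candidate, reference):
    # The candidate must match or beat the reference on every tested property.
    return all(candidate[k] >= reference[k] for k in reference)

print(performs_equivalently(candidate, reference))  # False: chloride migration is worse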
Standardization
The valid standards concerning concrete are:
EN 206-1: determines minimum requirements of concrete composition for different environmental classes.
NBN B15-100: Belgian annex
CUR recommendation 48: Dutch annex
These national annexes serve to elaborate the functional description of the Equivalent Concrete Performance Concept.
Concrete composition
Standardized concrete is a highly durable material, largely because stricter environmental classes require higher cement contents. Cement, however, is a costly component with a relatively large environmental impact.
Partly because of this, alternative binders such as fly ash and slag are applied in the concrete sector. As a result, the content of Portland cement can be reduced in many cases. Other recycled raw materials can also contribute to a more economical or less environmentally polluting concrete composition.
1. Usage of residual products from the concrete industry, for example stone dust (from crushing aggregates), concrete slurry (from washing mixers) or concrete waste
2. Usage of residual products from other industries, for example fly ash from coal plants and slags from the metallurgical industry
3. Usage of new types of cement with reduced environmental impact (mineralized cement, limestone addition, waste-derived fuels)
Durability
To comply with the Kyoto Protocol, CO2 emissions should be reduced.
Green concrete is made from recycled materials or is composed in such a manner that it is as environmentally friendly as possible.
A few conditions must be met before the term green concrete may be used:
CO2 emissions from concrete manufacturing are reduced by 30%
Concrete contains at least 20% residual products, used as aggregates
New residual products, previously disposed of, are used in concrete production
CO2-neutral: waste-derived fuels replace at least 10% of the fossil fuels in cement production
References
Equivalent Performance Concept: green concrete
Betonlexicon: Duurzaamheid van beton
Brancheorganisatie van de betonmortelindustrie
Maatschappelijk Verantwoord Ondernemen Nederland
Concrete | Equivalent Concrete Performance Concept | Engineering | 538 |
62,474,444 | https://en.wikipedia.org/wiki/FACOM%20128 | The FACOM 128 was a relay-based electromechanical computer built by Fujitsu. Two models were made, namely the FACOM 128A, built in 1956, and the FACOM 128B, built in 1959. A fully working FACOM 128B is still maintained in operating condition by Fujitsu staff at a facility in Numazu in Shizuoka Prefecture.
The FACOM 128B processes numbers using a bi-quinary coded decimal representation.
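Bi-quinary coding splits each decimal digit d into a "bi" part (0 or 5) and a "quinary" part (0 to 4), with one-hot wiring that makes single-relay faults easy to detect. A minimal sketch of the idea; the exact wire layout of the FACOM 128B is not claimed here:

def to_biquinary(digit):
    if not 0 <= digit <= 9:
        raise ValueError("decimal digit expected")
    bi, quinary = divmod(digit, 5)                         # d = 5*bi + quinary
    bi_wires = [int(bi == i) for i in range(2)]            # selects 0 or 5
    quinary_wires = [int(quinary == i) for i in range(5)]  # selects 0..4
    return bi_wires, quinary_wires

def from_biquinary(bi_wires, quinary_wires):
    return 5 * bi_wires.index(1) + quinary_wires.index(1)

print(to_biquinary(7))                          # ([0, 1], [0, 0, 1, 0, 0])
print(from_biquinary([0, 1], [0, 0, 1, 0, 0]))  # 7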
See also
FACOM 100
FACOM
References
External links
Electro-mechanical computers
1950s computers | FACOM 128 | Technology | 113 |
27,900,214 | https://en.wikipedia.org/wiki/Intramolecular%20Heck%20reaction | The intramolecular Heck reaction (IMHR) in chemistry is the coupling of an aryl or alkenyl halide with an alkene in the same molecule. The reaction may be used to produce carbocyclic or heterocyclic organic compounds with a variety of ring sizes. Chiral palladium complexes can be used to synthesize chiral intramolecular Heck reaction products in non-racemic form.
Introduction
The Heck reaction is the palladium-catalyzed coupling of an aryl or alkenyl halide with an alkene to form a substituted alkene. Intramolecular variants of the reaction may be used to generate cyclic products containing endo or exo double bonds. Ring sizes produced by the intramolecular Heck reaction range from four to twenty-seven atoms. Additionally, in the presence of a chiral palladium catalyst, the intramolecular Heck reaction may be used to establish tertiary or quaternary stereocenters with high enantioselectivity. A number of tandem reactions, in which the intermediate alkylpalladium complex is intercepted either intra- or intermolecularly before β-hydride elimination, have also been developed.
(1)
Mechanism and stereochemistry
The neutral pathway
As shown in Eq. 2, the neutral pathway of the Heck reaction begins with the oxidative addition of the aryl or alkenyl halide into a coordinatively unsaturated palladium(0) complex (typically bound to two phosphine ligands) to give complex I. Dissociation of a phosphine ligand followed by association of the alkene yields complex II, and migratory insertion of the alkene into the carbon-palladium bond establishes the key carbon-carbon bond.
Insertion takes place in a suprafacial fashion, but the dihedral angle between the alkene and palladium-carbon bond during insertion can vary from 0° to ~90°. After insertion, β-hydride elimination affords the product and a palladium(II)-hydrido complex IV, which is reduced by base back to palladium(0).
(2)
The cationic pathway
Most asymmetric Heck reactions employing chiral phosphines proceed by the cationic pathway, which does not require the dissociation of a phosphine ligand. Oxidative addition of an aryl perfluorosulfonate generates a cationic palladium aryl complex V. The mechanism then proceeds as in the neutral case, with the difference that an extra site of coordinative unsaturation exists on palladium throughout the process.
Thus, coordination of the alkene does not require ligand dissociation. Stoichiometric amounts of base are still required to reduce the palladium(II)-hydrido complex VIII back to palladium(0). Silver salts may be used to initiate the cationic pathway in reactions of aryl halides.
(3)
The anionic pathway
Reactions involving palladium(II) acetate and phosphine ligands proceed by a third mechanism, the anionic pathway. Base mediates the oxidation of a phosphine ligand by palladium(II) to a phosphine oxide. Oxidative addition then generates the anionic palladium complex IX. Loss of halide leads to neutral complex X, which undergoes steps analogous to the neutral pathway to regenerate anionic complex IX. A similar anionic pathway is also likely operative in reactions of bulky palladium tri(tert-butyl)phosphine complexes.
(4)
Establishing tertiary or quaternary stereocenters
Asymmetric Heck reactions establish quaternary or tertiary stereocenters. If migratory insertion generates a quaternary center adjacent to the palladium-carbon bond (as in reactions of trisubstituted or 1,1-disubstituted alkenes), β-hydride elimination toward that center is not possible and it is retained in the product. Similarly, β-hydride elimination is not possible if a hydrogen syn to the palladium-carbon bond is not available. Thus, tertiary stereocenters can be established in conformationally restricted systems.
(5)
Scope and limitations
The intramolecular Heck reaction may be used to form rings of a variety of sizes and topologies. β-Hydride elimination need not be the final step of the reaction, and tandem methods have been developed that involve the interception of palladium alkyl intermediates formed after migratory insertion by an additional reactant. This section discusses the most common ring sizes formed by the intramolecular Heck reaction and some of its tandem and asymmetric variants.
5-Exo cyclization, which establishes a five-membered ring with an exocyclic alkene, is the most facile cyclization mode in intramolecular Heck reactions. In this and many other modes of intramolecular Heck cyclization, annulations typically produce a cis ring juncture.
(6)
6-Exo cyclization is also common. The high stability of Heck reaction catalysts permits the synthesis of highly strained compounds at elevated temperatures. In the example below, the arene and alkene must both be in energetically unfavorable axial positions in order to react.
(7)
Endo cyclization is observed most often when small or large rings are involved. For instance, 5-endo cyclization is generally preferred over 4-exo cyclization. The yield of endo product increases with increasing ring size in the synthesis of cycloheptenes, -octenes, and -nonenes.
(8)
Tandem reactions initiated by IMHR have been extensively explored. Palladium alkyl intermediates generated after migratory insertion may undergo a second round of insertion in the presence of a second alkene (either intra- or intermolecular). When dienes are involved in the intramolecular Heck reaction, insertion affords π-allylpalladium intermediates, which may be intercepted by nucleophiles. This idea was applied to a synthesis of (–)-morphine.
(9)
Asymmetric IMHR may establish tertiary or quaternary stereocenters. BINAP is the most commonly used chiral ligand in this context. An interesting application of IMHR is group-selective desymmetrization (enantiotopic group selection), in which the chiral palladium aryl intermediate undergoes insertion predominantly with one of the enantiotopic double bonds.
(10)
Synthetic applications
The high functional group tolerance of the intramolecular Heck reaction allows it to be used at a very late stage in synthetic routes. In a synthesis of (±)-FR900482, IMHR establishes a tricyclic ring system in high yield without disturbing any of the sensitive functionality nearby.
(11)
Intramolecular Heck reactions have been employed for the construction of complex natural products. An example is the late-stage, macrocyclic ring closure in the total synthesis of the cytotoxic natural product (–)-Mandelalide A.
In another example a fully intramolecular tandem Heck reaction is used in a synthesis of (–)-scopadulcic acid. A 6-exo cyclization sets the quaternary center and provides a neopentyl σ-palladium intermediate, which undergoes a 5-exo reaction to provide the ring system.
(12)
Comparison with other methods
The closest competing method to IMHR is radical cyclization. Radical cyclizations are often reductive, which can cause undesired side reactions to occur if sensitive substrates are employed. The IMHR, on the other hand, can be run under reductive conditions if desired. Unlike the IMHR, radical cyclization does not require the coupling of two sp2-hybridized carbons. In some cases, the results of radical cyclization and IMHR are complementary.
Experimental conditions and procedure
Typical conditions
A variety of experimental concerns exist for IMHR reactions. Although most of the common Pd(0) catalysts are commercially available (Pd(PPh3)4, Pd2(dba)3, and derivatives), they may also be prepared by simple, high-yielding procedures. Palladium(II) acetate is cheap and may be reduced in situ to palladium(0) with phosphine. Three equivalents of phosphine per equivalent of palladium acetate are commonly used; these conditions generate Pd(PR3)2 as the active catalyst. Bidentate phosphine ligands are common in asymmetric reactions to enhance stereoselectivity.
A wide variety of bases may be used, and the base is often employed in excess. Potassium carbonate is the most common base employed, and inorganic bases are generally used more often than organic bases. A number of additives have also been identified for the Heck reaction: silver salts may be used to drive the reaction down the cationic pathway, and halide salts may be used to shift reactions of aryl triflates to the neutral pathway. Alcohols have been shown to enhance catalyst stability in some cases, and acetate salts are beneficial in reactions following the anionic pathway.
Example procedure
(13)
A solution of the amide (0.365 g, 0.809 mmol), Pd(PPh3)4 (0.187 g, 0.162 mmol), and triethylamine (1.12 mL, 8.08 mmol) in MeCN (8 mL) in a sealed tube was heated slowly to 120°. After stirring for 4 hours, the reaction mixture was cooled to room temperature, and the solvent was evaporated. The residue was chromatographed (loaded with CH2Cl2) to give the title product 316 (0.270 g, 90%) as a colorless oil; Rf 0.42 (EtOAc/petroleum ether 10:1); [α]22D +14.9 (c, 1.0, CHCl3); IR 3027, 2930, 1712, 1673, 1608, 1492, 1343, 1248 cm−1; 1H NMR (400 MHz) δ 7.33–7.21 (m, 6 H), 7.07 (dd, J = 7.3, 16.4 Hz, 1 H), 7.00 (t, J = 7.5 Hz, 1 H), 6.77 (d, J = 7.7 Hz, 1 H), 6.30 (dd, J = 8.7, 11.4 Hz, 1 H), 5.32 (d, J = 15.7 Hz, 1 H), 5.04 (s, 1 H), 4.95 (s, 1 H), 4.93 (d, J = 11.1 Hz, 1 H), 4.17 (s, 1 H), 3.98 (d, J = 15.7 Hz, 1 H), 3.62 (d, J = 8.7 Hz, 1 H), 3.17 (s, 3 H), 2.56 (dd, J = 3.5, 15.5 Hz, 1 H), 2.06 (dd, J = 2.8, 15.5 Hz, 1 H); 13C NMR (100 MHz) δ 177.4, 172.9, 147.8, 142.2, 136.5, 132.2, 131.6, 128.8, 128.4, 128.2, 127.7, 127.1, 123.7, 122.9, 107.9, 105.9, 61.0, 54.7, 49.9, 44.4, 38.2, 26.4; HRMS Calcd. for C24H22N2O2: 370.1681. Found: 370.1692.
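As a quick arithmetic check of the reported figures (the molar mass below is computed from the molecular formula C24H22N2O2 given above, not quoted from the source):

mw_product = 370.45   # g/mol, average molar mass of C24H22N2O2
mol_start = 0.809e-3  # mol of amide, the limiting reagent

theoretical_g = mw_product * mol_start
print(f"{0.270 / theoretical_g:.0%}")  # ~90%, consistent with the reported yield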
References
Organic reactions | Intramolecular Heck reaction | Chemistry | 2,544 |
153,209 | https://en.wikipedia.org/wiki/Air-augmented%20rocket | Air-augmented rockets use the supersonic exhaust of some kind of rocket engine to further compress air collected by ram effect during flight to use as additional working mass, leading to greater effective thrust for any given amount of fuel than either the rocket or a ramjet alone.
It represents a hybrid class of rocket/ramjet engines, similar to a ramjet, but able to give useful thrust from zero speed, and in some cases also able to operate outside the atmosphere, with fuel efficiency no worse than that of a comparable ramjet or rocket at any point.
There are a wide variety of variations on the basic concept, and a wide variety of resulting names. Those that burn additional fuel downstream of the rocket are generally known as ramrockets, rocket-ejector, integral rocket/ramjets or ejector ramjets, whilst those that do not include additional burning are known as ducted rockets or shrouded rockets depending on the details of the expander.
Operation
In a conventional chemical rocket engine, the rocket carries both its fuel and oxidizer in its fuselage. The chemical reaction between the fuel and the oxidizer produces reactant products which are nominally gasses at the pressures and temperatures in the rocket's combustion chamber. The reaction is also highly energetic (exothermic) releasing tremendous energy in the form of heat; that is imparted to the reactant products in the combustion chamber giving this mass enormous internal energy which, when expanded through a nozzle is capable of producing very high exhaust velocities. The exhaust is directed rearward through the nozzle, thereby producing a thrust forward.
In this conventional design, the fuel/oxidizer mixture is both the working mass and the energy source that accelerates it. It is easy to demonstrate that the best performance is obtained when the working mass has the lowest possible molecular weight. Hydrogen, by itself, is theoretically the best rocket fuel. Mixing it with oxygen in order to burn it lowers the overall performance of the system by raising the mass of the exhaust, as well as greatly increasing the mass that has to be carried aloft – oxygen is much heavier than hydrogen.
One potential method of increasing the overall performance of the system is to collect either the fuel or the oxidizer during flight. Fuel is hard to come by in the atmosphere, but oxidizer in the form of gaseous oxygen makes up roughly 21% of the air. There are a number of designs that take advantage of this fact. These sorts of systems have been explored in the liquid air cycle engine (LACE).
Another idea is to collect the working mass. With an air-augmented rocket, an otherwise conventional rocket engine is mounted in the center of a long tube, open at the front. As the rocket moves through the atmosphere the air enters the front of the tube, where it is compressed via the ram effect. As it travels down the tube it is further compressed and mixed with the fuel-rich exhaust from the rocket engine, which heats the air much as a combustor would in a ramjet. In this way a fairly small rocket can be used to accelerate a much larger working mass than normal, leading to significantly higher thrust within the atmosphere.
Advantages
The effectiveness of this simple method can be dramatic. Typical solid rockets have a specific impulse of about 260 seconds (2.5 kN·s/kg), but using the same fuel in an air-augmented design can improve this to over 500 seconds (4.9 kN·s/kg), a figure unmatched even by high specific impulse hydrolox engines. This design can even be slightly more efficient than a ramjet, as the exhaust from the rocket engine helps compress the air more than a ramjet normally would; this raises the combustion efficiency as a longer, more efficient nozzle can be employed. Another advantage is that the rocket works even at zero forward speed, whereas a ramjet requires forward motion to feed air into the engine.
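The scale of this gain can be motivated with an idealized energy-sharing model; this is a simplification for illustration, not a design formula. If the jet's kinetic power 0.5·mdot_fuel·ve² is shared losslessly with the entrained air, the mixed jet velocity drops but total momentum rises, and specific impulse grows as the square root of the total-to-fuel mass-flow ratio:

import math

def augmented_isp(isp_rocket_s, air_to_fuel_ratio):
    # thrust = mdot_total * ve' with ve' = ve * sqrt(mdot_fuel / mdot_total),
    # so Isp scales by sqrt(1 + mdot_air / mdot_fuel)
    return isp_rocket_s * math.sqrt(1.0 + air_to_fuel_ratio)

print(augmented_isp(260, 0.0))  # 260 s: the bare solid rocket
print(augmented_isp(260, 3.0))  # 520 s: three parts entrained air per part propellant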
Disadvantages
It might be envisaged that such an increase in performance would be widely deployed, but various issues frequently preclude this. The intakes of high-speed engines are difficult to design, and require careful positioning on the airframe in order to achieve reasonable performance – in general, the entire airframe needs to be built around the intake design. Another problem is that the air thins out as the rocket climbs. Hence, the amount of additional thrust is limited by how fast the rocket climbs. Finally, the air ducting adds quite a bit of weight which slows the vehicle considerably towards the end of the burn.
Variations
Shrouded rocket
The simplest version of an air-augmentation system is found in the shrouded rocket. This consists largely of a rocket motor or motors positioned in a duct. The rocket exhaust entrains the air, pulling it through the duct, while also mixing with it and heating it, causing the pressure to increase downstream of the rocket. The resulting hot gas is then further expanded through an expanding nozzle.
Ducted rocket
A slight variation on the shrouded rocket, the ducted rocket adds only a convergent-divergent nozzle. This ensures the combustion takes place at subsonic speeds, improving the range of vehicle speeds where the system remains useful.
Ejector ramjet and variants
The ejector ramjet is a more complex system with potentially higher performance. Like the shrouded and ducted rocket, the system begins with a rocket engine(s) in an air intake. It differs in that the mixed exhaust enters a diffuser, slowing the speed of the airflow to subsonic speeds. Additional fuel is then injected, burning in this expanded section. The exhaust of that combustion then enters a convergent-divergent nozzle as in a conventional ramjet, or the ducted rocket case.
History
The first serious attempt to make a production air-augmented rocket was the Soviet Gnom rocket design, implemented by Decree 708-336 of the Soviet Ministers of 2 July 1958.
More recently, about 2002, NASA has re-examined similar technology for the GTX program as part of an effort to develop SSTO spacecraft.
Air-augmented rockets finally entered mass production in 2016 when the Meteor Air to Air Missile was introduced into service.
See also
Index of aviation articles
Liquid air cycle engine – collecting oxidizer instead of working mass
References
Citations
Bibliography
Gnom
NASA GTX
Rocket propulsion
Ramjet engines
Industrial design
Soviet inventions | Air-augmented rocket | Engineering | 1,311 |
549,445 | https://en.wikipedia.org/wiki/Completeness%20%28order%20theory%29 | In the mathematical area of order theory, completeness properties assert the existence of certain infima or suprema of a given partially ordered set (poset). The most familiar example is the completeness of the real numbers. A special use of the term refers to complete partial orders or complete lattices. However, many other interesting notions of completeness exist.
The motivation for considering completeness properties derives from the great importance of suprema (least upper bounds, joins, "∨") and infima (greatest lower bounds, meets, "∧") to the theory of partial orders. Finding a supremum means to single out one distinguished least element from the set of upper bounds. On the one hand, these special elements often embody certain concrete properties that are interesting for the given application (such as being the least common multiple of a set of numbers or the union of a collection of sets). On the other hand, the knowledge that certain types of subsets are guaranteed to have suprema or infima enables us to consider the evaluation of these elements as total operations on a partially ordered set. For this reason, posets with certain completeness properties can often be described as algebraic structures of a certain kind. In addition, studying the properties of the newly obtained operations yields further interesting subjects.
Types of completeness properties
All completeness properties are described along a similar scheme: one describes a certain class of subsets of a partially ordered set that are required to have a supremum or required to have an infimum. Hence every completeness property has its dual, obtained by inverting the order-dependent definitions in the given statement. Some of the notions are usually not dualized while others may be self-dual (i.e. equivalent to their dual statements).
Least and greatest elements
The easiest example of a supremum is the empty one, i.e. the supremum of the empty set. By definition, this is the least element among all elements that are greater than each member of the empty set. But this is just the least element of the whole poset, if it has one, since the empty subset of a poset P is conventionally considered to be both bounded from above and from below, with every element of P being both an upper and lower bound of the empty subset. Other common names for the least element are bottom and zero (0). The dual notion, the empty lower bound, is the greatest element, top, or unit (1).
Posets that have a bottom are sometimes called pointed, while posets with a top are called unital or topped. An order that has both a least and a greatest element is bounded. However, this should not be confused with the notion of bounded completeness given below.
Finite completeness
Further simple completeness conditions arise from the consideration of all non-empty finite sets. An order in which all non-empty finite sets have both a supremum and an infimum is called a lattice. It suffices to require that all suprema and infima of two elements exist to obtain all non-empty finite ones; a straightforward induction argument shows that every finite non-empty supremum/infimum can be decomposed into a finite number of binary suprema/infima. Thus the central operations of lattices are the binary suprema ∨ and infima ∧. It is in this context that the terms meet for ∧ and join for ∨ are most common.
A poset in which only non-empty finite suprema are known to exist is therefore called a join-semilattice. The dual notion is meet-semilattice.
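The induction step above is just a fold over the binary join. A minimal sketch using the divisibility order on positive integers (an arbitrary worked example in which the join is the least common multiple):

from functools import reduce
from math import gcd

def lcm(a, b):
    # binary join in the divisibility order
    return a * b // gcd(a, b)

def finite_sup(elements):
    # sup{a1, ..., an} = a1 v a2 v ... v an, by the induction argument above
    return reduce(lcm, elements)

print(finite_sup([4, 6, 10]))  # 60, the least common multiple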
Further completeness conditions
The strongest form of completeness is the existence of all suprema and all infima. The posets with this property are the complete lattices. However, using the given order, one can restrict to further classes of (possibly infinite) subsets, that do not yield this strong completeness at once.
If all directed subsets of a poset have a supremum, then the order is a directed-complete partial order (dcpo). These are especially important in domain theory. The seldom-considered dual notion to a dcpo is the filtered-complete poset. Dcpos with a least element ("pointed dcpos") are one of the possible meanings of the phrase complete partial order (cpo).
If every subset that has some upper bound has also a least upper bound, then the respective poset is called bounded complete. The term is used widely with this definition that focuses on suprema and there is no common name for the dual property. However, bounded completeness can be expressed in terms of other completeness conditions that are easily dualized (see below). Although concepts with the names "complete" and "bounded" were already defined, confusion is unlikely to occur since one would rarely speak of a "bounded complete poset" when meaning a "bounded cpo" (which is just a "cpo with greatest element"). Likewise, "bounded complete lattice" is almost unambiguous, since one would not state the boundedness property for complete lattices, where it is implied anyway. Also note that the empty set usually has upper bounds (if the poset is non-empty) and thus a bounded-complete poset has a least element.
One may also consider the subsets of a poset which are totally ordered, i.e. the chains. If all chains have a supremum, the order is called chain complete. Again, this concept is rarely needed in the dual form.
Relationships between completeness properties
It was already observed that binary meets/joins yield all non-empty finite meets/joins. Likewise, many other (combinations) of the above conditions are equivalent.
The best-known example is the existence of all suprema, which is in fact equivalent to the existence of all infima. Indeed, for any subset X of a poset, one can consider its set of lower bounds B. The supremum of B is then equal to the infimum of X: since each element of X is an upper bound of B, sup B is smaller than all elements of X, i.e. sup B is in B. It is the greatest element of B and hence the infimum of X. In a dual way, the existence of all infima implies the existence of all suprema.
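The argument can be replayed mechanically on a small finite poset. The sketch below uses the powerset of {1, 2, 3} ordered by inclusion (an arbitrary choice) and computes the infimum of X as the supremum of its set of lower bounds:

from itertools import chain, combinations

universe = {1, 2, 3}
P = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(universe), r) for r in range(len(universe) + 1))]

def supremum(S):
    # least upper bound within P; in this poset it is the unique smallest one
    upper_bounds = [p for p in P if all(x <= p for x in S)]
    return min(upper_bounds, key=len)

X = [frozenset({1, 2}), frozenset({1, 3})]
B = [p for p in P if all(p <= x for x in X)]  # lower bounds of X
print(supremum(B))  # frozenset({1}): indeed the infimum (intersection) of X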
Bounded completeness can also be characterized differently. By an argument similar to the above, one finds that the supremum of a set with upper bounds is the infimum of the set of upper bounds. Consequently, bounded completeness is equivalent to the existence of all non-empty infima.
A poset is a complete lattice if and only if it is a cpo and a join-semilattice. Indeed, for any subset X, the set of all finite suprema (joins) of X is directed and the supremum of this set (which exists by directed completeness) is equal to the supremum of X. Thus every set has a supremum and by the above observation we have a complete lattice. The other direction of the proof is trivial.
Assuming the axiom of choice, a poset is chain complete if and only if it is a dcpo.
Completeness in terms of universal algebra
As explained above, the presence of certain completeness conditions allows to regard the formation of certain suprema and infima as total operations of a partially ordered set. It turns out that in many cases it is possible to characterize completeness solely by considering appropriate algebraic structures in the sense of universal algebra, which are equipped with operations like ∨ or ∧. By imposing additional conditions (in form of suitable identities) on these operations, one can then indeed derive the underlying partial order exclusively from such algebraic structures. Details on this characterization can be found in the articles on the "lattice-like" structures for which this is typically considered: see semilattice, lattice, Heyting algebra, and Boolean algebra. Note that the latter two structures extend the application of these principles beyond mere completeness requirements by introducing an additional operation of negation.
Completeness in terms of adjunctions
Another interesting way to characterize completeness properties is provided through the concept of (monotone) Galois connections, i.e. adjunctions between partial orders. In fact this approach offers additional insights both into the nature of many completeness properties and into the importance of Galois connections for order theory. The general observation on which this reformulation of completeness is based is that the construction of certain suprema or infima provides left or right adjoint parts of suitable Galois connections.
Consider a partially ordered set (X, ≤). As a first simple example, let 1 = {*} be a specified one-element set with the only possible partial ordering. There is an obvious mapping j: X → 1 with j(x) = * for all x in X. X has a least element if and only if the function j has a lower adjoint j*: 1 → X. Indeed the definition for Galois connections yields that in this case j*(*) ≤ x if and only if * ≤ j(x), where the right hand side obviously holds for any x. Dually, the existence of an upper adjoint for j is equivalent to X having a greatest element.
Another simple mapping is the function q: X → X × X given by q(x) = (x, x). Naturally, the intended ordering relation for X × X is just the usual product order. q has a lower adjoint q* if and only if all binary joins in X exist. Conversely, the join operation ∨: X × X → X can always provide the (necessarily unique) lower adjoint for q. Dually, q allows for an upper adjoint if and only if X has all binary meets. Thus the meet operation ∧, if it exists, always is an upper adjoint. If both ∨ and ∧ exist and, in addition, ∧ is also a lower adjoint, then the poset X is a Heyting algebra, another important special class of partial orders.
Further completeness statements can be obtained by exploiting suitable completion procedures. For example, it is well known that the collection of all lower sets of a poset X, ordered by subset inclusion, yields a complete lattice D(X) (the downset-lattice). Furthermore, there is an obvious embedding e: X → D(X) that maps each element x of X to its principal ideal {y in X | y ≤ x}. A little reflection now shows that e has a lower adjoint if and only if X is a complete lattice. In fact, this lower adjoint will map any lower set of X to its supremum in X. Composing this lower adjoint with the function that maps any subset of X to its lower closure (again an adjunction for the inclusion of lower sets in the powerset), one obtains the usual supremum map from the powerset 2^X to X. As before, another important situation occurs whenever this supremum map is also an upper adjoint: in this case the complete lattice X is constructively completely distributive. See also the articles on complete distributivity and distributivity (order theory).
The considerations in this section suggest a reformulation of (parts of) order theory in terms of category theory, where properties are usually expressed by referring to the relationships (morphisms, more specifically: adjunctions) between objects, instead of considering their internal structure. For more detailed considerations of this relationship see the article on the categorical formulation of order theory.
See also
Limit-preserving function on the preservation of existing suprema/infima.
Notes
References
G. Markowsky and B.K. Rosen. Bases for chain-complete posets IBM Journal of Research and Development. March 1976.
Stephen Bloom. Varieties of ordered algebras Journal of Computer and System Sciences. October 1976.
Michael Smyth. Power domains Journal of Computer and System Sciences. 1978.
Daniel Lehmann. On the algebra of order Journal of Computer and System Sciences. August 1980.
Order theory | Completeness (order theory) | Mathematics | 2,541 |
3,173,552 | https://en.wikipedia.org/wiki/18-Crown-6 | 18-Crown-6 is an organic compound with the formula [C2H4O]6 and the IUPAC name of 1,4,7,10,13,16-hexaoxacyclooctadecane. It is a white, hygroscopic crystalline solid with a low melting point. Like other crown ethers, 18-crown-6 functions as a ligand for some metal cations with a particular affinity for potassium cations (binding constant in methanol: 10^6 M^-1). The point group of 18-crown-6 is S6. The dipole moment of 18-crown-6 is solvent- and temperature-dependent, and below 25 °C it takes different values in cyclohexane and in benzene. The synthesis of the crown ethers led to the awarding of the Nobel Prize in Chemistry to Charles J. Pedersen.
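The quoted binding constant translates directly into a standard binding free energy via ΔG° = −RT ln K (ordinary thermodynamics; room temperature is assumed here):

import math

R = 8.314    # J/(mol*K)
T = 298.15   # K
K = 1e6      # M^-1, the K+ binding constant in methanol quoted above

dG = -R * T * math.log(K)
print(f"{dG / 1000:.1f} kJ/mol")  # about -34.2 kJ/mol for K+ complexation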
Synthesis
This compound is prepared by a modified Williamson ether synthesis in the presence of a templating cation:
(CH2OCH2CH2Cl)2 + (CH2OCH2CH2OH)2 + 2 KOH → (CH2CH2O)6 + 2 KCl + 2 H2O
It can be also prepared by the oligomerization of ethylene oxide. It can be purified by distillation, where its tendency to supercool becomes evident. 18-Crown-6 can also be purified by recrystallisation from hot acetonitrile. It initially forms an insoluble solvate. Rigorously dry material can be made by dissolving the compound in THF followed by the addition of NaK to give [K(18-crown-6)]Na, an alkalide salt.
Crystallographic analysis reveals a relatively flat molecule but one where the oxygen centres are not oriented in the idealized 6-fold symmetric geometry usually shown. The molecule undergoes significant conformational change upon complexation.
Reactions
18-Crown-6 has a high affinity for the hydronium ion H3O+, as it can fit inside the crown ether. Thus, reaction of 18-crown-6 with strong acids gives the cation [H3O·18-crown-6]+. For example, interaction of 18-crown-6 with HCl gas in toluene with a little moisture gives an ionic liquid layer with the composition [H3O·18-crown-6]+[HCl2]-·3.8C6H5Me, from which the solid [H3O·18-crown-6]+[HCl2]- can be isolated on standing. Reaction of the ionic liquid layer with two molar equivalents of water gives the crystalline product (H5O2)[H3O·18-crown-6]Cl2.
Applications
18-Crown-6 binds to a variety of small cations, using all six oxygens as donor atoms. Crown ethers can be used in the laboratory as phase transfer catalysts. Salts which are normally insoluble in organic solvents are made soluble by crown ether. For example, potassium permanganate dissolves in benzene in the presence of 18-crown-6, giving the so-called "purple benzene", which can be used to oxidize diverse organic compounds.
Various substitution reactions are also accelerated in the presence of 18-crown-6, which suppresses ion-pairing. The anions thereby become naked nucleophiles. For example, using 18-crown-6, potassium acetate is a more powerful nucleophile in organic solvents:
[K·(18-crown-6)]+AcO− + C6H5CH2Cl → C6H5CH2OAc + [K·(18-crown-6)]+Cl−
The first electride salt to be examined with X-ray crystallography, [Cs(18-crown-6)2]+·e−, was synthesized in 1983. This highly air- and moisture-sensitive solid has a sandwich molecular structure, where the electron is trapped within nearly spherical lattice cavities. However, the shortest electron-electron distance is too long (8.68 Å) to make this material a conductor of electricity.
References
External links
Crown ethers
Macrocycles | 18-Crown-6 | Chemistry | 964 |
27,115,797 | https://en.wikipedia.org/wiki/Theory%20of%20regions | The Theory of regions is an approach for synthesizing a Petri net from a transition system. As such, it aims at recovering concurrent, independent behavior from transitions between global states. Theory of regions handles elementary net systems as well as P/T nets and other kinds of nets. An important point is that the approach is aimed at the synthesis of unlabeled Petri nets only.
Definition
A region of a transition system (S, Λ, →) is a mapping assigning to each state s a number σ(s) (a natural number for P/T nets, binary for elementary net systems) and to each transition label e a number τ(e), such that the consistency condition σ(s′) = σ(s) + τ(e) holds whenever s →e s′.
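Under the (σ, τ) notation above, checking whether a mapping is a region is a single pass over the transitions. A minimal sketch; the transition system below is a made-up example:

def is_region(transitions, sigma, tau):
    # consistency: sigma(s') = sigma(s) + tau(e) for every transition s --e--> s'
    return all(sigma[t] == sigma[s] + tau[e] for (s, e, t) in transitions)

transitions = [("s0", "a", "s1"), ("s1", "b", "s2")]
sigma = {"s0": 1, "s1": 0, "s2": 1}  # token count of the candidate place per state
tau = {"a": -1, "b": +1}             # effect of each event on that place
print(is_region(transitions, sigma, tau))  # True: this region yields a candidate place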
Intuitive explanation
Each region represents a potential place of a Petri net.
Following Mukund, synthesis additionally relies on two separation properties: the event/state separation property and the state separation property.
References
Set theory | Theory of regions | Mathematics,Technology | 157 |
45,223,566 | https://en.wikipedia.org/wiki/Phonometer | A phonometer is an instrument invented by Thomas Edison for testing the force of the human voice in speaking. It consists chiefly of a mouthpiece and diaphragm. Behind the diaphragm is placed a delicate mechanism which operates a 15-inch flywheel by means of which a hole can be bored in an ordinary pine board.
References
External links
American inventions
Thomas Edison
Physiological instruments | Phonometer | Technology,Engineering | 80 |
10,323,760 | https://en.wikipedia.org/wiki/Persephin | Persephin is a neurotrophic factor in the glial cell line-derived neurotrophic factor (GDNF) family. Persephin shares around a 40% similarity in amino acid sequence compared to GDNF and neurturin, two members of the GDNF family.
Function
Persephin has been found to be less potent than other members of the GDNF family. It has been found to support the survival and morphological differentiation of tyrosine hydroxylase-immunoreactive neurons, although less so than both GDNF and neurturin. The mRNA levels of persephin in developing neurons have been low compared to other neurotrophic factors, but relatively higher levels of persephin mRNA have been found in embryonic neurons.
Similarly to the other members of the GDNF family of ligands, persephin uses a receptor that consists of the tyrosine kinase signaling component Ret and a glycosylphosphatidylinositol (GPI)-anchored receptor unit (GFRα). Persephin specifically binds to GFRα4.
Persephin acts on neurons in both the CNS and PNS, and also has the ability to act as a renal ramogen.
Structure
Unlike other GDNF family of ligands, persephin only contains one RXXR cleavage site, rather than multiple, indicating that it can only make one length of functional peptide.
Therapeutics
Persephin has the potential to be used as a therapeutic treatment for neurodegenerative diseases, such as Parkinson's disease and other diseases that affect motor neurons. Because persephin acts more selectively compared to other GFLs, such as GDNF, it may produce fewer mechanism-based complications, making it a stronger therapeutic target.
References
External links
TGFβ domain
Neurotrophic factors | Persephin | Chemistry,Biology | 409 |
54,389,578 | https://en.wikipedia.org/wiki/%CE%944-Abiraterone | Δ4-Abiraterone (D4A; code name CB-7627), also known as 17-(3-pyridyl)androsta-4,16-dien-3-one, is a steroidogenesis inhibitor and active metabolite of abiraterone acetate, a drug which is used in the treatment of prostate cancer and is itself a prodrug of abiraterone (another active metabolite of abiraterone acetate). D4A is formed from abiraterone by 3β-hydroxysteroid dehydrogenase/Δ5-4 isomerase (3β-HSD). It is said to be a more potent inhibitor of steroidogenesis than abiraterone, and is partially responsible for the activity of abiraterone acetate.
D4A is specifically an inhibitor of CYP17A1 (17α-hydroxylase/17,20-lyase), 3β-HSD, and 5α-reductase. In addition, it has also been found to act as a competitive antagonist of the androgen receptor (AR), with potency reportedly comparable to that of enzalutamide. However, the initial 5α-reduced metabolite of D4A, 3-keto-5α-abiraterone, is an agonist of the AR, and has been found to stimulate prostate cancer progression. The formation of this metabolite can be blocked by the coadministration of dutasteride, a selective and highly potent 5α-reductase inhibitor, and the addition of this medication may improve the effectiveness of abiraterone acetate in the treatment of prostate cancer.
References
3β-Hydroxysteroid dehydrogenase inhibitors
5α-Reductase inhibitors
Androstanes
CYP17A1 inhibitors
Hormonal antineoplastic drugs
Human drug metabolites
Enones
Prostate cancer
3-Pyridyl compounds
Steroidal antiandrogens | Δ4-Abiraterone | Chemistry | 442 |
8,146,982 | https://en.wikipedia.org/wiki/Thermoproteus | Thermoproteus is a genus of archaeans in the family Thermoproteaceae. These prokaryotes are thermophilic sulphur-dependent organisms related to the genera Sulfolobus, Pyrodictium and Desulfurococcus. They are hydrogen-sulphur autotrophs and can grow at temperatures of up to 95 °C.
Description and significance
Thermoproteus is a genus of anaerobes that grow in the wild by autotrophic sulfur reduction. Like other hyperthermophiles, Thermoproteus represents a living example of some of Earth's earliest organisms, located at the base of the Archaea.
Genome structure
Genetic sequencing of Thermoproteus has revealed much about the organism's modes of metabolism. Total genome length is 1.84 Mbp, and the DNA is double-stranded and circular. Genes are arranged in co-transcribed clusters called operons. The Thermoproteus tenax genome has been completely sequenced.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Cell structure and metabolism
A significant amount of research has been done on the metabolism of Thermoproteus and other hyperthermophiles as well. Thermoproteus metabolizes autotrophically through sulfur reduction, but it grows much faster by sulfur respiration in cultivation. In T. tenax, a number of metabolic pathways allow the cell to select a mode of metabolism depending on the energy requirements of the cell (depending, for example, on the cell's developmental or growth stage). Like all archaea, Thermoproteus possesses unique membrane lipids, which are ether-linked glycerol derivatives of 20- or 40-carbon branched lipids. The lipids' unsaturations are generally conjugated (as opposed to the unconjugated lipids found in Bacteria and Eukaryota). In Thermoproteus, as in all members of the Crenarchaeota, the membranes are dominated by the 40-carbon lipids that span the entire membrane. This causes the membrane to consist of a monolayer with polar groups at each end. The cells are rod-shaped with diameters of up to 4 micrometres and lengths of up to 100 micrometres, and they reproduce by developing branches at the end of the cell which grow into individual cells. They are motile by flagella.
Ecology
Members of Thermoproteus are found in acidic hot springs and water holes; they have been isolated in these habitats in Iceland, Italy, North America, New Zealand, the Azores, and Indonesia. Their optimal growth temperature is 85 °C.
See also
List of Archaea genera
References
External links
Thermoproteus at BacDive - the Bacterial Diversity Metadatabase
Archaea genera
Thermoproteota | Thermoproteus | Biology | 615 |
21,723 | https://en.wikipedia.org/wiki/Nonlinear%20optics | Nonlinear optics (NLO) is the branch of optics that describes the behaviour of light in nonlinear media, that is, media in which the polarization density P responds non-linearly to the electric field E of the light. The non-linearity is typically observed only at very high light intensities (when the electric field of the light is >108 V/m and thus comparable to the atomic electric field of ~1011 V/m) such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds.
History
The first nonlinear optical effect to be predicted was two-photon absorption, by Maria Goeppert Mayer for her PhD in 1931, but it remained an unexplored theoretical curiosity until 1961 and the almost simultaneous observation of two-photon absorption at Bell Labs
and the discovery of second-harmonic generation by Peter Franken et al. at University of Michigan, both shortly after the construction of the first laser by Theodore Maiman. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph "Nonlinear Optics".
Nonlinear optical processes
Nonlinear optics explains nonlinear response of properties such as frequency, polarization, phase or path of incident light. These nonlinear interactions give rise to a host of optical phenomena:
Frequency-mixing processes
Second-harmonic generation (SHG), or frequency doubling, generation of light with a doubled frequency (half the wavelength), two photons are destroyed, creating a single photon at two times the frequency.
Third-harmonic generation (THG), generation of light with a tripled frequency (one-third the wavelength), three photons are destroyed, creating a single photon at three times the frequency.
High-harmonic generation (HHG), generation of light with frequencies much greater than the original (typically 100 to 1000 times greater).
Sum-frequency generation (SFG), generation of light with a frequency that is the sum of two other frequencies (SHG is a special case of this).
Difference-frequency generation (DFG), generation of light with a frequency that is the difference between two other frequencies.
Optical parametric amplification (OPA), amplification of a signal input in the presence of a higher-frequency pump wave, at the same time generating an idler wave (can be considered as DFG).
Optical parametric oscillation (OPO), generation of a signal and idler wave using a parametric amplifier in a resonator (with no signal input).
Optical parametric generation (OPG), like parametric oscillation but without a resonator, using a very high gain instead.
Half-harmonic generation, the special case of OPO or OPG when the signal and idler degenerate into one single frequency.
Spontaneous parametric down-conversion (SPDC), the amplification of the vacuum fluctuations in the low-gain regime.
Optical rectification (OR), generation of quasi-static electric fields.
Nonlinear light-matter interaction with free electrons and plasmas.
Other nonlinear processes
Optical Kerr effect, intensity-dependent refractive index (a χ(3) effect).
Self-focusing, an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the spatial variation in the intensity creating a spatial variation in the refractive index.
Kerr-lens modelocking (KLM), the use of self-focusing as a mechanism to mode-lock lasers.
Self-phase modulation (SPM), an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the temporal variation in the intensity creating a temporal variation in the refractive index.
Optical solitons, an equilibrium solution for either an optical pulse (temporal soliton) or spatial mode (spatial soliton) that does not change during propagation due to a balance between dispersion and the Kerr effect (e.g. self-phase modulation for temporal and self-focusing for spatial solitons).
Self-diffraction, splitting of beams in a multi-wave mixing process with potential energy transfer.
Cross-phase modulation (XPM), where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect.
Four-wave mixing (FWM), can also arise from other nonlinearities.
Cross-polarized wave generation (XPW), a χ(3) effect in which a wave with polarization vector perpendicular to the input one is generated.
Modulational instability.
Raman amplification
Optical phase conjugation.
Stimulated Brillouin scattering, interaction of photons with acoustic phonons
Multi-photon absorption, simultaneous absorption of two or more photons, transferring the energy to a single electron.
Multiple photoionisation, near-simultaneous removal of many bound electrons by one photon.
Chaos in optical systems.
Related processes
In these processes, the medium has a linear response to the light, but the properties of the medium are affected by other causes:
Pockels effect, the refractive index is affected by a static electric field; used in electro-optic modulators.
Acousto-optics, the refractive index is affected by acoustic waves (ultrasound); used in acousto-optic modulators.
Raman scattering, interaction of photons with optical phonons.
Parametric processes
Nonlinear effects fall into two qualitatively different categories, parametric and non-parametric effects. A parametric non-linearity
is an interaction in which the quantum state of the nonlinear material is not changed by the interaction with the optical field. As a consequence of this, the process is "instantaneous". Energy and momentum are conserved in the optical field, making phase matching important and polarization-dependent.
Theory
Parametric and "instantaneous" (i.e. material must be lossless and dispersionless through the Kramers–Kronig relations) nonlinear optical phenomena, in which the optical fields are not too large, can be described by a Taylor series expansion of the dielectric polarization density (electric dipole moment per unit volume) P(t) at time t in terms of the electric field E(t):
where the coefficients χ(n) are the n-th-order susceptibilities of the medium, and the presence of such a term is generally referred to as an n-th-order nonlinearity. Note that the polarization density P(t) and electrical field E(t) are considered as scalar for simplicity. In general, χ(n) is an (n + 1)-th-rank tensor representing both the polarization-dependent nature of the parametric interaction and the symmetries (or lack) of the nonlinear material.
Wave equation in a nonlinear material
Central to the study of electromagnetic waves is the wave equation. Starting with Maxwell's equations in an isotropic space, containing no free charge, it can be shown that

$\nabla \times \nabla \times \mathbf{E} + \frac{n^2}{c^2} \frac{\partial^2}{\partial t^2} \mathbf{E} = -\frac{1}{\varepsilon_0 c^2} \frac{\partial^2}{\partial t^2} \mathbf{P}^{\mathrm{NL}},$

where PNL is the nonlinear part of the polarization density, and n is the refractive index, which comes from the linear term in P.
Note that one can normally use the vector identity

$\nabla \times \left( \nabla \times \mathbf{E} \right) = \nabla \left( \nabla \cdot \mathbf{E} \right) - \nabla^2 \mathbf{E}$

and Gauss's law (assuming no free charges, $\nabla \cdot \mathbf{D} = 0$),

to obtain the more familiar wave equation

$\nabla^2 \mathbf{E} - \frac{n^2}{c^2} \frac{\partial^2}{\partial t^2} \mathbf{E} = \frac{1}{\varepsilon_0 c^2} \frac{\partial^2}{\partial t^2} \mathbf{P}^{\mathrm{NL}}.$

For a nonlinear medium, Gauss's law does not imply that the identity $\nabla \cdot \mathbf{E} = 0$ is true in general, even for an isotropic medium. However, even when this term is not identically 0, it is often negligibly small and thus in practice is usually ignored, giving us the standard nonlinear wave equation:

$\nabla^2 \mathbf{E} - \frac{n^2}{c^2} \frac{\partial^2}{\partial t^2} \mathbf{E} = \frac{1}{\varepsilon_0 c^2} \frac{\partial^2}{\partial t^2} \mathbf{P}^{\mathrm{NL}}.$
Nonlinearities as a wave-mixing process
The nonlinear wave equation is an inhomogeneous differential equation. The general solution comes from the study of ordinary differential equations and can be obtained by the use of a Green's function. Physically one gets the normal electromagnetic wave solutions to the homogeneous part of the wave equation:

$\nabla^2 \mathbf{E} - \frac{n^2}{c^2} \frac{\partial^2}{\partial t^2} \mathbf{E} = 0,$

and the inhomogeneous term

$\frac{1}{\varepsilon_0 c^2} \frac{\partial^2}{\partial t^2} \mathbf{P}^{\mathrm{NL}}$

acts as a driver/source of the electromagnetic waves. One of the consequences of this is a nonlinear interaction that results in energy being mixed or coupled between different frequencies, which is often called a "wave mixing".
In general, an n-th order nonlinearity will lead to (n + 1)-wave mixing. As an example, if we consider only a second-order nonlinearity (three-wave mixing), then the polarization P takes the form

$P^{\mathrm{NL}}(t) = \varepsilon_0 \chi^{(2)} E^2(t).$

If we assume that E(t) is made up of two components at frequencies ω1 and ω2, we can write E(t) as

$E(t) = E_1 \cos(\omega_1 t) + E_2 \cos(\omega_2 t),$

and using Euler's formula to convert to exponentials,

$E(t) = \tfrac{1}{2} E_1 e^{-i \omega_1 t} + \tfrac{1}{2} E_2 e^{-i \omega_2 t} + \text{c.c.},$

where "c.c." stands for complex conjugate. Plugging this into the expression for P gives

$P^{\mathrm{NL}}(t) = \frac{\varepsilon_0 \chi^{(2)}}{4} \left[ E_1^2 e^{-2 i \omega_1 t} + E_2^2 e^{-2 i \omega_2 t} + 2 E_1 E_2 e^{-i (\omega_1 + \omega_2) t} + 2 E_1 E_2^* e^{-i (\omega_1 - \omega_2) t} + \text{c.c.} \right] + \frac{\varepsilon_0 \chi^{(2)}}{2} \left[ |E_1|^2 + |E_2|^2 \right],$
which has frequency components at 2ω1, 2ω2, ω1 + ω2, ω1 − ω2, and 0. These three-wave mixing processes correspond to the nonlinear effects known as second-harmonic generation, sum-frequency generation, difference-frequency generation and optical rectification respectively.
Note: Parametric generation and amplification is a variation of difference-frequency generation, where the lower frequency of one of the two generating fields is much weaker (parametric amplification) or completely absent (parametric generation). In the latter case, the fundamental quantum-mechanical uncertainty in the electric field initiates the process.
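The frequency components above are easy to verify numerically. The sketch below squares a two-tone field as a stand-in for the χ(2) response (amplitudes and frequencies are arbitrary choices) and reads the mixing products off the spectrum:

import numpy as np

fs = 1000.0                      # sampling rate, arbitrary units
t = np.arange(0, 10, 1 / fs)
w1, w2 = 40.0, 25.0              # the two input frequencies
E = np.cos(2 * np.pi * w1 * t) + np.cos(2 * np.pi * w2 * t)

P = E**2                         # second-order polarization ~ chi2 * E^2
spectrum = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(len(P), 1 / fs)

peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(sorted(set(np.round(peaks))))
# [0.0, 15.0, 50.0, 65.0, 80.0]: 0, w1-w2, 2*w2, w1+w2, 2*w1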
Phase matching
The above ignores the position dependence of the electric fields. In a typical situation, the electric fields are traveling waves described by

$E_j(\mathbf{x}, t) = E_{j,0} \, e^{i (\mathbf{k}_j \cdot \mathbf{x} - \omega_j t)} + \text{c.c.}$

at position $\mathbf{x}$, with the wave vector $\|\mathbf{k}_j\| = n(\omega_j) \omega_j / c$, where $c$ is the velocity of light in vacuum, and $n(\omega_j)$ is the index of refraction of the medium at angular frequency $\omega_j$. Thus, the second-order polarization at angular frequency $\omega_3 = \omega_1 + \omega_2$ is

$P^{(2)}(\mathbf{x}, t) \propto E_{1,0} E_{2,0} \, e^{i ((\mathbf{k}_1 + \mathbf{k}_2) \cdot \mathbf{x} - \omega_3 t)} + \text{c.c.}$

At each position $\mathbf{x}$ within the nonlinear medium, the oscillating second-order polarization radiates at angular frequency $\omega_3$ and a corresponding wave vector $\|\mathbf{k}_3\| = n(\omega_3) \omega_3 / c$. Constructive interference, and therefore a high-intensity $\omega_3$ field, will occur only if

$\mathbf{k}_3 = \mathbf{k}_1 + \mathbf{k}_2.$
The above equation is known as the phase-matching condition. Typically, three-wave mixing is done in a birefringent crystalline material, where the refractive index depends on the polarization and direction of the light that passes through. The polarizations of the fields and the orientation of the crystal are chosen such that the phase-matching condition is fulfilled. This phase-matching technique is called angle tuning. Typically a crystal has three axes, one or two of which have a different refractive index than the other one(s). Uniaxial crystals, for example, have a single preferred axis, called the extraordinary (e) axis, while the other two are ordinary axes (o) (see crystal optics). There are several schemes of choosing the polarizations for this crystal type. If the signal and idler have the same polarization, it is called "type-I phase matching", and if their polarizations are perpendicular, it is called "type-II phase matching". However, other conventions exist that specify further which frequency has what polarization relative to the crystal axis. These types are listed below, with the convention that the signal wavelength is shorter than the idler wavelength.
Most common nonlinear crystals are negative uniaxial, which means that the e axis has a smaller refractive index than the o axes. In those crystals, type-I and -II phase matching are usually the most suitable schemes. In positive uniaxial crystals, types VII and VIII are more suitable. Types II and III are essentially equivalent, except that the names of signal and idler are swapped when the signal has a longer wavelength than the idler. For this reason, they are sometimes called IIA and IIB. The type numbers V–VIII are less common than I and II and variants.
One undesirable effect of angle tuning is that the optical frequencies involved do not propagate collinearly with each other. This is due to the fact that the extraordinary wave propagating through a birefringent crystal possesses a Poynting vector that is not parallel to the propagation vector. This would lead to beam walk-off, which limits the nonlinear optical conversion efficiency. Two other methods of phase matching avoid beam walk-off by forcing all frequencies to propagate at 90° with respect to the optical axis of the crystal. These methods are called temperature tuning and quasi-phase-matching.
Temperature tuning is used when the pump (laser) frequency polarization is orthogonal to the signal and idler frequency polarization. The birefringence in some crystals, in particular lithium niobate is highly temperature-dependent. The crystal temperature is controlled to achieve phase-matching conditions.
The other method is quasi-phase-matching. In this method the frequencies involved are not constantly locked in phase with each other, instead the crystal axis is flipped at a regular interval Λ, typically 15 micrometres in length. Hence, these crystals are called periodically poled. This results in the polarization response of the crystal to be shifted back in phase with the pump beam by reversing the nonlinear susceptibility. This allows net positive energy flow from the pump into the signal and idler frequencies. In this case, the crystal itself provides the additional wavevector k = 2π/Λ (and hence momentum) to satisfy the phase-matching condition. Quasi-phase-matching can be expanded to chirped gratings to get more bandwidth and to shape an SHG pulse like it is done in a dazzler. SHG of a pump and self-phase modulation (emulated by second-order processes) of the signal and an optical parametric amplifier can be integrated monolithically.
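The poling period follows directly from the phase mismatch, which for second-harmonic generation is Δk = (4π/λ)(n2ω − nω). The refractive indices below are rough illustrative values for lithium niobate near 1064 nm, not authoritative material data:

import math

lam = 1.064e-6             # pump wavelength, metres
n_w, n_2w = 2.156, 2.234   # assumed indices at 1064 nm and 532 nm

delta_k = (4 * math.pi / lam) * (n_2w - n_w)
poling_period = 2 * math.pi / delta_k  # twice the coherence length pi/delta_k

print(f"{poling_period * 1e6:.1f} um")  # ~6.8 um, the order of real PPLN periods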
Higher-order frequency mixing
The above holds for χ(2) processes. It can be extended for processes where χ(3) is nonzero, something that is generally true in any medium without any symmetry restrictions; in particular resonantly enhanced sum or difference frequency mixing in gases is frequently used for extreme or "vacuum" ultra-violet light generation. In common scenarios, such as mixing in dilute gases, the non-linearity is weak and so the light beams are focused which, unlike the plane wave approximation used above, introduces a pi phase shift on each light beam, complicating the phase-matching requirements. Conveniently, difference frequency mixing with χ(3) cancels this focal phase shift and often has a nearly self-canceling overall phase-matching condition, which relatively simplifies broad wavelength tuning compared to sum frequency generation. In χ(3) all four frequencies are mixing simultaneously, as opposed to sequential mixing via two χ(2) processes.
The Kerr effect can be described as a χ(3) nonlinearity as well. At high peak powers the Kerr effect can cause filamentation of light in air, in which the light travels without dispersion or divergence in a self-generated waveguide. At even higher intensities the Taylor series, which implied the dominance of the lower orders, no longer converges, and a time-based model is used instead. When a noble gas atom is hit by an intense laser pulse, which has an electric field strength comparable to the Coulomb field of the atom, the outermost electron may be ionized from the atom. Once freed, the electron can be accelerated by the electric field of the light, first moving away from the ion, then back toward it as the field changes direction. The electron may then recombine with the ion, releasing its energy in the form of a photon. The light is emitted at every peak of the laser light field which is intense enough, producing a series of attosecond light flashes. The photon energies generated by this process can extend past the 800th harmonic order, up to a few keV. This is called high-order harmonic generation. The laser must be linearly polarized, so that the electron returns to the vicinity of the parent ion. High-order harmonic generation has been observed in noble gas jets, cells, and gas-filled capillary waveguides.
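The attainable photon energy follows the well-known semiclassical cutoff law E_max = Ip + 3.17·Up. A quick estimate with illustrative Ti:sapphire parameters and argon as the target gas (my choice of numbers, not from the text):

Up_coeff = 9.33e-14   # eV per (W/cm^2 * um^2), standard ponderomotive-energy factor
I = 1.0e14            # peak intensity, W/cm^2
lam_um = 0.8          # drive wavelength, micrometres
Ip_argon = 15.76      # ionization potential of argon, eV

Up = Up_coeff * I * lam_um**2
E_max = Ip_argon + 3.17 * Up
print(f"U_p = {Up:.1f} eV, cutoff = {E_max:.1f} eV")  # ~6.0 eV and ~34.7 eV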
Example uses
Frequency doubling
One of the most commonly used frequency-mixing processes is frequency doubling, or second-harmonic generation. With this technique, the 1064 nm output from Nd:YAG lasers or the 800 nm output from Ti:sapphire lasers can be converted to visible light, with wavelengths of 532 nm (green) or 400 nm (violet) respectively.
Practically, frequency doubling is carried out by placing a nonlinear medium in a laser beam. While there are many types of nonlinear media, the most common media are crystals. Commonly used crystals are BBO (β-barium borate), KDP (potassium dihydrogen phosphate), KTP (potassium titanyl phosphate), and lithium niobate. These crystals have the necessary properties of being strongly birefringent (necessary to obtain phase matching, see below), having a specific crystal symmetry, being transparent for both the impinging laser light and the frequency-doubled wavelength, and having high damage thresholds, which makes them resistant against the high-intensity laser light.
Optical phase conjugation
It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a conjugate beam, and thus the technique is known as optical phase conjugation (also called time reversal or wavefront reversal; it is significantly different from retroreflection).
A device producing the phase-conjugation effect is known as a phase-conjugate mirror (PCM).
Principles
One can interpret optical phase conjugation as being analogous to a real-time holographic process. In this case, two of the three input beams interact simultaneously in a nonlinear optical material to form a dynamic hologram, or real-time diffraction pattern, in the material. The third incident beam diffracts off this dynamic hologram and, in the process, reads out the phase-conjugate wave. In effect, all three incident beams interact (essentially) simultaneously to form several real-time holograms, resulting in a set of diffracted output waves that phase up as the "time-reversed" beam. In the language of nonlinear optics, the interacting beams produce a nonlinear polarization within the material, which coherently radiates to form the phase-conjugate wave.
Reversal of wavefront means a perfect reversal of photons' linear momentum and angular momentum. The reversal of angular momentum means reversal of both polarization state and orbital angular momentum. Reversal of orbital angular momentum of optical vortex is due to the perfect match of helical phase profiles of the incident and reflected beams. Optical phase conjugation is implemented via stimulated Brillouin scattering, four-wave mixing, three-wave mixing, static linear holograms and some other tools.
The most common way of producing optical phase conjugation is to use a four-wave mixing technique, though it is also possible to use processes such as stimulated Brillouin scattering.
Four-wave mixing technique
For the four-wave mixing technique, we can describe four beams (j = 1, 2, 3, 4) with electric fields:

Ξj(x, t) = (1/2) Ej(x) e^(i(ωj t − kj·x)) + c.c.,

where Ej are the electric field amplitudes. Ξ1 and Ξ2 are known as the two pump waves, with Ξ3 being the signal wave, and Ξ4 being the generated conjugate wave.
If the pump waves and the signal wave are superimposed in a medium with a non-zero χ(3), this produces a nonlinear polarization field:

PNL = ε0 χ(3) (Ξ1 + Ξ2 + Ξ3)^3,
resulting in generation of waves with frequencies given by ω = ±ω1 ± ω2 ± ω3 in addition to third-harmonic generation waves with ω = 3ω1, 3ω2, 3ω3.
As above, the phase-matching condition determines which of these waves is dominant. By choosing conditions such that ω = ω1 + ω2 − ω3 and k = k1 + k2 − k3, this gives a polarization field:

Pω = (ε0/2) χ(3) E1 E2 E3* e^(i(ωt − k·x)) + c.c.
This is the generating field for the phase-conjugate beam, Ξ4. Its direction is given by k4 = k1 + k2 − k3, and so if the two pump beams are counterpropagating (k1 = −k2), then the conjugate and signal beams propagate in opposite directions (k4 = −k3). This results in the retroreflecting property of the effect.
Further, it can be shown that for a medium with refractive index n and a beam interaction length l, the electric field amplitude of the conjugate beam is approximated by

E4 = (iωl / 2nc) χ(3) E1 E2 E3*,
where c is the speed of light. If the pump beams E1 and E2 are plane (counterpropagating) waves, then

E4(x) ∝ E3*(x);
that is, the generated beam amplitude is the complex conjugate of the signal beam amplitude. Since the imaginary part of the amplitude contains the phase of the beam, this results in the reversal of phase property of the effect.
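A minimal numerical sketch of this phase-reversal property (illustrative, not from the source): a field that picks up an arbitrary aberration phase, is conjugated by an ideal phase-conjugate mirror, and retraces the same aberrator emerges with the aberration cancelled.

import numpy as np

x = np.linspace(-1, 1, 5)
phi = 2.3 * x**2                       # arbitrary aberration phase
E_in = np.exp(1j * 0.7)                # flat unit-amplitude input field

E_ab = E_in * np.exp(1j * phi)         # forward pass through aberrator
E_pc = np.conj(E_ab)                   # ideal phase-conjugate mirror
E_out = E_pc * np.exp(1j * phi)        # return pass through same aberrator

print(np.allclose(E_out, np.conj(E_in)))  # True: aberration cancelled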
Note that the constant of proportionality between the signal and conjugate beams can be greater than 1. This is effectively a mirror with a reflection coefficient greater than 100%, producing an amplified reflection. The power for this comes from the two pump beams, which are depleted by the process.
The frequency of the conjugate wave can be different from that of the signal wave. If the pump waves are of frequency ω1 = ω2 = ω, and the signal wave is higher in frequency such that ω3 = ω + Δω, then the conjugate wave is of frequency ω4 = ω − Δω. This is known as frequency flipping.
Angular and linear momenta in optical phase conjugation
Classical picture
In classical Maxwell electrodynamics a phase-conjugating mirror performs reversal of the Poynting vector:

Sout(r, t) = −Sin(r, t)

("in" means incident field, "out" means reflected field), where S(r, t) = E(r, t) × H(r, t) is proportional to the linear momentum density of the electromagnetic field, g(r, t) = S(r, t)/c^2. In the same way, a phase-conjugated wave has an opposite angular momentum density vector, L(r, t) = r × g(r, t), with respect to the incident field:

Lout(r, t) = −Lin(r, t).
The above identities are valid locally, i.e. in each space point in a given moment for an ideal phase-conjugating mirror.
Quantum picture
In quantum electrodynamics the photon with energy ℏω also possesses linear momentum p = ℏk and an angular momentum whose projection on the propagation axis is Lz = ±ℏℓ, where ℓ is the topological charge (winding number) of the photon and z is the propagation axis. The angular momentum projection on the propagation axis therefore takes the discrete values ±ℏℓ.
In quantum electrodynamics the interpretation of phase conjugation is much simpler than in classical electrodynamics. The photon reflected from a phase-conjugating mirror (out) has opposite directions of linear and angular momenta with respect to the incident photon (in):

pout = −pin,   (Lz)out = −(Lz)in.
Nonlinear optical pattern formation
Optical fields transmitted through nonlinear Kerr media can also display pattern formation, owing to the nonlinear medium amplifying spatial and temporal noise. The effect is referred to as optical modulation instability. This has been observed in photorefractive media, photonic lattices, and photoreactive systems. In the latter case, optical nonlinearity is afforded by reaction-induced increases in refractive index. Examples of pattern formation are spatial solitons and vortex lattices in the framework of the nonlinear Schrödinger equation.
Molecular nonlinear optics
The early studies of nonlinear optics and materials focused on inorganic solids. As the field developed, molecular optical properties were investigated, forming molecular nonlinear optics. The traditional approaches used in the past to enhance nonlinearities include extending chromophore π-systems, adjusting bond length alternation, inducing intramolecular charge transfer, extending conjugation in 2D, and engineering multipolar charge distributions. Recently, many novel directions have been proposed for enhanced nonlinearity and light manipulation, including twisted chromophores, combining a rich density of states with bond alternation, and microscopic cascading of second-order nonlinearity. Owing to these distinct advantages, molecular nonlinear optics has been widely used in the biophotonics field, including bioimaging, phototherapy, and biosensing.
Connecting bulk properties to microscopic properties
Molecular nonlinear optics relates the optical properties of bulk matter to their microscopic molecular properties. Just as the polarizability can be described as a Taylor series expansion, one can expand the induced dipole moment in powers of the electric field:

μ = μ0 + αE + βE^2 + γE^3 + ⋯,

where μ is the induced dipole moment, α is the polarizability, β is the first hyperpolarizability, γ is the second hyperpolarizability, and so on.
Novel nonlinear media
Certain molecular materials can be optimized for optical nonlinearity at both the microscopic and bulk levels. Owing to the delocalization of electrons in π bonds, these electrons respond more easily to applied optical fields and tend to produce larger linear and nonlinear optical responses than electrons in single (σ) bonds. In these systems the linear response scales with the length of the conjugated π system, while the nonlinear response scales even more rapidly.
One of the many applications of molecular nonlinear optics is nonlinear bioimaging. Nonlinear materials such as multi-photon chromophores are used as biomarkers for two-photon spectroscopy, in which the attenuation of the incident light intensity as it passes through the sample is written as

dI/dz = −N δ I^2,
where N is the number of particles per unit volume, I is the intensity of light, and δ is the two-photon absorption cross-section. The resulting signal adopts a Lorentzian lineshape with a cross-section proportional to the difference between the dipole moments of the ground and final states.
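This attenuation law integrates in closed form to I(z) = I0 / (1 + N δ I0 z), which the following sketch evaluates (the density and cross-section values are placeholders, not measured data):

I0, N, delta = 1.0, 1e20, 1e-23   # placeholder values in consistent units

def intensity(z):
    # Closed-form solution of dI/dz = -N * delta * I**2
    return I0 / (1.0 + N * delta * I0 * z)

print(intensity(0.0), intensity(10.0))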
Similar highly conjugated chromophores with strong donor-acceptor character are used because of the large difference between their ground- and excited-state dipole moments, and efforts are currently being made to extend their π-conjugated systems to enhance their nonlinear optical properties.
Common second-harmonic-generating (SHG) materials
Ordered by pump wavelength:
800 nm: BBO
806 nm: lithium iodate (LiIO3)
860 nm: potassium niobate (KNbO3)
980 nm: KNbO3
1064 nm: monopotassium phosphate (KH2PO4, KDP), lithium triborate (LBO) and β-barium borate (BBO)
1300 nm: gallium selenide (GaSe)
1319 nm: KNbO3, BBO, KDP, potassium titanyl phosphate (KTP), lithium niobate (LiNbO3), LiIO3, and ammonium dihydrogen phosphate (ADP)
1550 nm: potassium titanyl phosphate (KTP), lithium niobate (LiNbO3)
See also
Born–Infeld model
Filament propagation
:Category:Nonlinear optical materials
Further reading
Encyclopedia of laser physics and technology, with content on nonlinear optics, by Rüdiger Paschotta
An Intuitive Explanation of Phase Conjugation
SNLO - Nonlinear Optics Design Software
Robert Boyd plenary presentation: Quantum Nonlinear Optics: Nonlinear Optics Meets the Quantum World SPIE Newsroom
References
Optics | Nonlinear optics | Physics,Chemistry | 5,528 |
72,826,846 | https://en.wikipedia.org/wiki/Hydrogen-bonded%20organic%20framework | Hydrogen-bonded organic frameworks (HOFs) are a class of porous polymers in which hydrogen bonds among molecular monomer units provide porosity and structural flexibility. Diverse hydrogen bonding pairs can be used in HOF construction, including identical or non-identical hydrogen bond donors and acceptors. Organic groups commonly used as hydrogen bonding units include carboxylic acids, amides, 2,4-diaminotriazines, and imidazoles. Compared with other organic frameworks, such as COFs and MOFs, the binding forces in HOFs are relatively weak, and the activation of HOFs is more difficult than for the other frameworks, but the reversibility of hydrogen bonds guarantees high crystallinity of the materials. Although the stability and pore-size expansion of HOFs pose potential problems, HOFs still show strong potential for applications in different areas.
An important consequence of the naturally porous architecture of hydrogen-bonded organic frameworks is the adsorption of guest molecules. This characteristic has accelerated the emergence of various applications of different HOF structures, including gas removal/storage/separation, molecular recognition, proton conduction, and biomedical applications.
History
Reports of extended 2D hydrogen-bonding-based porous frameworks can be traced back to the 1960s. In 1969, Duchamp and Marsh reported a 2D interpenetrated nonporous crystal structure with a honeycomb network constructed from benzene-1,3,5-tricarboxylic acid (trimesic acid, or TMA). Ermer then reported an adamantane-1,3,5,7-tetracarboxylic acid (ADTA) based hydrogen-bonded network with interpenetrated diamond topology. Meanwhile, diverse guest-induced hydrogen-bonded frameworks were reported in succession, gradually developing the concept of hydrogen-bonded organic frameworks. Another milestone in the evolution of hydrogen-bonded organic frameworks was set by Chen: in 2011, Chen reported a porous organic framework held together by hydrogen bonding and demonstrated its porosity by gas adsorption for the first time. Since then, numerous HOF structures have been designed and constructed, and various applications of porous frameworks have been attempted with HOFs, whose effectiveness has been proven.
Hydrogen bonding pairs in HOFs
Hydrogen bonds formed among various monomers guarantee the construction of hydrogen-bonded organic frameworks with different assembly architectures. The choice of hydrogen bonding pairs is based on the structural and functional design of the HOF, so different pairs are selected according to the requirements of the system. Hydrogen bonding pairs generally include 2,4-diaminotriazine, carboxylic acid, amide, imide, imidazole, imidazolone and resorcinol groups, among others. Paired with appropriate backbones, the hydrogen bonding pairs exhibit, under each crystallization condition, the specific assembly state that is energetically favored under that condition. To realize 2D or 3D HOFs, monomers with more than one hydrogen bonding pair are generally considered; rigidity and directionality also favor HOF construction.
Backbones of HOF monomer
Rigidity and directionality of the constructional units give HOFs their various pore structures, topologies, and applications, so a proper choice of monomer backbone plays an important role in HOF construction. Backbones not only combine with the hydrogen bonding pairs mentioned above to realize stable structural designs and expanded pore sizes, but also open up more possible HOF topologies. By generating monomers and HOFs from backbones with similar geometry and the same connection pattern, isoreticular expansion of the frameworks becomes a reliable method for expanding pore size effectively. As mentioned, to construct porous and stable HOFs, multiple aspects should be considered simultaneously, such as the rigidity of the backbone, the orientation and binding strength of the hydrogen bonding pairs, and other intermolecular interactions that promote orderly stacking. The design of HOF monomers should therefore focus on their hydrogen-bond orientations and structural rigidity, and on the resulting framework stability and porosity.
Synthetic methods
In principle, HOFs can be crystallized from solvents. However, factors such as solvent type, precursor concentration, and crystallization time and temperature can have a significant influence on the crystallization process. Generally, high concentrations and short crystallization times tend to yield kinetic products, while slowing the crystallization rate can yield thermodynamic crystals. One common method of producing HOF crystals is to slowly evaporate the solvent from the solution, which favors the stacking of the monomers. Another widely used method is to diffuse a low-boiling-point poor solvent into a monomer solution in a higher-boiling-point good solvent, in order to induce the assembly of the monomers. Depending on the crystallization system, other methods have also been applied to HOF construction.
Characterization methods
There are various methods for characterizing HOF materials and their monomers. Nuclear magnetic resonance (NMR) spectroscopy and high-resolution mass spectrometry (HR-MS) are generally used to characterize the synthesis of the monomers. Single-crystal X-ray diffraction (SCXRD) is the most powerful tool for determining the structure of the HOF crystal packing, while powder X-ray diffraction (PXRD) supports the demonstration of pure-phase formation. Gas adsorption and desorption studies analyzed with the Brunauer-Emmett-Teller (BET) method can establish key parameters of HOFs from the adsorption isotherms, such as pore size, specific gas adsorption amount, and surface area. Depending on the application and field of study, diverse other techniques have been applied to the characterization of HOFs.
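As a sketch of how the BET analysis converts an isotherm into a surface area (the isotherm points below are made-up numbers, not data from any reported HOF), one fits the linearized BET equation 1/[v((p0/p) − 1)] = ((c − 1)/(vm c))·(p/p0) + 1/(vm c), then converts the monolayer capacity vm to an area using the N2 cross-section:

import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # p/p0
v_ads = np.array([120., 150., 170., 188., 205.])   # cm3(STP)/g, fictitious

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))            # BET transform
slope, intercept = np.polyfit(p_rel, y, 1)

v_m = 1.0 / (slope + intercept)                    # monolayer capacity, cm3(STP)/g
# 22414 cm3(STP) per mole of gas; N2 cross-section ~0.162 nm^2
area = v_m / 22414.0 * 6.022e23 * 0.162e-18        # m2/g
print(round(area))                                  # ~700 m2/g for this fake isotherm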
Applications
The porous structures and unique properties of HOFs give them good performance in practical applications, including but not limited to gas adsorption, hydrocarbon separation, proton conduction, and molecular recognition.
Gas adsorption
As networks with tailorable pore sizes, HOFs can serve as storage containers for gas molecules of suitable size and interaction. The relatively constrained pore sizes in HOFs can help to store, capture, or separate different small gas molecules, including H2, N2, CO2, CH4, C2H2, C2H4, C2H6 and so on. Mastalerz and Oppel reported a notable 3D HOF built from triptycene trisbenzimidazolone (TTBI) monomers. Because of the molecular rigidity and three-dimensional construction, 1D channels formed through the framework and the surface area was greatly enhanced, to 2796 m2/g as shown by BET analysis. The HOF also presented good adsorption of H2 and CO2: 243 and 80.7 cm3/g at 1 bar, at 77 and 273 K respectively.
CO2 adsorption
As a typical greenhouse gas that can cause serious problems in many respects, carbon dioxide and its capture are of continuing concern. Carbon dioxide is also widely used as a gas feedstock, or emitted as waste gas, in manufacturing and industry, so the storage and separation of CO2 have long been emphasized as important applications. Chen and co-workers reported a structurally transformable HOF with high CO2 adsorption capacity in 2015. N–H···N hydrogen bonds form between the units to assemble the HOF architecture with binodal topology. The CO2 uptake capacity of the HOF reaches 117.1 cm3/g at 273 K.
Hydrocarbon separation
A hydrogen-bonded organic framework for C2H2/C2H4 separation was reported by Chen and coworkers. In the structure of this HOF, each 4,4',4'',4'''-tetra(4,6-diamino-s-triazin-2-yl)tetraphenylmethane unit connects with eight other units via N–H···N hydrogen bonds. Owing to a degree of structural flexibility, the framework takes up C2H2 up to 63.2 cm3/g while the adsorption amount of C2H4 is 8.3 cm3/g at 273 K, showing effective C2H2/C2H4 separation.
Molecular recognition
The non-covalent interactions present in hydrogen-bonded organic frameworks, e.g., hydrogen bonding, π-π interactions and van der Waals forces, are important intermolecular interactions for molecular recognition. The multiple binding sites and adaptable structures also make HOFs good molecular recognition platforms. By exploiting these features, different kinds of recognition have so far been realized, including recognition of gas molecules, fullerenes, aniline, and pyridine.
Optical materials
Some luminescent molecules with large π-conjugated structures are also used for HOF construction. Various luminescent HOFs have therefore been designed and assembled to realize non-covalently controlled luminescence adjustment, which can introduce additional functions to HOF materials. For example, using tetraphenylethylene (TPE) backbones, a series of HOFs showing different emission colors depending on the included solvents have been reported.
Proton conduction
Hydrogen-bonded organic frameworks constructed with proton carriers have been widely used for proton conduction; the hydrogen bonds can also serve as proton sources within the framework to transfer protons. For example, porphyrin-based structures and guanidinium sulfonate salt monomers have been studied and incorporated into HOF designs for proton conduction owing to their appreciable conductivity.
Biological applications
As metal-free porous materials, hydrogen-bonded organic frameworks are also ideal platforms for drug delivery and disease treatment. With proper monomer selection and arrangement, Cao reported a robust HOF that can effectively encapsulate the cancer drug doxorubicin and generate singlet oxygen via an embedded photoactive pyrene moiety, realizing the dual functions of drug release and photodynamic therapy for cancer treatment.
References | Hydrogen-bonded organic framework | Chemistry,Materials_science | 2,175 |
41,043,655 | https://en.wikipedia.org/wiki/Scandium%20bromide | Scandium bromide (ScBr3) is a hygroscopic, water-soluble trihalide of scandium and bromine.
Preparation and properties
ScBr3 is produced through the burning of scandium in bromine gas.
2 Sc(s) + 3 Br2(g) → 2 ScBr3(s)
Scandium bromide can also be prepared by reacting excess hydrobromic acid with scandium oxide, and the hexahydrate can be crystallized from the solution. Thermal decomposition of the hexahydrate yields only scandium oxybromide (ScOBr) and scandium oxide. The anhydrous form can be produced by the reaction of bromine, scandium oxide and graphite under nitrogen.
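For illustration, these two routes balance as follows (the stoichiometry is supplied here as standard bookkeeping, with CO assumed as the oxygen carrier in the carbothermal route; neither equation is given in the source):

Sc2O3 + 6 HBr → 2 ScBr3 + 3 H2O

Sc2O3 + 3 C + 3 Br2 → 2 ScBr3 + 3 CO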
Heating ammonium bromide with scandium oxide or scandium bromide hexahydrate proceeds through an (NH4)3ScBr6 intermediate, which decomposes to give anhydrous scandium bromide.
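A plausible overall balance for this ammonium route (again supplied for illustration, not taken from the source) is:

Sc2O3 + 6 NH4Br → 2 ScBr3 + 6 NH3 + 3 H2O

(NH4)3ScBr6 → ScBr3 + 3 NH4Br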
Uses
Scandium bromide is used for solid state synthesis of unusual clusters such as Sc19Br28Z4, (Z=Mn, Fe, Os or Ru). These clusters are of interest for their structure and magnetic properties.
References
Bromides
Scandium compounds
Metal halides | Scandium bromide | Chemistry | 268 |
61,906,365 | https://en.wikipedia.org/wiki/Librem%205 | The Librem 5 is a smartphone manufactured by Purism that is part of their Librem line of products. The phone is designed with the goal of using free software whenever possible and includes PureOS, a Linux operating system, by default. Like other Librem products, the Librem 5 focuses on privacy and freedom and includes features like hardware kill switches and easily-replaceable components. Its name, with a numerical "5", refers to its screen size, not a release version. After an announcement on 24 August 2017, the distribution of developer kits and limited pre-release models occurred throughout 2019 and most of 2020. The first mass-production version of the Librem 5 was shipped on 18 November 2020.
History
On August 24, 2017, Purism started a crowdfunding campaign for the Librem 5, a smartphone aimed not only to run purely on free software provided in PureOS but to "[focus] on security by design and privacy protection by default". Purism claimed that the phone would become "the world's first ever IP-native mobile handset, using end-to-end encrypted decentralized communication". Purism has cooperated with GNOME in its development of the Librem 5 software. It is planned that KDE and Ubuntu Touch will also be offered as optional interfaces.
The release of the Librem 5 was delayed several times. It was originally planned to launch in January 2019. Purism announced on September 4, 2018 that the launch date would be postponed until April 2019, due to two power management bugs in the silicon and the Europe/North America holiday season. Development kits for software developers, which were shipped out in December 2018 were unaffected by the bugs, since developers normally connect the device to a power outlet rather than rely on the phone battery. In February, the launch date was postponed again to the third quarter of 2019, because of the necessity of further CPU tests.
Specifications and pre-orders, initially priced at $649 (to increase later to $699), were announced in July 2019. On September 5, 2019, Purism announced that shipping was scheduled to begin later that month, but that it would proceed as an "iterative" process. The iterative release plan comprised six "batches" of Librem 5 releases, of which the first four would be limited pre-production models. Each successive batch, given an arboreal code name and a release date, would feature hardware, mechanical, and software improvements. Purism contacted each customer who had pre-ordered to let them choose which batch they would prefer to receive. The pre-mass-production batches, in order of release, were code-named "Aspen", "Birch", "Chestnut", and "Dogwood". The fifth batch, "Evergreen", would be the first mass-production model, while the sixth batch, "Fir", would be the second.
On September 24, 2019, Purism announced that the first batch of limited-production Librem 5 phones (Aspen) had started shipping. A video of an early phone was produced and a shipping and status update was released soon after. However, it was later reported that the Aspen batch had been shipped only to employees and developers. On November 22, 2019, it was reported that the second batch (Birch) would consist of around 100 phones and would be in the hands of backers by the first week of December. In December 2019, Jim Salter of Ars Technica reported "prototype" devices were being received; however, they were not really a "phone" yet. There was no audio when attempting to place a phone call (which was fixed with a software update a few weeks later), and cameras didn't work yet. Reports of the third batch of limited pre-mass-production models (Chestnut) being received by customers and reviewers occurred in January 2020. By May 2020, TechRadar reported that the call quality was fine, though the speaker mode was "a bit quiet", and volume adjustment did not work. According to TechRadar, the 3 to 5-hour battery time and the inability of the phone to charge while turned on was "A stark reminder of the Librem 5's beta status".
On November 18, 2020, Purism announced via press release that they had begun shipping the finished version of the Librem 5, known as "Evergreen". Earlier, in December 2019, Purism had announced that it would offer a "Librem 5 USA" version of the phone for $1999, assembled in the United States for extra supply-chain security. According to Purism CEO Todd Weaver, "having a secure auditable US based supply chain including parts procurement, fabrication, testing, assembly, and fulfillment all from within the same facility is the best possible security story."
Hardware
The Librem 5 features an i.MX 8M Quad Core processor with an integrated GPU which supports OpenGL 3.0, OpenGL ES 3.1, Vulkan 1.0 and OpenCL 1.2 with default drivers; however, since the driver used is the open-source Etnaviv driver, it currently supports only OpenGL 2.1 and OpenGL ES 2.0. It has 3 GB of RAM, 32 GB of eMMC storage, a 13 MP rear camera, and an 8 MP front camera. The left side of the phone features three hardware kill switches, which cut power to the camera and microphone, the Wi-Fi and Bluetooth modem, and the baseband modem. The device uses a USB-C connector for charging. The 144 mm (5.7-inch) IPS display has a resolution of 1440×720 pixels. It also has a 3.5 mm TRRS headphone/mic jack, a single SIM slot, and a microSD card slot.
Battery
The Librem 5 is powered by a lithium-ion battery. The capacity of the battery was 2000 mAh in the earliest development batches, and was increased to 4500 mAh in the mass-production batch. The battery is designed to be user-replaceable, but it is unique to the Librem 5 and cannot be substituted by any other battery type. In addition, Purism ships replacement batteries only within the US unless combined with another device.
Mobile security
The hardware features three kill switches that physically cut off power from the cameras and microphone, Wi-Fi and Bluetooth, and the baseband processor, respectively. Further precautionary measures can be taken with Lockdown Mode, which, in addition to powering off the cameras, microphone, Wi-Fi, Bluetooth and cellular baseband, also cuts power to the GNSS, IMU, and ambient light and proximity sensors. This is possible because these components are not integrated into the system on a chip (SoC) as they are in conventional smartphones. Instead, the cellular baseband and Wi-Fi/Bluetooth components are located on two replaceable M.2 cards, which means that they can be changed to support different wireless standards. The microphone kill switch also prevents the 3.5 mm audio jack from being used for acoustic cryptanalysis.
In place of an integrated mobile SoC found in most smartphones, the Librem 5 uses six separate chips: i.MX 8M Quad, Silicon Labs RS9116, Broadmobi BM818 / Gemalto PLS8, STMicroelectronics Teseo-LIV3F, Wolfson Microelectronics WM8962, and Texas Instruments bq25895.
The downside to having dedicated chips instead of an integrated system-on-chip is that it takes more energy to operate separate chips, and the phone's circuit boards are much larger. On the other hand, using separate components means longer support from the manufacturers than with mobile SoCs, which have short support timelines. According to Purism, the Librem 5 is designed to avoid planned obsolescence and will receive lifetime software updates.
The Librem 5 is the first phone to contain a smartcard reader, in which an OpenPGP card can be inserted for secure cryptographic operations. Purism plans to use OpenPGP cards to implement storage of GPG keys, disk unlocking, secure authentication, a local password vault, protection of sensitive files, user personas, and travel personas.
To promote better security, all the source code in the root file system is free/open source software and can be reviewed by the user. Purism publishes the schematics of the Librem 5's printed circuit boards (PCBs) under the GPL 3.0+ license, and publishes x-rays of the phone, so that the user can verify that there haven't been any changes to the hardware, such as inserted spy chips.
Software
The Librem 5 ships with Purism's PureOS, a Debian GNU/Linux derivative. The operating system uses a new mobile user interface developed by Purism called Phosh, a portmanteau of "phone" and "shell". It is based on Wayland, wlroots, GTK 3, and GNOME. Unlike other mobile Linux interfaces, such as Ubuntu Touch and KDE Plasma Mobile, Phosh is tightly integrated with the desktop Linux software stack, which Purism developers believe will make it easier to maintain in the long term and to incorporate into existing desktop Linux distributions. Phosh has been packaged in a number of desktop distros (Debian, Arch, Manjaro, Fedora and openSUSE) and is used by eight of the sixteen Linux ports for the PinePhone.
The phone is a convergence device: if connected to a keyboard, monitor, and mouse, it can run Linux applications as a desktop computer would. Many desktop Linux applications can run on the phone as well, albeit possibly without a touch-friendly UI.
Purism is taking a distinctive approach to convergence by downsizing existing desktop software to reuse it in a mobile environment. Purism developed the libhandy library (now replaced by Libadwaita) to make GTK software adaptive, so that its interface elements adjust to smaller mobile screens. In contrast, other companies such as Microsoft and Samsung (and Canonical before, with Ubuntu's Unity8) tried to achieve convergence by having separate sets of software for the mobile and desktop PC environments. Most iOS apps, Android apps and Plasma Mobile's Kirigami implement convergence by upsizing existing mobile apps for use in a desktop interface.
Purism claims that the "Librem 5 will be the first ever Matrix-powered smartphone, natively using end-to-end encrypted decentralised communication in its dialer and messaging app".
Purism was unable to find a free/open-source cellular modem, so the phone uses a modem with proprietary hardware, but isolates it from the rest of the components rather than having it integrated with the system on a chip (SoC). This prevents code on the modem from being able to read or modify data going to and from the SoC.
See also
Comparison of open-source mobile phones
List of open-source mobile phones
Microphone blocker
Modular smartphone
PinePhone
Libadwaita
References
External links
Librem 5
Linux-based devices
Mobile Linux
Mobile security
Mobile/desktop convergence
Modular smartphones
Open-source mobile phones
Secure communication
Mobile phones introduced in 2020
Mobile phones with user-replaceable battery
Right to repair | Librem 5 | Technology,Engineering | 2,387 |
1,187,923 | https://en.wikipedia.org/wiki/Eqn%20%28software%29 | Part of the troff suite of Unix document layout tools, eqn is a preprocessor that formats equations for printing. A similar program, neqn, accepted the same input as eqn, but produced output tuned to look better in nroff. The eqn program was created in 1974 by Brian Kernighan and Lorinda Cherry.
The input language used by eqn allows the user to write mathematical expressions in much the same way as they would be spoken aloud. The language is defined by a context-free grammar, together with operator precedence and operator associativity rules. The eqn language is similar to the mathematical component of TeX, which appeared several years later, but is simpler and less complete.
An independent compatible implementation of the eqn preprocessor has been developed by GNU as part of groff, the GNU version of troff. The GNU implementation extends the original language by adding a number of new keywords such as smallover and accent. mandoc, a specialised compiler for UNIX man pages, also contains a standalone eqn parser/formatter.
History
Eqn was written using the yacc parser generator.
Syntax examples
Here is how some examples would be written in eqn (with equivalents in TeX for comparison):
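For instance (an illustrative pairing written in the style the article describes; this particular example is an assumption, not necessarily one of the originals), the quadratic formula can be written in eqn as

x = {-b +- sqrt{b sup 2 - 4ac}} over 2a

with the TeX equivalent

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}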
Spaces are important in eqn; tokens are delimited only by whitespace characters, tildes ~, braces {} and double quotes "". Thus in f(pi r sup 2) the string "f(pi" is treated as one literal token and the closing parenthesis is swept into the superscript, whereas f( pi r sup 2 ) is needed to give the intended f(πr²).
References
External links
Typesetting Mathematics, User's Guide (Second Edition)
eqn
Plan 9 commands | Eqn (software) | Mathematics,Technology | 365 |
13,885,084 | https://en.wikipedia.org/wiki/Bruno%20Souza%20%28programmer%29 | Bruno Souza is a Brazilian Java programmer and open source software advocate. He was President of SouJava, a Brazilian Java User Group he helped establish which became the world's largest.
He was one of the initiators of the Apache Harmony project to create a non-proprietary Java virtual machine, and is known as the "Brazilian JavaMan".
Bruno is a member of the board of directors at the Open Source Initiative representing Affiliate members. This is his second term on the OSI Board. He is also a member of the executive committee of the Java Community Process. In 2010, he co-founded ToolsCloud, a developer tools provider.
References
External links
Bruno Souza's homepage and weblog
Year of birth missing (living people)
Living people
Java (programming language)
Brazilian computer specialists
Members of the Open Source Initiative board of directors | Bruno Souza (programmer) | Technology | 172 |
3,172,978 | https://en.wikipedia.org/wiki/Plasterwork | Plasterwork is construction or ornamentation done with plaster, such as a layer of plaster on an interior or exterior wall structure, or plaster decorative moldings on ceilings or walls. This is also sometimes called pargeting. The process of creating plasterwork, called plastering or rendering, has been used in building construction for centuries. For the art history of three-dimensional plaster, see stucco.
History
The earliest plasters known to us were lime-based. Around 7500 BC, the people of 'Ain Ghazal in Jordan used lime mixed with unheated crushed limestone to make plaster which was used on a large scale for covering walls, floors, and hearths in their houses. Often, walls and floors were decorated with red, finger-painted patterns and designs. In ancient India and China, renders in clay and gypsum plasters were used to produce a smooth surface over rough stone or mud brick walls, while in early Egyptian tombs, walls were coated with lime and gypsum plaster and the finished surface was often painted or decorated.
Modelled stucco was employed throughout the Roman Empire. The Romans used mixtures of lime and sand to build up preparatory layers over which finer applications of gypsum, lime, sand and marble dust were made; pozzolanic materials were sometimes added to produce a more rapid set. Following the fall of the Roman Empire, the addition of marble dust to plaster to allow the production of fine detail and a hard, smooth finish in hand-modelled and moulded decoration was not used until the Renaissance. Around the 4th century BC, the Romans discovered the principles of the hydraulic set of lime, which by the addition of highly reactive forms of silica and alumina, such as volcanic earths, could solidify rapidly even under water. There was little use of hydraulic mortar after the Roman period until the 18th century.
Plaster decoration was widely used in Europe in the Middle Ages where, from the mid-13th century, gypsum plaster was used for internal and external plaster. Hair was employed as reinforcement, with additives to assist set or plasticity including malt, urine, beer, milk and eggs.
14th century
In the 14th century, decorative plasterwork called pargeting was being used in South-East England to decorate the exterior of timber-framed buildings. This is a form of incised, moulded or modelled ornament, executed in lime putty or mixtures of lime and gypsum plaster. During this same period, terracotta was reintroduced into Europe and was widely used for the production of ornament.
15th century
In the mid-15th century, Venetian skilled workers developed a new type of external facing, called marmorino made by applying lime directly onto masonry.
16th century
In the 16th century, a new, highly decorative type of internal plasterwork called scagliola was invented by stuccoists working in Bavaria. This was composed of gypsum plaster, animal glue and pigments, used to imitate coloured marbles and pietre dure ornament. Sand or marble dust, and lime, were sometimes added. In the same century, the sgraffito technique, also known as graffito or scratchwork, was introduced into Germany by Italian artists, who combined it with modelled stucco decoration. This technique was practised in antiquity and was described by Vasari as a quick and durable method for decorating building facades. Layers of contrasting lime plaster were applied and a design scratched through the upper layer to reveal the colour beneath.
17th century
The 17th century saw the introduction of different types of internal plasterwork. Stucco marble was an artificial marble made using gypsum (sometimes with lime), pigments, water and glue. Stucco lustro was another form of imitation marble (sometimes called stucco lucido), in which a thin layer of lime or gypsum plaster was applied over a scored support of lime, with pigments scattered on the surface of the wet plaster.
18th century
The 18th century gave rise to renewed interest in innovative external plasters. Oil mastics introduced in the UK in this period included a "Composition or stone paste" patented in 1765 by David Wark. This was a lime-based mix and included "oyls of tar, turpentine and linseed" among many other ingredients. Another "Composition or cement", including drying oil, was patented in 1773 by Rev. John Liardet, and a similar product was patented in 1777 by John Johnson. These were widely used by the architect Robert Adam, who commissioned George Jackson to produce reverse-cut boxwood moulds (many of them to Adam's designs). Jackson formed an independent company which still produces composition pressings today and retains a very large collection of boxwood moulds.
In 1774, in France, a mémoire was published on the composition of ancient mortars. This was translated into English as "A Practical Essay on a Cement, and Artificial Stone, justly supposed to be that of the Greeks and Romans" and published in the same year. Following this, and as a backlash to the disappointment felt at the repeated failure of oil mastics, water-based renders regained popularity in the second half of the 18th century. Mixes for renders were patented, including a "Water Cement, or Stucco" consisting of lime, sand, bone ash and lime-water (Dr Bryan Higgins, 1779). Various experiments mixing different limes with volcanic earths took place in the 18th century. John Smeaton (from 1756) experimented with hydraulic limes and concluded that the best limes were those fired from limestones containing a considerable quantity of clayey material. In 1796, Revd James Parker patented Parker's "Roman Cement". This was a hydraulic cement which, when mixed with sand, could be used for stucco. It could also be cast to form mouldings and other ornaments. It was, however, of an unattractive brown colour, which needed to be disguised by surface finishes.
19th century
Natural cements were frequently used in stucco mixes during the 1820s. The popularisation of Portland cement changed the composition of stucco, as well as mortar, to a harder material. The development of artificial cements had started early in the 19th century. In 1811, James Frost took out a patent for an artificial cement obtained by lightly calcining ground chalk and clay together. The French engineer Louis Vicat experimented in 1812-1813 with calcining synthetic mixtures of limestone and clay, a product he introduced in 1818. In 1822, in the UK, James Frost patented a further process, similar to Vicat's, producing what he called "British cement". Portland cement, patented in 1824 by Joseph Aspdin, was so called because it was supposed to resemble Portland stone. Aspdin's son William, and later Isaac Johnson, improved the production process. A product very similar to modern Portland cement was available from about 1845, with other improvements taking place in the following years.
Thus, after about 1860, most stucco was composed primarily of Portland cement, mixed with some lime. This made it even more versatile and durable. No longer used just as a coating for a substantial material like masonry or log, stucco could now be applied over wood or metal lath attached to a light wood frame. With this increased strength, it ceased to be just a veneer and became a more integral part of the building structure. Early 19th century rendered façades were colour-washed with distemper; oil paint for external walls was introduced around 1840.
The 19th century also saw the revival of the use of oil mastics. In the UK, patents were obtained for "compositions" in 1803 (Thomas Fulchner), 1815 (Christopher Dihl) and 1817 (Peter Hamelin). These oil mastics, as the ones before them, also proved to be short-lived.
Moulded or cast masonry substitutes, such as cast stone and poured concrete, became popular in place of quarried stone during the 19th century. However, this was not the first time "artificial stone" had been widely used. Coade Stone, a brand name for a cast stone made from fired clay, had been developed and manufactured in England from 1769 to 1843 and was used for decorative architectural elements. Following the closure of the factory in South London, Coade stone stopped being produced, and the formula was lost. By the mid 19th century manufacturing centres were preparing cast stones based on cement for use in buildings. These were made primarily with a cement mix often incorporating fine and coarse aggregates for texture, pigments or dyes to imitate colouring and veining of natural stones, as well as other additives.
Also in the 19th century, various mixtures of modified gypsum plasters, such as Keene's cement, appeared. These materials were developed for use as internal wall plasters, increasing the usefulness of simple plaster of Paris as they set more slowly and were thus easier to use.
Tools and materials
Tools and materials include trowels, floats, hammers, screeds, hawk, scratching tools, utility knives, laths, lath nails, lime, sand, hair, plaster of Paris, a variety of cements, and various ingredients to form color washes.
While most tools have remained unchanged over the centuries, developments in modern materials have led to some changes. Trowels, originally constructed from steel, are now available in a polycarbonate material that allows the application of certain new, acrylic-based materials without staining the finish. Floats, traditionally made of timber (ideally straight-grained, knot-free, yellow pine), are often finished with a layer of sponge or expanded polystyrene.
Laths
Traditionally, plaster was laid onto laths, rather than plasterboard as is more commonplace nowadays.
Wooden laths are narrow strips of straight-grained wood, depending on the availability of species, in lengths of from two to four or five feet to suit the distances at which the timbers of a floor or partition are set. Laths are about an inch wide, and are made in three thicknesses, known as single, lath and a half, and double.
The thicker laths should be used in ceilings, to stand the extra strain (sometimes they were doubled for extra strength), and the thinner variety in vertical work such as partitions, except where the latter will be subjected to rough usage, in which case thicker laths become necessary. Laths are usually nailed with a narrow space between them to form a key for the plaster.
Laths were formerly all made by hand. Most are now made by machinery and are known as sawn laths, those made by hand being called rent or riven laths. Rent laths give the best results, as they split in a line with the grain of the wood, and are stronger and not so liable to twist as machine-made laths, some of the fibers of which are usually cut in the process of sawing.
Laths must be nailed so as to break joint in bays three or four feet wide, with ends butted one against the other. By breaking the joints of the lathing in this way, the tendency for the plaster to crack along the line of joints is diminished and a better key is obtained. Every lath should be nailed at each end and wherever it crosses a joist or stud. Wide timbers should be counter-lathed, that is, have a fillet or double lath nailed along the centre upon which the laths are then nailed. This is done to preserve a good key for the plaster.
Walls liable to damp are sometimes battened and lathed to form an air cavity between the damp wall and the plastering.
Lathing in metal, either in wire or in the form of perforated galvanised sheets, is now extensively used on account of its fireproof and lasting qualities. There are many kinds of this material in different designs, the best known in England being the Jhilmil, the Bostwick, and the Expanded Metal lathing. The two last-named are also widely used in the United States.
Lathing nails are usually of iron, cut, wrought or cast, and in the better class of work they are galvanized to prevent rusting. Zinc nails are sometimes used, but are costly.
Lime plastering
Lime plastering is composed of lime, sand, hair and water in proportions varying according to the nature of the work to be done.
The lime mortar principally used for internal plastering is that calcined from chalk, oyster shells or other nearly pure limestone, and is known as fat, pure, chalk or rich lime. Hydraulic limes are also used by the plasterer, but chiefly for external work.
Perfect slaking of the calcined lime before being used is very important as, if used in a partially slaked condition, it will "blow" when in position and blister the work. Lime should therefore be run as soon as the building is begun, and at least three weeks should elapse between the operation of running the lime and its use.
Hair
Hair is used in plaster as a binding medium, and gives tenacity to the material. Traditionally horsehair was the most commonly used binder, as it was easily available before the development of the motor-car. Hair functions in much the same way as the strands in fiberglass resin, by controlling and containing any small cracks within the mortar while it dries or when it is subject to flexing.
Ox-hair, which is sold in three qualities, is now the kind usually specified; but horsehair, which is shorter, is sometimes substituted or mixed with the ox-hair in the lower qualities. Good hair should be long (in the UK, cow and horse hair of both short and long lengths is used) and left greasy, because the lanolin grease protects against some degradation when the hair is introduced into the highly alkaline plaster. Before use it must be well beaten, or teased, to separate the lumps. In America, goats' hair is frequently used, though it is not so strong as ox-hair. The quantity used in good work is one pound of hair to two or three cubic feet of coarse stuff (in the UK, up to 12 kg per cubic metre). Hair reinforcement in lime plaster is common, and many types of hair and other organic fibres can be found in historic plasters.[4] However, organic material in lime will degrade in damp environments, particularly on damp external renders.[5] This problem has given rise to the use of polypropylene fibres and cellulose wood fibres in new lime renders.[6]
Manila hemp fiber has been used as a substitute for hair. In comparative breaking tests, plaster slabs made with manila hemp fiber sustained the greatest load before breaking, followed by slabs made with sisal hemp, jute, and goats' hair. Another test was made in the following manner. Two barrels of mortar were made up of equal proportions of lime and sand, one containing the usual quantity of goats' hair, and the other Manila fiber. After remaining in a dry cellar for nine months the barrels were opened. It was found that the hair had been almost entirely eaten away by the action of the lime, and the mortar consequently broke up and crumbled quite easily. The mortar containing the Manila hemp, on the other hand, showed great cohesion, and required some effort to pull it apart, the hemp fiber being undamaged.
Sand/aggregate
For fine plasterer's sand-work, special sands are used, such as silver sand, which is used when a light color and fine texture are required. In the United Kingdom this fine white sand is procured chiefly from Leighton Buzzard; also in the UK many traditional plasters had crushed chalk as the aggregate, this made a very flexible plaster suitable for timber-frame buildings.
For external work, Portland cement is the best material on account of its strength, durability and weather-resisting properties, but it should not be used on historic structures that are required to flex and breathe; for these, lime without cement is used.
Sawdust has been used as a substitute for hair, and also instead of sand as an aggregate. Sawdust enables mortar to withstand the effects of frost and rough weather. It is sometimes useful for heavy cornices and similar work, as it renders the material light and strong, and it can help bind the mix and make it go further. The sawdust should be used dry.
Methods
The first coat, or rendering, is mixed in proportions ranging from one part of cement to two of sand, up to one part to five of sand. The thinner finishing or setting coat is worked with a hand float on the surface of the rendering, which must first be well wetted.
External plastering
Stucco is a term loosely applied to nearly all kinds of external plastering, whether composed of lime or of cement. At the present time it has fallen into disfavor, but in the early part of the 19th century a great deal of this work was done. Cement has largely superseded lime for this work. The principal varieties of stucco are common, rough, trowelled and bastard.
Common stucco for external work is usually composed of one part hydraulic lime and three parts sand. The wall should be sufficiently rough to form a key and well wetted to prevent the moisture being absorbed from the plaster.
Rough stucco is used to imitate stonework. It is worked with a hand float covered with rough felt (a stiff bristled brush can also be used), which forms a sand surface on the plaster. Lines are ruled before the stuff is set to represent the joints of stonework.
Trowelled stucco, the finishing coat of this work, consists of three parts sand to two parts fine stuff. A very fine smooth surface is produced by means of the hand float.
Bastard stucco is of similar composition, but less labor is expended on it. It is laid on in two coats with a skimming float, scoured off at once, and then trowelled.
Colored stucco: lime stucco may be executed in colors, the desired tints being obtained by mixing with the lime various oxides. Black and grays are obtained by using forge ashes in varying proportions, greens by green enamel, reds by using litharge or red lead, and blues by mixing oxide or carbonate of copper with the other materials.
Roughcast or pebbledash plastering is a rough form of external plastering in much use for country houses. In Scotland it is termed "harling". It is one of the oldest forms of external plastering. In Tudor times it was employed to fill in between the woodwork of half-timbered framing. When well executed with good material this kind of plastering is very durable.
Roughcasting is performed by first rendering the wall or laths with a coat of well-haired coarse stuff composed either of good hydraulic lime or of Portland cement. This layer is well scratched to give a key for the next coat. The second coat is also composed of coarse stuff knocked up to a smooth and uniform consistency. Two finishing techniques can be used:
dry dash: while the first coat is still soft, gravel, shingle or other small stones are evenly thrown on with a small scoop and then brushed over with thin lime mortar to give a uniform surface. The shingle is often dipped in hot lime paste, well stirred up, and used as required.
wet dash: the traditional roughcast, or harling. The scratch or undercoat is left to cure, and for the final coat the gravel or aggregate is mixed with the lime and sand and thrown on with the plastering scoop.
Sgraffito (scratched ornament)
Sgraffito is the name for scratched ornament in plaster. Scratched ornament is the oldest form of surface decoration, and is much used on the continent of Europe, especially in Germany and Italy, in both external and internal situations.
Properly treated, the work is durable, effective and inexpensive. A first coat or rendering of Portland cement and sand, in the proportion of one to three, is laid on about an inch thick; then follows the color coat, sometimes put on in patches of different tints as required for the finished design. When this coat is nearly dry, it is finished with a thin, smooth skimming of Parian, selenitic or other fine cement or lime, only as much as can be finished in one day being laid on.
Then by pouncing through the pricked cartoon, the design is transferred to the plastered surface. Broad spaces of background are now exposed by removing the finishing coat, thus revealing the colored plaster beneath, and following this the outlines of the rest of the design are scratched with an iron knife through the outer skimming to the underlying tinted surface.
Sometimes the coats are in three different colors, such as brown for the first, red for the second, and white or grey for the final coat. The pigments used for this work include Indian red, Turkey red, Antwerp blue, German blue, umber, ochre, purple brown, bone black or oxide of manganese for black. Combinations of these colors are made to produce any desired tone.
Coats
Plaster is applied in successive coats or layers on walls or lathing, and the work gains its name from the number of these coats.
One-coat work is the coarsest and cheapest class of plastering, and is limited to inferior buildings, such as outhouses, where merely a rough coating is required to keep out the weather and draughts. This is described as render on brickwork, and lath and lay or lath and plaster one coat on studding.
Two-coat work is often used for factories or warehouses and the less important rooms of residences. The first coat is of coarse stuff finished fair with the darby float and scoured. A thin coat of setting stuff is then laid on, and trowelled and brushed smooth. Two-coat work is described as render and set on walls, and lath, plaster and set, or lath, lay and set on laths.
Three-coat work is usually specified for high specification work. It consists, as its name implies, of three layers of material, and is described as render, float and set on walls and lath, plaster, float and set, or lath, lay, float and set, on lathwork. This makes a strong, straight, sanitary coating for walls and ceilings.
The process for three coat work is as follows:
For the first coat a layer of well-haired coarse stuff, about 1 inch thick, is put on with the laying trowel. This is termed "pricking up" in London, and in America "scratch coating". It should be laid on diagonally, each trowelful overlapping the previous one. When on laths the stuff should be plastic enough to be worked through the spaces between the laths to form a key, yet so firm as not to drop off. The surface while still soft is scratched with a lath to give a key for the next coat. In Scotland the succeeding part of the process is termed "straightening" and in America "browning"; it is performed when the first coat is dry, so as to form a straight surface to receive the finishing coat.
The second or "floating" coat is 1/4 to 3/8 inch thick. Four operations are involved in laying it: forming the screeds; filling in the spaces between the screeds; scouring the surface; and keying the face for finishing.
Wall screeds are plumbed and ceiling screeds leveled. Screeds are narrow strips of plastering, carefully plumbed and leveled, so as to form a guide upon which the floating rule is run, thus securing a perfectly horizontal or vertical surface, or, in the case of circular work, a uniform curve.
The filling in, or flanking, consists of laying the spaces between the screeds with coarse stuff, which is brought flush with the level of the screeds with the floating rule.
The scouring of the floating coat is of great importance, for it consolidates the material, and, besides hardening it, prevents it from cracking. It is done by the plasterer with a hand float that he applies vigorously with a rapid circular motion, at the same time sprinkling the work with water from a stock brush in the other hand. Any small holes or inequalities are filled up as he proceeds. The whole surface should be uniformly scoured two or three times, with an interval between each operation of from six to twenty-four hours. This process leaves the plaster with a close-grained and fairly smooth surface, offering little or no key to the coat that is to follow.
To obtain proper cohesion, however, a roughened face is necessary, and this is obtained by keying the surface with a wire brush or nail float, that is, a hand float with the point of a nail sticking through and projecting about 1/8 inch; sometimes a point is put at each corner of the float.
After the floating is finished to the walls and ceiling, the next part of internal plastering is the running of the cornice, followed by the finishing of the ceiling and walls.
The third and final coat is the setting coat, which should be about 1/8 inch thick. In Scotland it is termed the "finishing coat", and in America the "hard finish coat" or "putty coat". Setting stuff should not be applied until the floating is quite firm and nearly dry, but it must not be too dry or the moisture will be drawn from the setting stuff.
The composition of an interior three coat plaster:
The coarse stuff applied as the first coat is composed of sand and lime, usually in proportions approximating to two to one, with hair mixed into it in quantities of about a pound to two or three cubic feet of mortar. It should be mixed with clean water to such a consistency that a quantity picked up on the point of a trowel holds well together and does not drop.
Floating stuff is of finer texture than that used for pricking up, and is used in a softer state, enabling it to be worked well into the keying of the first coat. A smaller proportion of hair is also used.
Fine stuff mixed with sand is used for the setting coat. Fine stuff, or lime putty, is pure lime that has been slaked and then mixed with water to a semi-fluid consistency, and allowed to stand until it has developed into a soft paste.
For use in setting it is mixed with fine washed sand in the ratio of one to three.
For cornices and for setting when the second coat is not allowed time to dry properly, a special compound must be used. This is often gauged stuff, composed of three or four parts of lime putty and one part of plaster of Paris, mixed up in small quantities immediately before use. The plaster in the material causes it to set rapidly, but if it is present in too large a proportion the work will crack in setting.
The hard cements used for plastering, such as Parian, Keene's, and Martin's, are laid generally in two coats, the first of cement and sand 1/2 to 3/4 inches thick, the second or setting coat of neat cement about 1/8 inch thick. These and similar cements have gypsum as a base, to which a certain proportion of another substance, such as alum, borax or carbonate of soda, is added, and the whole baked or calcined at a low temperature. The plaster they contain causes them to set quickly with a very hard smooth surface, which may be painted or papered within a few hours of its being finished.
In Australia, plaster or cement render applied to external brickwork on dwellings or commercial buildings can be one or two coats. In two-coat render a base coat is applied with a common mix of 4 parts sand to one part cement and one part dehydrated lime, with water to make a consistent mortar. Render is applied using a hawk and trowel and pushed on about 12 mm thick to begin. For two-coat work, some plasterers apply two full-depth bands of render (one at the base of the wall and one around chest height) which are screeded plumb and square and allowed to dry while the first coat is applied over the remaining exposed wall. The render is then scratched to provide a key for the second coat. This method allows the rest of the wall to be rendered and screeded off without the need to continually check whether the second coat is plumb. Alternatively, both coats can be applied with the plasterer using a t-bar to screed the final coat until it is plumb, straight and square. The first method is generally used where quality of finish is at a premium; the second method is quicker but can be several millimetres out of plumb.

The second coat can be a slightly weaker mix (5/1/1), or the same as the base coat, perhaps with a water-proofer added to the mixing water to minimize efflorescence (rising of salts). Some plasterers use lime putty in the second coat instead of dehydrated lime. The mortar is applied about 5 mm thick and, when the render hardens, is screeded off straight. A wood or plastic float is used to rub down the walls. Traditionally, water is splashed on the walls using a coarse horsehair plasterer's brush, followed immediately by rubbing the float in a circular or figure-8 motion, although a figure of 8 can leave marks. Many modern plasterers use a hose with a special nozzle giving a fine mist spray to dampen walls when rubbing up (using a wood float to bring a consistent finish). Using a hose brings a superior finish and is more consistent in colour, as there is more chance of catching the render before it hardens too much. After the work area is floated, the surface is finished with a wet sponge using the same method as floating with a wood float, bringing sand to the surface to give a smooth, consistent finish.
Materials used in the render are commonly local sands with little clay content and fine to coarse grains. Sand finish is common for external render and may be one or two coats. Plasterers use a t-bar to screed the walls until they are plumb, straight and square. Two-coat work, although more expensive, is superior, as it gives a more consistent finish and has less chance of becoming drummy or cracking. Drumminess occurs when the render doesn't bond completely with the wall, either because the wall is too smooth, a coat is too thick, or the coat is being floated when the render has hardened too much, leaving an air space that makes a drumming sound when a metal tool is rubbed over it.
For internal walls, two coats is the standard, following the same method as for external rendering but with a weaker mix of five or six sand to one cement and one lime. However, instead of being finished with a sponge, the second coat is left rough and is sometimes scored by nails inserted in the float. After drying, the surface is then scraped to remove loose grains of sand before plastering. If the walls are concrete, a splash coat is needed to ensure bonding. A splash coat is a very wet mix of two parts cement to one part sand that is "splashed" on the wall with the plasterer's brush until the wall is covered. Special mixes are sometimes required for architectural or practical reasons; for example, a hospital's X-ray room will be rendered with a mix containing barium sulfate to make the walls impervious to X-rays.
Moldings
Plain, or unenriched, moldings are formed with a running mold of zinc cut to the required profile, a process that has remained the same for over 200 years.
For a cornice molding two running rules are usual, one on the wall, the other on the ceiling, upon which the mold is worked to and fro by one workman, while another man roughly lays on the plaster to the shape of the molding. The miters at the angles are finished off with joint rules made of sheet steel of various lengths, three or four inches (76–102 mm) wide, and about one-eighth inch thick, with one end cut to an angle of about 30°. In some cases the steel plate is let into a stock or handle of hardwood.
Enrichments may be moldings added after the main outline molding is set, and are cast in molds made of gelatin or plaster of Paris.
Cracks
Cracks in plastering may be caused by settlement of the building, by the use of inferior materials or by bad workmanship.
Even where none of these causes is present, cracks may yet ensue from too-rapid drying of the work: from laying plaster on dry walls, which suck from the composition the moisture required to enable it to set; from the application of external heat or the heat of the sun; from laying a coat upon one which has not properly set, the cracking in this case being caused by unequal contraction; or from the use of too small a proportion of sand.
In older properties, hairline cracks in plastered ceilings can occur due to minor deflection / movement of timber joists which support the floor above.
Traditionally, crack propagation was arrested by stirring chopped horsehair thoroughly into the plaster mix.
Slabs
For partitions and ceilings, plaster slabs are used for a quick finish. For ceilings, the metal lathing requires simply to be nailed to the joists, the joints being made with plaster and the whole finished with a thin setting coat or slab. In some cases, with fireproof ceilings for instance, the metal lathing is hung on wire hangers so as to allow a space of several inches between the soffit of the concrete floor and the ceiling. For partitions, metal laths are grouted in with semi-fluid plaster. Where very great strength is required, the work may be reinforced by small iron rods through the slabs. This forms a very strong and rigid partition which is at the same time fire-resisting and of light weight, and when finished measures only two to four inches (51–102 mm) thick. So strong is the result that partitions of this class only two or three inches (51–76 mm) thick were used for temporary cells for prisoners at Newgate Gaol during the rebuilding of the new sessions house in the Old Bailey in London.
The slabs may be obtained either with a keyed surface, which requires finishing with a setting coat when the partition or ceiling is in position, or a smooth finished face, which may be papered or painted immediately the joints have been carefully made.
Fibrous plaster
Fibrous plaster is given by plasterers the suggestive name "stick and rag", and this is a rough description of the material, for it is a fibrous material composed of plaster laid upon a backing of canvas stretched on wood. It is much used for moldings, circular and enriched casings to columns and girders, and ornamental work, which is worked in the shop and fixed in position.
Desachy, a French modeler, took out in 1856 a patent for "producing architectural moldings, ornaments and other works of art, with surfaces of plaster," with the aid of plaster, glue, wood, wire, and canvas or other woven fabric.
The modern use of this material may be said to have started then, but the use of fibrous plaster was known and practiced by the Egyptians long before the Christian era; for ancient coffins and mummies still preserved prove that linen stiffened with plaster was used for decorating coffins and making masks. Cennino Cennini, writing in 1437, says that fine linen soaked in glue and plaster and laid on wood was used for forming grounds for painting.
Canvas and mortar were in general use in Great Britain up to the middle of the 20th century. This work is also much used for temporary work, such as exhibition buildings.
Plastering
Modern interior plastering techniques
There are two main methods used in the US for constructing the interior walls of modern homes: plasterboard (also called drywall) and veneer plastering.
In plasterboard construction a specialized form of sheet rock known as "greenboard" (because the outer paper coating is greenish) is screwed onto the wall-frames (studs) of the home to form the interior walls. At the place where the edges of two wallboards meet there is a seam. These seams are covered with mesh tape, and then the seams and the screw heads are concealed with drywall compound to make the wall seem one uniform piece. The compound is a thick paste. Later this is painted or wallpapered over to hide the work. This process is typically called "taping", and those who work with drywall are known as "tapers".
Veneer plastering covers the entire wall with a thin coat of liquid plaster; it uses a great deal of water and is applied very wet. The walls intended to be plastered are hung with "blueboard" (named for the industry standard of the outer paper being blue-grey in color). This type of sheet rock is designed to absorb some of the moisture of the plaster, helping the plaster cling better before it sets.
Veneer plastering is a one-shot one-coat application; taping usually requires sanding and then adding another coat, since the compound shrinks as it dries.
Traditional plastering
The plasterer usually shows up after the hangers have finished building all the internal walls, by attaching blueboard over the frames of the house with screws. The plasterer is usually a subcontractor working in crews that average about three veterans and one laborer. The job of the laborer is to set up ahead of and clean up behind the plasterers, so they can concentrate on spreading the "mud" on the walls.
Laborer's tasks
Debris left on the floors by the "hanging" crew must be removed, both so that floor paper can be set down and to remove any tripping hazards.
Cover the floors with tar or brown paper since plaster can stain or be hard to remove from subflooring plywood.
Run hoses and extension cords and set up job lights.
Cover all seams with mesh tape, as well as any large gaps around outlets caused by poor roto-zip work. Gouge out any bubble in the wallboard caused by broken sheetrock under the paper and cover the holes with mesh tape. Remove any loose screws (flies) left where the hanger missed the underlying frame.
Cover all windows and doors with plastic sheets and masking tape to protect the wood of their frames and save on cleaning. If any plumbing fixtures or wall plugs have been installed they are also covered, as well as the bathtubs and showers.
Set up for the next mix. As soon as the table is cleared the laborer is given instructions on how many bags will be needed, as well as the next room to be worked in. The table typically consists of folding legs upon which is set a square board of wood, covered in a plastic sheet, upon which the plaster is placed in a large pile at the center.
Mixing the product. The mixing barrel is usually pre-filled to a certain level with water, since it can take some time to fill. The amount of water is usually estimated (with a margin of error leaning towards too little), based on the number of bags planned for the mix. The estimation is not difficult for an experienced plasterer, who knows how many sheets he can typically cover, that one bag usually covers 2½ to 3 sheets, and that 5 gallons of water are needed for one standard 50-pound bag. With a permanent crew that normally does the same amount per mix, one can simply fill the barrel to a known cut-off point.
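As a worked example of these rules of thumb (using only the figures quoted above), a room hung with twelve sheets calls for 12 ÷ 3 = 4 bags, and therefore about 4 × 5 = 20 gallons of water in the barrel.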
Once the mix is set up and the plasterers are ready, they instruct the laborer to start dumping the bags in the water barrel, while intermittently running the mixing drill. Once all bags are in the barrel, more water is slowly added until the plaster is of proper consistency, and it is then thoroughly mixed. Before the mixing is completed, a margin trowel (or margin for short) is scraped along the inside wall of the barrel to knock off clinging unmixed clumps (known as cutting in) to be further mixed until all is homogeneous.
While mixing, the drill is slowly brought up and down and follows the edge of the barrel in a circular motion to drag the top of the mix down and ensure an even consistency throughout. Care is taken not to allow the drill's paddle to hit the bottom or sides of the barrel, as this can scrape off plastic bits that end up in the mix. At a certain point before the mixing is done, a margin trowel is again used to scrape any clinging dry plaster into the rest of the mix; typically this is when the accelerator, if used, is added. Mixing can be fatiguing, in that the drill tends to be heavy and the mixer must also fight the torque of the paddle.
Shovel the mix onto the table. The mixing barrel must be emptied as soon as possible, as the plaster will set faster in the barrel than on the table; but the table cannot be overfilled, or it may tip, or plaster will spill off the sides and splatter when it hits the floor. While shoveling, care must also be taken not to splatter any plaster onto nearby walls.
Clean up the mix barrel. This is done outside with a hose and nozzle. If any plaster remains, it can contaminate the next mix with "rocks" that greatly vex the plasterers as they get dragged across the walls; the contamination also causes the plaster to set much quicker.
Final clean up. This includes rolling up all paper flooring in finished rooms, knocking the plaster out of plug outlet holes with a drywall hammer/hatchet, taking down any masking tape and plastic, cleaning up any plaster that has splattered onto the floor, etc.
Plasterer's tasks
Normally the contractor has already supplied all the bags of Gypsum plaster that will be needed, as well as any external supply of water if the house is not yet connected. The plastering crew needs to bring their own tools and equipment and sometimes supply their own bead.
The tasks that the plasterer is usually expected to accomplish are as follows.
Hang cornerbead
The plasterer usually must first staple or tack cornerbead onto every protruding (external) corner of the inside of the house. Care is taken to make sure the bead makes the wall look straight, which is more a skill of the eye than anything else.
"Bead" comes in many styles, ranging from wire mesh attached by staples to heavier metal grades that need to be tacked on with nails. Plastic varieties also exist.
The bead must be measured and cut to size; care is taken not to bend or warp it. In places where more than one corner meets, the beads' ends are cut at an angle and the two or more tips are placed as close together as allowable, touching but not overlapping. The bead is completely covered with plaster, as is the rest of the wall, and the plaster also helps to hold it firm. The finished product leaves only a small exposed metal strip at the protrusion of the corner, which gets covered when the wall is painted. This leaves a clean, straight-looking corner.
An alternative method, seen in older houses, of forming a rounded or bullnosed corner uses a quirked wooden staff bead. The staff bead, a 1-inch dowel with part of the back shaved flat, is set on the external corner by the joiner on site, fastened to wooden plugs set into the brick/block seams, or to the wood frame. Plaster is run up to the staff bead and then cut back locally to the bead, or "quirked", to avoid a weak feather edge where the plaster meets the bead.
In architecture a quirk is a small V-shaped channel used to insulate and give relief to a convex rounded moulding. To create the plastered corner, backing coat (browning) is plastered up to the staff bead, then the quirk is cut into the backing coat a little larger than the finished size. When the top skimming coat is applied, again the bead is fully skimmed in and then, using a straight edge, the quirk is re-cut to the finished depth, usually on an approximate 45 degree angle into the bead. The quirk will hide the eventual small crack that will form between the staff bead and plaster.
Set up tools
The plasterer needs to fill a 5-gallon bucket partway with water. From this bucket he hangs his trowel or trowels and places into it various tools.
Normally a plasterer has one trowel for "laying on" (the process of placing mud onto the wall).
Some then keep an older trowel that has a decent bend in it (a "banana" curve) for "texturing", if called for by the homeowner. A lay-on trowel tends to be too flat for this, and the vacuum caused by the water can stick it to the wall, forcing the plasterer to tear it off and rework the area.
Finally, one may have a brand-new trowel, "not yet broken in", to be used for "grinding": when the plaster is nearly hardened, smoothing out any bumps and filling in any small dips (cat faces) to make the wall look like a uniform sheet of glossy white plaster.
Most plasterers have their own preference for the size of the trowel they use. Some wield trowels as large as 20 inches long, but the norm seems to be 16"×5".
Into the bucket also goes a large brush used to splash water onto the wall and to clean his tools, a paint brush for smoothing corners, and a corner bird for forming corners.
These tool buckets are first kept near the mix table and then as the plaster starts to set are moved closer to the wall that is being worked on. Time becomes a big factor here as once the plaster starts to harden (set) it will do so fairly rapidly and the plasterer has a small margin of error to get the wall smooth.
Onto the mixing table the plasterer usually sets his "hawk" so it will be handy when he needs to grab it and to keep dirt off of it. Any debris in the plaster can become a major nuisance.
Plaster tops or bottom?
Plasterers will typically divide a room (especially one with large or high-ceilinged walls) into top and bottom. The one working on top will do from the ceiling's edge to about belly height, working off a milk crate for a standard-height ceiling or off stilts for 12-foot-high rooms. For cathedral ceilings or very high walls, staging is set up and one works topside, the others further below.
Clean up before they finish a job
Typically done with the laborer. No plaster globs may be left on the floors, walls or corner bead edges (they will show up if painted and will interfere with flooring and trim). Remove or neatly stack all trash.
Inspection
All rooms and walls are inspected for cracking and dents or scratches that may have been caused from others bumping into the walls. They are also inspected to make sure no bumps are left on the walls from splashed plaster or water. All rooms are checked to make sure all plaster is knocked out of the outlets so the electrician can install the sockets and to make sure no tools are left behind. This leaves the walls ready for the painters and finishers to come in and do their trade.
Interior plastering techniques
Smooth
The home owner and the plasterer's boss will usually decide beforehand what styles they will use in the house. Typically walls are smooth and sometimes ceilings. Usually a homeowner will opt to have the ceilings use a "texture" technique as it is much easier, faster, and thus cheaper than a smooth ceiling.
The plasterer quotes prices to the contractor or homeowner before work begins, based on the techniques to be used and the board feet to be covered. The board footage is obtained by the hangers, or estimated by the head subcontractor by counting the wallboards, which come in industry-standard lengths of 8 to 12 feet. He then adds in extra expenses for soffits and cathedral ceilings.
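For example, assuming the usual 4-foot sheet width, a job hung with 100 twelve-foot boards works out to roughly 100 × 4 × 12 = 4,800 board feet, before any extra allowance for soffits and cathedral ceilings.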
Ceiling second or first
Typically if the ceiling is to be smooth it is done first, before the walls. If it is to be textured, it is done after the walls.
The reason for this is that invariably when a ceiling is being worked on plaster will fall and splash onto the walls. However a texture mix doesn't need to be smoothed out when it starts to set:
thus a retardant such as cream of tartar or sugar can be used to prolong the setting time, and any splashed plaster is easily scraped off the walls.
and since time is not as constraining a factor on textured ceilings, a large mix or back-to-back mixes can be done and all ceilings covered at the same time.
another reason is that a bird is usually run along the top corner after doing a smooth ceiling; it is then easier to maintain this edge by doing the wall last. A textured ceiling normally doesn't need to be birded, only blended in with a very wet paint brush; in this case the wall is done first and the corner formed with the bird.
Scratching
The first thing the plasterer tends to do is go over all the mesh-taped seams of the walls he is about to cover with a very thin swatch. The wallboard draws moisture out of this strip, so when the plasterer goes over it again while doing the rest of the wall it will not leave an indented seam that needs further reworking.
He then fills in the area near the ceiling so he will not have to stretch to reach it during the rest of the wall, and he forms the corner with his bird. This saves much-needed time, as the process is a race against the chemical reaction.
Laying on
From the mix table the plasterer scoops some "mud" onto the center of his hawk with his trowel. Holding the hawk in his off-hand and his trowel in his primary hand, the plasterer then scoops a bulging roll of plaster onto his trowel. This takes a bit of practice to master, especially with soupy mixes.
Then, holding the trowel parallel to the wall and at a slight angle of the wrist, he tries to roll the plaster onto the wall uniformly, in a manner similar to using a squeegee. He starts about an inch above the floor and works his way upwards to the ceiling. Care is taken to be as uniform as possible, as this helps in the finishing phase.
Knocking down
Depending on the setting time of the plaster, once the moisture of the plaster starts to be drawn out by the board, a second pass is made. This is called knocking down; it is much like applying paint with a roller in wrist action and purpose: to smooth out any lines and fill in any major voids that would make extra work once the plaster starts to truly set. Very little pressure is applied, and the trowel is kept relatively flat towards the wall.
Setting
Sometimes an accelerant will be added to a mix to shorten the delay between the initial mixing phase and when the plaster starts to set. This is normally done on cold days when setting is delayed, or for small jobs to minimize the wait.
Once the plaster is on the wall and starts to set (this can be determined by the table that sets first), the plasterer gingerly sprinkles water onto the wall; this helps to stall the setting and to create a slip. He then uses his trowel and often a wetted felt brush held in the opposite hand and lightly touching the wall ahead of the trowel to work this slip into any small gaps (known as "catfaces") in the plaster as well as smooth out the rough lay-on and flatten any air bubbles that formed during setting.
This is a crucial time, because if the wall gets too hard it is nearly impossible to fill in any gaps, as the slip will no longer set with the wall and will instead just dry and fall out. This leads to the need for what is called "grinding": one must go over the hard wall again and again trying to smooth it out, and any major catfaces must be filled in with a contour putty or joint compound, or reworked by blending in a fresh, thin coat.
The finished wall will look glossy and uniformly flat and is smooth to the touch. After a few days it will become chalky white and can then be painted over.
Mix
From the time the bags are dumped into the barrel to when the wall is completely set is called a mix. Depending on the technique used and whether accelerant or retardant is added, a mix typically lasts about two hours.
The final moments are the most frantic if the finish is to be smooth or if the mix sets quicker than anticipated.
If this happens it is said the mix has "snapped"; this is normally due to using old product or to the weather (humidity or hot days can cause plaster to set quicker). Normally only three or four mixes are done in a day, as plastering is very tiring and not as effective under artificial lighting in the months with early dusk.
Seasons
Plastering is done year round but unique problems may arise from season to season. In the summer, the heat tends to cause the plaster to set faster. The plaster also generates its own heat and houses can become quite hellish. Typically the plaster crew will try to arrive at the house well before dawn.
In winter months, short days cause the need for artificial lighting. At certain angles these lights can make even the smoothest wall look like the surface of the moon. Another dilemma in the winter months is the need to use propane jet heaters (which can stain the plaster yellowish but do not otherwise hurt it), not just to keep the plasterers warm but also to prevent the water in the mix from freezing and forming ice crystals before the plaster has time to set. Also, if the water hose is not thoroughly drained before leaving, it can freeze overnight and be completely stopped up in the morning.
Textured
Texturing is usually reserved for closets, ceilings and garage walls.
Typically a retarding agent is added to the mix; this is normally cream of tartar ("dope" in the plasterer's jargon), and care must be taken with the amount added. Too much and the mix may never set at all. The amount used is often estimated, much the way one adds a dash of salt to a recipe: a small scoop of retarder, dependent on the size of the mix. Retardant is added so that larger mixes can be made, since the texture technique doesn't require the plasterer to wait until the plaster starts to set before working it.
The lay-on phase is the same as for smooth work, but the coat is applied thicker. Once the coat is on uniformly, the plasterer goes back and birds his corners. Staying away from the corner, he then takes a trowel with a nice banana curve in it and starts to run it over the wall in a figure-eight or S pattern, making sure to cross all areas at least once, adding a little extra plaster to his trowel if needed. The overall effect is layers of paint-like swaths over the whole of the ceiling or wall. He can then just walk away and let it set, with care taken not to leave any globs and to make sure the corners look smooth and linear.
If a wall is to be smooth and the ceiling textured, typically the wall is done first, then the ceiling after the wall has set. Instead of rebirding the ceiling (which would have been done when the wall was laid on), a clean trowel is held against the wall and its corner is run along the ceiling to "cut it in" and clean the wall at the same time. This line is then smoothed with a paintbrush to make the transition seamless.
Sponge
The sponge (technically called a float) has a circular form and rough surface; it is fixed to a backing with a central handhold and is roughly the size of a standard trowel. Sponging is a variant texture technique used normally on ceilings and sometimes in closets. Typically when using a sponge, sand is added to the mix and the technique is called sand-sponge.
Care must be taken not to stand directly under the trowel when doing this, as it is very unpleasant, and dangerous, to get a grain of sand in the eye, which is compounded by the irritation from the lime. This combination can easily scratch the eye.
The lay-on and mix are the same as with regular texturing. However, after a uniform and smooth coat is placed on the ceiling and the edges are cut in, a special rectangular sponge with a handle is run across the ceiling in overlapping and circular motions. This takes some skill and practice to do well.
The overall look is a fish-scale pattern on the ceiling, closet wall, etc. Even though retarder is typically used, care must be taken to clean out the sponge thoroughly when finished, as any plaster that hardens inside it will be impossible to remove.
Ceilings
Stilts are often required to plaster most ceilings, and a ceiling is typically harder to lay on and work than walls. For short ceilings one can also work from milk crates. The difficulty of working upside down often results in plaster bombs splattering on the floors, walls and people below.
This is why smooth ceilings, that use no retardant and sometimes even accelerant, are done before the walls.
Retarded plaster can easily be scraped off a smooth plaster wall when wet. Any splatters from a smooth ceiling can easily be scraped off bare blueboard but not from an already plastered wall. Care must be taken when standing under your trowel or another plasterer.
The general difficulty of working a smooth ceiling fetches a higher cost. The technique is the same as a smooth wall but at an awkward angle for the plasterer.
Tools of the trade
steel straight edge (used for leveling rendered walls and lining plasterboard)
Examples
In England, fine examples of plasterwork interiors of the early modern period can be seen at Chastleton House, (Oxfordshire), Knole House, (Kent), Wilderhope Manor (Shropshire), Speke Hall, (Merseyside), and Haddon Hall, (Derbyshire).
Some examples of outstanding extant historical plasterwork interiors are found in Scotland, where the three finest specimens of interior plasterwork are elaborate decorated ceilings from the early 17th century at Muchalls Castle, Glamis Castle and Craigievar Castle, all of which are in the northeast region of that country.
The craft of modelled plasterwork, inspired by the style of the early modern period, was revived by the designers of the Arts and Crafts movement in late-19th- and early-20th-century England. Notable practitioners were Ernest Gimson, his pupil Norman Jewson, and George P. Bankart, who published extensively on the subject. Examples are preserved today at Owlpen Manor and Rodmarton Manor, both in the Cotswolds.
Modern ornate fibrous plasterwork by the specialist company of Clark & Fenn can be seen at the Theatre Royal, Drury Lane, the London Palladium, the Grand Theatre Leeds, Somerset House, The Plaisterers' Hall and St. Clement Danes.
Corrado Parducci was a notable plaster worker in the Detroit area during the middle half of the 20th century. Probably his best known ceiling is located at Meadow Brook Hall.
See also
References
Architectural elements
Building engineering
Interior design | Plasterwork | Chemistry,Technology,Engineering | 12,385 |
48,689,724 | https://en.wikipedia.org/wiki/Byzantine%20tower%20of%20Biccari | The Byzantine tower of Biccari is a building located in the city center of Biccari, a town in the Province of Foggia in Italy.
External links
Byzantine tower of Biccari on Biccari official web site.
Towers in Italy
Byzantine military architecture
Byzantine Italy | Byzantine tower of Biccari | Engineering | 56 |
2,796,079 | https://en.wikipedia.org/wiki/Electrical%20enclosure | An electrical enclosure is a cabinet for electrical or electronic equipment to mount switches, knobs and displays and to prevent electrical shock to equipment users and protect the contents from the environment. The enclosure is the only part of the equipment which is seen by users. It may be designed not only for its utilitarian requirements, but also to be pleasing to the eye. Regulations may dictate the features and performance of enclosures for electrical equipment in hazardous areas, such as petrochemical plants or coal mines. Electronic packaging may place many demands on an enclosure for heat dissipation, radio frequency interference and electrostatic discharge protection, as well as functional, esthetic and commercial constraints.
Standards
Internationally, IEC 60529 defines the IP codes (ingress protection ratings) used to classify enclosures.
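For example, an enclosure rated IP65 under this scheme is dust-tight (first digit 6) and protected against water jets from any direction (second digit 5).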
In the United States, the National Electrical Manufacturers Association (NEMA) publishes NEMA enclosure type standards for the performance of various classes of electrical enclosures. The NEMA standards cover corrosion resistance, ability to protect from rain and submersion, etc.
Materials
Electrical enclosures are usually made from rigid plastics, or metals such as steel, stainless steel, or aluminum. Steel cabinets may be painted or galvanized. Mass-produced equipment will generally have a customized enclosure, but standardized enclosures are made for custom-built or small production runs of equipment. For plastic enclosures ABS is used for indoor applications not in harsh environments. Polycarbonate, glass-reinforced, and fiberglass boxes are used where stronger cabinets are required, and may additionally have a gasket to exclude dust and moisture.
Metal cabinets may meet the conductivity requirements for electrical safety bonding and shielding of enclosed equipment from electromagnetic interference. Non-metallic enclosures may require additional installation steps to ensure metallic conduit systems are properly bonded.
Stainless steel and carbon steel
Carbon steel and stainless steel are both used for enclosure construction due to their high durability and corrosion resistance. These materials are also moisture resistant and chemical resistant. They are the strongest of the construction options. Carbon steel can be hot or cold rolled. Hot rolled carbon steel is used for stamping and moderate forming applications. Cold rolled sheet is produced from low carbon steel and then cold reduced to a certain thickness and can meet ASTM A366 and ASTM A611 requirements.
Stainless steel enclosures are suited for medical, pharma, and food industry applications since they are bacterial and fungal resistant due to their non-porous quality. Stainless steel enclosures may be specified to permit wash-down cleaning in, for example, food manufacturing areas.
Aluminum
Aluminum is chosen because of its light weight, relative strength, low cost, and corrosion resistance. It performs well in harsh environments; it is sturdy, capable of withstanding high impacts, and highly malleable. Aluminum also acts as a shield against electromagnetic interference.
Polycarbonate
Polycarbonate used for electrical enclosures is strong but light, non-conductive and non-magnetic. It is also resistant to corrosion and some acidic environments; however, it is sensitive to abrasive cleaners. Polycarbonate is the easiest material to modify.
Fiberglass
Fiberglass enclosures resist chemicals in corrosive applications. The material can be used over all indoor and outdoor temperature ranges. Fiberglass can be installed in environments that are constantly wet.
Terminology
Enclosures for some purposes have partially punched openings (knockouts) which can be removed to accommodate cables, connectors, or conduits. Where they are small and primarily intended to conceal electrical junctions from sight, or protect them from tampering, they are also known as junction boxes, street cabinets or technically as serving area interface.
Telecommunications
Telecommunication enclosures are fully assembled or modular field-assembled transportable structures capable of housing an electronic communications system. These enclosures provide a controlled internal environment for the communications equipment and occasional craftspeople. The enclosures are designed with locks, security, and alarms to discourage access by unauthorized persons. Enclosures can be provided with a decorative facade to comply with local building requirements.
Fire risk
Electrical enclosures are prone to fires that can be very intense (of the order of a megawatt) and are hence an important topic in fire safety engineering.
See also
19 inch rack
Cable management
DIN rail
Housing (engineering)
Rack unit
Telco can
Utility box art
Utility vault
References
External links
IEC IP definitions, and a comparison of IEC<>NEMA definitions
Types of Enclosures
Electrical Enclosure with Terminal
IP Protection Ratings vs. NEMA Equivalency
What Is an Electrical Enclosure? Definition, Using, Requirements | Electrical enclosure | Engineering | 923 |
10,767,533 | https://en.wikipedia.org/wiki/Journal%20of%20Environmental%20Engineering | The Journal of Environmental Engineering is a monthly engineering journal published by the American Society of Civil Engineers.
The main editor is Dionysios D. Dionysiou of the University of Cincinnati.
The journal presents broad interdisciplinary information on the practice and status of research in environmental engineering science, systems engineering, and sanitation. Papers focus on engineering methods; impacts of wastewater collection and treatment; watershed contamination; environmental biology; nonpoint-source pollution on watersheds; air pollution and acid deposition; and solid waste management.
History
Although the journal began publication in 1956 as one of ASCE's flagship journals, its origins go back to the publication of the first volume of Transactions of the American Society of Civil Engineers in 1892. Established originally as the Journal of the Sanitary Engineering Division and renamed the Journal of the Environmental Engineering Division in 1973, it acquired its current name in 1983.
Indexes
The journal is indexed in Google Scholar, Baidu, Elsevier (Ei Compendex), Clarivate Analytics (Web of Science), ProQuest, Civil engineering database, TRDI, OCLC (WorldCat), IET/INSPEC, Crossref, Scopus, and EBSCOHost.
References
External links
ASCE Library
Engineering journals
Environmental engineering
Sewerage
Systems engineering
Systems journals
Waste management journals
American Society of Civil Engineers academic journals
Academic journals established in 1956 | Journal of Environmental Engineering | Chemistry,Engineering,Environmental_science | 274 |
5,351,858 | https://en.wikipedia.org/wiki/En%20%28Lie%20algebra%29 | In mathematics, especially in Lie theory, En is the Kac–Moody algebra whose Dynkin diagram is a bifurcating graph with three branches of length 1, 2 and k, with k = n − 3.
In some older books and papers, E2 and E4 are used as names for G2 and F4.
Finite-dimensional Lie algebras
The En group is similar to the An group, except that the nth node is connected to the 3rd node. So the Cartan matrix appears similar: it has −1 just above and below the diagonal, except that the last row and column instead have a −1 in the third column and row. The determinant of the Cartan matrix for En is 9 − n.
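Concretely, since these diagrams are simply laced, the Cartan matrix is determined by adjacency alone. With the labelling used here — nodes 1 through n − 1 forming a chain and node n attached to node 3 (the labelling is a convention) — the entries are

A_{ij} =
\begin{cases}
2 & \text{if } i = j,\\
-1 & \text{if } |i - j| = 1 \text{ with } i, j \le n-1, \text{ or } \{i, j\} = \{3, n\},\\
0 & \text{otherwise.}
\end{cases}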
E3 is another name for the Lie algebra A1A2 of dimension 11, with Cartan determinant 6.
E4 is another name for the Lie algebra A4 of dimension 24, with Cartan determinant 5.
E5 is another name for the Lie algebra D5 of dimension 45, with Cartan determinant 4.
E6 is the exceptional Lie algebra of dimension 78, with Cartan determinant 3.
E7 is the exceptional Lie algebra of dimension 133, with Cartan determinant 2.
E8 is the exceptional Lie algebra of dimension 248, with Cartan determinant 1.
Infinite-dimensional Lie algebras
E9 is another name for the infinite-dimensional affine Lie algebra Ẽ8 (also written E8+ or E8(1), the (one-node) extended E8) corresponding to the Lie algebra of type E8 (equivalently, to the E8 lattice). E9 has a Cartan matrix with determinant 0.
E10 (also written E8++, the (two-node) over-extended E8) is an infinite-dimensional Kac–Moody algebra whose root lattice is the even Lorentzian unimodular lattice II9,1 of dimension 10. Some of its root multiplicities have been calculated; for small roots the multiplicities seem to be well behaved, but for larger roots the observed patterns break down. E10 has a Cartan matrix with determinant −1:
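Written out explicitly with the labelling described earlier (nodes 1–9 along a chain, node 10 attached to node 3; other node orderings merely permute rows and columns), the Cartan matrix is

\begin{pmatrix}
 2 & -1 &  0 &  0 &  0 &  0 &  0 &  0 &  0 &  0\\
-1 &  2 & -1 &  0 &  0 &  0 &  0 &  0 &  0 &  0\\
 0 & -1 &  2 & -1 &  0 &  0 &  0 &  0 &  0 & -1\\
 0 &  0 & -1 &  2 & -1 &  0 &  0 &  0 &  0 &  0\\
 0 &  0 &  0 & -1 &  2 & -1 &  0 &  0 &  0 &  0\\
 0 &  0 &  0 &  0 & -1 &  2 & -1 &  0 &  0 &  0\\
 0 &  0 &  0 &  0 &  0 & -1 &  2 & -1 &  0 &  0\\
 0 &  0 &  0 &  0 &  0 &  0 & -1 &  2 & -1 &  0\\
 0 &  0 &  0 &  0 &  0 &  0 &  0 & -1 &  2 &  0\\
 0 &  0 & -1 &  0 &  0 &  0 &  0 &  0 &  0 &  2
\end{pmatrix}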
E11 (also written E8+++, the (three-node) very-extended E8) is a Lorentzian algebra, containing one time-like imaginary dimension, that has been conjectured to generate the symmetry "group" of M-theory.
En for n ≥ 12 is a family of infinite-dimensional Kac–Moody algebras that are not well studied.
Root lattice
The root lattice of En has determinant 9 − n, and can be constructed as the lattice of vectors in the unimodular Lorentzian lattice Zn,1 that are orthogonal to the vector v = (1,1,1,...,1 | 3) of norm n − 9.
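As a check, with the Lorentzian inner product on Zn,1 in which the last coordinate carries the minus sign (a convention assumed here), the norm of v = (1, 1, ..., 1 | 3) is

\langle v, v \rangle = \underbrace{1^2 + 1^2 + \cdots + 1^2}_{n \text{ terms}} - 3^2 = n - 9.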
E7½
Landsberg and Manivel extended the definition of En for integer n to include the case n = 7½. They did this in order to fill the "hole" in dimension formulae for representations of the En series observed by Cvitanovic, Deligne, Cohen and de Man. E7½ has dimension 190, but is not a simple Lie algebra: it contains a 57-dimensional Heisenberg algebra as its nilradical.
See also
k21, 2k1, 1k2 polytopes based on En Lie algebras.
References
Further reading
Class. Quantum Grav. 18 (2001) 4443-4460
Guersey Memorial Conference Proceedings '94
Connections between Kac-Moody algebras and M-theory, Paul P. Cook, 2006
A class of Lorentzian Kac-Moody algebras, Matthias R. Gaberdiel, David I. Olive and Peter C. West, 2002
Lie groups | En (Lie algebra) | Mathematics | 754 |
14,925,881 | https://en.wikipedia.org/wiki/Mirror%20TV | A mirror TV or TV mirror is a television that can change into a mirror. Mirror TVs are often used to save space or hide electronics in bathrooms, bedrooms and living rooms. Mirror TVs can be integrated into interior designs, including in smart homes, hotels, offices, gyms, and spas.
A mirror TV consists of special semi-transparent mirror glass with an LCD TV behind the mirrored surface. The mirror is carefully polarized to allow an image to transfer through the mirror, such that when the TV is off, the device looks like a mirror.
Placement of a mirror TV is important to ensure both good mirror reflection and television picture quality. A space with high levels of lighting is optimal for reflection when the TV looks like a mirror, while low levels of light are ideal for TV viewing. Experts recommend using block-out blinds in bright rooms, such as those with large windows and skylights, when watching television on a mirror TV during the day, to reduce the amount of reflection when the TV is on. TV viewing is not affected by reflection on the mirror TV in the evenings.
Some manufacturers offer high-end input and output options for entire-home A/V integration. Many manufacturers, particularly those producing for residential use, have updated their mirror TVs to be compatible with smart TV operating systems such as Apple TV and Android TV.
References
Television technology | Mirror TV | Technology | 275 |
17,419,031 | https://en.wikipedia.org/wiki/List%20of%20observatory%20software | The following is a list of astronomical observatory software.
Commercial software
MaximDL
Non-commercial software
See also
Space flight simulation game
List of space flight simulation games
Planetarium software
observatory software | List of observatory software | Astronomy,Technology | 37 |
625,653 | https://en.wikipedia.org/wiki/M4%20%28computer%20language%29 | m4 is a general-purpose macro processor included in most Unix-like operating systems, and is a component of the POSIX standard.
The language was designed by Brian Kernighan and Dennis Ritchie for the original versions of UNIX. It is an extension of an earlier macro processor, m3, written by Ritchie for an unknown AP-3 minicomputer.
The macro preprocessor operates as a text-replacement tool. It is employed to re-use text templates, typically in computer programming applications, but also in text editing and text-processing applications. Most users require m4 as a dependency of GNU autoconf.
History
Macro processors became popular when programmers commonly used assembly language. In those early days of programming, programmers noted that much of their programs consisted of repeated text, and they invented simple means for reusing this text. Programmers soon discovered the advantages not only of reusing entire blocks of text, but also of substituting different values for similar parameters. This defined the usage range of macro processors at the time.
In the 1960s, an early general-purpose macro processor, M6, was in use at AT&T Bell Laboratories, which was developed by Douglas McIlroy, Robert Morris and Andrew Hall.
Kernighan and Ritchie developed m4 in 1977, basing it on the ideas of Christopher Strachey. The distinguishing features of this style of macro preprocessing included:
free-form syntax (not line-based like a typical macro preprocessor designed for assembly-language processing)
the high degree of re-expansion (a macro's arguments get expanded twice: once during scanning and once at interpretation time)
The implementation of Rational Fortran used m4 as its macro engine from the beginning, and most Unix variants ship with it.
Many applications continue to use m4 as part of the GNU Project's autoconf. It also appears in the configuration process of sendmail (a widespread mail transfer agent) and for generating footprints in the gEDA toolsuite. The SELinux Reference Policy relies heavily on the m4 macro processor.
m4 has many uses in code generation, but (as with any macro processor) problems can be hard to debug.
Features
m4 offers these facilities:
a free-form syntax, rather than line-based syntax
a high degree of macro expansion (arguments get expanded during scan and again during interpretation)
text replacement
parameter substitution
file inclusion
string manipulation
conditional evaluation
arithmetic expressions
system interface
programmer diagnostics
programming language independent
human language independent
provides programming language capabilities
Unlike most earlier macro processors, m4 does not target any particular computer or human language; historically, however, its development originated for supporting the Ratfor dialect of Fortran. Unlike some other macro processors, m4 is Turing-complete as well as a practical programming language.
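For example, recursion together with the built-in macros ifelse, eval and decr (all standard m4 built-ins) suffices for ordinary arithmetic — a minimal sketch:

define(`fact', `ifelse($1, 0, 1, `eval($1 * fact(decr($1)))')')dnl
fact(5)

Processing this with m4 outputs 120.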
Unquoted identifiers which match defined macros are replaced with their definitions. Placing identifiers in quotes suppresses expansion until possibly later, such as when a quoted string is expanded as part of macro replacement. Unlike most languages, strings in m4 are quoted using the backtick (`) as the starting delimiter and the apostrophe (') as the ending delimiter. Having separate starting and ending delimiters allows arbitrary nesting of quotation marks in strings, giving a fine degree of control over how and when macro expansion takes place in different parts of a string.
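For instance, given the definition below (dnl discards the rest of its line, so the define itself produces no output), the unquoted identifier is expanded while the quoted one survives one scan:

define(`name', `World')dnl
Hello name.
Hello `name'.

m4 turns this into:

Hello World.
Hello name.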
Example
The following fragment gives a simple example that could form part of a library for generating HTML code. It defines a commented macro to number sections automatically:
divert(-1)
m4 has multiple output queues that can be manipulated with the
`divert' macro. Valid queues range from 0 to 10, inclusive, with
the default queue being 0. As an extension, GNU m4 supports more
diversions, limited only by integer type size.
Calling the `divert' macro with an invalid queue causes text to be
discarded until another call. Note that even while output is being
discarded, quotes around `divert' and other macros are needed to
prevent expansion.
# Macros aren't expanded within comments, meaning that keywords such
# as divert and other built-ins may be used without consequence.
# HTML utility macro:
define(`H2_COUNT', 0)
# The H2_COUNT macro is redefined every time the H2 macro is used:
define(`H2',
`define(`H2_COUNT', incr(H2_COUNT))<h2>H2_COUNT. $1</h2>')
divert(1)dnl
dnl
dnl The dnl macro causes m4 to discard the rest of the line, thus
dnl preventing unwanted blank lines from appearing in the output.
dnl
H2(First Section)
H2(Second Section)
H2(Conclusion)
dnl
divert(0)dnl
dnl
<HTML>
undivert(1)dnl One of the queues is being pushed to output.
</HTML>
Processing this code with m4 generates the following text:
<HTML>
<h2>1. First Section</h2>
<h2>2. Second Section</h2>
<h2>3. Conclusion</h2>
</HTML>
Implementations
FreeBSD, NetBSD, and OpenBSD provide independent implementations of the m4 language. Furthermore, the Heirloom Project Development Tools includes a free version of the m4 language, derived from OpenSolaris.
M4 has been included in the Inferno operating system. This implementation is more closely related to the original m4 developed by Kernighan and Ritchie in Version 7 Unix than its more sophisticated relatives in UNIX System V and POSIX.
GNU m4 is an implementation of m4 for the GNU Project. It is designed to avoid many kinds of arbitrary limits found in traditional m4 implementations, such as maximum line lengths, maximum size of a macro and number of macros. Removing such arbitrary limits is one of the stated goals of the GNU Project.
The GNU Autoconf package makes extensive use of the features of GNU m4.
GNU m4 is currently maintained by Gary V. Vaughan and Eric Blake. GNU m4 is free software, released under the terms of the GNU General Public License.
See also
C preprocessor
Macro (computer science)
Make
Template processor
Web template system
References
External links
GNU m4 website
GNU m4 manual
m4 tutorial
Macro Magic: m4, Part One and Part Two
Macro programming languages
Unix programming tools
Unix SUS2008 utilities
Inferno (operating system) commands | M4 (computer language) | Technology | 1,356 |
33,712,034 | https://en.wikipedia.org/wiki/Troglocyclocheilus | Troglocyclocheilus is a monospecific genus of freshwater, troglobitic ray-finned fish belonging to the family Cyprinidae, the carps, barbs and allied fishes. The only species in the genus Troglocyclocheilus khammouanensis which is known only from a single specimen, the holotype, collected from the resurgence of the Nam Don, near the village of Ban Phondou, Thakhek District in the Khammouane province of Laos at 17°33’50”N, 104°52’20”E.
References
Cyprininae
Cave fish
Cyprinid fish of Asia
Fish of Laos
Fish described in 1999
Taxa named by Maurice Kottelat
Species known from a single specimen | Troglocyclocheilus | Biology | 165 |
10,311,578 | https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer%20Prize | The Davisson–Germer Prize in Atomic or Surface Physics is an annual prize that has been awarded by the American Physical Society since 1965. The recipient is chosen for "outstanding work in atomic physics or surface physics". The prize is named after Clinton Davisson and Lester Germer, who first measured electron diffraction, and as of 2007 it is valued at $5,000.
Recipients
2023: Feng Liu
2022: David S. Weiss
2021: Michael F. Crommie
2020: Klaas Bergmann
2019: Randall M. Feenstra
2018:
2017: and Stephen Kevan
2016: Randall G. Hulet
2015: and
2014: Nora Berrah
2013: Geraldine L. Richmond
2012: Jean Dalibard
2011: Joachim Stohr
2010: Chris H. Greene
2009: and Krishnan Raghavachari
2008:
2007:
2006:
2005: Ernst G. Bauer
2004:
2003: Rudolf M. Tromp
2002: Gerald Gabrielse
2001: Donald M. Eigler
2000: William Happer
1999: Steven Gwon Sheng Louie
1998: Sheldon Datz
1997: Jerry D. Tersoff
1996:
1995: Max G. Lagally
1994: Carl Weiman [sic]
1993:
1992:
1991:
1990: David Wineland
1989:
1988: John L. Hall
1987:
1986: Daniel Kleppner
1985: J. Gregory Dash
1984: and
1983: E. W. Plummer
1982: Llewellyn H. Thomas
1981: Robert Gomer
1980: Alexander Dalgarno
1979: and Donald R. Hamann
1978: Vernon Hughes
1977: Walter Kohn and
1976: Ugo Fano
1975: and Homer D. Hagstrum
1974: Norman Ramsey
1972: Erwin Wilhelm Müller
1970: Hans Dehmelt
1967: Horace Richard Crane
1965:
Source:
See also
List of physics awards
References
Awards of the American Physical Society
Atomic physics
Surface science | Davisson–Germer Prize | Physics,Chemistry,Materials_science | 390 |
43,208,825 | https://en.wikipedia.org/wiki/The%20World%27s%20Largest%20Lobster | The World's Largest Lobster () is a concrete and reinforced steel sculpture in Shediac, New Brunswick, Canada sculpted by Canadian artist Winston Bronnum. Despite being known by its name The World's Largest Lobster, it is not actually the largest lobster sculpture.
Description
The sculpture is 11 metres long and 5 metres tall and weighs 90 tonnes. It was commissioned by the Shediac Rotary Club as a tribute to the town's lobster fishing industry, took three years to complete at a cost of $170,000, and attracts 500,000 visitors per year. Contrary to popular belief, it is not actually the "World's Largest Lobster": that title belonged to the Big Lobster sculpture in Kingston, South Australia, until 2015, when Qianjiang, Hubei, China, built a 100-tonne lobster/crayfish sculpture.
See also
List of world's largest roadside attractions
Betsy the Lobster, another large lobster sculpture
References
1990 sculptures
Sculptures of crustaceans
Buildings and structures in Westmorland County, New Brunswick
True lobsters
Outdoor sculptures in Canada
Roadside attractions in Canada
Shediac
Steel sculptures in Canada
1990 establishments in New Brunswick
Animal sculptures in Canada
Colossal statues | The World's Largest Lobster | Physics,Mathematics | 240 |
17,504,215 | https://en.wikipedia.org/wiki/Disiamylborane | Disiamylborane (bis(1,2-dimethylpropyl)borane) is an organoborane with the formula [(CH3)2CHCH(CH3)]2BH (abbreviation: Sia2BH). It is a colorless waxy solid that is used in organic synthesis for hydroboration–oxidation reactions. Like most dialkyl boron hydrides, it has a dimeric structure with bridging hydrides.
Reactions
Disiamylborane is prepared by hydroboration of trimethylethylene (2-methyl-2-butene) with diborane. The reaction stops at the dialkylborane (secondary borane) stage due to steric hindrance.
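A sketch of the overall stoichiometry, writing the borane monomer for simplicity (in practice the reagent is diborane, B2H6, or a borane adduct):

$2\,(CH_3)_2C{=}CHCH_3 + BH_3 \longrightarrow [(CH_3)_2CHCH(CH_3)]_2BH$

Boron adds to the less substituted carbon of each alkene, which produces the two 1,2-dimethylpropyl (siamyl) groups.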
Disiamylborane is relatively selective for terminal alkynes and alkenes over internal alkynes and alkenes. Like most hydroborations, the addition proceeds in an anti-Markovnikov manner. It can be used to convert terminal alkynes into aldehydes.
The hydroboration process proceeds via an initial dissociation of the dimer.
Related reagents
9-Borabicyclo[3.3.1]nonane (9-BBN).
Thexylborane ((1,1,2-trimethylpropyl)borane, ThxBH2), a primary borane obtained by hydroboration of tetramethylethylene.
Naming
The prefix disiamyl is an abbreviation for "di-sec-isoamyl", where sec-isoamyl ("secondary isoamyl") is an archaic name for the 1,2-dimethylpropyl group (amyl being an obsolescent synonym of pentyl).
References
Alkylboranes
Reagents for organic chemistry | Disiamylborane | Chemistry | 365 |
4,118,330 | https://en.wikipedia.org/wiki/138P/Shoemaker%E2%80%93Levy | 138P/Shoemaker–Levy, also known as Shoemaker–Levy 7, is a faint periodic comet in the Solar System. The comet came to perihelion on 11 June 2012, but only brightened to about apparent magnitude 20.5.
Pan-STARRS obtained four recovery images of 138P on 8 August 2018, when the comet was at about magnitude 21.5. The comet returned to perihelion on 2 May 2019.
This comet should not be confused with Comet Shoemaker–Levy 9 (D/1993 F2), which crashed into Jupiter in 1994.
References
External links
138P/Shoemaker-Levy 7 – Seiichi Yoshida @ aerith.net
Elements and Ephemeris for 138P/Shoemaker-Levy – Minor Planet Center
138P at Kronk's Cometography
Periodic comets
138P | 138P/Shoemaker–Levy | Astronomy | 191 |
5,864,308 | https://en.wikipedia.org/wiki/Ugandan%20space%20initiatives | The development of Ugandan space initiatives has been largely shaped by that country's position on the equator. Its history is marked by an early involvement in issues of space law, and in 2022 by the launch of its first satellite, PearlAfricaSat-1.
Conditions
As one of only a handful of equatorial states, Uganda is ideally sited for a spaceport to launch satellites into geostationary orbit, but this option has never been pursued for lack of investment in such a project. The closest regional facility, and the only one ever active in East Africa, is the Italian-owned Broglio Space Centre located off the coast of neighboring Kenya.
Uganda has never acquired any ballistic missile capability, the usual precursor to booster development. The only state in sub-Saharan Africa to ever do so was South Africa, which developed the RSA-3 and RSA-4 missiles in the 1980s, but, after the end of the apartheid regime, cancelled its nuclear weapons and later its ballistic missile programs by 1993.
Space law
Uganda joined the first two international space law treaties, ratifying the Partial Nuclear Test Ban Treaty on March 24, 1964, and acceding to the Outer Space Treaty on April 24, 1968. It was not, however, a party to the later Rescue Agreement of 1968, the Liability Convention of 1972, the Registration Convention of 1976 or the Moon Treaty of 1984.
Uganda was one of eight equatorial states that adopted the Bogota Declaration on December 3, 1976, which seemingly contradicted the Outer Space Treaty by asserting that geostationary orbit was not "outer space" but rather constituted national territory.
Idi Amin and UFOs
When President Idi Amin came to power in a 1971 coup (at the climax of the Space Race, during the Apollo lunar landings), his government developed an interest in UFO activity, and Amin himself claimed to have witnessed a UFO over Lake Victoria in 1973. In 1971, at the beginning of his regime, United Nations ambassador Grace Ibingira advocated an early form of post-detection policy to prevent Cold War provocation of hostilities over UFOs. Near the end of the Amin era, Uganda also became the only other country to support Eric Gairy of Grenada's efforts for UN recognition of the phenomenon, with a dedicated agency and a declaration of 1978 as the International Year of UFOs.
There is a false report that Idi Amin also pursued a human spaceflight program, but this may have been a conflation of his UFO interests with the personal project of Edward Makuka Nkoloso in Zambia a decade earlier. In June 1999, the report received some attention in Time magazine's Time 100 feature "100 Worst Ideas of the Century".
Modern government effort
At the September 1996 Conference on Small Satellites: Missions and Technology in Madrid, Spain, informal proposals were raised for a Ugandan microsatellite project. At the Third United Nations Conference on the Exploration and Peaceful Uses of Outer Space held in Vienna in July 1999, Semakula Kiwanuka said "space technology is a powerful tool for accelerating national development" and pointed out the benefits a space program would have for his country. There have also been proposals for space science to be introduced at Mbarara University of Science & Technology.
Space technology for a country like Uganda would be most relevant in the fields of environmental Earth observation satellites and communications. Uganda sent two representatives, Samuel Edward Sekunda of the Department of Meteorology and Yafesi Okia of the Department of Lands and Surveys, to the Regional Workshop on the Use of Space Technology for Outer Space Affairs organized by the United Nations Office for Outer Space Affairs and the United Nations Economic Commission for Africa in Addis Ababa, Ethiopia in July 2002. Joel Arumadri of the National Environmental Management Authority (NEMA) represented Uganda at the April 2004 Regional Workshop on the Use of Space Technology for Natural Resources Management, Environmental Monitoring and Disaster Management in Khartoum, Sudan.
The Department of Meteorology has been directly active in the communications use of space technology, running the Radio and Internet (RANET) program, which allowed rural communities to access government internet forecasts through WorldSpace satellite radio from 2001 to 2009.
President Yoweri Museveni has spoken in favor of a regional East African approach to future human spaceflight.
Amateur effort
The African Space Research Program is a private volunteer space advocacy group that has pursued a DIY aviation program, with the goal of simulating and preparing for eventual spaceflight. The team was founded by Chris Nsamba after he collaborated on a homebuilt aircraft project in the United States, and resolved to build the first Ugandan-designed homebuilt (the "African Skyhawk") upon his return. Nsamba believes it will be capable of flying at an altitude of 80,000 ft. The aircraft is being put together, with the help of 600 volunteers, in Nsamba's mother's backyard in Ntinda, a suburb of Kampala.
The group has also built a "Cadimalla Space Observer", for aerial photography, which they plan to send up with a high-altitude balloon. Jinja Airport is planned to be used for these efforts.
Nsamba had developed ambitious plans for an eventual spaceplane (the "Dynacraft Spaceship") to be sent to orbit by 2017, and has also taken on the responsibility of training his volunteers, drawing on his background as a student of astronomy. When asked how he would simulate the effects of zero gravity, Nsamba said: "I've got a jet engine on order, so I'm planning to build a tunnel, put the engine at one end and when I throw a guy in he'll float in a similar way to how he would in space."
The program has been sustained by donations from around the world, and from 2011, funding has also been supplied by the Ugandan government. A spokesman for the Department of Science and Technology said: "I applaud their ambition ... It provides an opportunity for Africans in general and Ugandans in particular to participate in space science and research instead of being spectators." Flight engineers from the Civil Aviation Authority have been assigned to review and advise the team. However, the head of safety at the Civil Aviation Authority subsequently reported to a Parliamentary science committee that all space activities are illegal in Uganda.
See also
Civil Aviation Authority of Uganda
Space programme of Kenya
References
External links
African Space Research Program
Space
Space programs by country
Aviation organizations | Ugandan space initiatives | Engineering | 1,306 |
794,163 | https://en.wikipedia.org/wiki/Annualized%20failure%20rate | Annualized failure rate (AFR) gives the estimated probability that a device or component will fail during a full year of use. It relates the mean time between failures (MTBF) to the number of hours per year that a device is run. AFR is estimated from a sample of like components; AFR and MTBF figures given by vendors are population statistics that cannot predict the behaviour of an individual unit.
Hard disk drives
For example, AFR is used to characterize the reliability of hard disk drives.
The relationship between AFR and MTBF (in hours) is:

$\text{AFR} = 1 - \exp\!\left(-\frac{8766}{\text{MTBF}}\right)$
This equation assumes that the device or component is powered on for the full 8766 hours of a year, and gives the estimated fraction of an original sample of devices or components that will fail in one year, or, equivalently, 1 − AFR is the fraction of devices or components that will show no failures over a year. It is based on an exponential failure distribution (see failure rate for a full derivation).
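For reference, a one-line derivation under this exponential assumption: with failure rate $\lambda = 1/\text{MTBF}$, the probability of failing within time $t$ is $\Pr(T \le t) = 1 - e^{-\lambda t}$, so setting $t = 8766$ hours gives $\text{AFR} = 1 - e^{-8766/\text{MTBF}}$.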
Note: Some manufacturers count a year as 8760 hours.
Assuming a small AFR, this can be approximated by:

$\text{AFR} \approx \frac{8766}{\text{MTBF}}$
For example, a common specification for PATA and SATA drives may be an MTBF of 300,000 hours, giving an approximate theoretical annualized failure rate of 8766/300,000 ≈ 2.92%, i.e., a 2.92% chance that a given drive will fail during a year of use.
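As a quick check of the arithmetic, a minimal C++ sketch (the file name and printed precision are illustrative only):

```cpp
// afr.cpp -- illustrative only: computes the exact and approximate AFR
// for a given MTBF, assuming an exponential failure distribution and
// 8766 powered-on hours per year, as in the formulas above.
#include <cmath>
#include <cstdio>

int main() {
    const double hours_per_year = 8766.0;  // 365.25 days
    const double mtbf = 300000.0;          // vendor MTBF in hours (example above)

    // Exact: AFR = 1 - exp(-8766 / MTBF)
    double afr_exact = 1.0 - std::exp(-hours_per_year / mtbf);

    // First-order approximation for small AFR: AFR ~= 8766 / MTBF
    double afr_approx = hours_per_year / mtbf;

    std::printf("exact AFR:  %.2f%%\n", afr_exact * 100.0);   // ~2.88%
    std::printf("approx AFR: %.2f%%\n", afr_approx * 100.0);  // ~2.92%
    return 0;
}
```

The small gap between the two outputs (2.88% exact versus 2.92% approximate) shows why the linear approximation is acceptable when AFR is small.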
The AFR for a drive is derived from time-to-fail data from a reliability-demonstration test (RDT).
AFR will increase towards and beyond the end of the service life of a device or component. Google's 2007 study, based on a large field sample of drives, found that actual AFRs for individual drives ranged from 1.7% for first-year drives to over 8.6% for three-year-old drives. A 2007 CMU study estimated a 3% mean AFR over 1–5 years, based on replacement logs for a large sample of drives.
See also
Failure rate
Frequency of exceedance
References
Engineering failures
Rates | Annualized failure rate | Technology,Engineering | 424 |
26,059,406 | https://en.wikipedia.org/wiki/PTK%20Forensics | PTK Forensics (PTK) was a non-free, commercial GUI for old versions of the digital forensics tool The Sleuth Kit (TSK). It also included a number of other software modules for investigating digital media. The software is no longer developed.
PTK ran as a GUI front end for The Sleuth Kit, acquiring and indexing digital media for investigation. Indexes were stored in an SQL database for searching as part of a digital investigation. PTK calculated hash signatures (using SHA-1 and MD5) for acquired media for verification and consistency purposes.
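PTK's own implementation is not reproduced here; the following self-contained C++ sketch, which assumes OpenSSL's EVP API and a hypothetical image file name, merely illustrates the dual-hash (MD5 plus SHA-1) verification idea described above:

```cpp
// hash_image.cpp -- illustrative sketch only (not PTK source code):
// computes MD5 and SHA-1 digests of an acquired image file, the kind of
// dual-hash signature PTK recorded for verification. Requires OpenSSL
// (link with -lcrypto). Re-computing matching digests later indicates
// the image has not changed since acquisition.
#include <openssl/evp.h>
#include <cstdio>

// Hash one file with the given digest algorithm and print the hex result.
static bool hash_file(const char *path, const EVP_MD *md, const char *label) {
    FILE *f = std::fopen(path, "rb");
    if (!f) return false;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, md, nullptr);

    unsigned char buf[64 * 1024];
    size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    std::fclose(f);

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, digest, &len);
    EVP_MD_CTX_free(ctx);

    std::printf("%s: ", label);
    for (unsigned int i = 0; i < len; ++i) std::printf("%02x", digest[i]);
    std::printf("\n");
    return true;
}

int main() {
    const char *image = "evidence.dd";  // hypothetical acquired image
    hash_file(image, EVP_md5(),  "MD5 ");
    hash_file(image, EVP_sha1(), "SHA1");
    return 0;
}
```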
References
External links
SourceForge.net download site for PTK
Computer forensics
Digital forensics software | PTK Forensics | Engineering | 143 |
2,407,197 | https://en.wikipedia.org/wiki/Embedded%20C%2B%2B | Embedded C++ (EC++) is a dialect of the C++ programming language for embedded systems. It was defined by an industry group led by major Japanese central processing unit (CPU) manufacturers, including NEC, Hitachi, Fujitsu, and Toshiba, to address the shortcomings of C++ for embedded applications. The goal of the effort was to preserve the most useful object-oriented features of C++ while minimizing code size, maximizing execution efficiency, and simplifying compiler construction. The official website states the goal as "to provide embedded systems programmers with a subset of C++ that is easy for the average C programmer to understand and use".
Differences from C++
Embedded C++ excludes some features of C++, including multiple inheritance, virtual base classes, run-time type information, new-style casts, the mutable type qualifier, namespaces, exception handling, and templates.
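As an illustration (a sketch of the restrictions, not code from the EC++ specification), the following contrasts a standard C++ idiom with an EC++-compatible rewrite: the template and exception are replaced by a concrete type and a status code.

```cpp
// Illustrative sketch only: an EC++-compatible rewrite of a standard
// C++ idiom. EC++ forbids templates and exceptions, so the generic,
// throwing version below must be replaced by a concrete, error-code one.

// Standard C++ (not valid EC++): template + exception.
// template <typename T>
// T checked_divide(T a, T b) {
//     if (b == T(0)) throw std::domain_error("divide by zero");
//     return a / b;
// }

// EC++-compatible: concrete type, status code instead of an exception.
enum Status { OK, ERR_DIV_ZERO };

Status checked_divide(long a, long b, long *out) {
    if (b == 0) return ERR_DIV_ZERO;  // no exception machinery needed
    *out = a / b;
    return OK;
}

int main() {
    long q = 0;
    Status s = checked_divide(10, 2, &q);  // q == 5, s == OK
    return (s == OK) ? 0 : 1;
}
```

Avoiding exceptions and templates in this style is what keeps the run-time support small and the generated code size predictable, which was the point of the EC++ restrictions.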
Some compilers, such as those from Green Hills and IAR Systems, allow certain features of ISO/ANSI C++ to be enabled in Embedded C++. IAR Systems calls this "Extended Embedded C++".
Compilation
An EC++ program can be compiled with any C++ compiler, but a compiler specific to EC++ may have an easier time optimizing.
Compilers specific to EC++ are provided by companies such as:
IAR Systems
Freescale Semiconductor (a 2004 spin-off of Motorola, which had acquired Metrowerks in 1999)
Tasking Software, part of Altium Limited
Green Hills Software
Criticism
The language has had a poor reception with many expert C++ programmers. In particular, Bjarne Stroustrup says, "To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be." In fact, the official English EC++ website has not been updated since 2002. Nevertheless, a restricted subset of C++ (based on Embedded C++) has been adopted by Apple Inc. as the exclusive programming language to create all I/O Kit device drivers for Apple's macOS, iPadOS and iOS operating systems of the popular Macintosh, iPhone, and iPad products. Apple engineers felt the exceptions, multiple inheritance, templates, and runtime type information features of standard C++ were either insufficient or not efficient enough for use in a high-performance, multithreaded kernel.
References
External links
Background and Objectives of the Embedded C++ Specification Development
Embedded C++ Yields Faster Smaller Code, John Carbone (Embedded.com), June 19, 1998
Building Bare-Metal ARM Systems with GNU: Part 1 - Getting Started, Miro Samek, Quantum Leaps, June 26, 2007
Technical Report on C++ Performance, by WG 21 of ISO Subcommittee SC 22
C++ programming language family
C++
Hardware description languages
Embedded systems | Embedded C++ | Technology,Engineering | 565 |
48,547,304 | https://en.wikipedia.org/wiki/NGC%20124 | NGC 124 is a spiral galaxy in the constellation Cetus. It was discovered by Truman Henry Safford on September 23, 1867. The galaxy was described as "very faint, large, diffuse, 2 faint stars to northwest" by John Louis Emil Dreyer, the compiler of the New General Catalogue.
The 17th-magnitude type II supernova SN 2004dd was discovered in this galaxy on 12 July 2004.
References
External links
Unbarred spiral galaxies
Cetus
Astronomical objects discovered in 1867
0124
Discoveries by Truman Safford | NGC 124 | Astronomy | 113 |