Dataset columns: id (int64, 39 to 79M) · url (string, 31–227 chars) · text (string, 6–334k chars) · source (string, 1–150 chars) · categories (list, 1–6 items) · token_count (int64, 3 to 71.8k) · subcategories (list, 0–30 items)
9,253,294
https://en.wikipedia.org/wiki/Alpha%20Lyncis
Alpha Lyncis (α Lyn, α Lyncis) is the brightest star in the northern constellation of Lynx, with an apparent magnitude of +3.13. Unusually, it is the only star in the constellation that has a Bayer designation. Based upon parallax measurements, this star is located about from the Earth. Its common name is Elvashak. Characteristics This is a red giant star that has exhausted the hydrogen at its core and has evolved away from the main sequence. It has expanded to about 58 times the Sun's radius and is emitting roughly 621 times the luminosity of the Sun. The estimated effective temperature of the star's outer envelope is 3,881 K; this is lower than the Sun's effective temperature of 5,778 K and gives Alpha Lyncis the red-orange hue characteristic of late K-type stars. Alpha Lyncis is a suspected small-amplitude red variable star whose apparent magnitude varies between +3.17 and +3.12. This variability pattern typically occurs in stars that have developed an inert carbon core surrounded by a helium-fusing shell, and suggests that Alpha Lyncis is starting to evolve into a Mira variable.
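The radius, temperature, and luminosity quoted above can be roughly cross-checked with the Stefan–Boltzmann relation L/L_sun = (R/R_sun)^2 (T/T_sun)^4. A minimal sketch in Python, added for illustration (not part of the original article; the input values are those quoted in the text):

```python
# Rough cross-check of Alpha Lyncis' parameters using the
# Stefan-Boltzmann relation: L/L_sun = (R/R_sun)**2 * (T/T_sun)**4

R_ratio = 58        # radius in solar radii (quoted above)
T_star = 3881.0     # effective temperature in kelvin (quoted above)
T_sun = 5778.0      # solar effective temperature in kelvin (quoted above)

L_ratio = R_ratio**2 * (T_star / T_sun)**4
print(f"{L_ratio:.0f} L_sun")
# ~685 L_sun, the same order as the quoted ~621 L_sun; the gap reflects
# rounding and measurement uncertainty in the radius and temperature.
```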
Alpha Lyncis
[ "Astronomy" ]
291
[ "Lynx (constellation)", "Constellations" ]
15,916,469
https://en.wikipedia.org/wiki/L.%20James%20Sullivan
Leroy James Sullivan (June 27, 1933 – September 22, 2024) was an American firearms inventor. Going by Jim Sullivan, he designed several "scaled-down" versions of larger firearms. Early life Sullivan was born on June 27, 1933, in Nome, Alaska. He lived in Nome until he was seven years old; concerned that World War II would spread to Alaska, his family then moved to Seattle, Washington. Education Sullivan attended the public schools of Seattle and, later, of Kennewick, Washington. He went on to study engineering for two years at the University of Washington in Seattle. Aware that he was about to be drafted to fight in the Korean War, Sullivan wanted to become an Army diver, so he left the University of Washington to attend the Sparling School of Deep Sea Diving in Long Beach, California. Military service Sullivan served in the US Army from 1953 to 1955, although he was trained by the Army to be a telephone installer and repairman. Due to his civilian training, he went overseas to Korea in 1954, where he was assigned by the Army to be a diver, repairing oil pipelines and other facilities damaged during the US invasion of Inchon Harbor. Small arms designer Sullivan is largely responsible for the Ultimax 100 light machine gun and the SureFire MGX. He also contributed to the Ruger M77 rifle, as well as the M16, Stoner 63, and Ruger Mini-14 rifles (scaled down from the AR-10, Stoner 62, and M14 rifle, respectively). Armwest LLC M4 In 2014, Sullivan provided a video interview regarding his contributions to the M16/M4 family of rifles while working for Armalite. A noted critic of the M4, he illustrates the deficiencies found in the rifle in its current configuration. In the video, he demonstrates his "Arm West LLC modified M4", with enhancements he believes necessary to rectify the issues with the weapon. Proprietary issues aside, the weapon is said to borrow features from his prior development, the Ultimax. Sullivan has stated (without exact details as to how) that the weapon can fire from the closed bolt in semi-automatic and switch to open bolt when firing in fully automatic, improving accuracy. The weight of the cyclic components of the gun has been doubled, while the weapon's overall weight is kept below eight pounds. Compared to the standard M4, which fires 750–950 rounds per minute in automatic, the rate of fire of the Arm West M4 is heavily reduced, both to save ammunition and to reduce barrel wear. The reduced rate also renders the weapon more controllable and accurate in automatic fire. Death Sullivan died on September 22, 2024, at the age of 91.
L. James Sullivan
[ "Engineering" ]
599
[ "Design", "Weapon design" ]
5,444,788
https://en.wikipedia.org/wiki/Push%E2%80%93pull%20agricultural%20pest%20management
Push–pull technology is an intercropping strategy for controlling agricultural pests by using repellent "push" plants and trap "pull" plants. For example, cereal crops like maize or sorghum are often infested by stem borers. Grasses planted around the perimeter of the crop attract and trap the pests, whereas other plants, like Desmodium, planted between the rows of maize, repel the pests and control the parasitic plant Striga. Push–pull technology was developed at the International Centre of Insect Physiology and Ecology (ICIPE) in Kenya in collaboration with Rothamsted Research, UK, and national partners. This technology has been taught to smallholder farmers through collaborations with universities, NGOs and national research organizations. How push–pull works Push–pull technology involves the use of behaviour-modifying stimuli to manipulate the distribution and abundance of stemborers and beneficial insects for management of stemborer pests. It is based on an in-depth understanding of chemical ecology, agrobiodiversity, plant-plant and insect-plant interactions, and involves intercropping a cereal crop with a repellent intercrop such as Desmodium uncinatum (silverleaf) (push), with an attractive trap plant such as Napier grass (pull) planted as a border crop around this intercrop. Gravid stemborer females are repelled from the main crop and are simultaneously attracted to the trap crop. The push The "push" in the intercropping scheme is provided by the plants that emit volatile chemicals (kairomones) which repel stemborer moths and drive them away from the main crop (maize or sorghum). The most commonly used species of push plants are legumes of the genus Desmodium (e.g. silverleaf Desmodium, D. uncinatum, and greenleaf Desmodium, D. intortum). The Desmodium is planted in between the rows of maize or sorghum, where it emits volatile chemicals (such as (E)-β-ocimene and (E)-4,8-dimethyl-1,3,7-nonatriene) that repel the stemborer moths. These semiochemicals are also produced in grasses such as maize when they are damaged by insect herbivores, which may explain why they are repellent to stemborers. Being a low-growing plant, Desmodium does not interfere with the growth of crops, but can suppress weeds and help improve soil quality by increasing soil organic matter content, fixing nitrogen, and stabilizing soils from erosion. It also serves as a highly nutritious animal feed and effectively suppresses striga weeds through an allelopathic mechanism. Another plant showing good repellent properties is molasses grass (Melinis minutiflora), a nutritious animal feed with tick-repelling and stemborer larval parasitoid attractive properties. The pull The approach relies on a combination of companion crops to be planted around and among maize or sorghum. Both domestic and wild grasses can help to protect the crops by attracting and trapping the stemborers. The grasses are planted in the border around the maize and sorghum fields, where invading adult moths become attracted to chemicals emitted by the grasses themselves. Instead of landing on the maize or sorghum plants, the insects head for what appears to be a tastier meal. These grasses provide the "pull" in the "push–pull" strategy. They also serve as a haven for the borers' natural enemies. Good trap crops include well-known grasses such as Napier grass (Pennisetum purpureum), Signal grass (Brachiaria brizantha), and Sudan grass (Sorghum vulgare sudanense).
Napier grass produces significantly higher levels of attractive volatile compounds (green leaf volatiles) than maize or sorghum; these are the cues used by gravid stemborer females to locate host plants. The total amounts of these compounds produced by Napier grass also increase approximately 100-fold in the first hour of nightfall (scotophase), the period at which stemborer moths seek host plants for laying eggs, causing the differential oviposition preference. However, about 80% of the stemborer larvae do not survive: Napier grass tissues produce a sticky sap in response to feeding by the larvae, which traps them. Recent large-scale field studies in East Africa show that maize grown in push–pull systems has higher levels of two benzoxazinoid glycosides, compounds known for their antiherbivore properties. These glycosides were present in greater abundance in maize leaves from push–pull fields compared to those from conventional fields. Suppression of Striga Desmodium also controls the parasitic weed, Striga, resulting in significant yield increases of about 2 tonnes/hectare (0.9 short tons per acre) per cropping season. In addition to benefits derived from increased nitrogen availability and competition for light, it was found that D. uncinatum strongly suppresses striga growth through allelopathy. These effects are thought to be related to isoflavanones produced in Desmodium roots, which can either promote the germination of striga seeds or inhibit seedling growth, depending on their structure. Together, these effects result in the phenomenon known as "suicidal germination", thus reducing the striga seed bank in the soil. Other Desmodium species have also been evaluated, have similar effects on stemborers and striga weed, and are currently being used as intercrops in maize, sorghum and millets. Improvement of soil quality Desmodium also enhances soil quality by increasing soil organic matter, nitrogen content, and soil biodiversity, as well as conserving moisture, moderating soil temperature and preventing erosion. Economics of push-pull agriculture Push-pull agriculture leads to beneficial economic outcomes at the level of individual smallholder and subsistence farmers through larger income streams from the sale of surplus grain, desmodium seeds, fodder, and milk. An economic study calculated the return on investment of push-pull methods for farmers to be over 2.2, as compared with 1.8 for pesticide use and 0.8 for monocropping. Although startup costs of push-pull technology are highly variable, owing to the labor required to plant desmodium and Napier grass and the purchase of these seeds, costs decline significantly in subsequent growing years. Push-pull technology has also been seen to help boost local economies. Because these farmers have more income, they are able to spend money in their local economy, which boosts the standards of living and prosperity of the community at large. The primary economic opponents of such methods are large multinational corporations, such as Monsanto and others, that produce seasonal inputs such as chemical pesticides, fertilizers and high-yield seeds that require such inputs. After controlling for extraneous maize yield determinants, it was found that there was a 61.9% maize yield increase with a 15.3% increase in the cost of maize production and a 38.6% increase in the average net income from maize.
In households where push-pull technology has been adopted in Kenya, increased economic earnings have been associated with more years of education, improved access to rural institutions, and attendance at a larger number of field days, compared with households that have not adopted the technology. Additionally, if adoption of the technology continues at the current rate of 14.4%, a reduction of 75,077 people considered poor could be expected in a situation where the local economies remain closed, and 76,504 fewer people could be expected to be considered poor if the economies were open. Cultural acceptance of push-pull agriculture Because push-pull technology was developed mainly outside of Sub-Saharan Africa—where international agencies today aim to grow its impact the most—the technology initially faced a lack of trust. This distrust was fueled by local suspicions that external agents had hidden self-interested agendas. In relationships where resources to implement new technologies are also externally provided, farmers often feel that they must simply passively follow the instructions they are given; however, efforts have been made in Ethiopia to encourage farmer engagement with the development of push-pull technology, to make the process more collaborative and bridge this gap. Additionally, as mentioned above, push-pull technology is very similar to traditional intercropping methods, which has helped it gain community acceptance. Push-pull technology has also been more widely seen as culturally acceptable and congruent because of the way it provides traditional roles for men and women in the agricultural work. Because push-pull technology can fit within existing family frameworks, the practice does not demand an overhaul of existing dynamics. To further smooth the implementation of push-pull technology, farmers played a participatory and influential role in deciding how the technology would be carried out to best suit their needs and align with traditional practices. For example, local farmers preferred to drill the lines in which seeds would be sown using an ox-drawn plough. In general, by promoting the participatory leadership of local farmers, the prospects of sustainability of such projects are anticipated to be strengthened. History Push–pull technology was developed at the International Centre of Insect Physiology and Ecology (ICIPE) in Kenya in collaboration with Rothamsted Research, UK, and national partners in the 1990s. Research and development for the push-pull strategy was funded by a number of partners, including the Gatsby Charitable Foundation of the UK, the Rockefeller Foundation, the UK’s Department for International Development, and the Global Environment Facility of the UNEP, among others. Future prospects of push-pull agriculture This strategy is based around the use of locally available plants, not costly industrial inputs, making it both more economically feasible and more culturally appropriate, as this method is in many ways similar to traditional African practices of intercropping. For this reason, this method is anticipated to be a popular solution to food insecurity in Sub-Saharan Africa. While this strategy is less resource-intensive, it is more knowledge-intensive. For this reason, mass media campaigns have been launched, public meetings held, printed materials disseminated, and farmer-to-farmer and farmer field school programs established in order to overcome knowledge barriers to the implementation of push-pull technology.
The most efficient, influential, and cost-effective methods of disseminating information and encouraging farmers to adopt push-pull methods have been identified as field days (leading to an approximately 26.8% increase in adoption), farmer field schools (a 22.2% chance of swaying farmers' decisions), and farmer teachers (an 18.1% chance of convincing farmers to adopt the technology). Additionally, it has been found that over 80% of farmers who participate in field days adopt the technology on their land. Another measure that has been taken to boost adoption rates of push-pull technology is to distribute desmodium seeds and other inputs that are required to begin this practice. Distribution of seeds and other required inputs has been made possible through partnerships with seed companies and local farmer groups. To combat the former shortage and high cost of desmodium seeds, which were limiting the spread of push-pull technology, intensive seed production initiatives have been launched and farmer groups have been encouraged to propagate the seeds themselves. As a result of these measures, the market for desmodium seeds has been stimulated and the seeds have become more accessible to smallholder farmers looking to implement push-pull methods in their fields. In Kenya, Tanzania, and Uganda alone, push-pull technology has been adopted by 68,800 smallholder farmers; these numbers may be higher in reality because of gaps in reporting. Because these areas of Sub-Saharan Africa often suffer from unreliable crop production as a result of stemborers and striga, soil infertility, and unsustainable supplies of fodder, the push-pull solution to these problems is expected to be adopted by more smallholder farmers in the future, at an annual adoption rate of 30% and a potential annual adoption rate of 50% thanks to the intensive education campaigns that have been launched. See also Biological pest control Cultural methods Sustainable agriculture List of sustainable agriculture topics Ecotechnology List of companion plants List of beneficial weeds List of pest-repelling plants External links www.push-pull.net
Push–pull agricultural pest management
[ "Chemistry", "Biology" ]
2,587
[ "Biochemistry", "Chemical ecology" ]
5,445,272
https://en.wikipedia.org/wiki/24%20%28puzzle%29
The 24 puzzle is an arithmetical puzzle in which the objective is to find a way to manipulate four integers so that the end result is 24. For example, for the numbers 4, 7, 8, 8, a possible solution is (7 − (8 ÷ 8)) × 4 = 24. Note that all four numbers must be used exactly once. The problem has been played as a card game in Shanghai since the 1960s, using playing cards. It has been known by other names, including Maths24. A proprietary version of the game has been created which extends the concept of the basic game to more complex mathematical operations. Original version The original version of 24 is played with an ordinary deck of playing cards with all the face cards removed. The aces are taken to have the value 1, and the basic game proceeds by having 4 cards dealt; the first player who can achieve the number 24 exactly, using only allowed operations (addition, subtraction, multiplication, division, and parentheses), wins the hand. Some advanced players allow exponentiation, roots, logarithms, and other operations. For short games of 24, once a hand is won, the cards go to the player that won. If everyone gives up, the cards are shuffled back into the deck. The game ends when the deck is exhausted, and the player with the most cards wins. Longer games of 24 proceed by first dealing the cards out to the players, each of whom contributes to each set of cards exposed. A player who solves a set takes its cards and replenishes their pile, after the fashion of War. Players are eliminated when they no longer have any cards. A slightly different version includes the face cards, Jack, Queen, and King, giving them the values 11, 12, and 13, respectively. In a variation of the game played with a standard 52-card deck, there are 1,820 distinct four-card combinations (the number of ways to choose four of the 13 card values, allowing repeats). Expansion to more complex operations Additional operations, such as square root and factorial, allow more possible solutions to the game. For instance, a set of 1, 1, 1, 1 would be impossible to solve with only the five basic operations. However, with the use of factorials, it is possible to get 24 as (1 + 1 + 1 + 1)! = 24. See also Krypto (game) References External links 24game.github.io: an online, open-source version of the 24 puzzle General information from Pagat List of all possible solutions to the puzzle
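The hand-evaluation rule described above (combine all four numbers exactly once using addition, subtraction, multiplication, division, and parentheses) is small enough to brute-force. A minimal solver sketch in Python, added for illustration (not part of the original article); exact rational arithmetic avoids floating-point error:

```python
from fractions import Fraction
from itertools import combinations

def solve24(nums, target=24):
    """Return one expression string that makes `target` from `nums`, or None."""
    # Work with (value, expression) pairs; Fractions keep division exact.
    items = [(Fraction(n), str(n)) for n in nums]

    def search(items):
        if len(items) == 1:
            return items[0][1] if items[0][0] == target else None
        # Pick any two values, combine them every way, and recurse.
        for i, j in combinations(range(len(items)), 2):
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            candidates = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})"),
                          (a - b, f"({ea}-{eb})"), (b - a, f"({eb}-{ea})")]
            if b != 0:
                candidates.append((a / b, f"({ea}/{eb})"))
            if a != 0:
                candidates.append((b / a, f"({eb}/{ea})"))
            for val, expr in candidates:
                found = search(rest + [(val, expr)])
                if found:
                    return found
        return None

    return search(items)

print(solve24([4, 7, 8, 8]))  # prints one valid expression, e.g. ((7-(8/8))*4)
```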
24 (puzzle)
[ "Mathematics" ]
506
[ "Recreational mathematics", "Mathematical games" ]
5,445,341
https://en.wikipedia.org/wiki/Tantalum%28IV%29%20sulfide
Tantalum(IV) sulfide is an inorganic compound with the formula TaS2. It is a layered compound with three-coordinate sulfide centres and trigonal prismatic or octahedral metal centres. It is structurally similar to molybdenum disulfide, MoS2, and to numerous other transition metal dichalcogenides. Tantalum disulfide has three polymorphs, 1T-TaS2, 2H-TaS2, and 3R-TaS2, with trigonal, hexagonal, and rhombohedral symmetry respectively. The properties of the 1T-TaS2 polytype have been studied in particular detail. A charge density wave (CDW), a periodic distortion induced by the electron-phonon interaction, is manifested by the formation of a superlattice constituted by clusters of 13 atoms, called the Star of David (SOD), in which the surrounding 12 Ta atoms move slightly towards the centre of the star. There are three 1T-TaS2 charge density wave phases: the commensurate charge density wave (CCDW), the nearly commensurate charge density wave (NCCDW), and the incommensurate charge density wave (ICCDW). In the CCDW phase, the entire material is covered with the superlattice, whereas in the ICCDW phase the atoms do not move. The NCCDW phase lies between the two: the SOD clusters are confined within nearly hexagonal areas. Phase transitions of 1T-TaS2 can be driven by changing the temperature, which is one of the most investigated ways of switching the material between phases. In common with many other transition metal dichalcogenide (TMD) compounds, which are metallic at high temperatures, it exhibits a series of charge-density-wave (CDW) phase transitions from 550 K to 50 K. It is unusual amongst them in showing a low-temperature insulating state below 200 K, which is believed to arise from electron correlations, similar to many oxides. The insulating state is commonly attributed to a Mott state. On cooling below 550 K, 1T-TaS2 transitions from the metallic state to the ICCDW phase; it then reaches the NCCDW phase on cooling below 350 K, and finally enters the CCDW phase below 180 K. If the temperature is instead raised, however, another phase can appear between the CCDW phase and the NCCDW phase. The triclinic charge density wave (TCDW) is again a hybrid state between CCDW and ICCDW; the difference is that instead of forming enclosed hexagonal areas, the material forms stripes with different atomic shifts. When 1T-TaS2 is heated from low temperature, the first transition is from CCDW to TCDW at 220 K; on further heating above 280 K, the material transitions to the NCCDW phase. It is also superconducting under pressure or upon doping, with a familiar dome-like phase diagram as a function of dopant or substituted isovalent element concentration. Metastability 1T-TaS2 is unique, not only amongst TMDs but also amongst 'quantum materials' in general, in showing a metastable metallic state at low temperatures. Switching from the insulating to the metallic state can be achieved either optically or by the application of electrical pulses. The metallic state is persistent below ~20 K, but its lifetime can be tuned by changing the temperature. The metastable state lifetime can also be tuned by strain. The electrically-induced switching between states is of current interest, because it can be used for ultrafast energy-efficient memory devices. Because of the frustrated triangular arrangement of localized electrons, the material is suspected of supporting some form of quantum spin liquid state. It has been the subject of numerous studies as a host for intercalation of electron donors.
Preparation TaS2 is prepared by reaction of powdered tantalum and sulfur at ~900 °C. It is purified and crystallized by chemical vapor transport using iodine as the transporting agent: TaS2 + 2 I2 ⇌ TaI4 + 2 S It can be easily cleaved and has a characteristic golden sheen. Upon extended exposure to air, the formation of an oxide layer causes darkening of the surface. Thin films can be prepared by chemical vapour deposition and molecular beam epitaxy. Properties Three major crystalline phases are known for TaS2: trigonal 1T with one S-Ta-S sheet per unit cell, hexagonal 2H with two S-Ta-S sheets, and rhombohedral 3R with three S-Ta-S sheets per cell; 4H and 6R phases are also observed, but less frequently. These polymorphs mostly differ by the relative arrangement of the S-Ta-S sheets rather than the sheet structure. 2H-TaS2 is a superconductor with the bulk transition temperature TC = 0.5 K, which increases to 2.2 K in flakes with a thickness of a few atomic layers. The bulk TC value increases up to ~8 K at 10 GPa and then saturates with increasing pressure. In contrast, 1T-TaS2 starts superconducting only at ~2 GPa; as a function of pressure its TC quickly rises up to 5 K at ~4 GPa and then saturates. At ambient pressure and low temperatures 1T-TaS2 is a Mott insulator. Upon heating it changes to a triclinic charge density wave (TCDW) state at TTCDW ~ 220 K, to a nearly commensurate charge density wave (NCCDW) state at TNCCDW ~ 280 K, to an incommensurate CDW (ICCDW) state at TICCDW ~ 350 K, and to a metallic state at TM ~ 600 K. In the CDW state the TaS2 lattice deforms to create a periodic Star of David pattern. Application of short (e.g. 50 fs) optical laser pulses, or of voltage pulses (~2–3 V) through electrodes or in a scanning tunneling microscope (STM), to the CDW state causes a drop in electrical resistance and creates a "mosaic" or domain state consisting of nanometer-sized domains, where both the domains and their walls exhibit metallic conductivity. This mosaic structure is metastable and gradually disappears upon heating. Memory devices and other potential applications Switching of the material to and from the "mosaic", or domain, state by optical or electrical pulses is used for "charge configuration memory" (CCM) devices. The distinguishing feature of such devices is that they exhibit very efficient and fast non-thermal resistance switching at low temperatures. Room temperature operation of a charge-density-wave oscillator and thermally-driven GHz modulation of the CDW state have been demonstrated.
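The hysteretic transition sequence described above reduces to a simple temperature lookup. A sketch in Python, added for illustration, using the approximate transition temperatures quoted in this article (it tracks only the sweep direction, not the full thermal history):

```python
def tas2_1t_phase(T, direction):
    """Approximate CDW phase of 1T-TaS2 at temperature T (kelvin), using the
    transition temperatures quoted in the article. The sequence is hysteretic,
    so the sweep direction matters."""
    if direction == "cooling":
        # metallic -> ICCDW -> NCCDW -> CCDW on cooling
        if T >= 550:
            return "metallic"
        if T >= 350:
            return "ICCDW"
        if T >= 180:
            return "NCCDW"
        return "CCDW"
    else:  # heating
        # CCDW -> TCDW -> NCCDW -> ICCDW -> metallic on heating
        if T < 220:
            return "CCDW"
        if T < 280:
            return "TCDW"
        if T < 350:
            return "NCCDW"
        if T < 600:
            return "ICCDW"
        return "metallic"

# At 250 K the phase depends on how the sample got there:
print(tas2_1t_phase(250, "cooling"), tas2_1t_phase(250, "heating"))  # NCCDW TCDW
```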
Tantalum(IV) sulfide
[ "Physics" ]
1,466
[ "Monolayers", "Atoms", "Matter" ]
5,445,365
https://en.wikipedia.org/wiki/Tantalum%20trialuminide
Tantalum trialuminide (TaAl3) is an inorganic chemical compound. This compound and Ta3Al are stable, refractory, and reflective, and they have been proposed as coatings for use in infrared wave mirrors.
Tantalum trialuminide
[ "Chemistry" ]
60
[ "Intermetallics", "Inorganic compounds", "Aluminides", "Inorganic compound stubs" ]
5,445,371
https://en.wikipedia.org/wiki/Tantalum%28V%29%20bromide
Tantalum(V) bromide is the inorganic compound with the formula Ta2Br10. Its name comes from the compound's empirical formula, TaBr5. It is a diamagnetic, orange solid that hydrolyses readily. The compound adopts an edge-shared bioctahedral structure, which means that two TaBr5 units are joined by a pair of bromide bridges. There is no bond between the Ta centres. Niobium(V) chloride, niobium(V) bromide, niobium(V) iodide, tantalum(V) chloride, and tantalum(V) iodide all share this structural motif. Preparation and handling The material is usually prepared by the reaction of bromine with tantalum metal (or tantalum carbide) at elevated temperatures in a tube furnace. The bromides of the early metals are sometimes preferred to the chlorides because of the relative ease of handling liquid bromine vs gaseous chlorine. Like other molecular halides, it is soluble in nonpolar solvents such as carbon tetrachloride (1.465 g/100 mL at 30 °C), but it reacts with some solvents. It can also be produced from the more accessible oxide by metathesis using aluminium tribromide: 3 Ta2O5 + 10 AlBr3 → 6 TaBr5 + 5 Al2O3 Carbothermal reduction of the oxide in the presence of bromine has also been employed, the byproduct being COBr2.
Tantalum(V) bromide
[ "Chemistry" ]
343
[ "Bromides", "Inorganic compounds", "Metal halides", "Salts" ]
5,445,429
https://en.wikipedia.org/wiki/Terbium%28III%29%20iodide
Terbium(III) iodide (TbI3) is an inorganic chemical compound. Preparation Terbium(III) iodide can be produced by reacting terbium and iodine. Terbium iodide hydrate can be crystallized from solution by reacting hydriodic acid with terbium, terbium(III) oxide, terbium hydroxide or terbium carbonate. An alternative method is reacting terbium and mercury(II) iodide at 500 °C. Structure Terbium(III) iodide adopts the bismuth(III) iodide (BiI3) crystal structure type, with octahedral coordination of each Tb3+ ion by 6 iodide ions.
Terbium(III) iodide
[ "Chemistry" ]
172
[ "Inorganic compounds", "Inorganic compound stubs" ]
5,445,445
https://en.wikipedia.org/wiki/Terbium%28III%29%20oxide
Terbium(III) oxide, also known as terbium sesquioxide, is a sesquioxide of the rare earth metal terbium, having the chemical formula Tb2O3. It is a p-type semiconductor that conducts protons; this conductivity is enhanced when the material is doped with calcium. It may be prepared by the reduction of Tb4O7 in hydrogen at 1300 °C for 24 hours. It is a basic oxide and dissolves easily in dilute acids, forming almost colourless terbium salts: Tb2O3 + 6 H+ → 2 Tb3+ + 3 H2O The crystal structure is cubic and the lattice constant is a = 1057 pm.
Terbium(III) oxide
[ "Chemistry" ]
150
[ "Semiconductor materials", "Inorganic compounds", "Inorganic compound stubs" ]
5,445,483
https://en.wikipedia.org/wiki/Ditellurium%20bromide
Ditellurium bromide is the inorganic compound with the formula Te2Br. It is one of the few stable lower bromides of tellurium. Unlike sulfur and selenium, tellurium forms families of polymeric subhalides where the halide/chalcogen ratio is less than 2. Preparation and properties Te2Br is a gray solid. Its structure consists of a chain of Te atoms with Br occupying a doubly bridged site. It is prepared by heating tellurium with the appropriate stoichiometry of bromine near 215 °C. The corresponding chloride and iodide, Te2Cl and Te2I, are also known. Other tellurium bromides include the yellow liquid Te2Br2, the orange solid TeBr4, and the greenish-black solid TeBr2. Complexes of the type TeBr2(thiourea)2 are well characterized.
Ditellurium bromide
[ "Chemistry" ]
209
[ "Inorganic compounds", "Chalcohalides", "Salts", "Inorganic compound stubs", "Bromides" ]
5,445,494
https://en.wikipedia.org/wiki/Tritellurium%20dichloride
Tritellurium dichloride is the inorganic compound with the formula Te3Cl2. It is one of the more stable lower chlorides of tellurium. Preparation and properties Te3Cl2 is a gray solid. Its structure consists of a long chain of Te atoms, with every third Te center carrying two chloride ligands for the repeat unit -Te-Te-TeCl2-. It is a semiconductor with a band gap of 1.52 eV, which is larger than that for elemental Te (0.34 eV). It is prepared by heating Te with the appropriate stoichiometry of chlorine.
Tritellurium dichloride
[ "Chemistry" ]
150
[ "Chlorides", "Inorganic compounds", "Inorganic polymers", "Molecular electronics", "Salts", "Conductive polymers" ]
5,445,513
https://en.wikipedia.org/wiki/Tellurium%20tetrabromide
Tellurium tetrabromide (TeBr4) is an inorganic chemical compound. It has a similar tetrameric structure to TeCl4. It can be made by reacting bromine and tellurium. In the vapour TeBr4 dissociates: TeBr4 → TeBr2 + Br2 It is a conductor when molten, dissociating into the ions TeBr3+ and Br−. When dissolved in benzene and toluene, TeBr4 is present as the unionized tetramer Te4Br16. In solvents with donor properties, such as acetonitrile (CH3CN), ionic complexes are formed which make the solution conducting: TeBr4 + 2 CH3CN → (CH3CN)2TeBr3+ + Br−
Tellurium tetrabromide
[ "Chemistry" ]
189
[ "Inorganic compounds", "Chalcohalides", "Inorganic compound stubs", "Salts", "Bromides" ]
5,445,533
https://en.wikipedia.org/wiki/Tellurium%20tetraiodide
Tellurium tetraiodide (TeI4) is an inorganic chemical compound. It has a tetrameric structure which is different from the tetrameric solid forms of TeCl4 and TeBr4. In TeI4 the Te atoms are octahedrally coordinated and edges of the octahedra are shared. Preparation Tellurium tetraiodide can be prepared by reacting Te and iodomethane, CH3I. In the vapour TeI4 dissociates: TeI4 → TeI2 + I2 It can also be obtained by reacting telluric acid with hydrogen iodide: Te(OH)6 + 6 HI → TeI4 + I2 + 6 H2O It can also be obtained by reacting the elements, which can also produce tellurium diiodide and tellurium monoiodide, depending on the reaction conditions: Te + 2 I2 → TeI4 TeI4 → TeI2 + I2 Properties Tellurium tetraiodide is an iron-gray solid that decomposes slowly in cold water and quickly in warm water to form tellurium dioxide and hydrogen iodide. It is stable even in moist air and decomposes when heated, releasing iodine. It is soluble in hydriodic acid, forming H[TeI5], and it is slightly soluble in acetone. Tellurium tetraiodide is a conductor when molten, dissociating into the ions TeI3+ and I−. In solvents with donor properties, such as acetonitrile (CH3CN), ionic complexes are formed which make the solution conducting: TeI4 + 2 CH3CN → (CH3CN)2TeI3+ + I− Five modifications of tellurium tetraiodide are known, all of which are composed of tetrameric molecules. The δ form is the most thermodynamically stable form. This form (as well as the α, β and γ forms) is structurally derived from the ε form.
Tellurium tetraiodide
[ "Chemistry" ]
456
[ "Inorganic compounds", "Chalcohalides" ]
5,445,548
https://en.wikipedia.org/wiki/Tellurium%20trioxide
Tellurium trioxide (TeO3) is an inorganic chemical compound of tellurium and oxygen. In this compound, tellurium is in the +6 oxidation state. Polymorphs There are two forms: yellow-red α-TeO3 and grey, rhombohedral β-TeO3, which is less reactive. α-TeO3 has a structure similar to FeF3, with octahedral TeO6 units that share all vertices. Preparation α-TeO3 can be prepared by heating orthotelluric acid, Te(OH)6, at over 300 °C. The β-TeO3 form can be prepared by heating α-TeO3 in a sealed tube with O2 and H2SO4. α-TeO3 is unreactive towards water but is a powerful oxidising agent when heated. With alkalis it forms tellurates. When heated, α-TeO3 loses oxygen to form firstly Te2O5 and then TeO2.
Tellurium trioxide
[ "Chemistry" ]
229
[ "Inorganic compounds", "Oxides", "Inorganic compound stubs", "Salts" ]
5,445,552
https://en.wikipedia.org/wiki/Poussin%20proof
In number theory, a branch of mathematics, the Poussin proof is the proof of an identity related to the fractional part of a ratio. In 1838, Peter Gustav Lejeune Dirichlet proved an approximate formula for the average number of divisors of all the numbers from 1 to n: (1/n) Σ_{k=1..n} d(k) ≈ ln n + 2γ − 1, where d represents the divisor function and γ represents the Euler–Mascheroni constant. In 1898, Charles Jean de la Vallée-Poussin proved that if a large number n is divided by all the primes up to n, then the average fraction by which the quotient falls short of the next whole number is γ: lim_{n→∞} (1/π(n)) Σ_{p≤n} (1 − {n/p}) = γ, where {x} represents the fractional part of x and π represents the prime-counting function. For example, if we divide 29 by 2, we get 14.5, which falls short of 15 by 0.5. References Dirichlet, G. L., "Sur l'usage des séries infinies dans la théorie des nombres", Journal für die reine und angewandte Mathematik 18 (1838), pp. 259–274. Cited in MathWorld article "Divisor Function". de la Vallée Poussin, C.-J., untitled communication, Annales de la Société Scientifique de Bruxelles 22 (1898), pp. 84–90. Cited in MathWorld article "Euler-Mascheroni Constant".
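De la Vallée-Poussin's statement is easy to check numerically. A sketch in Python, added for illustration (the sieve and the value of γ are standard, not taken from the article):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def average_shortfall(n):
    """Average, over primes p <= n, of the amount by which the quotient n/p
    falls short of the next whole number, i.e. ceil(n/p) - n/p."""
    ps = primes_up_to(n)
    return sum(-(n // -p) - n / p for p in ps) / len(ps)  # -(n // -p) is ceil

gamma = 0.5772156649015329  # Euler-Mascheroni constant
print(average_shortfall(10**6), gamma)
# the computed average approaches gamma ~ 0.5772 as n grows
```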
Poussin proof
[ "Mathematics" ]
310
[ "Number theory stubs", "Discrete mathematics", "Number theory" ]
5,446,251
https://en.wikipedia.org/wiki/IMS%20VDEX
IMS Vocabulary Definition Exchange (IMS VDEX) is a mark-up language or grammar for controlled vocabularies developed by IMS Global as an open specification, with the Final Specification being approved in February 2004. IMS VDEX allows the exchange and expression of simple machine-readable lists of human language terms, along with information that may assist a human in understanding the meaning of the various terms, i.e. a flat list of values, a hierarchical tree of values, a thesaurus, a taxonomy, a glossary or a dictionary. Structurally, a vocabulary has an identifier, a title and a list of terms. Each term has a unique key, titles and (optional) descriptions. A term may have nested terms, thus a hierarchical structure can be created. It is possible to define relationships between terms and to add custom metadata to terms. IMS VDEX supports multilinguality: all values intended to be read by a human, i.e. titles, can be defined in one or more languages. Purposes VDEX was designed to supplement other IMS specifications and the IEEE LOM standard by giving additional semantic control to tool developers. IMS VDEX could be used for the following purposes. It is used in practice for other purposes as well. Interfaces providing pre-defined choices – providing radio buttons and drop-down menus for interfaces such as metadata editors or a repository browse tool, based on the vocabulary allowed in the metadata profile used Distributing vocabularies among many users – achieved by simple XML file sharing, or possibly a searchable repository or registry of vocabularies XML stylesheets used to select and generate different views – selecting an overview of an entire vocabulary as an HTML or PDF file, for example; providing scope notes for catalogues; or storing a glossary of terms which are called upon by hyperlinks within a document Validation of metadata instances – validated against an application profile, by comparison of the vocabulary terms used in certain metadata elements with those of the machine-readable version of the vocabularies specified by the application profile. Controlled terms for other IMS specifications and IEEE LOM – both may contain elements where controlled terms should be used. These elements are often specified as being of a vocabulary data type, and a definition of the permitted terms and their usage may be expressed using VDEX. Technical details A VDEX file describing a vocabulary comprises a number of information elements, most of which are relatively simple, such as a string representation of the default (human) language or a URI identifying the value domain (or vocabulary). Some of the elements are ‘containers’ – such as a term – that contain additional elements. Elements may be required or optional, and in some cases, repeatable. Within a term, for example, a description and caption may be defined. Multiple language definitions can be used inside a description by using a langstring element, where the description is paired with the language to be used. Additional elements within a term include media descriptors, which are one or more media files to supplement a term’s description, and metadata, which is used to describe the vocabulary further. The relationship container defines a relationship between terms by identifying the two terms and specifying the type of relationship, such as a term being broader or narrower than another. The term used to specify the type of relationship may conform to the ISO standards for thesauri.
Vocabulary identifiers are unique, persistent URIs, whereas term or relationship identifiers are locally unique strings. VDEX also allows for a default language and vocabulary name to be given, and for whether the ordering of terms within the vocabulary is significant (order significance) to be specified. A profile type is specified to describe the type of vocabulary being expressed; different features of the VDEX model are permitted depending on the profile type, providing a common grammar for several classes of vocabulary. For example, it is possible, in some profile types, for terms to be contained within one another and be nested, which is suited to the expression of hierarchical vocabularies. Five profile types exist: lax, thesaurus, hierarchicalTokenTerms, glossaryOrDictionary and flatTokenTerms. The lax profile is the least restrictive and offers the full VDEX model, whereas the flatTokenTerms profile is the most restrictive and lightweight. VDEX also offers some scope for complex vocabularies, assuming the existence of a well-defined application profile (for exchange interoperability). Some examples are: Faceted schemes – faceted vocabularies are possible with the definition of appropriate relationships Multi-lingual thesauri – metadata could be used within a relationship to achieve multilingual thesauri Polyhierarchical taxonomies – can be expressed using the source/target value pairs in the relationship. Identifiers in VDEX data should be persistent, unique, resolvable, transportable and URI-compliant. Specifically, vocabulary identifiers should be unique URIs, whereas term and relationship identifiers should be locally unique strings. Implementations ALOHA Metadata Tagging Tool — Java-based software project that can read IMS VDEX files. IVIMEDS 1G v1.0 – from The International Virtual Medical School – includes VDEX instances in curriculum maps. Partners can create their own maps in VDEX format and use these to help students search the repository. Skills Profiling Web Service — project that implemented and demonstrated the use of a skills profiling web service using open standards in a medical context. IMS VDEX files were used in the representation of the SPWS hierarchy skills framework. Scottish Doctors — project that used VDEX as a format for expressing curricular outcome systems. VDEX XSLT scripts — developed by The Higher Education Academy Centre for Philosophical and Religious Studies to convert VDEX to XHTML and PostgreSQL. VDEX Implementation Project — carried out by the Institute for Computer-Based Learning at Heriot-Watt University, with a primary objective of creating a tool for editing vocabularies in VDEX format. The project, which ended in January 2004, was based on the Public Draft (not the current Final Specification). VDEX Java Binding — implementation-neutral Java interface for VDEX, also providing a default implementation of that interface and XML marshalling functionality. imsvdex Python egg — API for VDEX XML files. It is free software written in Python. ATVocabularyManager — addon for Plone CMS that uses VDEX as a possible format to define vocabularies. collective.vdexvocabulary — implements IMS VDEX as a standard Zope vocabulary which can also be used in Plone CMS; written in Python. vdexcsv — offers a command-line converter from CSV to VDEX. It is written in Python.
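The structure described above (a vocabulary identifier plus terms, each with a term identifier and language-tagged captions) maps directly onto a small XML document. A hypothetical minimal instance plus a parse sketch using Python's standard library, added for illustration; it mirrors the element names described in this article but is not a validated VDEX file (real instances also declare the IMS namespace and schema location, omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal flat vocabulary in the spirit of IMS VDEX:
# a vocabulary identifier plus terms with termIdentifier and caption.
VDEX_SAMPLE = """
<vdex orderSignificant="false" profileType="flatTokenTerms">
  <vocabIdentifier>http://example.org/vocab/colours</vocabIdentifier>
  <vocabName><langstring language="en">Colours</langstring></vocabName>
  <term>
    <termIdentifier>red</termIdentifier>
    <caption><langstring language="en">Red</langstring></caption>
  </term>
  <term>
    <termIdentifier>green</termIdentifier>
    <caption><langstring language="en">Green</langstring></caption>
  </term>
</vdex>
"""

root = ET.fromstring(VDEX_SAMPLE)
print("Vocabulary:", root.findtext("vocabIdentifier"))
for term in root.iter("term"):
    key = term.findtext("termIdentifier")      # locally unique string key
    caption = term.findtext("caption/langstring")  # human-readable title
    print(f"  {key}: {caption}")
```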
See also IMS Global Learning object metadata References Marc van Coillie, Using IMS VDEX for the EDS AP – EIfEL Antonio Sarasa, Jose Manuel Canabal, Juan Carlos Sacristan, Raquel Jimenez, Using IMS VDEX in Agrega External links IMS VDEX — official resources by IMS Global What is IMS VDEX — JISC CETIS CETIS Metadata and Digital Repository Special Interest Group (SIG) — mailing list for those in UK Higher and Further Education interested in creating, storing and serving educational metadata.
IMS VDEX
[ "Technology" ]
1,572
[ "Data management", "Metadata", "Data" ]
5,447,226
https://en.wikipedia.org/wiki/Current%20differencing%20buffered%20amplifier
A current differencing buffered amplifier (CDBA) is a multi-terminal active component with two inputs and two outputs, developed by Cevdet Acar and Serdar Özoğuz. It is derived from the current feedback amplifier (CFA). Basic operation The characteristic equations of this element can be given as: V_p = V_n = 0, I_z = I_p − I_n, V_w = V_z. Here, the current through the z-terminal follows the difference between the currents through the p-terminal and the n-terminal. Input terminals p and n are internally grounded. The difference of the input currents is converted into the output voltage V_w; therefore, the CDBA element can be considered a special type of current feedback amplifier with a differential current input and a grounded y input. The CDBA simplifies implementation, is free from parasitic capacitances, is able to operate in the frequency range of more than hundreds of MHz (even GHz), and is suitable for current-mode operation, while it also provides a voltage output. Several voltage- and current-mode continuous-time filters, oscillators, analog multipliers, inductance simulators and a PID controller have been developed using this active element. References Acar, C., and Ozoguz, S., "A new versatile building block: current differencing buffered amplifier suitable for analog signal processing filters", Microelectronics Journal, vol. 30, pp. 157–160, 1999. Keskin, A. Ü., "A Four Quadrant Analog Multiplier employing single CDBA", Analog Integrated Circuits and Signal Processing, vol. 40, no. 1, pp. 99–101, 2004. Tangsrirat, W., Klahan, K., Kaewdang, K., and Surakampontorn, W., "Low-Voltage Wide-Band NMOS-Based Current Differencing Buffered Amplifier", ECTI Transactions on Electrical Eng., Electronics, and Communications, vol. 2, no. 1, pp. 15–22, 2004.
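The characteristic equations above lend themselves to a toy numerical model of an ideal CDBA. A sketch in Python, added for illustration (ideal terminal behaviour is assumed, and the 1 kΩ load on the z-terminal is a made-up example value):

```python
def ideal_cdba(i_p, i_n, v_z):
    """Ideal CDBA terminal relations:
    V_p = V_n = 0 (current inputs internally grounded),
    I_z = I_p - I_n (current differencing),
    V_w = V_z (voltage buffering of the z-terminal)."""
    v_p = v_n = 0.0
    i_z = i_p - i_n
    v_w = v_z
    return v_p, v_n, i_z, v_w

# Example: 2 mA into p, 0.5 mA into n, 1 kOhm load on z (illustrative values).
R_z = 1e3
_, _, i_z, _ = ideal_cdba(2e-3, 0.5e-3, 0.0)  # current difference: 1.5 mA
v_z = i_z * R_z                               # external load converts I_z to V_z
_, _, _, v_w = ideal_cdba(2e-3, 0.5e-3, v_z)
print(i_z, v_w)  # 0.0015 A, 1.5 V -- the buffered output voltage
```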
Current differencing buffered amplifier
[ "Technology" ]
425
[ "Electronic amplifiers", "Amplifiers" ]
5,447,381
https://en.wikipedia.org/wiki/Spin%20pumping
Spin pumping is the dynamical generation of pure spin current by the coherent precession of magnetic moments, which can efficiently inject spin from a magnetic material into an adjacent non-magnetic material. The non-magnetic material usually hosts the spin Hall effect, which can convert the injected spin current into a charge voltage that is easy to detect. A spin pumping experiment typically requires electromagnetic irradiation to induce magnetic resonance, which converts energy and angular momenta from electromagnetic waves (usually microwaves) to magnetic dynamics and then to electrons, enabling the electronic detection of electromagnetic waves. The device operation of spin pumping can be regarded as the spintronic analog of a battery. Spin pumping involves an AC effect and a DC effect: the AC effect generates a spin current that oscillates at the same frequency as the microwave source, whereas the DC effect requires that the magnetic dynamics be circularly or elliptically polarized; a linear oscillation can only generate an AC component. Both effects result in a net enhancement of the effective magnetic damping. Spin pumping in ferromagnets The spin current pumped into an adjacent layer by a precessing magnetic moment is given by I_s = (ħ/4π) g↑↓ (m × dm/dt), where I_s is the spin current (the vector indicates the orientation of the spin, not the direction of the current), g↑↓ is the spin-mixing conductance characterizing the spin transparency of the interface, and m = M/M_s is the time-dependent orientation of the moment, with M_s the saturation magnetization. Optical, microwave and electrical methods are also being explored. These devices could be used for low-power data transmission in spintronic devices or to transmit electrical signals through insulators. Spin pumping in antiferromagnets Spin pumping in antiferromagnetic materials does not vanish, because the antiparallel magnetic moments contribute constructively rather than destructively to the spin current; this was theoretically predicted in 2014. Since the frequency of antiferromagnetic resonance is much higher than that of ferromagnetic resonance, spin pumping in antiferromagnets can be utilized to study electromagnetic signals in the sub-terahertz and terahertz regime, as demonstrated by two independent experiments in 2020. Besides higher frequency, spin pumping in antiferromagnets features the chirality degree of freedom of magnetic dynamics that does not exist in ferromagnets. For example, the spin currents pumped by the left-handed and the right-handed resonance modes are opposite in direction. See also Spintronics Spin wave Spin Hall effect
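For circular precession, the formula above separates into an oscillating in-plane part (the AC effect) and a constant out-of-plane part (the DC effect). A numerical sketch in Python, added for illustration; the parameter values (spin-mixing conductance, drive frequency, cone angle) are made-up examples, not from the article:

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J s
g_mix = 1e19              # spin-mixing conductance per unit area, m^-2 (illustrative)
f = 10e9                  # precession frequency, 10 GHz (illustrative)
theta = np.deg2rad(5)     # precession cone angle (illustrative)
omega = 2 * np.pi * f

t = np.linspace(0, 2 / f, 2001)  # two precession periods
# Circular precession of the unit magnetization vector m about the z axis.
m = np.stack([np.sin(theta) * np.cos(omega * t),
              np.sin(theta) * np.sin(omega * t),
              np.full_like(t, np.cos(theta))], axis=1)
dm_dt = np.gradient(m, t, axis=0)

# I_s = (hbar / 4 pi) g_mix (m x dm/dt): spin-polarization vector of the current.
I_s = hbar / (4 * np.pi) * g_mix * np.cross(m, dm_dt)

# x and y components oscillate at the drive frequency (AC effect);
# the z component has a finite time average (DC effect).
print("time-averaged I_s:", I_s.mean(axis=0))
print("analytic DC z-part:", hbar / (4 * np.pi) * g_mix * omega * np.sin(theta) ** 2)
```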
Spin pumping
[ "Physics", "Materials_science" ]
512
[ "Spintronics", "Condensed matter physics" ]
5,447,687
https://en.wikipedia.org/wiki/Bis-tris%20methane
Bis-tris methane, also known as BIS-TRIS or BTM, is a buffering agent used in biochemistry. Bis-tris methane is an organic tertiary amine with labile protons, having a pKa of 6.46 at 25 °C. It is an effective buffer between pH 5.8 and 7.2. Bis-tris methane binds strongly to Cu and Pb ions and weakly to Mg, Ca, Mn, Co, Ni, Zn and Cd ions. See also Bis-tris propane Tris Tricine
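The quoted effective range follows from the Henderson–Hasselbalch relation pH = pKa + log10([base]/[acid]) together with the common rule of thumb that a buffer works well for base:acid ratios between roughly 1:5 and 5:1 (the rule of thumb is standard buffer chemistry, not from the article). A quick check in Python, added for illustration:

```python
from math import log10

pKa = 6.46  # bis-tris methane at 25 C (from the article)

# Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])
for ratio in (1 / 5, 1.0, 5.0):
    print(f"base:acid = {ratio:.2f} -> pH = {pKa + log10(ratio):.2f}")
# 0.20 -> 5.76, 1.00 -> 6.46, 5.00 -> 7.16,
# closely matching the quoted effective range of pH 5.8 to 7.2.
```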
Bis-tris methane
[ "Chemistry" ]
126
[ "Buffer solutions", "Amines", "Bases (chemistry)", "Functional groups" ]
5,447,789
https://en.wikipedia.org/wiki/Residential%20education
Residential education, broadly defined, is a pre-college education provided in an environment where students both live and learn outside their family homes. Some typical forms of residential education include boarding schools, preparatory schools, orphanages, children and youth villages, residential academies, military schools and, most recently, residential charter schools. References Flint, Anthony, "Boarding School Approach to Youths At Risk Questioned", Boston Globe, August 16, 1993. Goldsmith, Heidi, "The Renaissance of Residential Education in the U.S.", Conference Summary, October 2000. External links CORE: the Coalition for Residential Education, Washington, DC Metropolitan Area
Residential education
[ "Biology" ]
129
[ "Behavioural sciences", "Behavior", "Total institutions" ]
5,447,834
https://en.wikipedia.org/wiki/Crow%20instability
In aerodynamics, the Crow instability, or vortex Crow instability (VCI), is an inviscid line-vortex instability, named after its discoverer S. C. Crow. The effect of the Crow instability can often be observed in the skies behind large aircraft, when the wingtip vortices interact with contrails from the engines, producing visible distortions in the shape of the contrail. Instability development The Crow instability is a vortex pair instability, and typically goes through several stages: A pair of counter-rotating vortices act upon each other to amplify small sinusoidal distortions in their vortex shapes (normally created by some initial disturbance in the system). The waves develop into either symmetric or anti-symmetric modes, depending on the nature of the initial disturbance. These distortions grow, both through the interaction of one vortex with another and through 'self-induction' of a vortex with itself. This leads to an exponential growth in the vortex wave amplitude. The vortex amplitudes reach a critical value and the vortices reconnect, forming a chain of vortex rings. Aviation vortices The wings of airplanes in flight produce at least one pair of trailing vortices. These vortices are a major source of wake turbulence, as they persist for a significant period of time after the airplane has passed. If the decay of trailing vortices were due solely to viscous effects in the core of each vortex, decay would be so slow that they would persist for hundreds of miles behind the airplane. In fact, these vortices only persist for tens of miles. The additional cause of the collapse of these vortices is large-scale instabilities such as the Crow instability. External links Scientific American: The Crow Instability Photos: The Crow Instability
Crow instability
[ "Physics" ]
362
[ "Meteorological phenomena", "Physical phenomena", "Earth phenomena" ]
5,448,004
https://en.wikipedia.org/wiki/Nebivolol
Nebivolol is a beta blocker used to treat high blood pressure and heart failure. As with other β-blockers, it is generally a less preferred treatment for high blood pressure. It may be used by itself or with other blood pressure medication. It is taken by mouth. Common side effects include dizziness, feeling tired, nausea, and headaches. Serious side effects may include heart failure and bronchospasm. Its use in pregnancy and breastfeeding is not recommended. It works by blocking β1-adrenergic receptors in the heart and dilating blood vessels. Nebivolol was patented in 1983 and came into medical use in 1997. It is available as a generic medication in the United Kingdom. In 2022, it was the 173rd most commonly prescribed medication in the United States, with more than 3 million prescriptions. Medical uses It is used to treat high blood pressure and heart failure. Nebivolol is used in the treatment of angina to decrease the heart rate and contractile force. This is relevant in patients who need to decrease the oxygen demand of the heart so that the blood supplied from stenosed or constricted arteries is adequate. ACE inhibitors, angiotensin II receptor antagonists, calcium-channel blockers, and thiazide diuretics are generally preferred over beta blockers for the treatment of primary hypertension in the absence of co-morbidities. Pharmacology and biochemistry β1-selectivity Beta blockers help patients with cardiovascular disease by blocking β1 receptors, while many of the side effects of these medications are caused by their blockade of β2 receptors. For this reason, beta blockers that selectively block β1 adrenergic receptors (termed cardioselective or β1-selective beta blockers) produce fewer adverse effects (for instance, bronchoconstriction) than those drugs that non-selectively block both β1 and β2 receptors. In a laboratory experiment conducted on biopsied heart tissue, nebivolol proved to be the most β1-selective of the β-blockers tested, being approximately 3.5 times more β1-selective than bisoprolol. However, the drug's receptor selectivity in humans is more complex and depends on the drug dose and the genetic profile of the patient taking the medication. The drug is highly cardioselective at 5 mg. At doses above 10 mg, however, nebivolol loses its cardioselectivity and blocks both β1 and β2 receptors; while the recommended starting dose of nebivolol is 5 mg, sufficient control of blood pressure may require doses up to 40 mg. Furthermore, nebivolol is also not cardioselective when taken by patients with a genetic makeup that makes them "poor metabolizers" of nebivolol (and other drugs) or by patients taking CYP2D6 inhibitors. As many as 1 in 10 Caucasian people and even more black people are poor CYP2D6 metabolizers and therefore might benefit less from nebivolol's cardioselectivity, although currently there are no directly comparable studies. Nebivolol, while selectively blocking the β1 receptor, acts as a β3 agonist. β3 receptors are found in the gallbladder, the urinary bladder, and brown adipose tissue. Their role in gallbladder physiology is unknown, but they are thought to play a role in lipolysis and thermogenesis in brown fat. In the urinary bladder, β3 activation is thought to cause relaxation of the bladder and prevention of urination. Due to enzymatic inhibition, fluvoxamine increases the exposure to nebivolol and its active hydroxylated metabolite (4-OH-nebivolol) in healthy volunteers. Vasodilator action Nebivolol is unique as a beta-blocker.
Unlike carvedilol, it has a nitric oxide (NO)-potentiating, vasodilatory effect via stimulation of β3 receptors. Nebivolol induces vasodilation by stimulating the production of nitric oxide, a natural blood vessel relaxant. This effect is achieved by activating the endothelial isoform of NO synthase (eNOS) in the cells lining the blood vessels. Unlike traditional β-blockers, nebivolol's unique mechanism of action improves arterial flexibility and reduces peripheral resistance, making it beneficial for hypertensive patients with endothelial dysfunction. The drug's ability to increase NO production persists even after metabolism, offering long-lasting benefits. Nebivolol's distinct approach to promoting NO release has shown promising results in improving endothelial function and managing hypertension in clinical trials. Along with labetalol, celiprolol and carvedilol, it is one of four beta blockers to cause dilation of blood vessels in addition to effects on the heart. Antihypertensive effect Nebivolol lowers blood pressure (BP) by reducing peripheral vascular resistance, and significantly increases stroke volume with preservation of cardiac output. The net hemodynamic effect of nebivolol is the result of a balance between the depressant effects of beta-blockade and an action that maintains cardiac output. Antihypertensive responses were significantly higher with nebivolol than with placebo in trials enrolling patient groups considered representative of the U.S. hypertensive population, in black people, and in those receiving concurrent treatment with other antihypertensive drugs. Pharmacokinetics Nebivolol plasma protein binding is approximately 98%, mostly to albumin, and its half-life at low doses is 12 hours in extensive CYP2D6 metabolizers and 19 hours in poor metabolizers. Contraindications Severe bradycardia Heart block greater than first degree Patients with cardiogenic shock Decompensated cardiac failure Sick sinus syndrome (unless a permanent pacemaker is in place) Patients with severe hepatic impairment (Child-Pugh class B) Patients who are hypersensitive to any component of this product. Side effects Side effects might include headache, tiredness, dizziness, lightheadedness, reduced blood flow to extremities, and bradycardia. Controversies Pharmacology of side-effects Several studies have suggested that nebivolol has reduced typical beta-blocker-related side effects, such as fatigue, clinical depression, bradycardia, or impotence. However, according to the FDA, Bystolic is associated with a number of serious risks. Bystolic is contraindicated in patients with severe bradycardia, heart block greater than first degree, cardiogenic shock, decompensated cardiac failure, sick sinus syndrome (unless a permanent pacemaker is in place), severe hepatic impairment (Child-Pugh > B) and in patients who are hypersensitive to any component of the product. Bystolic therapy is also associated with warnings regarding abrupt cessation of therapy, cardiac failure, angina and acute myocardial infarction, bronchospastic diseases, anesthesia and major surgery, diabetes and hypoglycemia, thyrotoxicosis, peripheral vascular disease, and non-dihydropyridine calcium channel blocker use, as well as precautions regarding use with CYP2D6 inhibitors, impaired renal and hepatic function, and anaphylactic reactions. Finally, Bystolic is associated with other risks as described in the Adverse Reactions section of its PI.
For example, a number of treatment-emergent adverse events were identified in clinical studies with an incidence greater than or equal to 1 percent in Bystolic-treated patients and at a higher frequency than in placebo-treated patients; these included headache, fatigue, and dizziness. FDA warning letter about advertising claims In August 2008, the FDA issued a Warning Letter to Forest Laboratories citing exaggerated and misleading claims in their launch journal ad, in particular over claims of superiority and novelty of action. History Mylan Laboratories licensed the US and Canadian rights to nebivolol from Janssen Pharmaceutica N.V. in 2001. Nebivolol is registered and successfully marketed in more than 50 countries, including the United States, where it is marketed under the brand name Bystolic from Mylan Laboratories and Forest Laboratories. Nebivolol is manufactured by Forest Laboratories. In India, nebivolol is available as Nebula (Zydus Healthcare Ltd), Nebizok (Eris life-sciences), Nebicip (Cipla ltd), Nebilong (Micro Labs), Nebistar (Lupin ltd), Nebicard (Torrent), Nubeta (Abbott Healthcare Pvt Ltd – India), and Nodon (Cadila Pharmaceuticals). In Greece and Italy, nebivolol is marketed by Menarini as Lobivon. In Germany it is marketed as Nebilet by Berlin Chemie. In the Middle East, Russia and Australia, it is marketed under the name Nebilet, and in Pakistan it is marketed by The Searle Company Limited as Byscard. References Wikipedia medicine articles ready to translate Beta blockers Chromanes Fluoroarenes Drugs developed by AbbVie Diols Secondary alcohols Amines
Nebivolol
[ "Chemistry" ]
1,970
[ "Amines", "Bases (chemistry)", "Functional groups" ]
5,448,125
https://en.wikipedia.org/wiki/Wormholes%20in%20fiction
A wormhole is a postulated method, within the general theory of relativity, of moving from one point in space to another without crossing the space between. Wormholes are a popular feature of science fiction as they allow faster-than-light interstellar travel within human timescales. A related concept in various fictional genres is the portable hole. While there is no clear demarcation between the two, this article deals with fictional, but pseudo-scientific, treatments of faster-than-light travel through space. A jumpgate is a fictional device able to create an Einstein–Rosen bridge portal (or wormhole), allowing fast travel between two points in space. In franchises Stargate franchise Wormholes are the principal means of space travel in the Stargate movie and the spin-off television series, Stargate SG-1, Stargate Atlantis and Stargate Universe, to the point where it has been called the franchise that is "far and away most identified with wormholes". The central plot device of the programs is an ancient transportation network consisting of the ring-shaped devices known as Stargates, which generate artificial wormholes that allow one-way matter transmission and two-way radio communication between gates when the correct spatial coordinates are "dialed". However, for some reason not yet explained, the water-like event horizon breaks down the matter and converts it into energy for transport through the wormhole, restoring it into its original state at the destination. This would explain why electromagnetic energy can travel both ways: it does not have to be converted. The one-way rule may be caused by the Stargates themselves: a Gate may only be capable of creating an event horizon that either breaks down or reconstitutes matter, but not both. It does serve as a very useful plot device: when one wants to return to the other end, one must close the original wormhole and "redial", which means one needs access to the dialing device. The one-way nature of the Stargates helps to defend the gate from unwanted incursions. Also, Stargates can sustain an artificial wormhole for only 38 minutes. It is possible to keep it active for a longer period, but it would take immense amounts of energy. The wormholes generated by the Stargates are based on the misconception that wormholes in 3D space have 2D (circular) event horizons; a proper visualization of a wormhole in 3D space would have a spherical event horizon. Babylon 5 and Crusade In the television series Babylon 5 and its spin-off series Crusade, jump points are artificial wormholes that serve as entrances and exits to hyperspace, allowing for faster-than-light travel. Jump points can either be created by larger ships (battleships, destroyers, etc.) or by standalone jumpgates. In the B5 universe, jumpgates are considered neutral territory. It is considered a gross violation of normal rules of engagement to attack them directly, as the jumpgate network is needed by every spacefaring race. However, in wartime, it is common for powers to program their gates to deny access to opposing sides, thus forcing enemies to use their own jump points. Farscape The television series Farscape features an American astronaut who accidentally gets shot through a wormhole and ends up in a distant part of the universe, and also features the use of wormholes to reach other universes (or "unrealized realities") and as weapons of mass destruction. 
Wormholes are the cause of John Crichton's presence in the far reaches of our galaxy and the focus of an arms race of different alien species attempting to obtain Crichton's perceived ability to control them. Crichton's brain was secretly implanted with knowledge of wormhole technology by one of the last members of an ancient alien species. Later, an alien interrogator discovers the existence of the hidden information, and thus Crichton becomes embroiled in interstellar politics and warfare while being pursued by all sides (as they want the ability to use wormholes as weapons). Unable to directly access the information, Crichton is able to subconsciously foretell when and where wormholes will form and is able to safely travel through them (while all attempts by others are fatal). By the end of the series, he eventually works out some of the science and is able to create his own wormholes (and shows his pursuers the consequences of a wormhole weapon). Star Trek franchise Early in the storyline of Star Trek: The Motion Picture, an antimatter imbalance in the refitted Enterprise starship's warp drive power systems creates an unstable ship-generated wormhole directly ahead of the vessel, threatening to rip the starship apart, partially through its increasingly severe time dilation effects, until Commander Pavel Chekov fires a photon torpedo to blast apart a sizable asteroid that was pulled in with the starship (and directly ahead of it), destabilizing the wormhole effect and throwing the Enterprise clear as it slowed to sub-light velocities. Near the end of the film, Willard Decker recalls that "Voyager 6" (a.k.a. V'ger) disappeared into what they used to call a "black hole". At one time, black holes in science fiction were often endowed with the traits of wormholes. This has for the most part disappeared: a black hole is not a hole in space but a dense mass, and the visible vortex effect often associated with black holes is merely the accretion disk of visible matter being drawn toward it. Decker's line most likely indicates that what Voyager 6 entered was probably a wormhole, although the intense gravity of a black hole does warp the fabric of spacetime. In the Star Trek: The Next Generation episode "A Matter of Time", Captain Jean-Luc Picard acknowledged that, since the first wormholes were discovered, students had been asked questions about the ramifications of accidentally changing history for the worse through knowledge obtained by traveling through wormholes. The setting of the television series Star Trek: Deep Space Nine is a space station, Deep Space 9, located near the artificially created Bajoran wormhole. This wormhole is unique in the Star Trek universe because of its stability. In an earlier episode of Star Trek: The Next Generation it was established that wormholes are generally unstable on one or both ends – either the end(s) move erratically or they do not open reliably. The Bajoran wormhole is stationary on both ends and opens consistently, bridging the Alpha and Gamma quadrants and enabling starship travel across vast distances. It serves as a strategic gateway that introduces the Alpha quadrant to the threatening Dominion and provides one method of communication with the non-physical entities, known as the Prophets, who inhabit it. 
Discovered at the start of the series, the existence of the wormhole and the various consequences of its discovery elevate the strategic importance of the space station and are a major factor in most of the overarching plots over the course of the series. In the Star Trek: Voyager episode "Counterpoint", an alien scientist explains that the term wormhole is often used as a layman's term and describes various spatial anomalies. Examples of such wormholes in Star Trek are the intermittent cyclical vortex, interspatial fissure, and interspatial flexure or spatial flexure in the episode "Q2", and the spatial vortex in the episode "Night". In the episode "Inside Man", an artificially created wormhole was named a geodesic fold. In the 2009 Star Trek film, red matter is used to create artificial black holes. A large one acts as a conduit through spacetime and sends Spock and Nero back in time. Doctor Who The Rift, which appears in the long-running British science-fiction series Doctor Who and its spin-off Torchwood, is a wormhole. One of its mouths is located in Cardiff Bay, Wales, and the other floats freely throughout space-time. It is the central plot device in the latter show. In "Planet of the Dead", a wormhole transports a London double-decker bus to a barren, desert-like planet. The wormhole could only be navigated safely by a metal object, and human tissue is not meant for inter-space travel, as demonstrated by the bus driver, who is burnt to the bone on attempting to get back to Earth. It is suggested that the Time Vortex was created by the Time Lords (an ancient and powerful race of human-looking aliens that can control space and time; the protagonist is one of them) to allow travel of TARDISes (Time And Relative Dimension In Space) to any point in spacetime. Marvel Cinematic Universe In the 2011 film Thor, the Bifrost is reimagined as an Einstein–Rosen bridge which is operated by the gatekeeper, Heimdall, and used by Asgardians to travel between the Nine Realms. In the 2012 film The Avengers, Loki uses the Tesseract to arrive on Earth and summon the Chitauri to invade New York. In the 2013 film Thor: The Dark World, the Bifrost Bridge is repaired using the Tesseract, and is once again used by Asgardians for space travel. Additionally, Jane Foster and her associates encounter a wormhole in London which teleports her to Svartalfheim. In the 2017 film Thor: Ragnarok, Thor is teleported to the planet Sakaar via a wormhole, where he learns that Bruce Banner and Loki had both landed on the planet via wormholes as well. The largest one, referred to as the "Devil's Anus", is described by Banner as "a collapsing Neutron Star within an Einstein-Rosen Bridge". In the 2018 film Avengers: Infinity War, Thanos acquires the Space Stone from the Statesman and uses it to generate wormholes and travel between different points of the Universe. In literature In some earlier analyses of general relativity, the event horizon of a black hole was believed to form an Einstein-Rosen bridge. In music In games In television and film fiction See also Space bridge References Further reading Lists of astronomical locations in fiction Speculative fiction lists
Wormholes in fiction
[ "Astronomy" ]
2,100
[ "Lists of astronomical locations in fiction", "Astronomy-related lists" ]
5,449,205
https://en.wikipedia.org/wiki/Round%20of%20drinks
A round of drinks is a set of alcoholic beverages purchased by one person in a group for that complete group. The purchaser buys the round of drinks as a single order at the bar. In many places it is customary for people to take turns buying rounds. It is a nearly ubiquitous custom in Ireland, the United Kingdom, Canada, New Zealand, and Australia. In Australia and New Zealand it is referred to as shouting. This practice is also customary in many parts of North America, especially in areas where people with cultural roots in Ireland and the UK predominate. A notable exception was the UK State Management Scheme, in which treating (i.e. buying a round) was forbidden from July 1916 until June 1919. Greaves' Rules Greaves' Rules is a lighthearted set of etiquette guidelines, common in the UK, governing whose turn it is to buy a round of drinks in an English public house. The rules were first defined by William Greaves (April 1938 – November 2017), a London journalist of the defunct Today newspaper, as a Saturday morning essay in the paper, based upon his long experience of pubs and rounds. They immediately attracted a wide following in drinking circles and are known internationally as a representation of the spirit of drinking in an English pub. When an individual arrives at a pub, common practice invites the newcomer to unilaterally offer a drink to a companion, with the unspoken understanding that when the drink has been nearly consumed, his or her companion will reciprocate. Trust and fair play are the root of the rules, though there are occasions (such as when one of the drinkers needs to carry out more important jobs, if any can be conceived of) where the rules can be broken; these exceptions were itemised by Greaves in his article. See, for example, a copy of Greaves' Rules in the Oxford Pub Guide, with particular reference to rule 7 and especially rule 8. The rules were later re-commissioned by the Daily Telegraph and published in that newspaper on 20 November 1993, and copies soon appeared in many bars throughout the UK. Kate Fox, a social anthropologist, came up with a similar idea in her book Watching the English, but concluded their rationale was the need to minimise the possibility of violence between drinking companions. Australia In John O'Grady's They're a Weird Mob, Nino learns some customs related to shouting:
"Your turn."
"What is my turn?"
"Your turn to shout."
"Why should I shout?"
"Because I shouted you."
"I did not hear you shout at me."
He thought for a while and said, "I get it. When you buy a bloke a beer, it's called a shout, see?"
"Why is that?"
"I haven't a clue, but that's what it's called. I shouted for you, now it's your turn to shout for me."
"I was only a little thirsty. I do not think I wish another drink."
He looked quite stern. "In this country, if you want to keep out of trouble, you always return a shout, see?"
"Is this the custom?"
"Bloody oath, it's the custom. Your turn."
United States In the culture of the United States Military, possession of a challenge coin can be used to determine who buys a round of drinks. One individual of a group lays down their coin, and all others present must lay down their coins as well. Anyone who does not have a coin with them must buy a round. 
If everyone can produce a coin, the challenger must buy a round. References Further reading Gutfeld, Greg. Lessons from the Land of Pork Scratchings, London: Simon and Schuster, 2008. Etiquette Drinking culture
Round of drinks
[ "Biology" ]
797
[ "Etiquette", "Behavior", "Human behavior" ]
5,449,464
https://en.wikipedia.org/wiki/Vizing%27s%20theorem
In graph theory, Vizing's theorem states that every simple undirected graph may be edge colored using a number of colors that is at most one larger than the maximum degree Δ of the graph. At least Δ colors are always necessary, so the undirected graphs may be partitioned into two classes: "class one" graphs for which Δ colors suffice, and "class two" graphs for which Δ + 1 colors are necessary. A more general version of Vizing's theorem states that every undirected multigraph without loops can be colored with at most Δ + μ colors, where μ is the multiplicity of the multigraph. The theorem is named for Vadim G. Vizing, who published it in 1964. Discovery The theorem discovered by Soviet mathematician Vadim G. Vizing was published in 1964, when Vizing was working in Novosibirsk, and became known as Vizing's theorem. Indian mathematician R. P. Gupta independently discovered the theorem while undertaking his doctorate (1965–1967). Examples When Δ = 1, the graph must itself be a matching, with no two edges adjacent, and its edge chromatic number is one. That is, all graphs with Δ = 1 are of class one. When Δ = 2, the graph must be a disjoint union of paths and cycles. If all cycles are even, they can be 2-edge-colored by alternating the two colors around each cycle. However, if there exists at least one odd cycle, then no 2-edge-coloring is possible. That is, a graph with Δ = 2 is of class one if and only if it is bipartite.
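The class one / class two dichotomy can be checked directly on small graphs. The following brute-force sketch in Python is an illustration added here (the function name and test graphs are invented for the example); it computes the edge chromatic number by backtracking and, as Vizing's theorem guarantees for simple graphs, always returns either Δ or Δ + 1:

def edge_chromatic_number(edges):
    """Minimum number of colors in a proper edge coloring, found by
    backtracking; Vizing's theorem guarantees the answer is either
    Delta or Delta + 1 for a simple graph."""
    edges = list(edges)
    n = len(edges)
    # Two edges conflict when they share an endpoint.
    conflicts = [[j for j in range(n)
                  if j != i and set(edges[i]) & set(edges[j])]
                 for i in range(n)]
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    delta = max(degree.values())

    def colorable(k):
        color = [None] * n
        def place(i):
            if i == n:
                return True
            for c in range(k):
                if all(color[j] != c for j in conflicts[i]):
                    color[i] = c
                    if place(i + 1):
                        return True
            color[i] = None
            return False
        return place(0)

    # Try Delta colors first (class one); otherwise Delta + 1 suffices.
    return delta if colorable(delta) else delta + 1

# C5, an odd cycle: Delta = 2 but three colors are needed (class two).
print(edge_chromatic_number([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 3
# K4: Delta = 3 and three colors suffice (class one).
print(edge_chromatic_number([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # 3

The exponential-time search is only practical for very small graphs; the polynomial-time algorithm discussed later achieves Δ + 1 colors for every graph.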
Proof This proof follows a standard textbook presentation. Let G = (V, E) be a simple undirected graph with maximum degree Δ. We proceed by induction on m, the number of edges. If the graph is empty, the theorem trivially holds. Let m > 0 and suppose that a proper (Δ + 1)-edge-coloring exists for all G − xy, where xy ∈ E. We say that a color α ∈ {1, ..., Δ + 1} is missing at a vertex x ∈ V with respect to a proper (Δ + 1)-edge-coloring c if c(xy) ≠ α for all y ∈ N(x). Also, let the αβ-path from x denote the unique maximal path starting at x with an α-colored edge and alternating the colors of edges (the second edge has color β, the third edge has color α, and so on); its length can be 0. Note that if c is a proper (Δ + 1)-edge-coloring of G, then every vertex has a missing color with respect to c. Suppose that no proper (Δ + 1)-edge-coloring of G exists. This is equivalent to the following statement: (1) Let xy ∈ E and let c be an arbitrary proper (Δ + 1)-edge-coloring of G − xy, with α missing at x and β missing at y with respect to c. Then the αβ-path from y ends in x. This is equivalent because, if (1) does not hold, we can interchange the colors α and β on the αβ-path and set the color of xy to α, thus creating a proper (Δ + 1)-edge-coloring of G from c. The other way around, if a proper (Δ + 1)-edge-coloring exists, then we can delete xy, restrict the coloring, and (1) will not hold either. Now, let xy_0 ∈ E, let c_0 be a proper (Δ + 1)-edge-coloring of G − xy_0, and let α be missing at x with respect to c_0. We define y_0, ..., y_k to be a maximal sequence of neighbours of x such that c_0(xy_i) is missing at y_{i−1} with respect to c_0 for all 0 < i ≤ k. We define colorings c_1, ..., c_k by c_i(xy_j) = c_0(xy_{j+1}) for all 0 ≤ j < i, with c_i(xy_i) not defined, and c_i(e) = c_0(e) otherwise. Then c_i is a proper (Δ + 1)-edge-coloring of G − xy_i due to the definition of y_0, ..., y_k. Also, note that the colors missing at x are the same with respect to c_i for all 0 ≤ i ≤ k. Let β be the color missing at y_k with respect to c_0; then β is also missing at y_k with respect to c_i for all 0 ≤ i ≤ k. Note that β cannot be missing at x, otherwise we could easily extend c_k; therefore an edge with color β is incident to x in every c_j. From the maximality of k, there exists 1 ≤ i ≤ k such that c_0(xy_i) = β. From the definition of c_1, ..., c_k, this holds: c_0(xy_i) = c_{i−1}(xy_i) = c_k(xy_{i−1}) = β. Let P be the αβ-path from y_k with respect to c_k. From (1), P has to end in x. But α is missing at x, so P has to end with an edge of color β. Therefore, the last edge of P is y_{i−1}x. Now, let P′ be the αβ-path from y_{i−1} with respect to c_{i−1}. Since P′ is uniquely determined and the inner edges of P are not changed in c_0, ..., c_k, the path P′ uses the same edges as P in reverse order and visits y_k. The edge leading to y_k clearly has color α. But β is missing at y_k, so P′ ends in y_k. This is a contradiction with (1) above. Classification of graphs Several authors have provided additional conditions that classify some graphs as being of class one or class two, but do not provide a complete classification. For instance, if the vertices of maximum degree Δ in a graph G form an independent set, or more generally if the induced subgraph for this set of vertices is a forest, then G must be of class one. Erdős and Wilson showed that almost all graphs are of class one. That is, in the Erdős–Rényi model of random graphs, in which all n-vertex graphs are equally likely, let p(n) be the probability that an n-vertex graph drawn from this distribution is of class one; then p(n) approaches one in the limit as n goes to infinity. More precise bounds on the rate at which p(n) converges to one are also known. Planar graphs Vizing showed that a planar graph is of class one if its maximum degree is at least eight. In contrast, he observed that for any maximum degree in the range from two to five, there exist planar graphs of class two. For degree two, any odd cycle is such a graph, and for degree three, four, and five, these graphs can be constructed from platonic solids by replacing a single edge by a path of two adjacent edges. In Vizing's planar graph conjecture, Vizing states that all simple, planar graphs with maximum degree six or seven are of class one, closing the remaining possible cases. Independently, Zhang, and Sanders and Zhao, partially proved Vizing's planar graph conjecture by showing that all planar graphs with maximum degree seven are of class one. Thus, the only case of the conjecture that remains unsolved is that of maximum degree six. This conjecture has implications for the total coloring conjecture. The planar graphs of class two constructed by subdivision of the platonic solids are not regular: they have vertices of degree two as well as vertices of higher degree. The four color theorem on vertex coloring of planar graphs, proved by Appel and Haken, is equivalent to the statement that every bridgeless 3-regular planar graph is of class one. Graphs on nonplanar surfaces In 1969, Branko Grünbaum conjectured that every 3-regular graph with a polyhedral embedding on any two-dimensional oriented manifold such as a torus must be of class one. In this context, a polyhedral embedding is a graph embedding such that every face of the embedding is topologically a disk and such that the dual graph of the embedding is simple, with no self-loops or multiple adjacencies. If true, this would be a generalization of the four color theorem, which was shown by Tait to be equivalent to the statement that 3-regular graphs with a polyhedral embedding on a sphere are of class one. However, Kochol showed the conjecture to be false by finding snarks that have polyhedral embeddings on high-genus orientable surfaces. Based on this construction, he also showed that it is NP-complete to tell whether a polyhedrally embedded graph is of class one. Algorithms Misra and Gries describe a polynomial time algorithm for coloring the edges of any graph with Δ + 1 colors, where Δ is the maximum degree of the graph. That is, the algorithm uses the optimal number of colors for graphs of class two, and uses at most one more color than necessary for all graphs. 
Their algorithm follows the same strategy as Vizing's original proof of his theorem: it starts with an uncolored graph, and then repeatedly finds a way of recoloring the graph in order to increase the number of colored edges by one. More specifically, suppose that uv is an uncolored edge in a partially colored graph. The algorithm of Misra and Gries may be interpreted as constructing a directed pseudoforest P (a graph in which each vertex has at most one outgoing edge) on the neighbors of u: for each neighbor p of u, the algorithm finds a color c that is not used by any of the edges incident to p, finds the vertex q (if it exists) for which the edge uq has color c, and adds pq as an edge to P. There are two cases: If the pseudoforest P constructed in this way contains a path from v to a vertex w that has no outgoing edges in P, then there is a color c that is available both at u and at w. Recoloring edge uw with color c allows the remaining edge colors to be shifted one step along this path: for each vertex p in the path, edge up takes the color that was previously used by the successor of p in the path. This leads to a new coloring that includes edge uv. If, on the other hand, the path starting from v in the pseudoforest P leads to a cycle, let w be the neighbor of u at which the path joins the cycle, let c be the color of edge uw, and let d be a color that is not used by any of the edges at vertex u. Then swapping colors c and d on a Kempe chain either breaks the cycle or the edge on which the path joins the cycle, leading to the previous case. With some simple data structures to keep track of the colors that are used and available at each vertex, the construction of P and the recoloring steps of the algorithm can all be implemented in time O(n), where n is the number of vertices in the input graph. Since these steps need to be repeated m times, with each repetition increasing the number of colored edges by one, the total time is O(mn). An unpublished technical report has claimed a faster time bound for the same problem of coloring with Δ + 1 colors. History In both of his papers on the subject, Vizing mentions that his work was motivated by a theorem of Shannon showing that multigraphs could be colored with at most (3/2)Δ colors. Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz. See also Brooks' theorem relating vertex colorings to maximum degree External links Proof of Vizing's theorem at PlanetMath. Graph coloring Theorems in graph theory
Vizing's theorem
[ "Mathematics" ]
2,069
[ "Graph coloring", "Graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Theorems in graph theory" ]
5,450,517
https://en.wikipedia.org/wiki/Corrosion%20in%20space
Corrosion in space is the corrosion of materials occurring in outer space. Instead of moisture and oxygen acting as the primary corrosion causes, the materials exposed to outer space are subjected to vacuum, bombardment by ultraviolet and X-rays, solar energetic particles (mostly electrons and protons from the solar wind), and electromagnetic radiation. In the upper layers of the atmosphere (between 90–800 km), atmospheric atoms, ions, and free radicals, most notably atomic oxygen, play a major role. The concentration of atomic oxygen depends on altitude and solar activity, as bursts of ultraviolet radiation cause photodissociation of molecular oxygen. Between 160 and 560 km, the atmosphere consists of about 90% atomic oxygen. Materials Corrosion in space has the highest impact on spacecraft with moving parts. Early satellites tended to develop problems with seizing bearings. Now the bearings are coated with a thin layer of gold. Different materials resist corrosion in space differently. Electrolytes in batteries or cooling loops can cause galvanic corrosion, general corrosion, and stress corrosion. Aluminium is slowly eroded by atomic oxygen, while gold and platinum are highly corrosion-resistant. Gold-coated foils and thin layers of gold on exposed surfaces are therefore used to protect the spacecraft from the harsh environment. Thin layers of silicon dioxide deposited on the surfaces can also protect metals from the effects of atomic oxygen; e.g., the aluminium front mirrors of the Starshine 3 satellite were protected that way. However, the protective layers are subject to erosion by micrometeorites. Silver builds up a layer of silver oxide, which tends to flake off and has no protective function; such gradual erosion of the silver interconnects of solar cells was found to be the cause of some observed in-orbit failures. Many plastics are considerably sensitive to atomic oxygen and ionizing radiation. Coatings resistant to atomic oxygen are a common protection method, especially for plastics. Silicone-based paints and coatings are frequently employed, due to their excellent resistance to radiation and atomic oxygen. However, silicone durability is somewhat limited, as the surface exposed to atomic oxygen is converted to silica, which is brittle and tends to crack. Solving corrosion The process of space corrosion is being actively investigated. One of the efforts aims to design a sensor based on zinc oxide, able to measure the amount of atomic oxygen in the vicinity of the spacecraft; the sensor relies on the drop in the electrical conductivity of zinc oxide as it absorbs further oxygen. Other problems The outgassing of volatile silicones on low Earth orbit devices leads to the presence of a cloud of contaminants around the spacecraft. Together with atomic oxygen bombardment, this may lead to gradual deposition of thin layers of carbon-containing silicon dioxide. Their poor transparency is a concern in the case of optical systems and solar panels. Deposits of up to several micrometers were observed after 10 years of service on the solar panels of the Mir space station. Other sources of problems for structures subjected to outer space are erosion and redeposition of materials by sputtering caused by fast atoms and micrometeoroids. Another major concern, though of a non-corrosive kind, is material fatigue caused by cyclical heating and cooling and the associated thermal-expansion mechanical stresses. 
See also Space weathering References External links The Cosmos on a Shoestring: Small Spacecraft for Space and Earth Science, Appendix B: Failure in Spacecraft Systems PDF New Scientist premium article: Space is corrosive NASA Long Duration Exposure Facility: surface contamination in space Corrosion Spaceflight
Corrosion in space
[ "Chemistry", "Materials_science", "Astronomy" ]
705
[ "Outer space", "Metallurgy", "Corrosion", "Electrochemistry", "Materials degradation", "Spaceflight" ]
5,450,573
https://en.wikipedia.org/wiki/List%20of%20Space%20Shuttle%20crews
This is a list of persons who served aboard Space Shuttle crews, arranged in chronological order by Space Shuttle missions. Abbreviations: PC = Payload Commander MSE = USAF Manned Spaceflight Engineer Mir = Launched to be part of the crew of the Mir Space Station ISS = Launched to be part of the crew of the International Space Station. Names of astronauts returning from the Mir or ISS on the Space Shuttle are shown in italics. They did not have specific crew roles, but are listed in the Payload Specialist columns for reasons of space. Only two flights have carried more than seven crew members for either launch or landing. STS-61-A in 1985 is the only flight to have both launched and landed with a crew of eight, and STS-71 in 1995 is the only other flight to have landed with a crew of eight. 1977 * Note 1: In this year, Approach and Landing Tests (ALT) were accomplished. These were atmospheric only, non-spaceflight tests from a Boeing 747 Shuttle Carrier Aircraft, both with the orbiter attached and for a series of drop-test flights. ** Note 2: The durations listed count only the orbiter free-flight time, and not total time aloft along with airborne time atop of the 747 SCA. *** Note 3: Flights with the orbiter attached to the SCA for the duration, but both crewed and powered to test crew procedures and orbiter systems. 1981–1985 1986–1990 1991–1995 1996–2000 2001–2005 2006–2011 See also List of Space Shuttle missions Space Shuttle program External links Spacefacts.de - List of crews scheduled for future human spaceflight Space Shuttle program Space Shuttle crews Astronauts by space program
List of Space Shuttle crews
[ "Engineering" ]
345
[ "Space programs", "Astronauts by space program" ]
5,450,650
https://en.wikipedia.org/wiki/Tom%20Maibaum
Thomas Stephen Edward Maibaum Fellow of the Royal Society of Arts (FRSA) is a computer scientist. Maibaum has a Bachelor of Science (B.Sc.) undergraduate degree in pure mathematics from the University of Toronto, Canada (1970), and a Doctor of Philosophy (Ph.D.) in computer science from Queen Mary and Royal Holloway Colleges, University of London, England (1974). Maibaum has held academic posts at Imperial College, London, King's College London (UK) and McMaster University (Canada). His research interests have concentrated on the theory of specification, together with its application in different contexts, in the general area of software engineering. From 1996 to 2005, he was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He is a Fellow of the Institution of Engineering and Technology and the Royal Society of Arts. References External links KCL home page , McMaster University Living people 20th-century Hungarian people Hungarian expatriates in Canada University of Toronto alumni Hungarian expatriates in England Alumni of Queen Mary University of London Alumni of Royal Holloway, University of London Academics of Imperial College London Academics of King's College London Hungarian computer scientists Formal methods people Academic staff of McMaster University Fellows of the Institution of Engineering and Technology Year of birth missing (living people)
Tom Maibaum
[ "Engineering" ]
313
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
5,451,403
https://en.wikipedia.org/wiki/Tunnel%20hull
A tunnel hull is a type of boat hull that uses two, typically planing, hulls with a solid centre section that traps air. This entrapment creates aerodynamic lift in addition to the planing (hydrodynamic) lift from the hulls, an effect often attributed to ground effect. Theoretical research and full-scale testing of tunnel hulls have demonstrated the dramatic contribution of close-proximity ground effect to the enhanced aerodynamic lift and reduced drag achieved by performance tunnel hull designs. Tunnel hulls are distinguishable from other catamarans by their typically close hull spacing and the solid deck in between the hulls. Formula 1 powerboats use a tunnel hull catamaran design, allowing them to reach higher speeds than conventional hulls. Tunnel hulls are a common design in offshore powerboat racing. References See also Cathedral hull Hickman sea sled Boston Whaler Supercavitation propeller Offshore Powerboat Racing Shipbuilding
Tunnel hull
[ "Engineering" ]
182
[ "Shipbuilding", "Marine engineering" ]
10,746,852
https://en.wikipedia.org/wiki/Transmedia%20storytelling
Transmedia storytelling (also known as transmedia narrative or multiplatform storytelling) is the technique of telling a single story or story experience across multiple platforms and formats using current digital technologies. From a production standpoint, transmedia storytelling involves creating content that engages an audience using various techniques to permeate their daily lives. In order to achieve this engagement, a transmedia production will develop stories across multiple forms of media in order to deliver unique pieces of content in each channel. Importantly, these pieces of content are not only linked together (overtly or subtly), but are in narrative synchronization with each other. History Transmedia storytelling can be related to the concepts of semiotics and narratology. Semiotics is the "science of signs" and a discipline concerned with sense production and interpretation processes. The origin of the approach of dispersing content across various commodities and media is traced to the Japanese marketing strategy of media mix, which originated in the early 1960s. Some, however, have traced the roots to Pamela: Or, Virtue Rewarded (1740) written by Samuel Richardson, and some even suggest that they go back further, to the roots of the earliest literature. Some works include, but are not limited to:
Ong's Hat, most likely started sometime around 1993, which included most of the aforementioned design principles. Ong's Hat also incorporated elements of legend tripping into its design, as chronicled in a scholarly work titled Legend-Tripping Online: Supernatural Folklore and the Search for Ong's Hat.
Dreadnot, an early example of an ARG-style project, published on sfgate.com in 1996. This ARG included working voice mail phone numbers for characters, clues in the source code, character email addresses, off-site websites, and real locations in San Francisco.
The Harry Potter franchise (1997–present), a best-selling book series that spawned films, officially developed immersive fan sites, social media, video games, off-Broadway stage plays, and spin-off films (Fantastic Beasts and Where to Find Them, Fantastic Beasts: The Crimes of Grindelwald, and Fantastic Beasts: The Secrets of Dumbledore).
The Beatles.
Defiance, a television show and video game paired to tell connective and separate stories.
Definition The study of transmedia storytelling—a concept introduced by Henry Jenkins, author of the seminal book Convergence Culture—is an emerging subject. Because of the nature of new media and different platforms, varying authors have different understandings of it. Jenkins states the term "transmedia" means "across media" and may be applied to superficially similar, but different phenomena. In particular, the concept of "transmedia storytelling" should not be confused with traditional cross-platform, "transmedia" media franchises, or "media mixes". One example that Jenkins gives is of the media conglomerate DC Comics. This organization releases comic books before the release of its related films so the audience understands a character's backstory. Much of transmedia storytelling is not based on singular characters or plot lines, but rather focuses on larger complex worlds where multiple characters and plot lines can be sustained for a longer period of time. In addition, Jenkins focused on how transmedia extends to attract larger audiences. For example, DC Comics releases coloring books to attract younger audience members. 
Sometimes, audience members can feel as though some transmedia storylines have left gaps in the plot line or character development, so they begin another extension of transmedia storytelling, such as fan fiction. Transmedia storytelling exists in the form of transmedia narratives, which Kalinov and Markova define as "a multimedia product which communicates its narrative through a multitude of integrated media channels". In his book You're Gonna Need a Bigger Story, Houston Howard describes transmedia storytelling as "the art of extending a story across multiple mediums and multiple platforms in a way that creates a better business model for creators and a better experience for the audience." In "Ball & Flint: transmedia in 90 seconds" (2013), Pont likens transmedia storytelling to "throwing a piece of flint at an old stone wall" and "delighting in the ricochet", making story something you can now "be hit by and cut by". Shannon Emerson writes in the blog post "Great Examples of Multiplatform Storytelling" that transmedia storytelling can also be called multiplatform storytelling, transmedia narrative, and even cross-media seriality. She also cites Henry Jenkins as a leading scholar in this realm. Educational uses Transmedia storytelling mimics daily life, making it a strong constructivist pedagogical tool for educational uses. The level of engagement offered by transmedia storytelling is essential to the Me or Millennial Generation, as no single medium satisfies their curiosity. Schools have been slow to adopt the emergence of this new culture, which shifts the spotlight of literacy from being one of individual expression to one of community. Whether we see it or not, Jenkins notes that we live in a globally connected world in which we use multiple platforms to connect and communicate. Using transmedia storytelling as a pedagogical tool, wherein students interact with platforms such as Twitter, Facebook, Instagram, or Tumblr, permits students' viewpoints, experiences, and resources to establish a shared collective intelligence that is enticing, engaging, and immersive, catching millennial learners' attention and ensuring learners a stake in the experience. Transmedia storytelling offers the educator the ability to lead students to think critically, identify with the material, and gain knowledge, offering a valuable framework for the constructivist educational pedagogy that supports student-centered learning. Transmedia storytelling allows for the interpretation of the story from the individual perspective, making way for personalized meaning-making and, in the case of fully participatory projects, allowing participants to become co-creators of the story. In "The Better Mousetrap: Brand Invention in a Media Democracy" (2012), Pont explains, "Transmedia thinking anchors itself to the world of story, the ambition principally being one of how you can 'bring story to life' in different places, in a non-linear fashion. The marketing of movies is the most obvious application of this concept. Transmedia maintains that there's a 'bigger picture opportunity' to punting a big picture to additional platforms. Transmedia theory, applied to a movie launch, is all about promoting the story, not the 'premiere date of a movie starring...' In an industry built on the conventions of 'stars sell movies', where their name sits above the film's title, transmedia thinking is anti-conventional and boldly purist." 
Transmedia storytelling is also used by companies like Microsoft and Kimberly-Clark to train employees and managers. Anders Gronstedt and Marc Ramos say, "At the core of every training challenge is a good story waiting to be told. More and more, these stories are being told across a multitude of devices and screens, where they can reach learners more widely, and engage with them more deeply." However, transmedia storytelling is not used much at lower education levels. Children would thrive using transmedia storytelling worlds in their learning, but many of these worlds have copyrights linked to them. Transmedia storytelling has yet to tackle learning and educating children, but a few transmedia worlds connected with education have begun to appear, mostly from Disney. Transmedia storytelling is apparent in comics, films, print media, radio, and now social media. The story is told differently depending on the medium. With social media, the story is told differently depending on which social media platform someone uses (Twitter, Facebook, Instagram). The scale of the impact also differs from medium to medium. Before social media, radio and print media were the primary media for connecting with an audience. With the advancements in technology, social media has become the go-to medium for reaching a large group of people in a short amount of time. In the ideal form of TS, "each medium does what it does best — so that a story might be introduced in a film, expanded through television, novels, and comics, and its world might be explored and experienced through game play. Each franchise entry needs to be self-contained enough to enable autonomous consumption. That is, you don't need to have seen the film to enjoy the game and vice-versa." References Further reading Azemard, Ghislaine (2013), 100 Notions for Crossmedia and Transmedia, éditions de l'immatériel, p. 228. Bernardo, Nuno (2014), Transmedia 2.0: How to Create an Entertainment Brand Using a Transmedial Approach to Storytelling. Kérchy, Anna (2016), Alice in Transmedia Wonderland: Curiouser and Curiouser New Forms of a Children's Classic. Jefferson: McFarland. McAdams, Mindy (2016), Transmedia Storytelling. Conference paper: World Journalism Education Congress, Auckland, New Zealand. Phillips, Andrea (2012), Transmedia Storytelling. Pont, Simon (2013), Digital State: How the Internet is Changing Everything. Kogan Page. Pont, Simon (2012), The Better Mousetrap: Brand Invention in a Media Democracy. Kogan Page. Pratten, Robert (2015), Getting Started in Transmedia Storytelling: A Practical Guide for Beginners, 2nd edition. Vernallis, Carol, Holly Rogers and Lisa Perrott (2020), Transmedia Directors: Artistry, Industry and New Audiovisual Aesthetics. Queiroz, Cecília; Cunha, Regina et al. (2014), "Interactive Narratives, New Media & Social Engagement", Toronto, Canada. Imagining Transmedia by Ed Finn, Bob Beard, Joey Eschrich, Ruth Wylie. Storytelling
Transmedia storytelling
[ "Technology" ]
1,996
[ "Multimedia", "Multimedia works" ]
10,747,879
https://en.wikipedia.org/wiki/Lazy%20learning
(Not to be confused with the lazy learning regime; see Neural tangent kernel.) In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize the training data before receiving queries. The primary motivation for employing lazy learning, as in the k-nearest neighbors algorithm used by online recommendation systems ("people who viewed/purchased/listened to this movie/item/tune also ..."), is that the data set is continuously updated with new entries (e.g., new items for sale at Amazon, new movies to view at Netflix, new clips at YouTube, new music at Spotify or Pandora). Because of the continuous updates, the "training data" would be rendered obsolete in a relatively short time, especially in areas like books and movies, where new best-sellers or hit movies/music are published/released continuously. Therefore, one cannot really talk of a "training phase". Lazy classifiers are most useful for large, continuously changing datasets with few attributes that are commonly queried. Specifically, even if a large set of attributes exists - for example, books have a year of publication, author/s, publisher, title, edition, ISBN, selling price, etc. - recommendation queries rely on far fewer attributes - e.g., purchase or viewing co-occurrence data, and user ratings of items purchased/viewed. Advantages The main advantage gained in employing a lazy learning method is that the target function will be approximated locally, such as in the k-nearest neighbor algorithm. Because the target function is approximated locally for each query to the system, lazy learning systems can simultaneously solve multiple problems and deal successfully with changes in the problem domain. At the same time, they can reuse a lot of theoretical and applied results from linear regression modelling (notably the PRESS statistic) and control. The advantage of such a system is realized when predictions from any single training set are needed for only a few objects. This can be demonstrated in the case of the k-NN technique, which is instance-based: the target function is estimated only locally, at query time.
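To make the division of work concrete, the following minimal Python sketch (an illustration only; the class name and toy data are invented for the example, not taken from any particular library) shows the lazy pattern: "training" merely stores examples, and all generalization is deferred to prediction time, so newly added items are usable immediately.

import math
from collections import Counter

class LazyKNN:
    """A minimal lazy learner based on the k-nearest-neighbor rule."""

    def __init__(self, k=3):
        self.k = k
        self.points = []   # stored feature vectors
        self.labels = []   # stored class labels

    def add(self, x, y):
        # "Training" is just a constant-time append; nothing is
        # generalized here, so the stored data never goes stale.
        self.points.append(x)
        self.labels.append(y)

    def predict(self, q):
        # Generalization happens only now, locally around the query q:
        # take the k stored points closest to q and vote on the label.
        nearest = sorted(zip(self.points, self.labels),
                         key=lambda pl: math.dist(pl[0], q))[:self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

model = LazyKNN(k=3)
for x, y in [((0, 0), "a"), ((0, 1), "a"),
             ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]:
    model.add(x, y)
print(model.predict((1, 1)))  # -> "a"
print(model.predict((5, 4)))  # -> "b"

Note that every call to predict scans the stored data; this is exactly the trade-off discussed under the disadvantages below.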
Disadvantages Theoretical disadvantages with lazy learning include: The large space requirement to store the entire training dataset. In practice, this is not an issue, because of advances in hardware and the relatively small number of attributes (e.g., co-occurrence frequency) that need to be stored. Particularly noisy training data increases the case base unnecessarily, because no abstraction is made during the training phase. In practice, as stated earlier, lazy learning is applied to situations where any learning performed in advance soon becomes obsolete because of changes in the data. Also, for the problems for which lazy learning is optimal, "noisy" data does not really occur - the purchaser of a book has either bought another book or hasn't. Lazy learning methods are usually slower to evaluate. In practice, for very large databases with high concurrency loads, the queries are not postponed until actual query time, but recomputed in advance on a periodic basis - e.g., nightly, in anticipation of future queries - and the answers stored. This way, the next time new queries are asked about existing entries in the database, the answers are merely looked up rapidly instead of having to be computed on the fly, which would almost certainly bring a high-concurrency multi-user system to its knees. Larger training sets also entail increased cost; in particular, the computational budget is fixed, and a processor can only process a limited number of training data points in a given time. There are standard techniques to improve re-computation efficiency so that a particular answer is not recomputed unless the data that impact this answer have changed (e.g., new items, new purchases, new views). In other words, the stored answers are updated incrementally. This approach, used by large e-commerce or media sites, has long been used in the Entrez portal of the National Center for Biotechnology Information (NCBI) to precompute similarities between the different items in its large datasets: biological sequences, 3-D protein structures, published-article abstracts, etc. Because "find similar" queries are asked so frequently, the NCBI uses highly parallel hardware to perform nightly recomputation. The recomputation is performed only for new entries in the datasets against each other and against existing entries: the similarity between two existing entries need not be recomputed. Examples of Lazy Learning Methods K-nearest neighbors, which is a special case of instance-based learning. Local regression. Lazy naive Bayes rules, which are extensively used in commercial spam detection software. Here, the spammers keep getting smarter and revising their spamming strategies, and therefore the learning rules must also be continually updated. References Further reading lazy: Lazy Learning for Local Regression, R package with reference manual Webb G.I. (2011) Lazy Learning. In: Sammut C., Webb G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA David W. Aha: Lazy learning. Kluwer Academic Publishers, Norwell 1997, ISBN 0-7923-4584-3. Bontempi, Birattari, Bersini, Hugues Bersini, Iridia: Lazy Learning for Local Modeling and Control Design. 1997. Machine learning
Lazy learning
[ "Engineering" ]
1,131
[ "Artificial intelligence engineering", "Machine learning" ]
10,747,995
https://en.wikipedia.org/wiki/Eager%20learning
In artificial intelligence, eager learning is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system. The main advantage gained in employing an eager learning method, such as an artificial neural network, is that the target function will be approximated globally during training, thus requiring much less space than using a lazy learning system. Eager learning systems also deal much better with noise in the training data. Eager learning is an example of offline learning, in which post-training queries to the system have no effect on the system itself, and thus the same query to the system will always produce the same result. The main disadvantage with eager learning is that it is generally unable to provide good local approximations in the target function. References Machine learning
Eager learning
[ "Engineering" ]
183
[ "Artificial intelligence engineering", "Machine learning" ]
10,748,030
https://en.wikipedia.org/wiki/Offline%20learning
Offline learning is a machine learning training approach in which a model is trained on a fixed dataset that is not updated during the learning process. This dataset is collected beforehand, and the learning typically occurs in a batch mode (i.e., the model is updated using batches of data, rather than a single input-output pair at a time). Once the model is trained, it can make predictions on new, unseen data. In online learning, only the set of possible elements is known, whereas in offline learning, the learner also knows the order in which they are presented. See also Online machine learning Incremental learning References Machine learning
Offline learning
[ "Technology", "Engineering" ]
136
[ "Artificial intelligence engineering", "Computing stubs", "Machine learning" ]
10,748,580
https://en.wikipedia.org/wiki/OJSC%20Dolomite
OJSC Dolomite () forms part of the Russian metallurgical complex, being the only producer of metallurgical dolomite in the Central Black Earth economic region. The company mines 55% of the total amount of dolomite produced in Russia and 43% of that produced in the CIS. It is part of the NLMK Group. The company has been developing the Dankov dolomite field (Lipetsk Oblast) since 1932. The product mix includes fluxed and converter dolomite, dolomite flour, and crushed rock for construction and road works. The facility is located near developed transport infrastructure, which is strategically advantageous for its customers. In 2005, the company's production reached 1.9 million tonnes. Dolomite is mainly sold in the domestic market. The main customers are steelmaking companies; their share is 69% of the total sales volume. NLMK's share in the company's sales structure amounted to 51% in 2005. References Companies based in Lipetsk Oblast Manufacturing companies of the Soviet Union Metallurgical facilities Non-renewable resource companies established in 1932 NLMK Group
OJSC Dolomite
[ "Chemistry", "Materials_science" ]
233
[ "Metallurgy", "Metallurgical facilities" ]
10,749,616
https://en.wikipedia.org/wiki/Lactarius%20rubrilacteus
Lactarius rubrilacteus is a species of mushroom of the genus Lactarius. It is also known as the bleeding milkcap, as is at least one other member of the genus, Lactarius sanguifluus. Description The mushroom can have either a bluish-green or an orange-brown hue, with creamy white or yellow spores that are ellipsoid in shape. Greenish colors are more common in old, damaged or unexpanded specimens. The cap of the mushroom is convex, sometimes shield-shaped, with a markedly inrolled margin and a depressed central disk. Lactarius rubrilacteus has many laticifers, which appear as a white network across the surface of the mushroom. When sliced or cut, the mushroom flesh will typically release a dark red to purple latex or milky substance. The flesh itself will lose colour when damaged, and is usually granular or brittle to the touch. The stem is coloured as the cap, thin, and up to several centimetres long. The fungus itself exudes a slight odour that is faintly aromatic. This mushroom is edible but of little interest. It is commonly found with a small blue or green mushroom attached at the base, and it bruises green. Similar species Lactarius deliciosus is a related species, but its cap differs in appearance. L. sanguifluus is also similar. Distribution and habitat The mushroom is primarily found in parts of western North America, growing in forests and on the ground. The mushroom usually finds cover under conifer trees, mainly Douglas fir. It is widely distributed in these areas between the months of June and October. Chemical reactivity Potassium hydroxide: When the mushroom comes in contact with potassium hydroxide, most of the mushroom, including the mantle and ectomycorrhizae, loses its bluish hue and becomes a dull brown. Melzer's reagent: Hardly any visible reaction occurs on any part of the mushroom; this particular mushroom appears to have little reactivity to Melzer's reagent. Sulfovanillin: Most of the mushroom becomes a reddish-brown color, but the oldest roots of the fungi stay unaltered by contact with sulfovanillin. See also List of Lactarius species References rubrilacteus Fungi described in 1979 Fungi of North America Edible fungi Taxa named by Alexander H. Smith Fungus species
Lactarius rubrilacteus
[ "Biology" ]
497
[ "Fungi", "Fungus species" ]
10,749,667
https://en.wikipedia.org/wiki/Bleeding%20milkcap
The term bleeding milkcap is used to describe at least two mushrooms of the genus Lactarius: Lactarius rubrilacteus Lactarius sanguifluus
Bleeding milkcap
[ "Biology" ]
37
[ "Set index articles on fungus common names", "Set index articles on organisms" ]
10,751,045
https://en.wikipedia.org/wiki/Novozymes
Novozymes A/S was a global biotechnology company headquartered in Bagsværd, outside of Copenhagen, Denmark. The company's focus was the research, development and production of industrial enzymes, microorganisms, and biopharmaceutical ingredients. The company merged with Chr. Hansen to form Novonesis in January 2024. Prior to the merger, the company had operations around the world, including in China, India, Brazil, Argentina, the United Kingdom, the United States, and Canada. Class B shares of its stock were listed on the NASDAQ OMX Nordic exchange. History In 1925, the brothers Harald and Thorvald Pedersen founded Novo Terapeutisk Laboratorium and Nordisk Insulinlaboratorium with the aim of producing insulin. In 1941, the company's predecessor launched its first enzyme, trypsin, extracted from the pancreas of animals and used to soften leather; in the 1950s, it became the first to produce enzymes by fermentation using bacteria. In the late 1980s, the company presented Lipolase, the world's first fat-splitting enzyme for detergents manufactured with genetically engineered microorganisms. The current Novozymes was founded in 2000 as a spinout from pharmaceutical company Novo Nordisk. In the 2000s, Novozymes expanded through the acquisition of several companies focusing on business outside the core enzyme business. Amongst them were the Brazilian bio-agricultural company Turfal and the German pharmaceutical, chemical and life science company EMD/Merck Crop BioScience Inc. These acquisitions made Novozymes a leader in sustainable solutions for the agricultural biological industry. In January 2016, the company spun out its biopharmaceutical operations into Albumedix. In June 2020, the business announced it would acquire Ireland-based PrecisionBiotics for $90 million. In December of the same year, Novozymes announced it would acquire Microbiome Labs in a $125 million deal. On 12 December 2023, it was announced that Novozymes and the Danish bioscience company Chr. Hansen had obtained regulatory approval for a merger, and on the following day, the name of the combined company was revealed as Novonesis. Ownership The Novozymes class A share capital is held by Novo Holdings A/S, a wholly owned subsidiary of the Novo Nordisk Foundation. In addition, Novo A/S holds 5,826,280 B shares, which overall gives Novo A/S 25.5% of the total share capital and 70.1% of the votes. References External links Forbes Magazine: "100 Corporations That Will Survive 100 Years" (January 28, 2009) Companies listed on Nasdaq Copenhagen Biotechnology companies of Denmark Life science companies based in Copenhagen Companies based in Gladsaxe Municipality Pharmaceutical companies established in 1925 Danish companies established in 2000 Danish brands Biotechnology companies established in 1925 Yeast banks Companies in the OMX Copenhagen 25 Companies in the S&P Europe 350 Dividend Aristocrats
Novozymes
[ "Biology" ]
604
[ "Life sciences industry", "Life science companies based in Copenhagen" ]
10,751,304
https://en.wikipedia.org/wiki/Motion-induced%20blindness
Motion-induced blindness (MIB), also known as Bonneh's illusion, is a visual illusion in which a large, continuously moving pattern erases from perception some small, continuously presented, stationary dots when one looks steadily at the center of the display. It was discovered by Bonneh, Cooperman, and Sagi (2001), who used a swarm of blue dots moving on a virtual sphere as the larger pattern and three small yellow dots as the smaller pattern. They found that after about 10 seconds, one or more of the dots disappeared for brief, random times. The illustrated version is a reproduction of an MIB display used by Michael Bach (2002). Bach replaced the 3D swarm of blue dots with a flat, rotating matrix of blue crosses and added a central, green, flashing dot for people to keep their eyes on. This produces robust disappearances of the yellow dots. Bonneh et al. attributed the causes of the illusion to attentional mechanisms, arguing that the visual system operates in a winner-takes-all manner. Related illusions Disappearances of easily visible, stationary patterns presented to one eye can happen when a different pattern is presented to the other eye—binocular rivalry, discovered in 1593. This also happens when the other eye's pattern is moving. Similar, but weaker, disappearances happen when the two patterns are both presented to one or to both eyes—monocular rivalry, discovered in 1898. Moreover, easily visible stationary patterns that are away from where one looks can disappear with steady fixation—Troxler's fading, discovered in 1804. Other related illusions are flash suppression and motion-induced interocular suppression. Causes Interhemispheric switch There is a correlation between an individual's switch rate during binocular rivalry and the rate of disappearance and reappearance in MIB in the same individual. This is most evident when the investigation involves an adequate sample from the 8–10× range of switch rates in the human population. In addition, transcranial magnetic stimulation (TMS) interruption of the MIB cycle is jointly specific for both the hemisphere receiving the TMS pulse and the phase of the MIB cycle, with the disappearance phase susceptible to interruption via left-hemisphere TMS and the reappearance phase susceptible to right-hemisphere interruption. In this way, MIB is like binocular rivalry, where hemispheric manipulations using caloric vestibular stimulation (or TMS) also require the correct combination of cerebral hemisphere and phase (one of four possibilities). From these observations, it can be argued that MIB is an interhemispheric switching phenomenon, an unexpected member of the class of rhythmic, biphasic, perceptual rivalries such as binocular rivalry and plaid motion rivalry. In this formulation, the disappearance in MIB can be understood in terms of the cognitive style of the left hemisphere, which chooses a single possibility from the many, and ignores or "denies" the others (denial being one of the characteristic defence mechanisms of the left hemisphere, which becomes exaggerated in the left-hemisphere bias of mania). MIB reappearance is attributable to the right hemisphere, whose "discrepancy detector" cognitive style assesses all possibilities, and therefore disagrees with the biased decision to ignore the bright yellow stimulus.
A corollary of this formulation is a predictable connection between MIB and mood, which was successfully tested on thousands of viewers watching ABC TV's Catalyst program in Australia, where longer disappearance phases were observed in euphoric individuals, and very short, or absent, disappearances were a feature of the dysphoria of stress, trauma and depression. Surface completion Numerous psychophysical findings emphasize the importance of surface completion and depth cues in visual perception. Thus, if MIB is affected by these factors, it will be regulated in accordance with simple occlusion principles. In their study, Graf et al. (2002) stereoscopically presented a moving grid stimulus set behind, in front of, or in the same plane as the static dots. They then showed involuntary completion of the grid elements into a surface interacting with the static targets, creating an illusion of occlusion. When the grid appeared in front of the targets, the proportion of disappearance was larger than when it was behind or on the same plane. Although to a lesser extent, MIB did nonetheless occur in the conditions where perceptual occlusion was not taking place (targets were in front of the mask). The effect of interposition and perceived depth on target disappearance in MIB was also shown in a study by Hsu et al. (2010), where a concave target appearing behind its surround disappeared more frequently than a convex one appearing in front of the mask. These effects, albeit less significant, were replicated in similar settings without the use of motion. The above experiments show that surface completion and simple occlusion principles can predictably modulate MIB. However, they do not explain the origin of MIB, and may only be evoking other processes contingent upon it. Moreover, the surface completion theory does not explain the role of motion in this phenomenon. Perceptual filling-in Hsu et al. (2004) compared MIB to the similar phenomenon of perceptual filling-in (PFI), which likewise reveals a striking dissociation between the percept and the sensory input. They describe both as visual attributes which are perceived in a certain region of the visual field regardless of being in the background (in the same manner as colour, brightness or texture), thus inducing target disappearance. They argue that, because in both MIB and PFI the disappearance, or the incorporation of the background motion stimuli, becomes more profound with an increase in eccentricity, with a decrease in contrast, and when perceptual grouping with other stimuli is controlled for, the two illusions are very likely to be the result of shared processes. Since MIB and PFI appear to be structurally similar, it seems plausible that MIB is a phenomenon responsible for completing missing information across the blind spot and scotomas where motion is involved. Motion streak suppression Rather than a deficiency of our visual processing, MIB may be a functional side effect of the visual system's attempt to facilitate a better perception of movement. Wallis and Arnold (2009) propose a plausible explanation of target disappearance in MIB by linking it to the processes responsible for motion streak suppression. In their view, target disappearance is a side effect of our vision's attempt to provide an apparent perception of moving form. MIB appears to be hindered at equiluminance and augmented at the trailing edges of movement, all reminiscent of motion streak suppression.
It appears that what drives MIB is a competition between a neural signal sensitive to spatiotemporal luminance and one responding to proximate stationary targets, with the stronger signal determining what we actually perceive at any given moment (Donner et al., 2008). Perceptual scotoma A different explanatory approach, by New and Scholl (2008), proposes that the phenomenon is another instance of our visual system's endeavor to provide clear and accurate perception. Because the static targets appear to be invariant with respect to the background motion, the visual system removes them from our awareness, discarding them as contrary to the logic of perception and real-life situations, and thus treating the region as a piece of disaffiliated retina or a scotoma. Consistent with this account is the fact that targets which are stabilized on the retina are more likely to disappear than ones moving across the retina. Implications MIB may reveal the mechanisms underlying our visual perception. Researchers have speculated about whether MIB occurs outside the laboratory without being noticed. Situations such as night driving, in which drivers may see the stationary red tail lights of preceding cars disappear temporarily when they attend to the moving stream of lights from oncoming traffic, may be cases in point. See also Binocular rivalry Flash suppression Monocular rivalry Motion-induced interocular suppression Troxler's fading Visual perception References Optical illusions Binocular rivalry
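The Bach-style display described above is straightforward to reproduce in code. The following is a minimal sketch using Python and matplotlib; the grid spacing, target positions, and rotation speed are illustrative assumptions, not the parameters used in the original studies.

```python
# A minimal sketch of a Bach-style MIB display: a rotating grid of blue
# crosses (the moving mask), three static yellow dots, and a central
# green fixation point. Layout values are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots(figsize=(6, 6), facecolor="black")
ax.set_facecolor("black")
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
ax.set_aspect("equal")
ax.axis("off")

# Static targets: three yellow dots and a green fixation point.
ax.scatter([-6, 6, 0], [6, 6, -7], s=80, c="yellow", zorder=3)
ax.scatter([0], [0], s=40, c="lime", zorder=3)

# Moving mask: a square lattice of points drawn as '+' markers.
gx, gy = np.meshgrid(np.arange(-8, 9, 2.0), np.arange(-8, 9, 2.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mask = ax.scatter(grid[:, 0], grid[:, 1], marker="+", s=120, c="blue", zorder=2)

def update(frame):
    # Rotate the whole lattice rigidly about the fixation point.
    theta = np.radians(frame)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    mask.set_offsets(grid @ rot.T)
    return (mask,)

anim = FuncAnimation(fig, update, frames=range(0, 360, 2), interval=40, blit=True)
plt.show()
```

With steady fixation on the central green point, one or more of the yellow dots should intermittently vanish after several seconds of viewing, as described in the studies above.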
Motion-induced blindness
[ "Physics" ]
1,671
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
10,751,948
https://en.wikipedia.org/wiki/Forced%20degradation
Forced degradation or accelerated degradation is a process whereby the natural degradation rate of a product or material is increased by the application of an additional stress. Introduction Forced degradation studies are used to identify reactions which may degrade a processed product. Usually conducted before final formulation, forced degradation uses external stresses to rapidly screen material stabilities. Longer-term storage tests are usually used to measure similar properties when final formulations are involved, because of the stringent FDA regulations. These tests are generally more expensive (because of the time involved) than forced degradation, which is therefore used for rapid selection and elimination tests. Common stresses There are a number of common stresses which are used to accelerate degradation: pH (acid/base) Chemical processes are often catalysed by the presence of acids and bases. The exposure of materials to these can therefore accelerate degradation reactions. Temperature In accordance with Arrhenius kinetics, increasing temperature increases the rate of any degradation process, as illustrated in the sketch following this entry. Temperature is often used in conjunction with other stresses to increase reaction rates. Oxidation Concentration Light Methodologies Standard methodologies include: Wet chemistry methods Flow chemistry Calor Application To demonstrate specificity when developing a stability-indicating method. To help identify reactions that cause degradation of a pharmaceutical product. As part of a method development strategy. To generate product-related variants by design. See also Chemical decomposition Thermogravimetric analysis Total productive maintenance External links Flow chemistry degradation by Syrris Chemical process engineering
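As a rough illustration of the temperature stress described above, the Arrhenius equation k = A·exp(−Ea/RT) gives the factor by which a degradation rate increases between a storage temperature and a stress temperature. This is a minimal sketch; the activation energy and temperatures are illustrative assumptions, not values for any particular product.

```python
# Arrhenius acceleration factor between two temperatures.
# The activation energy (83 kJ/mol) is an illustrative assumption.
import math

R = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(ea_j_mol: float, t_storage_k: float, t_stress_k: float) -> float:
    """Ratio k(T_stress)/k(T_storage) from k = A*exp(-Ea/RT); A cancels."""
    return math.exp(-ea_j_mol / R * (1.0 / t_stress_k - 1.0 / t_storage_k))

# Example: stressing at 60 degrees C a product normally stored at 25 degrees C.
factor = acceleration_factor(83_000, 298.15, 333.15)
print(f"~{factor:.0f}x faster degradation at 60 C than at 25 C")  # roughly 34x
```

Such an estimate is what makes forced degradation a quick screen: days at an elevated temperature can stand in for months at storage conditions, within the limits of the Arrhenius assumption.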
Forced degradation
[ "Chemistry", "Engineering" ]
285
[ "Chemical process engineering", "Chemical engineering" ]
10,753,014
https://en.wikipedia.org/wiki/S.O.S%20Soap%20Pad
S.O.S Soap Pad is a trade name for an abrasive cleaning pad, used for household cleaning, and made from steel wool saturated with soap. In 1917, Irwin Cox of San Francisco, California, an aluminum pot salesman, invented a pre-soaped pad with which to clean pots. As a way of introducing himself to potential new customers, Cox made the soap-encrusted steel-wool pads as a calling card. His wife named the soap pads S.O.S, or "Save Our Saucepans." Cox soon found out that the S.O.S pads were a hotter product than his pots and pans. It is commonly believed that a period was left out of the name's punctuation by mistake. However, this spelling was chosen by design. The acronym S.O.S., the famous distress signal, could not be trademarked. By removing the last period, the name was made unique and could then be registered with the United States Patent and Trademark Office. The product was indirectly featured in a widely circulated black-and-white photograph taken by William Safire of the Kitchen Debate. An S.O.S box is clearly visible on the right side of the picture, standing on the countertop above the washing machine. S.O.S was bought by General Foods in 1958, then in 1968 it was sold to Miles Laboratories. In the mid-1990s, the manufacturer began advertising that S.O.S pads had been made rust-resistant. The pads were so well protected against rust, and lasted so much longer, that Miles removed the rust-inhibiting ingredients and ceased to advertise the pads' rust-resistant quality. In 1994, Miles sold the brand to Clorox. See also Brillo Pad References Website Official S.O.S Soap Pad website Cleaning products Clorox brands
S.O.S Soap Pad
[ "Chemistry" ]
385
[ "Cleaning products", "Products of chemical industry" ]
10,753,175
https://en.wikipedia.org/wiki/Marshall%20Van%20Alstyne
Marshall W. Van Alstyne (born March 28, 1962) is the Allen and Kelly Questrom Professor in Information Systems at Boston University and a research associate at the MIT Initiative on the Digital Economy. He co-developed the theory of two-sided markets with Geoffrey G Parker. His work focuses on the economics of information. This includes a sustained interest in information markets and in how information and technology affect productivity, with a newer emphasis on "platforms" as an extension of the work on two-sided markets. Early life and education Marshall Van Alstyne was born in Columbus, Ohio. He received a B.A. in Computer Science from Yale University in 1984. He then worked as a Software Systems Developer at Martin Marietta Data Systems in Colorado and later as Associate Staff at MIT Lincoln Laboratory in Massachusetts, before starting his M.S. and doctoral programs. He obtained his M.S. in Management in 1991 and his Ph.D. in Information Systems and Economics in 1998, both at the MIT Sloan School of Management. Career Van Alstyne is a professor at Boston University and a research associate at the MIT Initiative on the Digital Economy. He co-organizes and co-chairs the annual MIT Platform Strategy Summit, an executive meeting on platform-centered economics and management, and he organizes and co-chairs the Platform Strategy Research Symposium, the premier conference on platform research. After finishing his Ph.D., he joined the University of Michigan as an assistant professor, and he moved to Boston University in 2004. Publications He is the co-author of Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You. The book describes the information technologies, standards, and rules that make up platforms and are used and developed by the biggest and most innovative global companies. Forbes included it among 16 must-read business books for 2016, describing it as "a practical guide to the new business model that is transforming the way we work and live." Awards Herbert Simon Award (2021) – Practical research impact Thinkers 50 (2021) – Award shared with Geoff Parker INFORMS Practical Impact Award (2020) – For research with real-world impact Thinkers 50 Digital Thinking Award (2019) – Ranked #36 among management scholars globally National Science Foundation Faculty Career Award (1999) Personal He is the son of constitutional law scholar William Van Alstyne. References External links Homepage MIT Sloan School of Management alumni Living people Yale University alumni Boston University faculty University of Michigan faculty 1962 births Information systems researchers
Marshall Van Alstyne
[ "Technology" ]
494
[ "Information systems", "Information systems researchers" ]
10,755,257
https://en.wikipedia.org/wiki/Liquid%20bandage
Liquid bandage is a topical skin treatment for minor wounds which binds to the skin to form a protective polymeric layer that keeps dirt and germs out and moisture in. It can be applied directly to the wound after removing debris. For the fast-acting, reactive adhesive that is used to mend deep cuts or surgical wounds, see cyanoacrylates (specifically 2-Octyl cyanoacrylate). Design Liquid bandage is typically a polymer dissolved in a solvent (commonly water or an alcohol), sometimes with an added antiseptic and local anesthetic, although the alcohol in some brands may serve the same purpose. These products protect the wound by forming a thin film of polymer when the carrier evaporates. Polymers used may include polyvinylpyrrolidone (water-based), ethyl cellulose, pyroxylin/nitrocellulose or poly(methylacrylate-isobutene-monoisopropylmaleate) (alcohol-based), and acrylate or siloxane polymers (hexamethyldisiloxane or isooctane solvent-based). In addition to replacing conventional bandages for minor cuts and scrapes, they have found use in surgical and veterinary offices. Liquid bandages are increasingly finding use in the field of combat, where they can be used to rapidly stanch a wound until proper medical attention can be obtained. Scenarios for usage Liquid bandages are suitable for clean cuts that close easily and for shallow, small wounds, as they help both sides of the wound to bond and produce a suture-like effect. Because a liquid wound dressing dries to form a nonelastic film that cannot absorb tissue fluid, applying it over too large an area can actually hinder wound shrinkage and healing. It is not recommended for use on large wounds, patches of abrasion, ulcers, suppuration, burns, sensitive skin areas around the eyes, or mucosa, or in patients with favism. See also Butterfly stitches Dermal adhesive References Medical dressings Polymers Skin care
Liquid bandage
[ "Chemistry", "Materials_science" ]
437
[ "Polymers", "Polymer chemistry" ]
10,755,823
https://en.wikipedia.org/wiki/Cellular%20floor%20raceways
Cellular floor raceways are electrical wiring ducts or cells made from steel floor deck that serve as structural formwork for placement of concrete floor slabs and also as wire and cable raceways within the concrete floor slab. These raceway systems are generally used to create floor slabs on multi-story steel-framed buildings but can also be used in concrete framed structures and for on-grade applications. References Electrical wiring
Cellular floor raceways
[ "Physics", "Engineering" ]
81
[ "Electrical systems", "Building engineering", "Physical systems", "Architecture stubs", "Electrical engineering", "Electrical wiring", "Architecture" ]
10,755,909
https://en.wikipedia.org/wiki/DIKW%20pyramid
The DIKW pyramid, also known variously as the knowledge pyramid, knowledge hierarchy, information hierarchy, DIKW hierarchy, wisdom hierarchy, data pyramid, and information pyramid, and sometimes also stylized as a chain, refers to a family of models of possible structural and functional relationships between a set of components—often four: data, information, knowledge, and wisdom—models that had antecedents prior to the 1980s. In the latter years of that decade, interest in the models grew after explicit presentations and discussions, including from Milan Zeleny, Russell Ackoff, and Robert W. Lucky. Subsequent important discussions extended along theoretical and practical lines into the coming decades. While debate continues as to the actual meaning of the component terms of DIKW-type models, and the actual nature of their relationships—including occasional doubt being cast over any simple, linear, unidirectional model—they have nonetheless become very popular visual representations in use by business, the military, and others. Among academic and popular presentations alike, not all versions of the DIKW-type models include all four components: earlier ones excluded data, later ones excluded or downplayed wisdom, and several include additional components (for instance, Ackoff inserted "understanding" before, and Zeleny added "enlightenment" after, the wisdom component). In addition, DIKW-type models are no longer always presented as pyramids, but also as a chart or framework (e.g., by Zeleny), as flow diagrams (e.g., by Liew, and by Chisholm et al.), and sometimes as a continuum (e.g., by Choo et al.). Short description As Rowley noted in 2007, the DIKW model "is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but [as of that date] there ha[d] been limited direct discussion of the hierarchy". Reviews of textbooks and a survey of scholars in relevant fields indicate that there was not a consensus as to definitions used in the model as of that date, and, as reviewed by Liew in that year, even less "in the description of the processes that transform components lower in the hierarchy into those above them". Zins's work, published in 2007 and based on studies in 2003–2005 that documented "130 definitions of data, information, and knowledge formulated by 45 scholars", suggests that the data–information–knowledge components of DIKW refer to a class of no less than five models, as a function of whether data, information, and knowledge are each conceived of as subjective, objective (what Zins terms "universal" or "collective") or both. In Zins' usage, subjective and objective "are not related to arbitrariness and truthfulness, which are usually attached to the concepts of subjective knowledge and objective knowledge". Information science, Zins argues, studies data and information, but not knowledge, as knowledge is an internal (subjective) rather than an external (universal–collective) phenomenon. Representations Graphical representation DIKW is a hierarchical model often depicted as a pyramid, sometimes as a chain, with data at its base and wisdom at its apex (or chain-beginning and -end). Both Zeleny and Ackoff have been credited with originating the pyramid representation, although neither used a pyramid to present their ideas. According to Wallace, Debons and colleagues may have been the first to "present the hierarchy graphically". Many variations of the DIKW-type pyramid have been produced.
One, in use by knowledge managers in the United States Department of Defense, attempts to show the DIKW progression to enable effective decisions and consequent activities supporting shared understanding throughout defense organizations, as well as supporting management of risks associated with decisions. DIKW-type hierarchical information paradigms have also been represented as two-dimensional charts, and as flow diagrams, where relationships between the components may be presented less hierarchically, with defining aspects of the relationships, feedback loops, etc. Computational representation Intelligent decision support systems aim to improve decision making by introducing new technologies and methods from the domain of modeling and simulation in general, and in particular from the domain of intelligent software agents in the context of agent-based modeling. The following example describes a military decision support system, but the architecture and underlying conceptual idea are transferable to other application domains: The value chain starts with data quality describing the information within the underlying command and control systems. Information quality tracks the completeness, correctness, currency, consistency and precision of the data items and information statements available. Knowledge quality deals with procedural knowledge and information embedded in the command and control system, such as templates for adversary forces, assumptions about entities such as ranges and weapons, and doctrinal assumptions, often coded as rules. Awareness quality measures the degree to which the information and knowledge embedded within the command and control system are used. Awareness is explicitly placed in the cognitive domain. By the introduction of a common operational picture, data are put into context, which leads to information instead of data. The next step, which is enabled by service-oriented web-based infrastructures (but not yet operationally used), is the use of models and simulations for decision support. Simulation systems are the prototype for procedural knowledge, which is the basis for knowledge quality. Finally, by using intelligent software agents to continually observe the battle sphere, to apply models and simulations to analyze what is going on, to monitor the execution of a plan, and to do all the tasks necessary to make the decision maker aware of what is going on, command and control systems could even support situational awareness, the level in the value chain traditionally limited to pure cognitive methods. History Danny P. Wallace, a professor of library and information science, explained that the origin of the DIKW pyramid is uncertain: The presentation of the relationships among data, information, knowledge, and sometimes wisdom in a hierarchical arrangement has been part of the language of information science for many years. Although it is uncertain when and by whom those relationships were first presented, the ubiquity of the notion of a hierarchy is embedded in the use of the acronym DIKW as a shorthand representation for the data-to-information-to-knowledge-to-wisdom transformation. Many authors think that the idea of the DIKW relationship originated from two lines in the poem "Choruses", by T. S. Eliot, that appeared in the pageant play The Rock, in 1934: Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information? Knowledge, intelligence, and wisdom In 1927, Clarence W.
Barron addressed his employees at Dow Jones & Company on the hierarchy: "Knowledge, Intelligence and Wisdom". Data, information, knowledge In 1955, English-American economist and educator Kenneth Boulding presented a variation on the hierarchy consisting of "signals, messages, information, and knowledge". However, "[t]he first author to distinguish among data, information, and knowledge and to also employ the term 'knowledge management' may have been American educator Nicholas L. Henry", in a 1974 journal article. Data, information, knowledge, wisdom Other early versions (prior to 1982) of the hierarchy that refer to a data tier include those of Chinese-American geographer Yi-Fu Tuan and sociologist-historian Daniel Bell. In 1980, Irish-born engineer Mike Cooley invoked the same hierarchy in his critique of automation and computerization, in his book Architect or Bee?: The Human / Technology Relationship. Thereafter, in 1987, Czechoslovakia-born educator Milan Zeleny mapped the components of the hierarchy to knowledge forms: know-nothing, know-what, know-how, and know-why. Zeleny "has frequently been credited with proposing the [representation of DIKW as a pyramid]... although he actually made no reference to any such graphical model." The hierarchy appears again in a 1988 address to the International Society for General Systems Research, by American organizational theorist Russell Ackoff, published in 1989. Subsequent authors and textbooks cite Ackoff's as the "original articulation" of the hierarchy or otherwise credit Ackoff with its proposal. Ackoff's version of the model includes an understanding tier (as Adler had, before him), interposed between knowledge and wisdom. Although Ackoff did not present the hierarchy graphically, he has also been credited with its representation as a pyramid. In 1989, Bell Labs veteran Robert W. Lucky wrote about the four-tier "information hierarchy" in the form of a pyramid in his book Silicon Dreams. In the same year as Ackoff presented his address, information scientist Anthony Debons and colleagues introduced an extended hierarchy, with "events", "symbols", and "rules and formulations" tiers ahead of data. In 1994, Nathan Shedroff presented the DIKW hierarchy in an information design context. Jennifer Rowley noted in 2007 that, as of that date, there was "little reference to wisdom" in discussions of the DIKW in published college textbooks, and she at times did not include wisdom in her own discussion of her research. Meanwhile, Chaim Zins' extensive primary research analysis conceptualizing data, information, and knowledge in that same year makes no explicit comment regarding wisdom, although citations included by Zins do make mention of the term (e.g., Dodig-Crnković, Ess, and Wormell cited therein). Definitions/conceptions of the four DIKW components In 2013, Baskarada and Koronios published a relatively thorough review of the definitions of the individual components to that date. Data In the context of DIKW-type models, data is conceived, per Zins' 2007 formulation, as being composed of symbols or signs, representing stimuli or signals, that, in Rowley's words (in 2007), are "of no use until ... in a usable (that is, relevant) form". Zeleny characterized this non-usable characteristic of data as "know-nothing". The view in 2007 was that in some cases, data are understood to refer not only to symbols, but also to signals or stimuli referred to by such symbols—what Zins terms "subjective data".
"[U]niversal data", on the other hand, for Rowley, are "the product of "observation", while subjective data are the observations. This distinction is often obscured in definitions of data in terms of "facts". Data as fact In Henry's early formulation of a hierarchy, data was simply defined as "merely raw facts", Intervening texts define data as "chunks of facts about the state of the world", and "material facts", respectively. Rowley, following her 2007 study of DIKW definitions given in textbooks, separately characterizes data "as being discrete, objective facts or observations, which (are unorganized and unprocessed and therefore have no meaning or value because of lack of context and interpretation." Cleveland does not include an explicit data tier, but defines information as "the sum total of ... facts and ideas". Insofar as facts have as a fundamental property that they are true, have objective reality, or otherwise can be verified, such definitions would preclude false, meaningless, and nonsensical data from the DIKW model, such that the principle of garbage in, garbage out would not be accounted for under DIKW. Data as signal In the subjective domain, per Zins' 2007 work, data are conceived of as "sensory stimuli, which we perceive through our senses", or "signal readings", including "sensor and/or sensory readings of light, sound, smell, taste, and touch". Others have argued that what Zins calls subjective data actually count as a "signal" tier (as had Boulding), which precedes data in the DIKW chain. American information scientist Glynn Harmon defined data as "one or more kinds of energy waves or particles (light, heat, sound, force, electromagnetic) selected by a conscious organism or intelligent agent on the basis of a preexisting frame or inferential mechanism in the organism or agent" (e.g., Harmon, as cited by Zins) The meaning of sensory stimuli may also be thought of as subjective data; as Zins stated in 2007, informationis the meaning of these sensory stimuli (i.e., the empirical perception). For example, the noises that I hear are data. The meaning of these noises (e.g., a running car engine) is information. Still, there is another alternative as to how to define these two concepts—which seems even better. Data are sense stimuli, or their meaning (i.e., the empirical perception). Accordingly, in the example above, the loud noises, as well as the perception of a running car engine, are data. Likewise, per that work of Zins, subjective data, if understood in this way, would be comparable to knowledge by acquaintance, in that it is based on direct experience of stimuli; however, unlike knowledge by acquaintance, as described by Bertrand Russell and others, the subjective domain is "not related to ... truthfulness". Whether Zins' alternate definition would hold would be a function of whether "the running of a car engine" is understood as an objective fact or as a contextual interpretation. Data as symbol Whether the DIKW definition of data is deemed to include Zins's 2007 view of subjective data (with or without meaning), data is somemwhat consistently defined to include "symbols", or, per Zins, "sets of signs that represent empirical stimuli or perceptions", in Rowley's words (writing in that same year), of "a property of an object, an event or of their environment". 
Data, in this sense, as described by Liew, likewise in 2007, are "recorded (captured or stored) symbols", including "words (text and/or verbal), numbers, diagrams, and images (still and/or video), which are the building blocks of communication", the purpose of which "is to record activities or situations, to attempt to capture the true picture or real event," such that "all data are historical, unless used for illustrative purposes, such as forecasting." Boulding's version of DIKW-type models explicitly named the level below the information tier the "message" level, distinguishing it from an underlying signal tier. Debons and colleagues reverse this relationship, identifying an explicit symbol tier as one of several levels underlying data. Zins argues in the same work that, for most of those surveyed, data "are characterized as phenomena in the universal domain... Apparently," clarifies Zins, "it is more useful to relate to the data, information, and knowledge as sets of signs rather than as meaning and its building blocks". Information "Classically," states Gamble's 2007 text, "information is defined as data that are endowed with meaning and purpose." In the context of DIKW, as presented by Rowley in 2007, information meets the definition for knowledge by description ("information is contained in descriptions"), and is differentiated from data in that it is "useful". In her words, "[i]nformation is inferred from data", in the process of answering interrogative questions (e.g., Ackoff's "who", "what", "where", "how many", "when"), thereby making the data useful for "decisions and/or action". Structural v. functional information Rowley, following her 2007 review of how DIKW is presented in textbooks, describes information as "organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant." Note that this definition contrasts with Rowley's separate characterization of Ackoff's definitions, wherein "[t]he difference between data and information is structural, not functional." In his formulation of the hierarchy, Henry defined information as "data that changes us", this being a functional, rather than structural, distinction between data and information. Meanwhile, Cleveland, who did not refer to a data level in his version of DIKW, described information as "the sum total of all the facts and ideas that are available to be known by somebody at a given moment in time". American educator Bob Boiko is more obscure, defining information only as "matter-of-fact". Symbolic v. subjective information Information may be conceived of in DIKW-type models as universal, per Zins writing in 2007, existing as symbols and signs; as subjective, the meaning to which symbols attach; or both. Examples of information as both symbol and meaning, per Zins' analysis based on the work of others, include: American information scientist Anthony Debons's characterization of information as representing "a state of awareness (consciousness) and the physical manifestations they form", such that "[i]nformation, as a phenomenon, represents both a process and a product; a cognitive/affective state, and the physical counterpart (product of) the cognitive/affective state." Danish information scientist Hanne Albrechtsen's description of information as "related to meaning or human intention", either as "the contents of databases, the web, etc."
(italics added) or "the meaning of statements as they are intended by the speaker/writer and understood/misunderstood by the listener/reader." Zeleny formerly described information as "know-what", but has since refined this to differentiate between "what to have or to possess" (information) and "what to do, act or carry out" (wisdom). To this conceptualization of information, he also adds "why is", as distinct from "why do" (another aspect of wisdom). Zeleny further argues that there is no such thing as explicit knowledge, but rather that knowledge, once made explicit in symbolic form, becomes information. Knowledge American philosophers John Dewey and Arthur Bentley, in their 1949 book Knowing and the Known, argued that "knowledge" is "a vague word", and presented a view, distinct but foreshadowing DIKW-type models, that outlined nineteen "terminological guide-posts". Other definitions may refer to information having been processed, organized or structured in some way, or else as being applied or put into action. As such, the knowledge component of DIKW-type models is generally understood to be a concept elusive and difficult to define. As well, definitions of knowledge by those who study DIKW-type models differ from that used by epistemology. Per Rowley, writing in 2007, the DIKW view is that "knowledge is defined with reference to information." Zins, also writing in 2007, has suggested that knowledge, being subjective rather than universal, is not the subject of study in information science, and that it is often defined in propositional terms, while Zeleny has asserted that to capture knowledge in symbolic form is to make it into information, i.e., that "All knowledge is tacit". "One of the most frequently quoted definitions" of knowledge captures some of the various ways in which it has been defined by others: Knowledge is a fluid mix of framed experience, values, contextual information, expert insight and grounded intuition that provides an environment and framework for evaluating and incorporating new experiences and information. It originates and is applied in the minds of knowers. In organizations it often becomes embedded not only in documents and repositories but also in organizational routines, processes, practices and norms. Knowledge as processed Mirroring the description of information as "organized or structured data", knowledge was described, as of 2007, as: "synthesis of multiple sources of information over time"... "organization and processing to convey understanding, experience [and] accumulated learning"... or "a mix of contextual information, values, experience and rules". One of Boulding's definitions for knowledge had been "a mental structure" and Cleveland described knowledge as "the result of somebody applying the refiner's fire to [information], selecting and organizing what is useful to somebody". A 2007 text describes knowledge as "information connected in relationships". Knowledge as procedural Zeleny defines knowledge as "know-how" (i.e., procedural knowledge), and also "know-who" and "know-when", each gained through "practical experience". "Knowledge ... brings forth from the background of experience a coherent and self-consistent set of coordinated actions.". Further, implicitly holding information as descriptive, Zeleny declares that "Knowledge is action, not a description of action." Ackoff, likewise, described knowledge as the "application of data and information", which "answers 'how' questions", that is, in Rowley's view, "know-how". 
Meanwhile, as described by Rowley in 2007, textbooks discussing DIKW were found to describe knowledge variously in terms of experience, skill, expertise or capability, for instance as "study and experience"... "a mix of contextual information, expert opinion, skills and experience"... "information combined with understanding and capability"... or "perception, skills, training, common sense and experience". Businessmen James Chisholm and Greg Warman, writing in that same year, characterized knowledge simply as "doing things right". Knowledge as propositional In Rowley's 2007 view, knowledge can be described as "belief structuring" and "internalization with reference to cognitive frameworks". One definition given by Boulding for knowledge was "the subjective 'perception of the world and one's place in it'", while Zeleny said that knowledge "should refer to an observer's distinction of 'objects' (wholes, unities)". Zins, likewise, wrote in 2007 that knowledge is described in propositional terms, as justifiable beliefs (subjective domain, akin to tacit knowledge), and sometimes also as signs that represent such beliefs (universal/collective domain, akin to explicit knowledge). Zeleny has rejected the idea of explicit knowledge (as in Zins' universal knowledge), arguing that once made symbolic, knowledge becomes information. Boiko appears to echo this sentiment, in his claim that "knowledge and wisdom can be information". In the subjective domain, per Zins' 2007 work, knowledge is a thought in the individual's mind, which is characterized by the individual's justifiable belief that it is true. It can be empirical and non-empirical, as in the case of logical and mathematical knowledge (e.g., "every triangle has three sides"), religious knowledge (e.g., "God exists"), philosophical knowledge (e.g., "Cogito ergo sum"), and the like. Note that knowledge is the content of a thought in the individual's mind, which is characterized by the individual's justifiable belief that it is true, while "knowing" is a state of mind which is characterized by the three conditions: (1) the individual believe[s] that it is true, (2) S/he can justify it, and (3) It is true, or it [appears] to be true. The distinction here between subjective knowledge and subjective information is that subjective knowledge is characterized by justifiable belief, where subjective information is a type of knowledge concerning the meaning of data. Boiko implied that knowledge was both open to rational discourse and justification, when he defined knowledge as "a matter of dispute". Wisdom Although commonly included as a level in DIKW-type models, Rowley noted in 2007 that, in discussions of the DIKW-type models, "there is limited reference to wisdom". Boiko appears to have dismissed wisdom, characterizing it as "non-material". Ackoff refers to understanding as an "appreciation of 'why'", and wisdom as "evaluated understanding", where understanding is posited as a discrete layer between knowledge and wisdom. Adler had previously also included an understanding tier, while other authors have depicted understanding as a dimension in relation to which DIKW is plotted. Cleveland described wisdom simply as "integrated knowledge—information made super-useful". Other authors have characterized wisdom as "knowing the right things to do" and "the ability to make sound judgments and decisions apparently without thought".
Wisdom involves using knowledge for the greater good; because of this, wisdom is described as being deeper and more uniquely human, and requires a sense of good and bad, of right and wrong, of the ethical and unethical. Zeleny described wisdom as "know-why", but later refined his definitions, so as to differentiate "why do" (wisdom) from "why is" (information), and expanded his definition to include a form of know-what ("what to do, act or carry out"). And, as noted by Nikhil Sharma, Zeleny has argued for a tier to the model beyond wisdom, termed "enlightenment". Criticisms Rafael Capurro, a philosopher based in Germany, argues—per Zins' 2007 description—that data is an abstraction, that information refers to "the act of communicating meaning", and that knowledge "is the event of meaning selection of a (psychic/social) system from its 'world' on the basis of communication". As such, any impression of a logical hierarchy between these concepts "is a fairytale". One objection offered by Zins is that, while knowledge may be an exclusively cognitive phenomenon, the difficulty in pointing to a given fact as being distinctively information or knowledge, but not both, makes DIKW-type models unworkable; for instance, he asks: is Albert Einstein's famous equation "E = mc²" (which is printed on my computer screen, and is definitely separated from any human mind) information or knowledge? Is "2 + 2 = 4" information or knowledge? Alternatively, in Zins' 2007 analysis referencing Roberto Poli, information and knowledge might be seen as synonyms. In answer to these criticisms, Zins argues that, subjectivist and empiricist philosophy aside, "the three fundamental concepts of data, information, and knowledge and the relations among them, as they are perceived by leading scholars in the information science academic community", have meanings open to distinct definitions. Rowley, in her 2007 discussion, echoes this point in arguing that, where definitions of knowledge may disagree, "[t]hese various perspectives all take as their point of departure the relationship between data, information and knowledge." Information processing theory argues that the physical world is made of information itself. Under this definition, data is either made up of or synonymous with physical information. It is unclear, however, whether information as it is conceived in the DIKW model would be considered derivative from physical-information/data or synonymous with physical information. In the former case, the DIKW model is open to the fallacy of equivocation. In the latter, the data tier of the DIKW model is preempted by an assertion of neutral monism. Educator Martin Frické has published an article critiquing the DIKW hierarchy, in which he argues that the model is based on "dated and unsatisfactory philosophical positions of operationalism and inductivism", that information and knowledge are both weak knowledge, and that wisdom is the "possession and use of wide practical knowledge". David Weinberger argues that although the DIKW pyramid appears to be a logical and straightforward progression, this is incorrect: "What looks like a logical progression is actually a desperate cry for help." He points out that there is a discontinuity between Data and Information (which are stored in computers) and Knowledge and Wisdom (which are human endeavours). This suggests that the DIKW pyramid is too simplistic in representing how these concepts interact.
"...Knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used." See also , a similar graphic in the field of psychology Inverted pyramid (journalism), a metaphor used by journalists and writers to prioritise and structure the most newsworthy info and important details over general info References Further reading Information science Knowledge management Information systems
DIKW pyramid
[ "Technology" ]
5,928
[ "Information systems", "Information technology" ]
10,756,580
https://en.wikipedia.org/wiki/Metabolic%20flux%20analysis
Metabolic flux analysis (MFA) is an experimental fluxomics technique used to examine production and consumption rates of metabolites in a biological system. At an intracellular level, it allows for the quantification of metabolic fluxes, thereby elucidating the central metabolism of the cell. Various methods of MFA, including isotopically stationary metabolic flux analysis, isotopically non-stationary metabolic flux analysis, and thermodynamics-based metabolic flux analysis, can be coupled with stoichiometric models of metabolism and mass spectrometry methods with isotopic mass resolution to elucidate the transfer of moieties containing isotopic tracers from one metabolite into another and derive information about the metabolic network. MFA has many applications, such as determining the limits on the ability of a biological system to produce a biochemical such as ethanol, predicting the response to gene knockout, and guiding the identification of bottleneck enzymes in metabolic networks for metabolic engineering efforts. Metabolic flux analysis may use 13C-labeled isotope tracers for isotopic labeling experiments. Nuclear magnetic resonance (NMR) techniques and mass spectrometry may then be used to measure metabolite labeling patterns to provide information for determination of pathway fluxes. Because MFA typically requires rigorous flux calculation of complex metabolic networks, publicly available software tools have been developed to automate MFA and reduce its computational burden. Experimental method Although using a stoichiometric balance and constraints on the metabolites comprising the metabolic network can elucidate fluxes, this approach has limitations, including difficulty in estimating fluxes through parallel, cyclic, and reversible pathways. Moreover, without the use of isotope tracers there is limited insight into how metabolites interconvert in a metabolic network. Thus, the use of isotopes has become the dominant technique for MFA. Isotope labeling experiments Isotope labeling experiments are optimal for gathering the experimental data necessary for MFA. Because fluxes determine the isotopic labeling patterns of intracellular metabolites, measuring these patterns allows for inference of fluxes. The first step in the workflow of isotope labeling experiments is cell culture on labeled substrates. A substrate such as glucose is labeled by isotope(s), most often 13C, and is introduced into the culture medium. The medium also typically contains vitamins and essential amino acids to facilitate the cells' growth. The labeled substrate is then metabolized by the cells, leading to the incorporation of the 13C tracer into other intracellular metabolites. After the cells reach steady-state physiology (i.e., constant metabolite concentrations in culture), the cells are lysed to extract metabolites. For mammalian cells, extraction involves quenching the cells with methanol to stop their cellular metabolism, followed by extraction of metabolites using methanol and water. The concentrations of metabolites, and the labeled isotope content of those metabolites, in the extracts are measured by instruments like liquid chromatography–mass spectrometry or NMR, which also provide information on the position and number of labeled atoms on the metabolites. These data are necessary for gaining insight into the dynamics of intracellular metabolism and metabolite turnover rates to infer metabolic flux.
Methodologies Isotopically stationary A predominant method for metabolic flux analysis is isotopically stationary MFA. This technique for flux quantitation is applicable under metabolic and isotopic steady state, two conditions under which metabolite concentrations and isotopomer distributions, respectively, are assumed not to change over time. Knowledge of the stoichiometric matrix (S), comprising the consumption and production of metabolites within biochemical reactions, is needed to balance fluxes (v) around the assumed metabolic network model. Assuming metabolic steady state, metabolic fluxes can thus be quantitated by solving the simple linear algebra equation S · v = 0, which states that the net production and consumption of each internal metabolite balance out. To reduce the possible solution space for flux distributions, isotopically stationary MFA requires additional stoichiometric constraints such as growth rates, substrate secretion and uptake, and product accumulation rates, as well as upper and lower bounds for fluxes. Although isotopically stationary MFA allows precise deduction of metabolic fluxes through mathematical modeling, the analysis is limited to batch cultures during the exponential phase. Moreover, after addition of a labeled substrate, the time point at which metabolic and isotopic steady state may be accurately assumed can be difficult to determine. Isotopically non-stationary When isotope labeling is transient and has not yet equilibrated, isotopically non-stationary MFA (INST-MFA) is advantageous in deducing fluxes, particularly for systems with slow labeling dynamics. Similar to isotopically stationary MFA, this method requires mass and isotopomer balances to characterize the stoichiometry and atom transitions of the metabolic network. Unlike traditional MFA methods, however, INST-MFA requires applying ordinary differential equations to examine how isotopic labeling patterns of metabolites change over time; such examination can be accomplished by measuring changing isotopic labeling patterns at different time points for input into INST-MFA. INST-MFA is thus a powerful method for elucidating fluxes of systems with pathway bottlenecks and for revealing metabolic phenotypes of autotrophic organisms. Although INST-MFA's computationally intensive demands previously hindered its widespread use, newly developed software tools have streamlined INST-MFA to decrease computational time and demand. Thermodynamics-based Thermodynamics-based metabolic flux analysis (TMFA) is a specialized type of metabolic flux analysis which utilizes linear thermodynamic constraints in addition to mass balance constraints to generate thermodynamically feasible fluxes and metabolite activity profiles. TMFA takes into consideration only pathways and fluxes that are feasible, by using the Gibbs free energy change of the reactions and the activities of the metabolites that are part of the model. By calculating the Gibbs free energies of metabolic reactions and consequently their thermodynamic favorability, TMFA facilitates identification of limiting bottleneck reactions that may be ideal candidates for pathway regulation. Software Simulation algorithms are needed to model the biological system and calculate the fluxes of all pathways in a complex network. Several computational software tools exist to meet the need for efficient and precise flux quantitation.
Generally, the steps for applying modeling software to MFA include reconstructing the metabolic network to compile all desired enzymatic reactions and metabolites, providing experimental information such as the labeling pattern of the substrate, defining constraints such as growth equations, and minimizing the error between the experimental and simulated results to obtain final fluxes. Examples of MFA software include 13CFLUX2 and OpenFLUX, which evaluate 13C labeling experiments for flux calculation under metabolic and isotopically stationary conditions. The increasing interest in developing computational tools for INST-MFA calculation has also led to the development of software applications such as INCA, which was the first software capable of performing INST-MFA and simulating transient isotope labeling experiments. Applications Biofuel production Metabolic flux analysis has been used to guide scale-up efforts for the fermentation of biofuels. By directly measuring enzymatic reaction rates, MFA can capture the dynamics of cells' behavior and metabolic phenotypes in bioreactors during large-scale fermentations. For example, MFA models were used to optimize the conversion of xylose into ethanol in xylose-fermenting yeast, by using calculated flux distributions to determine the maximal theoretical capacities of the selected yeast for ethanol production. Metabolic engineering Identification of bottleneck enzymes determines the rate-limiting reactions that limit the productivity of a biosynthetic pathway. Moreover, MFA can help predict unexpected phenotypes of genetically engineered strains by constructing a fundamental understanding of how fluxes are wired in engineered cells. For example, by calculating the Gibbs free energies of reactions in Escherichia coli metabolism, TMFA facilitated identification of a thermodynamic bottleneck reaction in a genome-scale model of Escherichia coli. See also Isotopic labeling Flux balance analysis Fluxomics Metabolic network modelling Metabolomics Metabolic engineering References Systems biology
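As a concrete illustration of the steady-state balance S · v = 0 described above, feasible flux distributions can be computed as the null space of the stoichiometric matrix. The following is a minimal sketch on a hypothetical toy network, not a real pathway, assuming NumPy and SciPy are available.

```python
# Steady-state flux balancing on a toy linear pathway:
#   uptake v0 -> A --v1--> B --v2--> C -> secretion v3
# At metabolic steady state, S . v = 0, so feasible flux vectors v
# lie in the null space of S. The network below is hypothetical.
import numpy as np
from scipy.linalg import null_space

# Rows are internal metabolites (A, B, C); columns are fluxes (v0..v3).
S = np.array([
    [1, -1,  0,  0],   # A: produced by uptake v0, consumed by v1
    [0,  1, -1,  0],   # B: produced by v1, consumed by v2
    [0,  0,  1, -1],   # C: produced by v2, consumed by secretion v3
], dtype=float)

basis = null_space(S)            # columns span all v satisfying S . v = 0
v = basis[:, 0] / basis[0, 0]    # scale so the uptake flux v0 equals 1
print("Flux distribution (v0..v3):", np.round(v, 3))  # all equal, as expected
```

In practice, the measured uptake, secretion, and growth rates described above, together with upper and lower flux bounds, are added as constraints to pin down a unique flux distribution within this null space.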
Metabolic flux analysis
[ "Biology" ]
1,675
[ "Systems biology" ]
10,757,955
https://en.wikipedia.org/wiki/Mobile%20Internet%20device
A mobile Internet device (MID) is a multimedia-capable mobile device providing wireless Internet access. MIDs are designed to provide entertainment, information and location-based services for personal or business use, and they allow two-way communication and real-time sharing. They have been described as filling a niche between smartphones and tablet computers. As all the features of MIDs became available on smartphones and tablets, the term came to be used mostly to refer to both low-end and high-end tablets. Archos Internet tablets Mobile Internet tablets from Archos have a form factor very similar to that of other MIDs, such as Lenovo's. The class has included multiple operating systems: Windows CE, Windows 7 and Android. The Android tablet uses an ARM Cortex CPU and a touchscreen. Intel Mobile Internet Device (MID) platform Intel announced a prototype MID at the Intel Developer Forum in spring 2007 in Beijing. A MID development kit by Sophia Systems using Intel Centrino Atom was announced in April 2008. Intel MID platforms are based on an Intel processor and chipset which consume less power than most of the x86 derivatives. Several platforms have been announced, as listed below: McCaslin platform (2007) Intel's first-generation MID platform (codenamed McCaslin) contains a 90 nm Intel A100/A110 processor (codenamed Stealey) which runs at 600–800 MHz. Menlow platform (2008) On 2 March 2008, Intel introduced the Intel Atom processor brand for a new family of low-power processor platforms. The components have thin, small designs and work together to "enable the best mobile computing and Internet experience" on mobile and low-power devices. Intel's second-generation MID platform (codenamed Menlow) contains a 45 nm Intel Atom processor (codenamed Silverthorne) which can run at up to 2.0 GHz, and a System Controller Hub (codenamed Poulsbo) which includes Intel HD Audio (codenamed Azalia). This platform was initially branded as Centrino Atom, but the practice was discontinued in Q3 2008. Moorestown platform (2010) Intel's third-generation MID/smartphone platform (codenamed Moorestown) contains a 45 nm Intel Atom processor (codenamed Lincroft) and a separate 65 nm Platform Controller Hub (codenamed Langwell). Since the memory controller and graphics controller are now integrated into the processor, the northbridge has been removed, and the processor communicates directly with the southbridge via the DMI bus interface. Medfield platform (2012) Intel's fourth-generation MID/smartphone platform (codenamed Medfield) contains their first complete Intel Atom SoC (codenamed Penwell), produced on 32 nm. Clover Trail+ platform (2012) Intel's MID/smartphone platform codenamed Clover Trail+ is based on its Clover Trail tablet platform. It contains a 32 nm Intel Atom SoC (codenamed Cloverview). Merrifield platform (2013) Intel's fifth-generation MID/smartphone platform (codenamed Merrifield) contains a 22 nm Intel Atom SoC (codenamed Tangier). Moorefield platform (2014) Intel's sixth-generation MID/smartphone platform (codenamed Moorefield) contains a 22 nm Intel Atom SoC (codenamed Anniedale). Morganfield platform Intel's seventh-generation MID/smartphone platform (codenamed Morganfield) contains a 14 nm Intel Atom SoC (codenamed Broxton). Operating system Intel announced a collaboration with Ubuntu to create a distribution of Ubuntu for mobile Internet devices, known as Ubuntu Mobile.
Ubuntu's website said the new distribution "will provide a rich Internet experience for users of Intel’s 2008 Mobile Internet Device (MID) platform." Ubuntu Mobile ended active development in 2009. See also Centrino Phablet Android (operating system) CrunchPad Moblin project Netbook / smartbook Ubuntu Mobile Ultra-mobile PC (UMPC) WiMAX Mobile web References Mobile computers Classes of computers Mobile web
Mobile Internet device
[ "Technology" ]
832
[ "Mobile web", "Wireless networking", "Computer systems", "Computers", "Classes of computers" ]
10,758,209
https://en.wikipedia.org/wiki/Molecular%20promiscuity
Molecular promiscuity is the ability of a molecule to bind to and interact with one or more other classes and subtypes of molecules, in synergistic or antagonistic ways. These interactions may involve multiple paracrine, endocrine and autocrine features. References Chemical reactions
Molecular promiscuity
[ "Chemistry" ]
61
[ "Chemical reaction stubs", "nan" ]
10,758,793
https://en.wikipedia.org/wiki/Comparison%20of%20Microsoft%20Windows%20versions
Microsoft Windows is the name of several families of operating systems created by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs). All versions of Microsoft Windows are commercial proprietary software. General information Basic general information about Windows, covering the DOS shells (which had partial 32-bit compatibility through Win32s), the Windows 9x family, and the Windows NT family (several releases of which have also had N, x64, Core and no-Hyper-V editions). Windows Embedded Compact Windows Embedded Compact (Windows CE) is a discontinued variation of Microsoft's Windows operating system for minimalistic computers and embedded systems. Windows CE used a distinctly different kernel, rather than a trimmed-down version of desktop Windows. It was supported on Intel x86 and compatible processors, as well as on MIPS, ARM, and Hitachi SuperH processors. Windows IoT The Windows IoT family is the successor to the now-discontinued Windows Embedded family. Windows Mobile Windows Mobile is Microsoft's discontinued line of operating systems for smartphones. Windows Phone Windows Phone is Microsoft's discontinued line of operating systems for smartphones. Technical information DOS shells and Windows 9x It is possible to install the MS-DOS variants 7.0 and 7.1 without the graphical user interface of Windows. If an independent installation of both DOS and Windows is desired, DOS ought to be installed first, at the start of a small partition. The system files must be transferred with the (dangerous) "SYS" DOS command, while the other files constituting DOS can simply be copied (the files located in the DOS root and the entire COMMAND directory). Such a stand-alone installation of MS-DOS 8 is not possible, as it is designed to serve as the real-mode basis of Windows Me and nothing else. Windows NT The Windows NT kernel powers all recent Windows operating systems. It has run on IA-32, x64, DEC Alpha, MIPS, PowerPC, Itanium, ARMv7, and ARM64 processors, but currently supported versions run on IA-32, x64, ARMv7, and ARM64. Supported file systems Various versions of Windows support various file systems, including: FAT12, FAT16, FAT32, HPFS, and NTFS, along with network file systems shared from other computers, and the ISO 9660 and UDF file systems used for CDs, DVDs, and other optical disc drives such as Blu-ray. Each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system on which the operating system can be installed. Windows Embedded CE 6.0, Windows Vista Service Pack 1, and Windows Server 2008 onwards support exFAT, a file system more suitable for USB flash drives. Hardware requirements Installing Windows requires an internal or external optical drive, or a USB flash drive. A keyboard and mouse are the recommended input devices, though some versions support a touchscreen. For operating systems prior to Vista, an optical drive must be capable of reading CD media, while from Windows Vista onwards, such a drive must be DVD-compatible. The drive may be detached after installing Windows. 
Physical memory limits Maximum limits on physical memory (RAM) that Windows can address vary depending both on the Windows version and on whether it is an IA-32 or x64 edition. See also Other lists List of Microsoft Windows versions List of operating systems Comparison of operating systems Comparison of operating system kernels Comparison of Windows Vista and Windows XP Microsoft Windows version history Comparison of DOS operating systems Architecture of Windows NT List of Microsoft codenames Windows clones and emulators Freedows OS–Windows clone ReactOS–project to develop an operating system that is binary compatible with application software and device drivers for Microsoft Windows NT version 5.x Wine (software)–compatibility layer which allows programs originally written for Microsoft Windows to be executed References External links Time line from Microsoft Microsoft Windows
Comparison of Microsoft Windows versions
[ "Technology" ]
860
[ "Computing platforms", "Microsoft Windows" ]
10,759,133
https://en.wikipedia.org/wiki/West%20Pharmaceutical%20Services%20explosion
The West Pharmaceutical Plant explosion was an industrial disaster that occurred on January 29, 2003, at the West Pharmaceutical Plant in Kinston, North Carolina, United States. Six people were killed and thirty-six people were injured when a large explosion ripped through the facility. Two firefighters were injured in the subsequent blaze. The disaster occurred twelve years after the 1991 Hamlet chicken processing plant fire, North Carolina's second-worst industrial disaster. Background The West Pharmaceutical Plant was owned by West Pharmaceutical Services, and opened in the early 1980s. The plant employed 255 people at wages of between $12 and $14 an hour, some of the highest in the area. The facility had a threefold purpose: manufacturing syringe plungers, manufacturing intravenous components, and compounding rubber. In October 2002, an inspector found a total of 22 "serious violations" at the plant, but said that these were routine findings for numerous industrial premises in North Carolina. West Pharmaceutical Services was fined $10,000 as a result. As part of the manufacturing process, rubber strips were doused in a slurry of polyethylene powder and water to reduce their stickiness and then blown dry. This led to polyethylene dust being dispersed around the plant. While dust was regularly cleaned from visible surfaces in the production area, some was sucked in by air intakes and collected above the facility's dropped tile ceiling. Event The plant was ripped apart by a violent explosion. Witnesses reported hearing "a sound like rolling thunder", as what was later determined to be a chain reaction of explosions rapidly propagated. The shock wave broke windows a considerable distance away and propelled debris far from the plant, some of which started additional fires in distant wooded areas. The blast could be felt far from the site; a student at a school over half a mile away was injured by glass fragments. The explosion severed the plant's sprinkler system water supply, rendering it inoperable. A large fire raged for two days at the site of the plant. Damage to the plant was estimated at around $150 million. One half of the plant was completely destroyed. Investigation The investigation initially focused on two separate possibilities: a failure of a newly installed gas line, and a large dust explosion. From an early stage, the main theory pursued was that of the dust explosion. Within 24 hours of the explosion, the Chemical Safety and Hazard Investigation Board, which conducted the investigation, had determined from eyewitness interviews that the explosion originated in an area known as the "Automated Compounding System". This was a synthetic rubber-processing system. It was the site for mixing, rolling, coating, and drying of a type of rubber called polyisoprene. The process adds oils and fillers to the material, as well as creating significant quantities of dust. Therefore, the working theory from an early point was the rubber dust explosion theory. One particular machine was identified. It coated strips of rubber by dipping them in "Acumist", a finely powdered grade of combustible polyethylene. This machine had operated for 24 hours a day, five or six days a week, since 1987. The spaces around the machine, including a suspended ceiling above the machine, were regularly cleaned by the factory's maintenance personnel. But they were unaware that ventilation systems within the room pulled the dust up into the ceiling, where a thick layer had accumulated. 
Several weeks prior to the accident, maintenance personnel did notice a thick coating of dust on surfaces above the suspended ceiling, but failed to realize the imminent danger it posed. The investigation determined that a major explosion occurred when something disturbed the dust, creating a cloud, which ignited. The investigation was unable to determine what disturbed the dust or what ignited it, due to the extensive damage at the plant. However, it is known that the machine had suffered multiple internal fires, including one that was powerful enough to blow off the mixer door. Four other theories were developed regarding possible causes: a batch of rubber that overheated and ignited; an electrical ballast or light fixture that ignited accumulated dust; a spark caused by a possible electrical fault; or ignition of dust in a cooling air duct feeding an electric motor. It was determined that West had in its possession material safety data sheets (MSDSs) supplied by the powder manufacturer that warned of the danger of such explosions, but did not refer to them. Instead, it relied on the MSDS supplied by Crystal Inc. PMC, which supplied West with a polyethylene-water slurry. However, this second MSDS neglected to mention the hazard posed by dust, as the polyethylene was not thought to be hazardous once the slurry had dried. The final report into the disaster was highly critical of West, saying that the four "root causes" of the disaster were West's inadequate engineering assessment for combustible powders, inadequate consultation with fire safety standards, lack of appropriate review of MSDSs, and inadequate communication of dust hazards to workers. It also criticized West for not investigating a minor incident in which dust ignited during welding, from which West could have realized the imminent danger posed by the dust. Recommendations The final report made a number of recommendations to prevent a recurrence. A brief summary: North Carolina's Building Code Council should adopt NFPA 654, a set of building codes which controls operations in environments involving large quantities of combustible dust. In particular, it limits the depth to which combustible dust may be allowed to accumulate. North Carolina's Department of Labor should identify industries at risk of future explosions, and educate people involved with these industries about the potential risk of dust explosions. North Carolina fire and building code officials should be trained to recognize the hazards posed by flammable dust. West Pharmaceuticals should improve its material safety review procedures, revise its project engineering practices, communicate with its workers about combustible dust hazards, and follow safety practices contained in NFPA 654 at all company facilities that use combustible powders. Crystal Inc. PMC should modify its MSDSs to discuss the hazards posed by potential dust explosions. Aftermath Less than a week after the disaster, the local county commission voted to donate $600,000 to West to rebuild. A local landlord also offered temporary free office space to company executives. On February 20, 2003, a private memorial service entitled "A Service of Healing and Remembrance" was held at Lenoir Community College, Kinston, for surviving plant employees and their families. The plant was so severely damaged that it had to be demolished and rebuilt from scratch. 
One year into the investigation, the disaster, coupled with the CTA Acoustics fiberglass insulation manufacturing plant explosion and the Hayes Lemmerz automotive parts plant explosion (with death tolls of seven and one respectively, also involving dust explosions in 2003), prompted the Chemical Safety and Hazard Investigation Board (CSB) to conduct a study into the number and severity of dust explosions throughout the United States over several decades. The board's chairman, Carolyn Merritt, described the accidents as collectively raising "safety questions of national significance... Workers and workplaces need to be protected from this insidious hazard". The study reviewed how the dust explosion hazard was controlled by regulatory codes, standards, and good operating practices, and also compared the US response to other nations' solutions to the same problem, in order to produce a review of potential initiatives to reduce the occurrence of industrial dust explosions. Lawsuits followed, with Scott Scurfield providing litigation defense for West Pharmaceutical Services. In 2004, Science Channel broadcast a documentary about the explosion and subsequent investigation, titled Failure Analysis: Dust Explosion. The CSB expressed their approval of the documentary, saying that it would "help spread the word about the dangers of combustible dust in the workplace". See also List of explosions References Further reading 2003 disasters in the United States 2003 industrial disasters 2003 in North Carolina Disasters in North Carolina Dust explosions Explosions in 2003 Industrial fires and explosions in the United States Lenoir County, North Carolina
West Pharmaceutical Services explosion
[ "Chemistry" ]
1,622
[ "Dust explosions", "Explosions" ]
10,759,380
https://en.wikipedia.org/wiki/GADGET
GADGET is free software for cosmological N-body/SPH simulations written by Volker Springel at the Max Planck Institute for Astrophysics. The name is an acronym of "GAlaxies with Dark matter and Gas intEracT". It is released under the GNU GPL. It can be used to study, for example, galaxy formation and dark matter. Description GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed-particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited. GADGET can therefore be used to address a wide array of astrophysically interesting problems, ranging from colliding and merging galaxies, to the formation of large-scale structure in the universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes. History The first public version (GADGET-1, released in March 2000) was created as part of Springel's PhD project under the supervision of Simon White. Later, the code was continuously improved during Springel's postdoctoral work at the Center for Astrophysics Harvard & Smithsonian and the Max Planck Institute, in collaboration with Simon White and Lars Hernquist. The second public version (GADGET-2, released in May 2005) contains most of these improvements, except for the numerous physics modules developed for the code that go beyond gravity and ordinary gas-dynamics. The most important changes lie in a new time integration model, a new tree-code module, a new communication scheme for gravitational and SPH forces, a new domain decomposition strategy, a novel SPH formulation based on entropy as independent variable, and finally, in the addition of the TreePM functionality. See also Computational physics Millennium Run References External links GADGET homepage Free astronomy software Cosmological simulation
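To make the gravitational part of the description above concrete, the sketch below computes self-gravity by direct summation, the O(N²) calculation that tree algorithms such as GADGET's approximate at far lower cost. It is an illustration only: the gravitational constant, the softening length and the random initial conditions are arbitrary choices for the example, not values taken from GADGET itself.

import numpy as np

def direct_accelerations(pos, mass, G=1.0, eps=1e-2):
    """Gravitational acceleration on every particle by direct summation."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                       # separation vectors to all particles
        r2 = (d * d).sum(axis=1) + eps**2      # softened squared distances
        r2[i] = np.inf                         # exclude the self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 3))            # 100 particles in 3-D
mass = np.full(100, 1.0 / 100)                 # equal-mass particles
print(direct_accelerations(pos, mass)[0])      # acceleration of particle 0

The softening term eps, also used by real N-body codes, keeps close encounters from producing unphysically large forces in a collisionless simulation; a tree code obtains essentially the same accelerations while grouping distant particles into multipole approximations.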
GADGET
[ "Physics" ]
497
[ "Cosmological simulation", "Computational physics" ]
10,759,683
https://en.wikipedia.org/wiki/Bush%20hid%20the%20facts
"Bush hid the facts" is a common name for a bug present in Microsoft Windows which causes text encoded in ASCII to be interpreted as if it were UTF-16LE, resulting in garbled text. When the string "Bush hid the facts", without quotes, was put in a new Notepad document and saved, closed, and reopened, the nonsensical sequence of the Chinese characters "" would appear instead. While "Bush hid the facts" is the sentence most commonly presented to induce the error, the bug can be triggered by other strings, for example or , and even or . The bug occurs when the string is passed to the Win32 charset detection function . guesses it is Unicode if the "hi byte" (the odd indexes) changes three times less than the "low byte", if so it returns , and the application then incorrectly interprets the text as UTF-16LE. The bug had existed since was introduced with in 1994, but was not discovered until early 2004. Many text editors and tools exhibit this behavior on Windows because they use to determine the encoding of text files. As of Windows Vista, Notepad has been modified to use a different detection algorithm that does not exhibit the bug, but remains unchanged in the operating system, so any other tools that use the function are still affected. Workarounds Several workarounds exist for this bug: Add a character so the string is an odd number of bytes long. Save the file as "UTF-8" (before 2018) or "UTF-8 with BOM" (after 2018) rather than "ANSI". This prepends a UTF-8 byte order mark which avoids the bug. UTF-8 without the byte order mark would still trigger the bug, as it is identical to the "ANSI" file. Saving as "Unicode", which in Microsoft Windows means UTF-16LE. When loading this text should (and does) return and the text is correct. To retrieve the original text using Notepad, bring up the "Open a file" dialog box, select the file, select "ANSI" or "UTF-8" in the "Encoding" list box, and click Open. Under Windows 2000, Notepad lacks the "Encoding" list box. WordPad appears to load the text correctly without choosing the encoding, since it uses its own encoding detection. References External links The Notepad file encoding problem, redux – Raymond Chen IsTextUnicode – Microsoft Docs Character encoding Software bugs Microsoft Windows
Bush hid the facts
[ "Technology" ]
528
[ "Natural language and computing", "Computing platforms", "Microsoft Windows", "Character encoding" ]
10,760,089
https://en.wikipedia.org/wiki/Conocybe%20apala
Conocybe apala is a basidiomycete fungus and a member of the genus Conocybe. It is a fairly common fungus, both in North America and Europe, found growing among short green grass. Until recently, the species was also commonly called Conocybe lactea or Conocybe albipes and is colloquially known as the white dunce cap or the milky conecap. Another common synonym, Bolbitius albipes G.H. Otth 1871, places the fungus in the genus Bolbitius. Taxonomy Since this species is very common, it has a long taxonomic history, having been described independently many times throughout the years. The basionym Agaricus apalus was described by the Swedish mycologist Elias Magnus Fries in 1818 and reclassified as Pluteolus apalus by the French mycologist Lucien Quélet in 1886. This was reclassified as Galera hapala (or Galera apala) in 1887 by Pier Andrea Saccardo, then as Bolbitius apalus in 1891 by Julien Noël Costantin and Léon Jean Marie Dufour, and finally as Derminus apalus in 1898 by Paul Christoph Hennings. It was reclassified as Conocybe apala in 2003 by Everhardus Johannes Maria Arnolds. Description Very easily missed due to their very small size, C. apala fruit bodies are otherwise quite easy to identify. The cap has a pale cream to silvery-white colour and may sometimes have a darker yellow to brown coloration towards the central umbo. Its trademark hood-shaped conical cap expands with age and may flatten out, the surface being marked by minute radiating ridges. The cap ranges from 1–3 cm in diameter. The gills may be visible through the thin cap; they are coloured rust or cinnamon brown and quite dense. They are adnexed or free and release brown to reddish-brown elliptical spores, producing a spore print of the same colour. The stem is cap-coloured, elongated, thin, hollow and more or less equal along its length, with a height up to 11 cm and diameter of 1–3 mm. It can bear minuscule striations or hairs. The flesh of C. apala has no discernible taste or smell and is extremely fragile to the touch. Similar species Similar species include Pholiotina rugosa and Conocybe tenera. Habitat Conocybe apala is a saprobe found in areas with rich soil and short grass, such as pastures, playing fields, lawns and meadows, as well as rotting manured straw, fruiting singly or in small numbers as ephemeral bodies. It is commonly found fruiting during humid, rainy weather with generally overcast skies. It will appear on sunny mornings while there is dew but will not persist once it evaporates. In most cases, by midday the delicate fruiting bodies shrivel, dry and bend from sight. C. apala's fruiting season begins in spring and ends in autumn. It is distributed across Europe and North America. Edibility Completely unknown; one study found phallotoxin in the caps. External links and resources Mushroom Expert - Conocybe albipes MykoWeb California Fungi - Conocybe lactea Bolbitiaceae Fungi of Europe Fungi of North America Fungi described in 2003 Fungus species
Conocybe apala
[ "Biology" ]
704
[ "Fungi", "Fungus species" ]
10,760,797
https://en.wikipedia.org/wiki/Zobel%20network
For the wave filter invented by Zobel and sometimes named after him, see m-derived filters. Zobel networks are a type of filter section based on the image-impedance design principle. They are named after Otto Zobel of Bell Labs, who published a much-referenced paper on image filters in 1923. The distinguishing feature of Zobel networks is that the input impedance is fixed in the design independently of the transfer function. This characteristic is achieved at the expense of a much higher component count compared to other types of filter sections. The impedance would normally be specified to be constant and purely resistive. For this reason, Zobel networks are also known as constant resistance networks. However, any impedance achievable with discrete components is possible. Zobel networks were formerly widely used in telecommunications to flatten and widen the frequency response of copper land lines, producing a higher performance line from one originally intended for ordinary telephone use. Analogue technology has given way to digital technology and they are now little used. When used to cancel out the reactive portion of loudspeaker impedance, the design is sometimes called a Boucherot cell. In this case, only half the network is implemented as fixed components, the other half being the real and imaginary components of the loudspeaker impedance. This network is more akin to the power factor correction circuits used in electrical power distribution, hence the association with Boucherot's name. A common circuit form of Zobel networks is the bridged T network. This term is often used to mean a Zobel network, sometimes incorrectly when the circuit implementation is not a bridged T. Derivation The basis of a Zobel network is a balanced bridge circuit as shown in the circuit to the right. The condition for balance is that the products of the impedances in opposite arms of the bridge are equal: Z Z' = Z0². If this is expressed in terms of a normalised Z0 = 1, as is conventionally done in filter tables, then the balance condition is simply Z' = 1/Z. Or, Z' is simply the inverse, or dual impedance, of Z. The bridging impedance ZB is across the balance points and hence has no potential across it. Consequently, it will draw no current and its value makes no difference to the function of the circuit. Its value is often chosen to be Z0 for reasons which will become clear in the discussion of bridged T circuits further on. Input impedance The input impedance is given by Zin = (Z + Z0)(Z' + Z0)/(Z + Z' + 2Z0). Substituting the balance condition, Z Z' = Z0², yields Zin = Z0. The input impedance can be designed to be purely resistive by setting Z0 = R0, a resistance. The input impedance will then be real and independent of ω in band and out of band no matter what complexity of filter section is chosen. Transfer function If the Z0 in the bottom right of the bridge is taken to be the output load then a transfer function of Vo/Vin can be calculated for the section. Only the RHS (right-hand side) branch needs to be considered in this calculation. The reason for this can be seen by considering that there is no current flow through ZB. None of the current flowing through the LHS (left-hand side) branch is going to flow into the load. The LHS branch, therefore, cannot possibly affect the output. It certainly affects the input impedance (and hence the input terminal voltage) but not the transfer function. The transfer function can now easily be seen to be Vo/Vin = Z0/(Z0 + Z). Bridged T implementation The load impedance is actually the impedance of the following stage or of a transmission line and can sensibly be omitted from the circuit diagram. If we also set ZB = Z0, then the circuit to the right results. 
This is referred to as a bridged T circuit because the impedance Z is seen to "bridge" across the T section. The purpose of setting ZB = Z0 is to make the filter section symmetrical. This has the advantage that it will then present the same impedance, Z0, at both the input and the output port. Types of section A Zobel filter section can be implemented for low-pass, high-pass, band-pass or band-stop. It is also possible to implement a flat frequency response attenuator. This last is of some importance for the practical filter sections described later. Attenuator For an attenuator section, Z is simply a resistor R, and Z' is its dual, R' = R0²/R. The voltage transfer ratio of the section is Vo/Vin = R0/(R + R0); expressed as an attenuation, this is 20·log10((R + R0)/R0) dB. Low pass For a low-pass filter section, Z is an inductor L and Z' is a capacitor C, where C = L/R0². The transfer function of the section is given by A(ω) = R0/(R0 + jωL). The 3 dB point occurs when ωL = R0, so the 3 dB cut-off frequency is given by ωc = R0/L. Where ω is in the stop band well above ωc, it can be seen from this that A(ω) is falling away in the stop band at the classic 6 dB/8ve (or 20 dB/decade). High pass For a high-pass filter section, Z is a capacitor C and Z' is an inductor L, where L = CR0². The transfer function of the section is given by A(ω) = jωCR0/(1 + jωCR0). The 3 dB point occurs when 1/(ωC) = R0, so the 3 dB cut-off frequency is given by ωc = 1/(CR0). In the stop band, A(ω) ≈ ωCR0, falling at 6 dB/8ve with decreasing frequency. Band pass For a band-pass filter section, Z is a series resonant circuit and Z' is its dual, a shunt resonant circuit. The transfer function of the section is given by A(ω) = jωCR0/(1 − ω²LC + jωCR0). The 3 dB points occur when |1 − ω²LC| = ωCR0, so the 3 dB cut-off frequencies are the positive roots of ω²LC ± ωCR0 − 1 = 0, from which the centre frequency, ωm (the arithmetic mean of the two cut-off frequencies), and the bandwidth, Δω = R0/L, can be determined. Note that ωm is different from the resonant frequency ω0 = 1/√(LC), the relationship between them being given by ω0² = ωm² − (Δω/2)². Band stop For a band-stop filter section, Z is a shunt resonant circuit and Z' is its dual, a series resonant circuit. The transfer function and bandwidth can be found by analogy with the band-pass section. Practical sections Zobel networks are rarely used for traditional frequency filtering. Other filter types are significantly more efficient for this purpose. Where Zobels come into their own is in frequency equalisation applications, particularly on transmission lines. The difficulty with transmission lines is that the impedance of the line varies in a complex way across the band and is tedious to measure. For most filter types, this variation in impedance will cause a significant difference in response from the theoretical, and is mathematically difficult to compensate for, even assuming that the impedance is known precisely. If Zobel networks are used, however, it is only necessary to measure the line response into a fixed resistive load and then design an equaliser to compensate it. It is entirely unnecessary to know anything at all about the line impedance as the Zobel network will present exactly the same impedance to the line as the measuring instruments. Its response will therefore be precisely as theoretically predicted. This is a tremendous advantage where high quality lines with flat frequency responses are desired. Basic loss For audio lines, it is invariably necessary to combine L/C filter components with resistive attenuator components in the same filter section. The reason for this is that the usual design strategy is to require the section to attenuate all frequencies down to the level of the frequency in the passband with the lowest level. Without the resistor components, the filter, at least in theory, would increase attenuation without limit. 
The attenuation in the stop band of the filter (that is, the limiting maximum attenuation) is referred to as the "basic loss" of the section. In other words, the flat part of the band is attenuated by the basic loss down to the level of the falling part of the band which it is desired to equalise. The following discussion of practical sections relates in particular to audio transmission lines. 6 dB/octave roll-off The most significant effect that needs to be compensated for is that at some cut-off frequency the line response starts to roll off like a simple low-pass filter. The effective bandwidth of the line can be increased with a section that is a high-pass filter matching this roll-off, combined with an attenuator. In the flat part of the pass-band only the attenuator part of the filter section is significant. This is set at an attenuation equal to the level of the highest frequency of interest. All frequencies up to this point will then be equalised flat to an attenuated level. Above this point, the output of the filter will again start to roll off. Mismatched lines Quite commonly in telecomms networks, a circuit is made up of two sections of line which do not have the same characteristic impedance, for instance 150 Ω and 300 Ω. One effect of this is that the roll-off can start at 6 dB/octave at an initial cut-off frequency, but then at a second, higher frequency it can become suddenly steeper. This situation then requires (at least) two high-pass sections to compensate, each operating at a different cut-off frequency. Bumps and dips Bumps and dips in the passband can be compensated for with band-stop and band-pass sections respectively. Again, an attenuator element is also required, but usually rather smaller than that required for the roll-off. These anomalies in the pass-band can be caused by mismatched line segments as described above. Dips can also be caused by ground temperature variations. Transformer roll-off Occasionally, a low-pass section is included to compensate for excessive line transformer roll-off at the low frequency end. However, this effect is usually very small compared to the other effects noted above. Low frequency sections will usually have inductors of high values. Such inductors have many turns and consequently tend to have significant resistance. In order to keep the section constant resistance at the input, the dual branch of the bridge T must contain a dual of the stray resistance, that is, a resistor in parallel with the capacitor. Even with the compensation, the stray resistance still has the effect of inserting attenuation at low frequencies. This in turn has the effect of slightly reducing the amount of LF lift the section would otherwise have produced. The basic loss of the section can be increased by the same amount as the stray resistance is inserting and this will return the LF lift achieved to that designed for. Compensation of inductor resistance is not such an issue at high frequencies where the inductors will tend to be smaller. In any case, for a high-pass section the inductor is in series with the basic loss resistor and the stray resistance can merely be subtracted from that resistor. On the other hand, the compensation technique may be required for resonant sections, especially a high Q resonator being used to lift a very narrow band. For these sections the value of inductors can also be large. Temperature compensation An adjustable attenuation high-pass filter can be used to compensate for changes in ground temperature. 
Ground temperature is very slowly varying in comparison to surface temperature. Adjustments are usually only required 2-4 times per year for audio applications. Typical filter chain A typical complete filter will consist of a number of Zobel sections for roll-off, frequency dips and temperature, followed by a flat attenuator section to bring the level down to a standard attenuation. This is followed by a fixed gain amplifier to bring the signal back up to a usable level. The gain of the amplifier is deliberately limited, since further amplification of line noise would tend to cancel out the quality benefits of improved bandwidth. This limit on amplification essentially limits how much the bandwidth can be increased by these techniques. No one part of the incoming signal band will be amplified by the full gain of the amplifier. That gain is made up of the line loss in the flat part of the spectrum plus the basic loss of each section. In general, each section will have its minimum loss in a different frequency band, hence the amplification in that band will be limited to the basic loss of just that one filter section, assuming insignificant overlap. A typical choice for R0 is 600 Ω. A good quality transformer (usually essential), known as a repeating coil, is at the beginning of the chain where the line terminates. Other section implementations Besides the bridged T, there are a number of other possible section forms that can be used. L-sections As mentioned above, ZB can be set to any desired impedance without affecting the input impedance. In particular, setting it as either an open circuit or a short circuit results in a simplified section circuit, called an L-section. The input port still presents an impedance of Z0 (provided that the output is terminated in Z0), but the output port no longer presents a constant impedance. Both the open-circuit and the short-circuit L–sections are capable of being reversed so that Z0 is then presented at the output and the variable impedance is presented at the input. To retain the benefit of Zobel networks' constant impedance, the variable impedance port must not face the line impedance. Nor should it face the variable impedance port of another L-section. Facing the amplifier is acceptable since the input impedance of the amplifier is normally arranged to be within acceptable tolerances. In other words, variable impedance must not face variable impedance. Balanced bridged T The Zobel networks described here can be used to equalise land lines composed of twisted pair or star quad cables. The balanced circuit nature of these lines delivers a good common mode rejection ratio (CMRR). To maintain the CMRR, circuits connected to the line should maintain the balance. For this reason, balanced versions of Zobel networks are sometimes required. This is achieved by halving the impedance of the series components and then putting identical components in the return leg of the circuit. Balanced C-sections A C–section is a balanced version of an L–section. The balance is achieved in the same way as a balanced full bridged T section by placing half of the series impedance in, what was, the common conductor. C–sections, like the L–section from which they are derived, can come in both open-circuit and short circuit varieties. The same restrictions apply to C–sections regarding impedance terminations as to L–sections. 
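As a worked illustration of the high-pass design relations given earlier, the short sketch below picks example values (the 600 Ω impedance and 3 kHz cut-off are arbitrary choices for the illustration, not figures from any particular line equaliser) and evaluates the section's response:

import math

R0 = 600.0              # ohm, nominal line impedance (example value)
f_c = 3_000.0           # Hz, desired 3 dB cut-off (example value)

C = 1.0 / (2 * math.pi * f_c * R0)   # series capacitor: Z = 1/(jwC)
L = C * R0 ** 2                      # dual inductor for the Z' branch: L = C*R0^2

print(f"C = {C * 1e9:.1f} nF, L' = {L * 1e3:.1f} mH")

# Magnitude of the high-pass transfer function, |Vo/Vin| = wCR0/sqrt(1 + (wCR0)^2)
for f in (300, 1_000, 3_000, 10_000):
    x = 2 * math.pi * f * C * R0
    print(f"{f:>6} Hz: {20 * math.log10(x / math.sqrt(1 + x * x)):6.1f} dB")

The printed figures show the behaviour described above: the response is 3 dB down at the chosen cut-off and falls at 6 dB/octave below it, while the dual inductor value is what keeps the input impedance of the full bridged T section constant at R0.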
X-section It is possible to transform a bridged–T section into a lattice, or X–section (see Bartlett's bisection theorem). The X–section is a kind of bridge circuit, but usually drawn as a lattice, hence the name. Its topology makes it intrinsically balanced, but it is never used to implement the constant resistance filters of the kind described here because of the increased component count. The component count increase arises out of the transformation process rather than the balance. There is, however, one common application for this topology, the lattice phase equaliser, which is also constant resistance and was also invented by Zobel. This circuit differs from those described here in that the bridge circuit is not generally in the balanced condition. Half sections In respect of constant resistance filters, the term half section has a somewhat different meaning than in other kinds of image filter. Generally, a half section is formed by cutting through the midpoint of the series impedance and shunt admittance of a full section of a ladder network. It is literally half a section. Here, however, there is a somewhat different definition. A half section is either the series impedance (series half-section) or shunt admittance (shunt half-section) that, when connected between source and load impedances of R0, will result in the same transfer function as some arbitrary constant resistance circuit. The purpose of using half sections is that the same functionality is achieved with a drastically reduced component count. If a constant resistance circuit has an input Vin, then a generator with an impedance R0 must have an open-circuit voltage of E = 2Vin in order to produce Vin at the input of the constant resistance circuit. If now the constant resistance circuit is replaced by an impedance of 2Z, it can be seen by simple symmetry that the voltage Vin will appear halfway along the impedance 2Z. The output of this circuit can now be calculated as Vo = E R0/(2R0 + 2Z) = Vin R0/(R0 + Z), which is precisely the same as for a bridged T section with series element Z. The series half-section is thus a series impedance of 2Z. By corresponding reasoning, the shunt half-section is a shunt impedance of Z' (or twice the admittance). It must be emphasised that these half sections are far from being constant resistance. They have the same transfer function as a constant resistance network, but only when correctly terminated. An equaliser will not give good results if a half-section is positioned facing the line since the line will have a variable (and probably unknown) impedance. Likewise, two half-sections cannot be connected directly to each other as these both will have variable impedances. However, if a sufficiently large attenuator is placed between the two variable impedances, this will have the effect of masking the mismatch. A high value attenuator will have an input impedance close to R0 no matter what the terminating impedance on the other side. In the example practical chain above there is a 22 dB attenuator required in the chain. This does not need to be at the end of the chain; it can be placed anywhere desired and used to mask two mismatched impedances. It can also be split into two or more parts and used for masking more than one mismatch. Zobel networks and loudspeaker drivers See also Boucherot cell Zobel networks can be used to make the impedance a loudspeaker presents to its amplifier output appear as a steady resistance. This is beneficial to the amplifier performance. The impedance of a loudspeaker is partly resistive. 
The resistance represents the energy transferred from the amplifier to the sound output plus some heating losses in the loudspeaker. However, the speaker also possesses inductance due to the windings of its coil. The impedance of the loudspeaker is thus typically modelled as a series resistor and inductor. A parallel circuit of a series resistor and capacitor of the correct values, placed across the loudspeaker's terminals, will form a Zobel bridge. This arrangement is obligatory because the centre point between the loudspeaker's inductance and resistance is inaccessible (and, in fact, fictitious - the resistor and inductor are distributed quantities as in a transmission line). The loudspeaker may be modelled more accurately by a more complex equivalent circuit. The compensating Zobel network will also become more complex to the same degree. Note that the circuit will work just as well if the capacitor and resistor are interchanged. In this case the circuit is no longer a Zobel balanced bridge but clearly the impedance has not changed. The same circuit could have been arrived at by designing from Boucherot's point of view of minimising reactive power. From this design approach there is no difference in the order of the capacitor and the resistor, and Boucherot cell might be considered a more accurate description. Video equalisers Zobel networks can be used for the equalisation of video lines as well as audio lines. There is, however, a noticeably different approach taken with the two types of signal. The difference in the cable characteristics can be summarised as follows: Video commonly uses co-axial cable, which requires an unbalanced topology for the filters, whereas audio commonly uses twisted pair, which requires a balanced topology. Video requires a wider bandwidth and a tighter differential phase specification, which in turn results in a tighter dimensional specification for the cable. The tighter specifications for video cable tend to produce a substantially constant characteristic impedance over a wide band (usually nominally 75 Ω). On the other hand, audio cable may be nominally 600 Ω (300 Ω and 150 Ω are also standard values), but it will only actually measure this value at 800 Hz. At lower frequencies it will be much higher, and at higher frequencies it will be lower and more reactive. These characteristics result in a smoother, better behaved response for video lines, without the sharp discontinuities typically found with audio lines. These discontinuities in the frequency response are often caused by the habit of the telecom companies of forming a connection by joining two shorter lines of differing characteristic impedance. Video lines, on the other hand, tend to roll off smoothly with frequency in a predictable way. This more predictable response of video allows a different design approach. The video equaliser is built as a single bridged T section but with a rather more complex network for Z. For short lines, or for a trimming equaliser, a Bode filter topology might be used. For longer lines a network with Cauer filter topology might be used. Another driver for this approach is the fact that a video signal occupies a large number of octaves, around 20 or so. If equalised with simple basic sections, a large number of filter sections would be required. Simple sections are designed, typically, to equalise a range of one or two octaves. Bode equaliser A Bode network, as with a Zobel network, is a symmetrical bridge T network which meets the constant k condition. 
It does not, however, meet the constant resistance condition; that is, the bridge is not in balance. Any impedance network, Z, can be used in a Bode network, just as with a Zobel network, but the high-pass section for correcting high-end frequencies is the most common. A Bode network terminated in a variable resistor can be used to produce a variable impedance at the input terminals of the network. A useful property of this network is that the input impedance can be made to vary from a capacitive impedance, through a purely resistive impedance, to an inductive impedance, all by adjusting the single load potentiometer, RL. The bridging resistor, R0, is chosen to equal the nominal impedance, so that in the special case when RL is set to R0 the network behaves as a Zobel network and Zin is also equal to R0. The Bode network is used in an equaliser by connecting the whole network such that the input impedance of the Bode network, Zin, is in series with the load. Since the impedance of the Bode network can be either capacitive or inductive depending on the position of the adjustment potentiometer, the response may be a boost or a cut to the band of frequencies it is acting on. The transfer function of this arrangement is Vo/Vin = R0/(R0 + Zin). The Bode equaliser can be converted into a constant resistance filter by using the entire Bode network as the Z branch of a Zobel network, resulting in a rather complex network of bridge T networks embedded in a larger bridge T. It can be seen that this results in the same transfer function by noting that the transfer function of the Bode equaliser is identical to the transfer function of the general form of Zobel equaliser. Note that the dual of a constant resistance bridge T network is the identical network. The dual of a Bode network is therefore the same network except for the load resistance RL, which must be the inverse, RL', in the dual circuit. To adjust the equaliser, RL and RL' must be ganged, or otherwise kept in step, such that as RL increases RL' will decrease and vice versa. Cauer equaliser To equalise long video lines, a network with Cauer topology is used as the Z impedance of a Zobel constant resistance network. Just as the input impedance of a Bode network is used as the Z impedance of a Zobel network to form a Zobel Bode equaliser, so the input impedance of a Cauer network is used to make a Zobel Cauer equaliser. The equaliser is required to correct an attenuation increasing with frequency, and for this a Cauer ladder network consisting of series resistors and shunt capacitors is required. Optionally, there may be an inductor included in series with the first capacitor, which increases the equalisation at the high end due to the steeper slope produced as resonance is approached. This may be required on longer lines. The shunt resistor R1 provides the basic loss of the Zobel network in the usual way. The dual of an RC Cauer network is an LR Cauer network, which is required for the Z' impedance as shown in the example. Adjustment is somewhat problematic with this equaliser. In order to maintain the constant resistance, the pairs of components C1/L1', C2/L2' etc. must remain dual impedances as the component is adjusted, so both parts of the pair must be adjusted together. With the Zobel Bode equaliser, this is a simple matter of ganging two pots together - a component configuration available off-the-shelf. Ganging together a variable capacitor and inductor is not, however, a very practical solution. 
These equalisers tend to be "hand built", one solution being to select the capacitors on test and fit fixed values according to the measurements and then adjust the inductors until the required match is achieved. The furthest element of the ladder from the driving point is equalising the lowest frequency of interest. This is adjusted first as it will also have an effect on higher frequencies and from there progressively higher frequencies are adjusted working along the ladder towards the driving point. See also Electronic filter topology Image impedance Constant k filters m-derived filters Boucherot cell References Zobel, O. J., Distortion correction in electrical circuits with constant resistance recurrent networks, Bell System Technical Journal, Vol. 7 (1928), p. 438. Redifon Radio Diary, 1970, William Collins Sons & Co, 1969 Analog circuits Bridge circuits Image impedance filters Electronic filter topology
Zobel network
[ "Engineering" ]
5,392
[ "Analog circuits", "Electronic engineering" ]
10,760,804
https://en.wikipedia.org/wiki/Alloy%20Computer%20Products
Alloy Computer Products is an Australian manufacturer of information technology products based near Melbourne. As of 2007, the company markets networking and VoIP products. The company was originally based in Framingham, Massachusetts, and at one point was a major producer of QIC format tape drives and other computer peripherals. In the mid-1990s the company was no longer profitable. It filed for bankruptcy in the U.S. and the Australian subsidiary was bought out by the management team from the Australian division. Alloy Computer Products, Inc., was founded in 1979. Alloy was initially founded to supply hard drive and tape backup systems for S-100 bus computers running CP/M. When IBM's PC was released, Alloy provided hard drive storage and tape backup solutions for the new system. Alloy Computer Products later developed and marketed multi-user computer systems for the emerging microcomputer marketplace. Alloy later developed printing accelerator hardware. In 1984 Alloy developed the PC-Slave card, which consisted of an x86 (8086 or V20) processor, either 256 KB or 1 MB of memory, and two serial ports. Later, an Intel 80286-based version was released, called the PC-Slave/286. These cards used RTNX (later renamed NTNX) to allow the host computer to provide disk storage and printing support. Dumb PC-terminals were attached to the PC-Slave to allow the running of DOS programs. At the time, using this solution was more cost-effective than using separate networked computers, but as computers and networking hardware became cheaper and cheaper, Alloy's advantage was overshadowed by the disadvantages of not being able to support graphics, etc. Alloy also developed a PC-Bus expansion bus system to allow the installation of up to 32 PC-Slave cards attached to a single host PC. This allowed 32-user networks to be created, but each network was completely standalone. Based on the knowledge learned by developing the PC-Slave card, in 1985 Alloy developed the DOS-73 co-processor board for the AT&T UNIX PC, allowing AT&T's Unix-based UNIX PC (aka the PC 7300 and the 3B1) to run MS-DOS-based programs. Alloy grew to $50 million in annual sales by 1986 and executed a successful IPO in June of that year. Alloy had an installed base of 150,000 users by the early 1990s, largely small businesses, comprising a relatively significant portion of the multi-user DOS marketplace. One DOS-based computer was equipped with a multi-user/multi-tasking operating system called "386/MultiWare" which, along with specialized hardware, could provide serial connectivity to up to 20 dumb terminal clients. Each dumb terminal was connected to a session running up to 8 concurrent DOS virtual machines, all running on the host computer. If a problem arose with a single DOS virtual machine it could be rebooted without an effect on the other attached terminals. Later "MultiNode" was introduced to meet client needs using the Novell network operating system, allowing both client/server network connectivity and serial terminal users. See also Multiuser DOS Federation References External links Corporate website. Buyout info. Scans of magazine advertisements from 1983 illustrating their then-current product line. Alloy files for bankruptcy. Alloy Computer Products to market Micro Advice's JETstream! printer accelerator card AT&T Unix-PC system Computer companies of Australia Companies based in Framingham, Massachusetts
Alloy Computer Products
[ "Technology" ]
690
[ "Computer hardware companies", "Computers" ]
10,761,675
https://en.wikipedia.org/wiki/Form%20classification
Form classification is the classification of organisms based on their morphology, which does not necessarily reflect their biological relationships. Form classification, generally restricted to palaeontology, reflects uncertainty; the goal of science is to move "form taxa" to biological taxa whose affinity is known. Form taxonomy is restricted to fossils that preserve too few characters for a conclusive taxonomic definition or assessment of their biological affinity, but whose study is made easier if a binomial name is available by which to identify them. The term "form classification" is preferred to "form taxonomy"; taxonomy suggests that the classification implies a biological affinity, whereas form classification is about giving a name to a group of morphologically-similar organisms that may not be related. A "parataxon" (not to be confused with parataxonomy), or "sciotaxon" (Gr. "shadow taxon"), is a classification based on incomplete data: for instance, the larval stage of an organism that cannot be matched up with an adult. It reflects a paucity of data that makes biological classification impossible. A sciotaxon is defined as a taxon thought to be equivalent to a true taxon (orthotaxon), but whose identity cannot be established because the two candidate taxa are preserved in different ways and thus cannot be compared directly. Examples In zoology Form taxa are groupings that are based on common overall forms. Early attempts at classification of labyrinthodonts were based on skull shape (the heavily armoured skulls often being the only preserved part). The amount of convergent evolution in the many groups led to a number of polyphyletic taxa. Such groups are united by a common mode of life, often one that is generalist, in consequence acquiring generally similar body shapes by convergent evolution. Ediacaran biota — whether they are the precursors of the Cambrian explosion of the fossil record, or are unrelated to any modern phylum — can currently only be grouped in "form taxa". Other examples include the seabirds and the "Graculavidae". The latter were initially described as the earliest family of Neornithes but are nowadays recognized to unite a number of unrelated early neornithine lineages, several of which probably later gave rise to the "seabird" form taxon of today. Fossil eggs are classified according to the parataxonomic system called Veterovata. There are three broad categories in the scheme, on the pattern of organismal phylogenetic classification, called oofamilies, oogenera and oospecies (collectively known as ootaxa). The names of oogenera and oofamilies conventionally contain the root "oolithus" meaning "stone egg", but this rule is not always followed. They are divided up into several basic types: Testudoid, Geckoid, Crocodiloid, Dinosauroid-spherulitic, Dinosauroid-prismatic, and Ornithoid. In botany In paleobotany, two terms were formerly used in the codes of nomenclature, "form genera" and "organ genera", to mean groups of fossils of a particular part of a plant, such as a leaf or seed, whose parent plant is not known because the fossils were preserved unattached to the parent plant. A later term, "morphotaxa", also allows for differences in preservational state. These three terms have been replaced as of 2011 by provisions for "fossil-taxa" that are more similar to the provisions for other types of plants. Names given to organ genera could only be applied to the organs in question, and could not be extended to the entire organism. 
Fossil-taxon names can cover several parts of an organism, or several preservational states, but do not compete for priority with any names for the same organism that are based on a non-fossil type. The part of the plant was often, but not universally, indicated by the use of a suffix in the generic name: wood fossils may have generic names ending in -xylon; leaf fossils, generic names ending in -phyllum; fruit fossils, generic names ending in -carpon, -carpum or -carpus; pollen fossils, generic names ending in -pollenites or -pollenoides. See also Glossary of scientific naming Folk taxonomy Phenetics Taphonomy Wastebasket taxon Footnotes Biological classification Botanical nomenclature Morphology (biology) Paleobotany Plant taxonomy Taxonomy (biology)
Form classification
[ "Biology" ]
891
[ "Botanical nomenclature", "Plants", "Morphology (biology)", "Botanical terminology", "Biological nomenclature", "Taxonomy (biology)", "Plant taxonomy", "nan" ]
10,761,967
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20experiment
The Eötvös experiment was a physics experiment that measured the correlation between inertial mass and gravitational mass, demonstrating that the two were one and the same, something that had long been suspected but never demonstrated with the same accuracy. The earliest experiments were done by Isaac Newton (1642–1727) and improved upon by Friedrich Wilhelm Bessel (1784–1846). A much more accurate experiment using a torsion balance was carried out by Loránd Eötvös starting around 1885, with further improvements in a lengthy run between 1906 and 1909. Eötvös's team followed this with a series of similar but more accurate experiments, as well as experiments with different types of materials and in different locations around the Earth, all of which demonstrated the same equivalence in mass. In turn, these experiments led to the modern understanding of the equivalence principle encoded in general relativity, which states that the gravitational and inertial masses are the same. It is sufficient for the inertial mass to be proportional to the gravitational mass. Any multiplicative constant will be absorbed in the definition of the unit of force. Eötvös's original experiment Eötvös's original experimental device consisted of two masses on opposite ends of a rod, hung from a thin fiber. A mirror attached to the rod, or fiber, reflected light into a small telescope. Even tiny changes in the rotation of the rod would cause the light beam to be deflected, which would in turn cause a noticeable change when magnified by the telescope. As seen from the Earth's frame of reference (or "lab frame", which is not an inertial frame of reference), the primary forces acting on the balanced masses are the string tension, gravity, and the centrifugal force due to the rotation of the Earth. Gravity is calculated by Newton's law of universal gravitation, which depends on gravitational mass. The centrifugal force is calculated by Newton's laws of motion and depends on inertial mass. The experiment was arranged so that if the two types of masses were different, the two forces will not act in exactly the same way on the two bodies, and over time the rod will rotate. As seen from the rotating "lab frame", the string tension plus the (much smaller) centrifugal force cancels the weight (as vectors), while as seen from any inertial frame the (vector) sum of the weight and the tension makes the object rotate along with the earth. For the rod to be at rest in the lab frame, the reactions, on the rod, of the tensions acting on each body, must create a zero net torque (the only degree of freedom is rotation on the horizontal plane). Supposing that the system was constantly at rest – this meaning mechanical equilibrium (i.e. net forces and torques zero) – with the two bodies thus hanging also at rest, but having different centrifugal forces upon them and consequently exerting different torques on the rod through the reactions of the tensions, the rod then would spontaneously rotate, in contradiction with our assumption that the system is at rest. So the system cannot exist in this state; any difference between the centrifugal forces on the two bodies will set the rod in rotation. Further improvements Initial experiments around 1885 demonstrated that there was no apparent difference, and Eötvös improved the experiment to demonstrate this with more accuracy. In 1889 he used the device with different types of sample materials to see if there was any change in gravitational force due to materials. 
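Before turning to the outcome of these material comparisons, it helps to have a feel for the size of the competing accelerations. The short calculation below uses Budapest's approximate latitude and round-number constants; these are illustrative choices for the example, not figures from Eötvös's own records:

import math

g = 9.81                      # m/s^2, local gravitational acceleration
omega = 7.292e-5              # rad/s, Earth's rotation rate
R = 6.371e6                   # m, Earth's mean radius
lat = math.radians(47.5)      # approximate latitude of Budapest

# Centrifugal acceleration and its horizontal component at this latitude
a_c = omega**2 * R * math.cos(lat)
a_h = a_c * math.sin(lat)
print(f"centrifugal: {a_c:.4e} m/s^2 ({a_c/g:.2e} of g)")
print(f"horizontal component: {a_h:.4e} m/s^2")

# If inertial and gravitational mass differed by one part in 1e8 (the bound
# quoted in the text), the differential horizontal acceleration between the
# two test bodies would be:
print(f"differential signal: {a_h * 1e-8:.2e} m/s^2")

At a one-part-in-100-million bound, the differential horizontal acceleration being excluded is of order 10^-10 m/s^2, which is why a torsion balance, with its extreme sensitivity to tiny torques, was the natural instrument.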
The 1889 experiments proved that no such change could be measured, to a claimed accuracy of 1 in 20 million. In 1890 he published these results, as well as a measurement of the mass of Gellért Hill in Budapest. The next year he started work on a modified version of the device, which he called the "horizontal variometer". This modified the basic layout slightly to place one of the two rest masses hanging from the end of the rod on a fiber of its own, as opposed to being attached directly to the end. This allowed it to measure torsion in two dimensions and, in turn, the local horizontal component of g. It was also much more accurate. Now generally referred to as the Eötvös balance, this device is commonly used today in prospecting by searching for local mass concentrations. Using the new device, a series of experiments taking 4,000 hours was carried out with Dezső Pekár (1873–1953) and Jenő Fekete (1880–1943) starting in 1906. These were first presented at the 16th International Geodesic Conference in London in 1909, raising the accuracy to 1 in 100 million. Eötvös died in 1919, and the complete measurements were only published in 1922 by Pekár and Fekete.
Related studies
Eötvös also studied similar experiments being carried out by other teams on moving ships, which led to his development of the Eötvös effect to explain the small differences they measured. These were due to the additional accelerative forces caused by the motion of the ships relative to the Earth, an effect that was demonstrated on an additional run carried out on the Black Sea in 1908. In the 1930s a former student of Eötvös, János Renner (1889–1976), further improved the results to between 1 in 2 billion and 1 in 5 billion. Robert H. Dicke, with P. G. Roll and R. Krotkov, re-ran the experiment much later using improved apparatus and further improved the accuracy to 1 in 100 billion. They also made several observations about the original experiment which suggested that the claimed accuracy was somewhat suspect. Re-examining the data in light of these concerns led to an apparent very slight effect that seemed to suggest that the equivalence principle was not exact, and changed with different types of material. In the 1980s several new physics theories attempting to combine gravitation and quantum mechanics suggested that matter and antimatter would be affected slightly differently by gravity. Combined with Dicke's claims, there appeared to be a possibility that such a difference could be measured; this led to a new series of Eötvös-type experiments (as well as timed falls in evacuated columns) that eventually demonstrated no such effect. A side effect of these experiments was a re-examination of the original Eötvös data, including detailed studies of the local stratigraphy, the physical layout of the Physics Institute (which Eötvös had personally designed), and even the weather and other effects. The experiment is therefore well recorded.
Table of measurements over time
Tests on the Equivalence principle
See also
Fifth force
Inertial frame
General relativity
Foucault pendulum
Eddington experiment
References
Physics experiments
Gravimetry
Eötvös experiment
[ "Physics" ]
1,366
[ "Experimental physics", "Physics experiments" ]
8,608,584
https://en.wikipedia.org/wiki/Acoustic%20approximation
In acoustics, the acoustic approximation is a fundamental principle that states that an acoustic wave is a small, adiabatic pressure ripple riding on a comparatively large equilibrium (bias) pressure. Typically, the acoustic pressure is on the order of a few ppm of the equilibrium pressure. By extension, the acoustic approximation also implies that an acoustic wave travels at the local speed of sound, as the linearization sketched below makes explicit.
See also
Sound
Acoustics
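What follows is a minimal sketch of that linearization in standard textbook form; the symbols p₀, p′, ρ₀, ρ′ and γ are my notation, not the article's.
```latex
% Split pressure and density into a large equilibrium part and a small
% acoustic perturbation (a standard derivation, not quoted from the article):
\[
  p = p_0 + p', \qquad \rho = \rho_0 + \rho', \qquad |p'| \ll p_0 .
\]
% Keeping only first-order terms in the perturbations, the continuity and
% Euler equations combine into the linear wave equation
\[
  \frac{\partial^2 p'}{\partial t^2} = c^2\,\nabla^2 p',
  \qquad
  c = \sqrt{\left(\frac{\partial p}{\partial \rho}\right)_{\!s}}
    = \sqrt{\frac{\gamma p_0}{\rho_0}} \quad \text{(ideal gas)},
\]
% which is why, in this approximation, every small disturbance propagates
% at the local speed of sound c.
```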
Acoustic approximation
[ "Physics" ]
91
[ "Classical mechanics", "Acoustics" ]
8,608,695
https://en.wikipedia.org/wiki/Mobil%20Economy%20Run
Mobil Economy Run was an annual event that took place from 1936 to 1968, except during World War II. It was designed to provide real fuel efficiency numbers during a coast-to-coast test on public roads and with regular traffic and weather conditions. The Mobil Oil Corporation sponsored it and the United States Auto Club (USAC) sanctioned and operated the run.
In the United States
The Mobil Economy Run determined the fuel economy or gas mileage potentials of passenger cars under typical driving conditions encountered by average motorists. This was rather different from the current method of computing fuel consumption used by the United States Environmental Protection Agency (EPA), which runs cars on a chassis dynamometer in a climate-controlled environment. To prevent special preparation of or modifications to the participating automobiles, the United States Auto Club purchased the cars at dealerships, checked them and, if certified as "stock", sealed their hoods and chassis. The factory gas tank was disconnected so fuel use could be accurately measured using a special tank mounted in the trunk. Because of the many types of automobiles, the Mobil Economy Run had eight classes based on wheelbase, engine and body size, as well as price. The leading automakers provided drivers, and in each car rode a USAC observer to prevent any deviations and to penalize traffic or speed-limit violations. Women were permitted to participate in the Mobilgas contest only from 1957.
The event was a marketing contest between the automakers. The objective was the coveted title of Mobilgas Economy Run winner in each class. However, starting in 1959, entries were judged on an actual miles-per-gallon basis instead of the ton-mileage formula used previously, which favored bigger, heavier cars (see the sketch following this entry). As a result, compact cars became the top mileage champs. In the 47-car field for 1959, a Rambler American was first - averaging - while a Rambler Six was second - with an average of - for the five-day trip from Los Angeles, California to Kansas City, Missouri. The efficiency of models such as AMC's more compact Ramblers caused them to be all but banned from the event. As a result, Ramblers and Studebakers were put in a separate class. This was because the 'Big Three' automakers (General Motors, Ford, and Chrysler) did not have competitive cars at the time and were trounced in the fuel-efficiency rankings until they introduced smaller platforms (GM X platform, Ford Falcon, Chrysler A platform).
Automakers tried to "prepare" their cars to achieve better results. An example was to use lightweight motor oil during the allowable "break-in" period "to promote faster wear and loosen the engines up quickly." Moreover, the factory-supplied drivers were highly trained and experienced in driving in a manner that conserved fuel. An average driver in the same car and over the same course would be lucky to achieve the Run's results. The tests only show the "ultimate" economy potential of the cars tested and their relative efficiency of fuel use.
The event received criticism in the form of literary fiction in the book Balloons are Available by Jordan Crittenden. In the novel, a fictional character is hit by an automobile during the event. An excerpt from the novel reads "'It was terrible,' she says.
'The driver couldn't stop because he was competing in a Mobilgas Economy Run.'"
Over the years, the Mobil Oil Corporation sponsored many Mobil Economy Run events across the country for various car classes and automobile associations over short distances. In 1963, a "day-day" test between Los Angeles and the Grand Canyon eventually evolved into a "six-day" endurance test between Los Angeles and New York run by the U.S. Auto Club. The last run started in Anaheim, California, on April 2, 1968, but was cancelled in Indianapolis on April 5 due to civil unrest across the country following the death of Martin Luther King Jr. on April 4. In December 1968, Richard F. Tucker, vice president of marketing for Mobil in North America, announced that the event would be cancelled in the United States, citing "changing advertising patterns and changing emphasis in automotive performance as major factors influencing the decision."
In the United Kingdom
Mobil entered the United Kingdom service station market in 1952, as Mobilgas. It copied the annual Economy Run from the US. In the 1970s, the Economy Run was taken over in the UK by Total S.A., but the event was discontinued there after just a few years.
In Australia
In Australia the Mobilgas Economy Run was staged in various years including 1955, 1956, 1958, 1959, 1960 (the fifth running), 1961, and, as the Mobil Economy Run, in 1962, 1963, 1964, and 1966 (the tenth running).
In Italy
In Italy the competition started in 1959 and lasted until at least 1985. From 1969, it was organised by Mobil with FIAT, which provided the cars. From 1969 to 1984 it was also called the Mobil Fiat Economy Run, while in 1985 the name was changed to Lancia Mobil Economy Run.
In France
The competition was also present in France at the end of the 1970s.
In the claims for a patent
The Mobil Economy Run is used to explain the claims made for United States Patent #3,937,202, an "Economy driving aid": "... The experience obtained by skilled drivers in the Mobil Economy run indicates that for best fuel economy, a car should be operated at nearly constant speed in the range of 30 to 50 mph. Rapid accelerations or decelerations and operation at (or near) full throttle should be avoided. To practice for economy runs, skilled drivers used special instrumentation to determine the operating conditions for best fuel economy. This instrumentation usually included a vacuum gauge to indicate intake manifold vacuum, a special odometer to measure distance traveled to hundredths of a mile, and a burette to measure gasoline usage. However, instrumentation of this type is extremely complex for the normal driver and is additionally quite expensive. ..." "... It is an objective of this invention to provide a signal to the operator of a variable speed, variable power internal combustion engine when the engine is being accelerated or decelerated too fast, in addition to a signal when the engine is being operated at too high or too low a power output. ..."
References
Motorsport competitions in the United States
Energy economics
1936 establishments in the United States
1968 disestablishments in the United States
Recurring sporting events established in 1936
Recurring events disestablished in 1968
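A toy comparison of the two scoring rules mentioned in the United States section — my own construction with invented round numbers. "Ton-mileage" is taken here as vehicle weight in tons times miles per gallon, a plausible reading of the formula rather than the contest's documented definition.
```python
def mpg_score(mpg: float) -> float:
    """Post-1959 style: the raw fuel economy."""
    return mpg

def ton_mileage_score(weight_lb: float, mpg: float) -> float:
    """Pre-1959 style: ton-miles per gallon, which rewards heavier cars."""
    return (weight_lb / 2000.0) * mpg

compact = {"name": "compact (hypothetical)", "weight_lb": 2500, "mpg": 25.0}
full_size = {"name": "full-size (hypothetical)", "weight_lb": 4400, "mpg": 16.0}

for car in (compact, full_size):
    print(f"{car['name']:>24}: mpg = {mpg_score(car['mpg']):5.1f}, "
          f"ton-mpg = {ton_mileage_score(car['weight_lb'], car['mpg']):5.1f}")

# The compact wins on mpg (25.0 vs 16.0), but the heavier car wins on
# ton-mileage (35.2 vs 31.2), illustrating why the older formula favored
# bigger, heavier cars.
```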
Mobil Economy Run
[ "Environmental_science" ]
1,355
[ "Energy economics", "Environmental social science" ]
8,609,886
https://en.wikipedia.org/wiki/Pterostilbene
Pterostilbene (trans-3,5-dimethoxy-4′-hydroxystilbene) is a stilbenoid chemically related to resveratrol. In plants, it serves a defensive phytoalexin role.
Natural occurrence
Pterostilbene is found in almonds, various Vaccinium berries (including blueberries), grape leaves and vines, and Pterocarpus marsupium heartwood.
Safety and regulation
Pterostilbene is considered to be a corrosive substance, is dangerous upon exposure to the eyes, and is an environmental toxin, especially to aquatic life. A preliminary study of healthy human subjects given pterostilbene for 6–8 weeks showed pterostilbene to be safe for human use at dosages up to 250 mg per day, although this study did not assess metabolic effects on the lipid profile. Other studies have reported dose-dependent elevations of low-density lipoprotein cholesterol (LDL-C, "bad cholesterol") and decreased high-density lipoprotein cholesterol (HDL-C, "good cholesterol") within 4 to 8 weeks of daily dosing. The elevation of LDL-C may move previously normal values into the borderline-high or high reference range and has raised questions about the long-term cardiovascular risk of pterostilbene supplementation in humans. Its chemical relative, resveratrol, received FDA GRAS status in 2007, and synthetic resveratrol was approved as a safe compound by the European Food Safety Authority (EFSA) in 2016. Pterostilbene differs from resveratrol in exhibiting higher bioavailability (80% compared to 20% for resveratrol) due to the presence of two methoxy groups, which increase its lipophilicity and oral absorption.
Research
Pterostilbene is being studied in laboratory and preliminary clinical research.
See also
Piceatannol, a stilbenoid related to both resveratrol and pterostilbene
References
Nutrients
Stilbenoids
Phytoalexins
Pterostilbene
[ "Chemistry" ]
446
[ "Phytoalexins", "Chemical ecology" ]
8,610,048
https://en.wikipedia.org/wiki/Paternal%20age%20effect
The paternal age effect is the statistical relationship between the father's age at conception and biological effects on the child. Such effects can relate to birthweight, congenital disorders, life expectancy, and psychological outcomes. A 2017 review found that while severe health effects are associated with higher paternal age, the total increase in problems caused by paternal age is low. Average paternal age at birth reached a low point between 1960 and 1980 in many countries and has been increasing since then, but has not reached historically unprecedented levels. The rise in paternal age is not seen as a major public health concern. The genetic quality of sperm, as well as its volume and motility, may decrease with age, leading the population geneticist James F. Crow to claim that the "greatest mutational health hazard to the human genome is fertile older males". The paternal age effect was first proposed implicitly by physician Wilhelm Weinberg in 1912 and explicitly by psychiatrist Lionel Penrose in 1955. DNA-based research started more recently, in 1998, in the context of paternity testing.
Health effects
Evidence for a paternal age effect has been proposed for a number of conditions, diseases and other effects. In many of these, the statistical evidence of association is weak, and the association may be explained by confounding factors or behavioural differences. Conditions proposed to show correlation with paternal age include the following:
Single-gene disorders
Advanced paternal age may be associated with a higher risk for certain single-gene disorders caused by mutations of the FGFR2, FGFR3 and RET genes. These conditions are Apert syndrome, Crouzon syndrome, Pfeiffer syndrome, achondroplasia, thanatophoric dysplasia, multiple endocrine neoplasia type 2, and multiple endocrine neoplasia type 2b. The most significant effect concerns achondroplasia (a form of dwarfism), which might occur in about 1 in 1,875 children fathered by men over 50, compared to 1 in 15,000 in the general population. However, the risk for achondroplasia is still considered clinically negligible. The FGFR genes may be particularly prone to a paternal age effect due to selfish spermatogonial selection, whereby the influence of spermatogonial mutations in older men is enhanced because cells with certain mutations have a selective advantage over other cells (see § DNA mutations).
Pregnancy effects
Several studies have reported that advanced paternal age is associated with an increased risk of miscarriage. The strength of the association differs between studies. It has been suggested that these miscarriages are caused by chromosome abnormalities in the sperm of aging men. An increased risk for stillbirth has also been suggested for pregnancies fathered by men over 45.
Birth outcomes
A systematic review published in 2010 concluded that the graph of the risk of low birthweight in infants against paternal age is "saucer-shaped" (U-shaped); that is, the highest risks occur at low and at high paternal ages. Compared with a paternal age of 25–28 years as a reference group, the odds ratio for low birthweight was approximately 1.1 at a paternal age of 20 and approximately 1.2 at a paternal age of 50. There was no association of paternal age with preterm births or with small-for-gestational-age births.
Mental illness
Schizophrenia is associated with advanced paternal age.
Some studies examining autism spectrum disorder (ASD) and advanced paternal age have demonstrated an association between the two, although there also appears to be an increase with maternal age. In one study, the risk of bipolar disorder, particularly for early-onset disease, is J-shaped, with the lowest risk for children of 20- to 24-year-old fathers, a twofold risk for younger fathers and a threefold risk for fathers over 50 years old. There is no similar relationship with maternal age. A second study also found a risk of schizophrenia in both fathers above age 50 and fathers below age 25. The risk in younger fathers was noted to affect only male children. A 2010 study found the relationship between parental age and psychotic disorders to be stronger with maternal age than paternal age. A 2016 review concluded that the mechanism behind the reported associations was still not clear, with evidence both for selection of individuals liable to psychiatric illness into late fatherhood and evidence for causative mutations. The mechanisms under discussion are not mutually exclusive. A 2017 review concluded that the vast majority of studies supported a relationship between older paternal age and autism and schizophrenia, but that the evidence for associations with other psychiatric illnesses is less convincing and also inconsistent.
Cancers
Paternal age may be associated with an increased risk of breast cancer, but the association is weak and there are confounding effects. According to a 2017 review, there is consistent evidence of an increase in the incidence of childhood acute lymphoblastic leukemia with paternal age. Results for associations with other childhood cancers are more mixed (e.g. retinoblastoma) or generally negative.
Diabetes mellitus
High paternal age has been suggested as a risk factor for type 1 diabetes, but research findings are inconsistent, and a clear association has not been established.
Down syndrome
It appears that a paternal-age effect might exist with respect to Down syndrome, but it is very small in comparison to the maternal-age effect.
Intelligence
A review in 2005 found a U-shaped relationship between paternal age and low intelligence quotients (IQs). The highest IQ was found at paternal ages of 25–29; fathers younger than 25 and older than 29 tended to have children with lower IQs. It also found that "at least a half dozen other studies ... have demonstrated significant associations between paternal age and human intelligence." A 2009 study examined children at 8 months, 4 years and 7 years and found that higher paternal age was associated with poorer scores in almost all neurocognitive tests used, but that higher maternal age was associated with better scores on the same tests; this was the reverse of the effect observed in the 2005 review, which found that maternal age began to correlate with lower intelligence at a younger age than paternal age. However, two other past studies were in agreement with the 2009 study's results. An editorial accompanying the 2009 paper emphasized the importance of controlling for socioeconomic status in studies of paternal age and intelligence. A 2010 study from Spain also found an association between advanced paternal age and intellectual disability. On the other hand, later research concluded that previously reported negative associations might be explained by confounding factors, especially parental intelligence and education.
A re-analysis of the 2009 study found that the paternal age effect could be explained by adjusting for maternal education and number of siblings. A 2012 Scottish study found no significant association between paternal age and intelligence, after adjusting what was initially an inverse-U association for both parental education and socioeconomic status, as well as number of siblings. A 2013 study of half a million Swedish men adjusted for genetic confounding by comparing brothers and found no association between paternal age and offspring IQ. Another study from 2014 found an initially positive association between paternal age and offspring IQ that disappeared when adjusting for parental IQs.
Life expectancy
A 2008 paper found a U-shaped association between paternal age and the overall mortality rate in children (i.e., mortality rate up to age 18). Although the relative mortality rates were higher, the absolute numbers were low, because of the relatively low occurrence of genetic abnormality. The study has been criticized for not adjusting for maternal health, which could have a large effect on child mortality. The researchers also found a correlation between paternal age and offspring death by injury or poisoning, indicating the need to control for social and behavioral confounding factors. In 2012, a study showed that greater age at paternity tends to increase telomere length in offspring for up to two generations. Since telomere length has effects on health and mortality, this may have effects on health and the rate of aging in these offspring. The authors speculated that this effect may provide a mechanism by which populations have some plasticity in adapting longevity to different social and ecological contexts.
Associated social and genetic characteristics
Parents do not decide at random when to reproduce. This implies that paternal age effects may be confounded by social and genetic predictors of reproductive timing. A simulation study concluded that reported paternal age effects on psychiatric disorders in the epidemiological literature are too large to be explained only by mutations. It concluded that a model in which parents with a genetic liability to psychiatric illness tend to reproduce later better explains the literature. Later age at parenthood is also associated with a more stable family environment: older parents are less likely to divorce or change partners. Older parents also tend to occupy a higher socio-economic position and report feeling more devoted to their children and satisfied with their family. On the other hand, the risk of the father dying before the child becomes an adult increases with paternal age. To adjust for genetic liability, some studies compare full siblings. Additionally, or alternatively, studies statistically adjust for some or all of these confounding factors. Using sibling comparisons or adjusting for more covariates frequently changes the direction or magnitude of paternal age effects. For example, one study drawing on Finnish census data concluded that increases in offspring mortality with paternal age could be explained completely by parental loss. On the other hand, a population-based cohort study drawing on 2.6 million records from Sweden found that the risk of attention deficit hyperactivity disorder was only positively associated with paternal age when comparing siblings.
Mechanisms
Several hypothesized chains of causality exist whereby increased paternal age may lead to health effects.
There are different types of genome mutations, with distinct mutation mechanisms:
DNA length mutations of repetitive DNA (such as telomeres and microsatellites), caused by cellular copying errors
DNA point mutations, caused by cellular copying errors and also by chemical and physical insults such as radiation
chromosome breaks and rearrangements, which can occur in the resting cell
epigenetic changes, i.e. methylation of the DNA, which can activate or silence certain genes, and is sometimes passed down from parent to child
Telomere and microsatellite length
Telomeres are repetitive genetic sequences at both ends of each chromosome that protect the structure of the chromosome. As men age, most telomeres shorten, but sperm telomeres increase in length. The offspring of older fathers have longer telomeres in both their sperm and white blood cells. A large study showed a positive paternal, but no independent maternal, age effect on telomere length. Because the study used twins, it could not compare siblings who were discordant for paternal age. It found that telomere length was 70% heritable. Regarding the mutation of microsatellite DNA, also known as short tandem repeat (STR) DNA, a survey of over 12,000 paternity-tested families shows that the microsatellite DNA mutation rate is elevated both in very young teenage fathers and in middle-aged fathers, while the mother's age has no effect.
DNA point mutations
In contrast to oogenesis, the production of sperm cells is a lifelong process. Each year after puberty, spermatogonia (precursors of the spermatozoa) divide mitotically about 23 times. By the age of 40, the spermatogonia will have undergone about 660 such divisions, compared to 200 at age 20 (see the sketch following these subsections). Copying errors might sometimes happen during the DNA replication preceding these cell divisions, which may lead to new (de novo) mutations in the sperm DNA. The selfish spermatogonial selection hypothesis proposes that the influence of spermatogonial mutations in older men is further enhanced because cells with certain mutations have a selective advantage over other cells. Such an advantage would allow the mutated cells to increase in number through clonal expansion. In particular, mutations that affect the RAS pathway, which regulates spermatogonial proliferation, appear to offer a competitive advantage to spermatogonial cells, while also leading to diseases associated with paternal age.
DNA fragmentation
During the past two decades, evidence has accumulated that pregnancy loss, as well as a reduced rate of success with assisted reproductive technologies, is linked to impaired sperm chromosome integrity and DNA fragmentation. Advanced paternal age was shown to be associated with a significant increase in DNA fragmentation in a recent systematic review (where 17 of the 19 studies considered showed such an association).
Epigenetic changes
The production of sperm cells involves DNA methylation, an epigenetic process that regulates the expression of genes. Improper genomic imprinting and other errors sometimes occur during this process, which can affect the expression of genes related to certain disorders, increasing the offspring's susceptibility. The frequency of these errors appears to increase with age. This could explain the association between paternal age and schizophrenia. Paternal age also affects offspring's behavior, possibly via an epigenetic mechanism recruiting the transcriptional repressor REST.
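To make the division counts in the "DNA point mutations" subsection concrete, here is a toy linear extrapolation using only the figures quoted there (about 23 divisions per year after puberty, about 200 divisions by age 20); the linear model itself is my simplification.
```python
DIVISIONS_PER_YEAR = 23
AGE_REF, DIVISIONS_REF = 20, 200  # anchor point quoted in the article

def cumulative_divisions(age: int) -> int:
    """Linear extrapolation from the age-20 anchor at 23 divisions/year."""
    return DIVISIONS_REF + DIVISIONS_PER_YEAR * (age - AGE_REF)

for age in (20, 30, 40, 50):
    print(f"age {age}: ~{cumulative_divisions(age)} divisions")

# Age 40 gives ~660, matching the article's figure; each extra division is
# one more opportunity for a copying error, which is the proposed mechanism
# for the paternal age effect on de novo point mutations.
```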
Semen
A 2001 review on variation in semen quality and fertility by male age concluded that older men had lower semen volume, lower sperm motility, and a decreased percentage of normal sperm, as well as decreased pregnancy rates, increased time to pregnancy and increased infertility at a given point in time. When controlling for the age of the female partner, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates between 23% and 38%. A 2014 review indicated that increasing male age is associated with declines in many semen traits, including semen volume and percentage motility. However, this review also found that sperm concentration did not decline as male age increased.
X-linked effects
Some classify the paternal age effect as one of two different types. One effect is directly related to advanced paternal age and autosomal mutations in the offspring. The other effect is an indirect effect in relation to mutations on the X chromosome, which are passed to daughters who are then at risk for having sons with X-linked diseases.
History
Birth defects were acknowledged in the children of older men and women even in antiquity. In book six of Plato's Republic, Socrates states that men and women should have children in the "prime of their life", which is stated to be twenty in a woman and thirty in a man. He states that in his proposed society men should be forbidden to father children in their fifties and that the offspring of such unions should be considered "the offspring of darkness and strange lust." He suggests appropriate punishments be administered to the offenders and their offspring. In 1912, Wilhelm Weinberg, a German physician, was the first person to hypothesize that non-inherited cases of achondroplasia could be more common in last-born children than in children born earlier to the same set of parents. Weinberg "made no distinction between paternal age, maternal age and birth order" in his hypothesis. In 1953, Krooth used the term "paternal age effect" in the context of achondroplasia, but mistakenly thought the condition represented a maternal age effect. The paternal age effect for achondroplasia was described by Lionel Penrose in 1955. At the DNA level, the paternal age effect was first reported in 1998 in routine paternity tests. Scientific interest in paternal age effects is relevant because average paternal age has increased in countries such as the United Kingdom, Australia and Germany, and because birth rates for fathers aged 30–54 years rose between 1980 and 2006 in the United States. Possible reasons for the increases in average paternal age include increasing life expectancy and increasing rates of divorce and remarriage. Despite recent increases in average paternal age, however, the oldest father documented in the medical literature was born in 1840: George Isaac Hughes was 94 years old at the time of the birth of his son by his second wife; a 1935 article in the Journal of the American Medical Association stated that his fertility "has been definitely and affirmatively checked up medically," and he fathered a daughter in 1936 at age 96.
Medical assessment
The American College of Medical Genetics recommends obstetric ultrasonography at 18–20 weeks' gestation in cases of advanced paternal age to evaluate fetal development, but it notes that this procedure "is unlikely to detect many of the conditions of interest."
They also note that there is no standard definition of advanced paternal age; it is commonly defined as age 40 or above, but the effect increases linearly with paternal age rather than appearing at any particular age. According to a 2006 review, any adverse effects of advanced paternal age "should be weighed up against potential social advantages for children born to older fathers who are more likely to have progressed in their career and to have achieved financial security." Geneticist James F. Crow described mutations that have a direct visible effect on the child's health, and also mutations that can be latent or have minor visible effects on the child's health; many such minor or latent mutations allow the child to reproduce but cause more serious problems for grandchildren, great-grandchildren and later generations.
See also
Maternal age effect
Pregnancy over age 50
References
Further reading
External links
Fatherhood
Medical genetics
Biology of bipolar disorder
Senescence
Paternal age effect
[ "Chemistry", "Biology" ]
3,486
[ "Senescence", "Metabolism", "Cellular processes" ]
8,610,611
https://en.wikipedia.org/wiki/Goloid
Goloid is an alloy of silver, gold and copper patented by Dr. William Wheeler Hubbell on May 22, 1877 (U.S. patent #191,146). The patent specifies 1 part gold (about 3.6%), 24 parts silver (about 87.3%), and 2.5 parts copper (about 9.1%), all by weight; however, the patent also states that "The proportions may be slightly varied" and goes on to specify that the silver portion can range from 20 times to 30 times that of the gold, and the copper from one-eighth to one-twelfth (12.5% to 8.33%) of the total mixture. The patent specifies that the metals be separately melted, then mixed, along with "sulphate of sodium or sulphate of potassium" in the amount of one part sulfate to one thousand parts metal. A quick arithmetic check of the quoted percentages appears after this entry.
The alloy, in varying proportions (sometimes slightly outside these specifications), was used by the United States Mint to strike pattern dollars, sometimes called "metric dollars" (some were marked "metric" in the coin design, while all carried the metal proportions and total coin weight as design features), from 1878 to 1880. Patterns of the same design were struck in other metals, including aluminum, copper, normal coin silver, lead, and white metal. In the end, goloid was rejected as a coinage metal because it could not be distinguished from the normal U.S. 90% silver coin alloy without chemical analysis, thus inviting counterfeiters to use silver-copper alloys alone to make lower-value copies.
References
Judd, J. Hewett, United States Pattern Coins: Complete Source for History, Rarity, and Values, 9th ed., Atlanta: Whitman Publishing, 2005, pp. 222–3, 225–6, 236–7, 239–40, and 320.
USPTO online patent search; direct link to patent.
Silver
Gold
Precious metal alloys
Coins of the United States
1877 introductions
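The arithmetic check promised above — my own computation from the patent's parts-by-weight recipe, not additional patent text.
```python
from fractions import Fraction

# Parts by weight exactly as the patent states them: 1 gold, 24 silver,
# 2.5 copper, for a total of 27.5 parts.
parts = {"gold": Fraction(1), "silver": Fraction(24), "copper": Fraction(5, 2)}
total = sum(parts.values())

for metal, p in parts.items():
    print(f"{metal:>6}: {float(p / total) * 100:6.2f} % by weight")

# gold: 3.64 %, silver: 87.27 %, copper: 9.09 % -- matching the ~3.6 %,
# ~87.3 % and ~9.1 % figures quoted in the article.
```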
Goloid
[ "Chemistry" ]
412
[ "Precious metal alloys", "Alloys" ]
8,611,599
https://en.wikipedia.org/wiki/False%20ending
A false ending is a device in film and music that can be used to trick the audience into thinking that the work has ended, before it continues. The presence of a false ending can be anticipated in a number of ways. The medium itself might betray that the story will continue beyond the false ending. A supposed "ending" that occurs when many pages are still left in a book, when a film or song's running time has not fully elapsed, or when only half the world has been explored in a video game, is likely to be false. As such, stories with an indeterminate running length or a multi-story structure are much more likely to successfully deceive their audience with this technique. Another indicator is the presence of a large number of incomplete story lines, character arcs, or other unresolved story elements at the time of the false ending. These elements can leave the audience feeling that too much of the story is incomplete and there has to be more.
Film
In L.A. Confidential, it seems that the criminal case the movie revolves around is completely closed with no loose ends, until one of the witnesses admits that she lied about important details to lend more weight to the trial of the people who raped her, exposing a cover-up conspiracy. In The Lord of the Rings: The Return of the King, director Peter Jackson uses editing techniques that are indicative of endings in scenes that could serve as such, but continues until the movie finally ends. Spider-Man 3 has two false endings. Another example is in The Simpsons Movie, where, at a very climactic stage in the film, the screen fades away and says "To be continued", which is then followed by the word "Immediately." Also, in The Lego Movie 2: The Second Part, at what appears to be a cliffhanger ending, a "The End" sign appears, only for Lucy (voiced by Elizabeth Banks) to break the fourth wall by insisting that the film will have a happy ending; the same sign appears again at the film's actual ending. After Evelyn (played by Michelle Yeoh) seemingly dies in the middle of Everything Everywhere All at Once, the words "The End" appear before a short portion of fake credits; this is followed by the reveal that the film was being watched by an audience in a universe where Evelyn becomes a movie star. Some movies come to a formal ending, followed by the rolling of the credits, which is almost universally used to indicate that the film has ended, only to have the actors reappear in one or more mid-credits scenes. In comedy films, these sequences may be bloopers or outtakes. In other types of films, the mid-credits scenes may continue the narrative set out in the movie. The Marvel Cinematic Universe movies have become notorious for this, in some cases featuring both a mid-credits scene and an end-credits scene in the same movie.
Music
False endings are a known device in classical music. Joseph Haydn was fond of them, for example inducing applause at the wrong place in the finales of his String Quartet, Op. 33 No. 2 (nicknamed "The Joke") and Symphony No. 90. The first movement of Prokofiev's Classical Symphony contains false endings. False endings are also a common device in popular music. The Beatles used false endings in many of their songs, including "I'm Only Sleeping", "Get Back", "Hello, Goodbye", "Cry Baby Cry", "Helter Skelter", "Rain", and "Strawberry Fields Forever".
Other songs that use false endings include Guns N' Roses' "November Rain", Bryan Adams' "(Everything I Do) I Do It For You" (full version), David Bowie's "Suffragette City", Gorillaz's "Dare", Natasha Bedingfield's "Unwritten", Foo Fighters' "Come Back", Alice in Chains' "Rain When I Die", and Beastie Boys' "Sabotage".
See also
Alternate ending
Cliffhanger
Multiple endings
References
Endings
Film and video terminology
Musical terminology
Songwriting
False ending
[ "Physics" ]
845
[ "Spacetime", "Endings", "Physical quantities", "Time" ]
8,612,268
https://en.wikipedia.org/wiki/Auto%20reignition
Auto reignition is a process used in gas burners to control ignition devices based on whether a burner flame is lit. This information can be used to stop an ignition device from sparking, which is no longer necessary after the flame is lit. It can also be used to start the sparking device again if the flame goes out while the burner is still supplying gas, for example from a gust of wind or vibration.
Kitchen appliances
Most gas ranges and cooktops use sparking devices to ignite the burner flame. This eliminates the need for a pilot flame, which wastes energy. Most of these sparking-device-equipped ranges require the user to control the ignition sparking manually, resulting in a three-step process to operate the burner:
turn the burner knob to a position that opens the gas valve and activates the sparking (typically labelled "Light")
wait for ignition, typically 0.5 to 2 seconds
turn the burner knob past the "Light" position to a desired flame intensity, which stops the sparking noise and avoids burning out the ignition electrode
One implementation of a gas burner with auto reignition senses the electrical conductivity of the flame. The flame's conductivity is nonzero because combustion of natural gas releases enough free electrons to support a small current in air. An electronic circuit then starts or stops the igniter's sparking based on whether the flame is lit (a control-loop sketch follows this entry). This reduces the number of steps to turn a burner on from three to one:
turn the knob to a desired flame intensity, while confirming the flame ignites
This is an elegant solution compared to detecting the flame via a thermocouple, a photoresistor or a mercury-filled sensor: no extra components or electrical connections between the sparker electrode and the spark module electronics are required. This convenience and safety feature is found only (as of June 2009) on higher-priced gas ranges and cooktops.
The case for requiring auto reignition as a safety feature
Auto reignition lowers the risk of gas leaks if a flame goes out during operation, for example from vibration or a gust of wind. It also protects against misoperation: a user might not understand that the "Light" position must be maintained for about 0.5 to 2 seconds before turning the burner knob on fully. The user might, as a result, turn the burner knob quickly past the "Light" position without the burner actually igniting and leave the kitchen, leaving the gas burner releasing gas into the room. This feature is especially valuable on gas burners with several different short-term users, who are less likely to bother with or learn multi-step procedures—for example, gas ranges in rental properties, guest houses, or office kitchens.
Fire making
Cooking appliances
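A minimal control-loop sketch of the flame-conductivity scheme described above. The simulated burner, the 1.0 µA threshold, and the random flame-out event are illustrative assumptions of mine, not appliance specifications; a real controller would also add an ignition-timeout lockout so the igniter does not spark indefinitely into unburned gas.
```python
import random

FLAME_CURRENT_THRESHOLD_UA = 1.0  # conduction current that counts as "lit"

class SimulatedBurner:
    """Pure-software stand-in for the burner hardware."""
    def __init__(self):
        self.valve_open = True
        self.flame_lit = False
        self.spark_on = False

    def flame_current_uA(self) -> float:
        # A lit flame conducts a few microamps; an unlit one conducts ~none.
        return 4.0 if self.flame_lit else 0.0

    def step(self):
        if self.spark_on and self.valve_open:
            self.flame_lit = True            # spark ignites the gas
        if self.flame_lit and random.random() < 0.1:
            self.flame_lit = False           # e.g. a gust blows the flame out

def control_step(burner: SimulatedBurner) -> None:
    """One pass of the auto-reignition logic: spark iff gas flows unlit."""
    lit = burner.flame_current_uA() >= FLAME_CURRENT_THRESHOLD_UA
    burner.spark_on = burner.valve_open and not lit

burner = SimulatedBurner()
for t in range(20):
    control_step(burner)
    burner.step()
    print(f"t={t:2d} spark={'on ' if burner.spark_on else 'off'} "
          f"flame={'lit' if burner.flame_lit else 'out'}")
```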
Auto reignition
[ "Chemistry" ]
559
[ "Chemical reaction stubs", "Chemical process stubs" ]
8,612,640
https://en.wikipedia.org/wiki/Candida%20stellata
Candida stellata is a species of yeast of the genus Candida. In 1978, following the work of Yarrow & Meyer, the yeast was reclassified to its current name from Saccharomyces stellatus, which had initially been described by Kroemer and Krumbholz in 1931. This yeast is present in the fermentation of traditional Italian balsamic vinegar (together with Zygosaccharomyces rouxii, Zygosaccharomyces bailii, Z. pseudorouxii, Z. mellis, Z. bisporus, Z. lentus, Hanseniaspora valbyensis, Hanseniaspora osmophila, Candida lactis-condensi, Saccharomycodes ludwigii, and Saccharomyces cerevisiae).
References
Yeasts
stellata
Candida stellata
[ "Biology" ]
181
[ "Yeasts", "Fungi" ]
8,612,679
https://en.wikipedia.org/wiki/Meyerozyma%20guilliermondii
Meyerozyma guilliermondii (known as Pichia guilliermondii until its renaming in 2010) is a species of yeast of the genus Meyerozyma whose asexual or anamorphic form is known as Candida guilliermondii. Candida guilliermondii has been isolated from numerous human infections, mostly of cutaneous origin, albeit mainly from immunosuppressed patients. C. guilliermondii has also been isolated from normal skin and from seawater, animal feces, fig wasps, buttermilk, leather, fish, and beer.
Morphology
Candida guilliermondii colonies are flat, moist, smooth, and cream to yellow in color on Sabouraud dextrose agar. It does not grow on the surface when inoculated into Sabouraud broth. When grown on cornmeal–Tween 80 agar at 25 °C for 72 hours, C. guilliermondii produces clusters of small blastospores along its pseudohyphae, especially at septal points. Pseudohyphae are short and few in number.
References
External links
M. guilliermondii at NCBI Taxonomy browser
Yeasts
Saccharomycetaceae
Fungus species
Meyerozyma guilliermondii
[ "Biology" ]
261
[ "Yeasts", "Fungi", "Fungus species" ]
8,612,712
https://en.wikipedia.org/wiki/Clavispora%20lusitaniae
Clavispora lusitaniae, formerly also known by the anamorph name Candida lusitaniae, is a species of yeast in the genus Candida or Clavispora. The species name is a teleomorph name. Clavispora lusitaniae was first identified as a human pathogen in 1979. It was initially described as a rare cause of fungemia, with fewer than 30 cases reported between 1979 and 1990. However, there has been a marked increase in the number of recognized cases of candidemia due to this organism in the last two decades. Bone marrow transplantation and high-dose cytoreductive chemotherapy have both been identified as risk factors for infections with this organism. These patients are often neutropenic for extended periods of time, leaving them susceptible to bacterial and fungal infections, including candidal infections. A study found that C. lusitaniae was responsible for 19% of all breakthrough fungemia infections in cancer patients between 1998 and 2013. Some investigators have theorized that the widespread use of amphotericin B empiric antifungal therapy selects for infections with Candida lusitaniae. The U.S. CDC has cautioned that the use of the teleomorph name Clavispora lusitaniae for the species can be misleading, because it does not "include the word Candida even though this organism is indeed a species of Candida".
References
Yeasts
Saccharomycetes
Fungal pathogens of humans
Fungus species
Clavispora lusitaniae
[ "Biology" ]
324
[ "Yeasts", "Fungi", "Fungus species" ]
8,612,800
https://en.wikipedia.org/wiki/Candida%20oleophila
Candida oleophila is a species of yeast in the genus Candida in the family Saccharomycetaceae. It is used in post-harvest handling of fruit and vegetables as an alternative to fungicides.
Taxonomy
Candida oleophila was described by Montrocher in 1967 and placed in the family Dipodascaceae; in the same year, it was described by Kaisha & Iizuka within the Saccharomycetales.
Description
Candida oleophila is a yeast and the active component of Aspire, a product used in commercial settings and recommended to control postharvest decay of fruit and vegetables. Its genome has been sequenced using SMRTbell templates, dumbbell-shaped hairpin DNA structures. One of its main modes of action is competition for nutrients and space. Another major mechanism of action of yeast antagonists is the secretion of enzymes that degrade the fungal cell wall.
Habitat and distribution
This yeast is commonly found on plants and plant debris, which are the main natural habitat for most yeast species. Candida oleophila strain O is a single-celled yeast found naturally on plant tissues (fruits, flowers, and wood) and in water. Plant exudations (secretions) contain sugars and other compounds that nourish epiphytic yeasts.
Bioactive compounds
According to Siobhán A. Turner and Geraldine Butler, in conjunction with L. Mikhailova of the Department of Morphology of Microorganisms and Electron Microscopy, Institute of Microbiology, Bulgarian Academy of Sciences, Candida oleophila produces complex bioactive compounds, which are the primary basis of its benefit to fruit after harvest. One main study showed that Candida oleophila could produce and secrete several cell wall-degrading enzymes, including exo-β-1,3-glucanase, protease, and chitinase; production of these three compounds is stimulated by Penicillium cell wall fragments and glucose. Exo-β-1,3-glucanase and chitinase were produced in the early stages of growth, followed by protease, whose production peaked after approximately 6 to 8 days. The study demonstrated that Candida oleophila could secrete exo-β-1,3-glucanase (CoEXG1) at the wound site of the fruit under study. These studies were also conducted in vitro, applying the compounds to fruit in a controlled setting to test the biocontrol activity against pathogen infection.
Geographical distribution
Candida oleophila is found wherever biocontrol agents are needed to control post-harvest diseases of fruits and vegetables. Studies on prolonging the postharvest life of fruits and vegetables with Candida oleophila have concluded that biocontrol with C. oleophila can be used instead of fungicides. The fungicides used for crop protection in agriculture are widely applied, but if a person is exposed to them for any reason, they can irritate the eyes and skin and cause harm if ingested.
Aspect of fungus
Candida oleophila has previously been used in the laboratory. Its strain I-182 was developed as the active agent of commercial products such as Aspire. It is used in postharvest handling of fruits such as apples and pears to control the growth of pathogenic fungi such as gray mold (Botrytis cinerea) and blue mold (Penicillium expansum). This helps to avoid postharvest losses, which lead to monetary losses for farmers.
Growth rate
The growth of Candida oleophila in wound samples reflects its ability to compete with pathogens for the nutrients and space of the fruit or vegetable. Candida oleophila showed a rapid expansion, observed on the third day of healing in the fruit or vegetable, and could show slow growth for approximately 3 to 7 days. The number of Candida oleophila yeast cells could reach a maximum of 2.2 × 10¹¹ CFU mL⁻¹, 20.3 times higher than on day 0; afterwards, the number of yeast cells decreased quickly. In the end, Candida oleophila reached 5.3 × 10⁷ CFU mL⁻¹ on day 28, 1/18 of the amount present on day 0. C. oleophila can colonize wounds and multiply rapidly on the surface of the injury, healing the tissue. At the same time, this rapid growth helps avoid losses and infections that could damage other postharvest produce, reducing economic losses.
References
Sources
"Candida Oleophila." Candida Oleophila – an overview | ScienceDirect Topics. Accessed March 25, 2023.
"Biopesticides Fact Sheet for Candida Oleophila Strain O – US EPA." Accessed March 25, 2023.
Sui, Yuan; Wisniewski, Michael; Droby, Samir; Piombo, Edoardo; Wu, Xuehong; Yue, Junyang. "Genome Sequence, Assembly, and Characterization of the Antagonistic Yeast Candida Oleophila Used as a Biocontrol Agent against Post-Harvest Diseases." Frontiers, February 10, 2020.
Zheng, X.; Jiang, H.; Silvy, E. M.; Zhao, S.; Chai, X.; Wang, B.; Li, Z.; Bi, Y.; Prusky, D. "Candida Oleophila Proliferated and Accelerated Accumulation of Suberin Poly Phenolic and Lignin at Wound Sites of Potato Tubers." Foods (Basel, Switzerland). U.S. National Library of Medicine. Accessed March 25, 2023.
"Massive Isolation of Anamorphous Ascomycete Yeasts Candida Oleophila." Accessed March 25, 2023.
"Characterization of Extracellular Lytic Enzymes Produced by the Yeast Biocontrol Agent Candida Oleophila." Federal Register: Request Access. Accessed April 21, 2023.
Zheng, F.; Zhang, Weiwei; Sui, Yuan; Ding, Ruihan; Yi, Wenfu; Hu, Yuanyuan; Liu, Hongsheng; Zhu, Chunyu. "Sugar Protectants Improve the Thermotolerance and Biocontrol Efficacy of the Biocontrol Yeast, Candida Oleophila." Frontiers in Microbiology. U.S. National Library of Medicine, February 8, 2019.
Turner, Siobhán A.; Butler, Geraldine. "The Candida Pathogenic Species Complex." Cold Spring Harbor Perspectives in Medicine. U.S. National Library of Medicine, September 2, 2014.
"Potential Health Effects of Pesticides." Penn State Extension. Accessed April 17, 2023.
Kurtzman, Cletus P. "Yarrowia Van Der Walt & Von Arx (1980)." The Yeasts, 2011, 927–29.
Yeasts
oleophila
Fungi described in 1967
Fungus species
Candida oleophila
[ "Biology" ]
1,497
[ "Yeasts", "Fungi", "Fungus species" ]
8,612,835
https://en.wikipedia.org/wiki/Kluyveromyces%20marxianus
Kluyveromyces marxianus is an ascomycetous yeast and a member of the genus Kluyveromyces. It is the sexual stage (teleomorph) of Atelosaccharomyces pseudotropicalis, also known as Candida kefyr. This species has a homothallic mating system and is often isolated from dairy products.
History
Taxonomy
This species was first described in the genus Saccharomyces as S. marxianus by the Danish mycologist Emil Christian Hansen from beer wort. He named the species for the zymologist Louis Marx of Marseille, who first isolated it from grape. The species was transferred to the genus Kluyveromyces by van der Walt in 1956. Since then, 45 species have been recognized in this genus. The anamorphic basionym Saccharomyces kefyr was created by M.W. Beijerinck in 1889 in an article titled Sur le kéfir ("On kefir" in English); the type material is a grain of kefir. The other commonly used anamorphic basionym, Endomyces pseudotropicalis, was coined by Castell. in 1911, its type strain having been isolated from a Sri Lankan patient.
Phylogeny
The closest relative of Kluyveromyces marxianus is the yeast Kluyveromyces lactis, often used in the dairy industry. Both Kluyveromyces and Saccharomyces are considered part of the "Saccharomyces complex", a subclade of the Saccharomycetes. Using 18S rRNA gene sequencing, it was suggested that K. marxianus, K. aestuarii, K. dobzhanskii, K. lactis, K. wickerhamii, K. blattae, and K. thermotolerans collectively constitute a distinct clade of separate ancestry from the central clade in the genus Kluyveromyces. Within this complex, two categories are defined based on the presence in certain taxa of a whole-genome duplication (WGD) event: the two clades are referred to as pre-WGD and post-WGD. Kluyveromyces species are affiliated with the first of these clades, while species of Saccharomyces belong to the latter. Separation of these clades based on the WGD event explains why, even though the two species are closely related, fundamental differences exist between them.
Growth and morphology
Colonies of K. marxianus are cream to brown in colour, with occasional pink pigmentation due to production of the iron chelate pigment pulcherrimin. When grown on Wickerham's Yeast-Mold (YM) agar, the yeast cells appear globose, ellipsoidal or cylindrical, 2–6 × 3–11 μm in size. In a glucose-yeast extract broth, K. marxianus grows to produce a ring and sediment. A thin pellicle may be formed. In a Dalmau plate culture containing cornmeal agar and Polysorbate 80, K. marxianus forms a rudimentary to branched pseudomycelium with few blastospores. K. marxianus is thermotolerant, exhibiting a high growth rate at .
Physiology and reproduction
Kluyveromyces marxianus is an aerobic yeast capable of respiro-fermentative metabolism, generating energy simultaneously from respiration via the TCA cycle and from ethanol fermentation. The balance between respiratory and fermentative metabolism is strain specific. This species also ferments inulin, glucose, raffinose, sucrose and lactose into ethanol. K. marxianus is widely used in industry because of its ability to use lactose. Two genes, LAC12 and LAC4, allow K. marxianus to absorb and use lactose as a carbon source. This species is considered a "crabtree negative" fungus, meaning it is unable to convert sugars into ethanol as effectively as crabtree-positive taxa such as S. cerevisiae.
Some studies, however, deem it to be crabtree positive, which is likely due to strain differences, since K. marxianus possesses the necessary genes to be crabtree positive. K. marxianus is highly thermotolerant and able to withstand temperatures up to . K. marxianus is also able to use multiple carbon substrata at the same time, making it highly suited to industrial use. When glucose concentrations become depleted to 6 g/L, lactose co-transport initiates. The formation of ascospores occurs through the conjugation of haploid cells preceding the formation of the ascus. Alternatively, ascosporogenesis can arise directly from diploid cells. Each ascus contains 1–4 ascospores. The ploidy of K. marxianus was originally thought to be haploid, but recent research has shown that many strains used in research and industry are diploid. These conflicting findings suggest that K. marxianus can exist in vegetative form either as a haploid or a diploid.
Habitat and ecology
Kluyveromyces marxianus has been isolated from dairy products, sisal leaves, and sewage from sugar manufacturing factories. It is also a naturally occurring colonist of plants, including corn.
Human disease
Kluyveromyces marxianus is not usually an agent of human disease, although infection can occur in immunocompromised individuals. This species has been associated with candidemia and has been recovered from catheters. It has also been found in biofilms on other indwelling devices such as pacemakers and prosthetic heart valves. Between 1% and 3% of reported cases involving K. marxianus have occurred in oncology patients, surgical wards, female genital infections and upper respiratory infections. Treatment with amphotericin B has been effective against K. marxianus in one case report.
Industrial applications
Industrial use of K. marxianus is chiefly in the conversion of lactose to ethanol as a precursor for the production of biofuel. The ability of K. marxianus to ferment lactose is useful because of the potential to transform industrial whey waste, a problematic waste product for disposal, into useful biomass for animal feed, food additives or fuel. Certain strains of the fungus can also be used to convert whey to ethyl acetate, an alternative fuel source. K. marxianus is also used to produce the industrial enzymes inulinase, β-galactosidase, and pectinase. Due to the heat tolerance of K. marxianus, high-heat fermentations are feasible, reducing the costs normally expended for cooling as well as the potential for contamination by other fungi or bacteria. In addition, fermentations at higher temperatures occur more rapidly, making production much more efficient. Because K. marxianus can simultaneously utilize lactose and glucose, it is widely used in industrial settings, as this decreases production time and increases productivity. Recent efforts have attempted to use K. marxianus in the production of food flavourings, using the waste products tomato and pepper pomaces as substrata.
References
External links
Yeasts
Saccharomycetaceae
Fungi described in 1888
Fungus species
Kluyveromyces marxianus
[ "Biology" ]
1,568
[ "Yeasts", "Fungi", "Fungus species" ]
8,612,907
https://en.wikipedia.org/wiki/Relative%20interior
In mathematics, the relative interior of a set is a refinement of the concept of the interior, which is often more useful when dealing with low-dimensional sets placed in higher-dimensional spaces. Formally, the relative interior of a set $S$ (denoted $\operatorname{relint}(S)$) is defined as its interior within the affine hull of $S$. In other words,
$$\operatorname{relint}(S) := \{ x \in S : \text{there exists } \epsilon > 0 \text{ such that } B_\epsilon(x) \cap \operatorname{aff}(S) \subseteq S \},$$
where $\operatorname{aff}(S)$ is the affine hull of $S$ and $B_\epsilon(x)$ is a ball of radius $\epsilon$ centered on $x$. Any metric can be used for the construction of the ball; all metrics define the same set as the relative interior. A set is relatively open iff it is equal to its relative interior. Note that when $\operatorname{aff}(S)$ is a closed subspace of the full vector space (always the case when the full vector space is finite-dimensional), then being relatively closed is equivalent to being closed. For any convex set $C$ the relative interior is equivalently defined as
$$\operatorname{relint}(C) := \{ x \in C : \text{for all } y \in C \text{ there exists some } \lambda > 1 \text{ such that } \lambda x + (1 - \lambda) y \in C \}.$$
Comparison to interior
The interior of a point in an at least one-dimensional ambient space is empty, but its relative interior is the point itself.
The interior of a line segment in an at least two-dimensional ambient space is empty, but its relative interior is the line segment without its endpoints.
The interior of a disc in an at least three-dimensional ambient space is empty, but its relative interior is the same disc without its circular edge.
Properties
See also
References
Further reading
Topology
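A worked instance of the definition — the line-segment example from the comparison list, spelled out. This is a standard computation in the notation introduced above, not text from the article itself.
```latex
% Take the unit segment embedded in the plane:
\[
  S = \{ (t, 0) : 0 \le t \le 1 \} \subset \mathbb{R}^2,
  \qquad
  \operatorname{aff}(S) = \mathbb{R} \times \{0\}.
\]
% Interior in the ambient plane: every ball $B_\epsilon((t,0))$ contains
% points with nonzero second coordinate, so no ball fits inside $S$ and
% $\operatorname{int}(S) = \emptyset$.
% Relative interior: for $0 < t < 1$, the choice $\epsilon = \min(t, 1-t)$
% gives $B_\epsilon((t,0)) \cap \operatorname{aff}(S) \subseteq S$, while no
% $\epsilon > 0$ works at the endpoints. Hence
\[
  \operatorname{relint}(S) = \{ (t, 0) : 0 < t < 1 \}.
\]
```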
Relative interior
[ "Physics", "Mathematics" ]
277
[ "Spacetime", "Topology", "Space", "Geometry" ]
8,612,937
https://en.wikipedia.org/wiki/IS-41
IS-41, also known as ANSI-41, is a mobile cellular telecommunications system standard that supports mobility management by enabling the networking of switches. ANSI-41 is the standard approved for use as the network-side companion to the wireless-side AMPS (analog), IS-136 (Digital AMPS), cdmaOne, and CDMA2000 networks. It competes with GSM MAP, with the expectation that the two would eventually merge to support worldwide roaming. IS-41 facilitates inter-switch operations such as handoff and roaming authentication. IS-41 evolved through revisions 0, A, B, C, D, and E, with increasingly robust and distributed call processing between switches and their roamer databases. Describing IS-41 messaging requires special terminology to designate a telephone call's originating and terminating switches, each called an MSC (anchor MSC, candidate MSC, homing MSC, serving MSC, and target MSC), and the databases, called the VLR (visitor location register) and HLR (home location register). For handoffs, the messaging is between switches. For roaming and authentication, the messaging also includes an HLR and a VLR. In both cases, the PSTN may be needed to carry the messaging. References Mobile telecommunications standards
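The division of labor among these entities can be sketched as a toy roaming-registration flow. This is illustrative only: the class structure and method names below are simplified stand-ins, not the actual ANSI-41 operation set.

```python
from dataclasses import dataclass, field

# Toy sketch of ANSI-41 roles (MSC switches, HLR/VLR databases).
# Names are simplified stand-ins, not the real ANSI-41 messages.

@dataclass
class VLR:
    name: str

@dataclass
class HLR:
    """Home location register: tracks which VLR currently serves a subscriber."""
    serving_vlr: dict = field(default_factory=dict)

    def register(self, subscriber: str, vlr: VLR) -> None:
        self.serving_vlr[subscriber] = vlr

@dataclass
class MSC:
    name: str
    vlr: VLR

    def register_roamer(self, subscriber: str, hlr: HLR) -> None:
        # A serving MSC reports a roaming phone to the subscriber's HLR
        # so incoming calls can be routed to the visited system.
        hlr.register(subscriber, self.vlr)

home_hlr = HLR()
visited_msc = MSC("serving-MSC", VLR("visited-VLR"))
visited_msc.register_roamer("+1-555-0100", home_hlr)
print(home_hlr.serving_vlr["+1-555-0100"].name)  # -> visited-VLR
```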
IS-41
[ "Technology" ]
245
[ "Mobile telecommunications", "Mobile telecommunications standards" ]
8,612,967
https://en.wikipedia.org/wiki/Kluyveromyces
Kluyveromyces is a genus of ascomycetous yeasts in the family Saccharomycetaceae. Some of the species, such as K. marxianus, are the teleomorphs of Candida species. The genus name Kluyveromyces honours Albert Jan Kluyver ForMemRS (1888–1956), a Dutch microbiologist and biochemist. The genus was circumscribed by Johannes P. Van der Walt in Antonie van Leeuwenhoek vol. 22 on pages 268–271 in 1956. Mating and sporulation in Kluyveromyces are co-induced by nutrient-poor environments and most often occur in succession without intervening diploid mitotic cell divisions. A RAD52 gene homolog from Kluyveromyces lactis has been cloned and characterized. This gene, which has a central role in recombinational repair of DNA, can complement S. cerevisiae rad52 mutants. Species Kluyveromyces is widely cultured for microbiological and genetic research. Some important species include: Kluyveromyces lactis Kluyveromyces marxianus Kluyveromyces thermotolerans See also Yeast in winemaking References Saccharomycetaceae Yeasts Yeasts used in brewing
Kluyveromyces
[ "Biology" ]
289
[ "Yeasts", "Fungi" ]
8,613,030
https://en.wikipedia.org/wiki/Candida%20viswanathii
Candida viswanathii is a species of yeast in the genus Candida. It is named after the noted Indian pulmonologist, Raman Viswanathan. A strain found in oil-contaminated soil near Beijing in 2017 is able to oxidize dodecane into dodecanedioic acid. References Yeasts viswanathii Fungi described in 1962 Fungus species
Candida viswanathii
[ "Biology" ]
79
[ "Yeasts", "Fungi", "Fungus species" ]
8,613,084
https://en.wikipedia.org/wiki/Torulaspora
Torulaspora is a genus of ascomycetous yeasts in the family Saccharomycetaceae. See also Yeast in winemaking References External links Saccharomycetaceae Yeasts Yeasts used in brewing Ascomycota genera
Torulaspora
[ "Biology" ]
55
[ "Yeasts", "Fungi" ]
8,613,140
https://en.wikipedia.org/wiki/Debaryomyces
Debaryomyces is a genus of yeasts in the family Saccharomycetaceae. Species D. artagaveytiae D. carsonii D. castellii D. coudertii D. etchellsii D. globularis D. hansenii D. kloeckeri D. kursanovii D. marama D. macquariensis D. melissophilus D. mrakii D. mycophilus D. nepalensis D. occidentalis D. oviformis D. polymorphus D. prosopidis D. pseudopolymorphus D. psychrosporus D. robertsiae D. singareniensis D. udenii D. vanrijiae D. vietnamensis D. vindobonensis D. yamadae References Saccharomycetaceae Yeasts Osmophiles
Debaryomyces
[ "Biology" ]
191
[ "Yeasts", "Fungi" ]
8,613,180
https://en.wikipedia.org/wiki/Metschnikowiaceae
The Metschnikowiaceae are a family of yeasts in the order Saccharomycetales that reproduce by budding. It contains the genera Clavispora and Metschnikowia. Species in the family have a widespread distribution, especially in tropical areas. References Yeasts Saccharomycetes
Metschnikowiaceae
[ "Biology" ]
66
[ "Yeasts", "Fungi" ]
8,613,418
https://en.wikipedia.org/wiki/ROTAP
Rare or Threatened Australian Plants, usually abbreviated to ROTAP, is a list of rare or threatened Australian plant taxa. Developed and maintained by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the most recent edition lists 5031 taxa. The list uses a binary coding system based on the IUCN Red List categories for "Presumed Extinct", "Endangered", "Vulnerable", "Rare" or "Poorly Known". However, it also provides for additional information such as geographic range and occurrence in protected areas. It was first compiled in 1979, and published in 1981, with revisions published in 1988 and 1996. In its early days it was the only nationally recognised list of threatened plants, although it had no legal status. When the Endangered Species Protection Act 1992 was proclaimed, the ROTAP list was used as a basis for the publication of schedules to the Act. A third list was produced by ANZECC from 1996. In 2000, the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) was proclaimed. This superseded the Endangered Species Protection Act 1992, and published a single list of threatened flora which largely superseded the three lists then current. As the EPBC list has legal force, the ROTAP list is now little used. It continues to be maintained, however, and is often used and referred to in scientific publications. See also List of threatened flora of Australia Threatened fauna of Australia References Botany in Australia Nature conservation in Australia Endangered species
ROTAP
[ "Biology" ]
291
[ "Biota by conservation status", "Endangered species" ]
8,613,452
https://en.wikipedia.org/wiki/Habeas%20data
Habeas data is a writ and constitutional remedy available in certain nations. The literal translation from Latin of habeas data is "[we command] you have the data," or "you [the data subject] have the data." The remedy varies from country to country, but in general, it is designed to protect, by means of an individual complaint presented to a constitutional court, the data, image, privacy, honour, information self-determination and freedom of information of a person. Habeas data can be sought by any citizen against any manual or automated data register to find out what information is held about his or her person. That person can request the rectification, updating or destruction of the personal data held. The legal nature of the individual complaint of habeas data is that of voluntary jurisdiction, which means that only the person whose privacy is being compromised may present it. The courts do not have any power to initiate the process by themselves. History Habeas data is an individual complaint filed before a constitutional court and related to the privacy of personal data. The oldest such complaint is habeas corpus (which is roughly translated as "[we command] you have the body"). Other individual complaints include the writ of mandamus (USA), amparo (Spain, Mexico and Argentina), and respondeat superior (Taiwan). The habeas data writ itself has a very short history, but its origins can be traced to certain European legal mechanisms that protected individual privacy. In particular, certain German constitutional rights can be identified as the direct progenitors of the habeas data right: above all, the right to information self-determination, which was created by the German constitutional tribunal by interpretation of the existing rights of human dignity and personality. This is a right to know what type of data are stored in manual and automatic databases about an individual, and it implies that there must be transparency in the gathering and processing of such data. The other direct predecessor of the habeas data right is the Council of Europe's Convention 108 on data protection of 1981. The purpose of the convention is to secure the privacy of the individual regarding the automated processing of personal data. To achieve this, several rights are given to the individual, including a right to access their personal data held in an automated database. The first country to implement habeas data was Brazil. In 1988, the Brazilian legislature voted to introduce a new constitution, which included a novel right: the habeas data individual complaint. It is expressed as a full constitutional right under article 5, LXXII, of the constitution. Following the Brazilian example, Colombia incorporated the habeas data right into its new constitution in 1991. After that, many countries followed suit and adopted the new legal tool in their respective constitutions: Paraguay in 1992, Peru in 1993, Argentina in 1994, and Ecuador in 1996. Between 1999 and 2012, several Latin American countries enacted data protection laws that regulate the procedure for filing a habeas data writ.
Implementation Brazil: The 1988 Brazilian constitution stipulates that: "habeas data shall be granted: a) to ensure the knowledge of information related to the person of the petitioner, contained in records or databanks of government agencies or of agencies of a public character; b) for the correction of data, when the petitioner does not prefer to do so through a confidential process, either judicial or administrative". Paraguay: The 1992 Paraguay constitution follows the example set by Brazil, but enhances the protection in several ways. Article 135 of the Paraguayan constitution states: "Everyone may have access to information and data available on himself or assets in official or private registries of a public nature. He is also entitled to know how the information is being used and for what purpose. He may request a competent judge to order the updating, rectification, or destruction of these entries if they are wrong or if they are illegitimately affecting his rights." Argentina: the Argentinian version of habeas data is the most complete to date. Article 43 of the constitution, amended in 1994, states that: "Any person shall file this action to obtain information on the data about himself and their purpose, registered in public records or data bases, or in private ones intended to supply information; and in case of false data or discrimination, this action may be filed to request the suppression, rectification, confidentiality or updating of said data. The secret nature of the sources of journalistic information shall not be impaired." Philippines: On August 25, 2007, chief justice Reynato Puno announced that the Supreme Court of the Philippines was drafting the writ of habeas data. The new remedy was intended to compel military and government agents to release information about the desaparecidos and enable access to military and police files. Puno had previously announced a draft of the writ of amparo (Spanish for "protection"), which would prevent military officials in judicial proceedings from simply denying cases of disappearances or extrajudicial killings. See also Habeas corpus Information privacy Privacy Privacy law References External links HabeasData.org Data Colombia Latin American Data Protection Law Review - Revista Latinoamericana de Proteccion de Datos Personales Data Privacy laws blog (data protection in Latin America) Constitutional law Latin legal terminology Information privacy Privacy law Data laws
Habeas data
[ "Engineering" ]
1,105
[ "Cybersecurity engineering", "Information privacy" ]
8,613,679
https://en.wikipedia.org/wiki/List%20of%20extinct%20cetaceans
The list of extinct cetaceans features the extinct genera and species of the order Cetacea. The cetaceans (whales, dolphins and porpoises) are descendants of land-living mammals, the even-toed ungulates. The earliest cetaceans were still hoofed mammals. These early cetaceans became gradually better adapted for swimming than for walking on land, finally evolving into fully marine cetaceans. This list currently includes only fossil genera and species. However, the Atlantic population of gray whales (Eschrichtius robustus) became extinct in the 18th century, and the baiji (or Chinese river dolphin, Lipotes vexillifer) was declared "functionally extinct" after an expedition in late 2006 failed to find any in the Yangtze River. Suborder Archaeoceti Family Ambulocetidae (Eocene) Ambulocetus Himalayacetus Gandakasia Family Basilosauridae (Late Eocene) Tutcetus Perucetus Basilosaurinae Basilosaurus Basiloterus Eocetus Pachycetinae Antaecetus Pachycetus Dorudontinae Ancalecetus Chrysocetus Cynthiacetus Dorudon Masracetus Ocucajea Saghacetus Supayacetus Zygorhiza Stromerius Family Kekenodontidae (Oligocene) Kekenodon Tohoraonepu Family Pakicetidae (Early to Middle Eocene) Gandakasia Pakicetus Nalacetus Ichthyolestes Family Protocetidae (Eocene) Georgiacetinae Aegicetus Babiacetus Carolinacetus Georgiacetus Natchitochia Pappocetus Pontobasileus Tupelocetus Makaracetinae Makaracetus Protocetinae Aegyptocetus Artiocetus Crenatocetus Dhedacetus Gaviacetus Indocetus Kharodacetus Maiacetus Peregocetus Protocetus Qaisracetus Rodhocetus Takracetus Togocetus Family Remingtonocetidae (Eocene) Andrewsiphius Attockicetus Dalanistes Kutchicetus Rayanistes Remingtonocetus Suborder Mysticeti Family Llanocetidae (Late Eocene-Early Oligocene) Llanocetus Mystacodon Family Mammalodontidae (jr synonym Janjucetidae) (Late Oligocene) Janjucetus Mammalodon Family incertae sedis Borealodon Coronodon Clade Kinetomenta Family Aetiocetidae (Oligocene) Aetiocetus Ashorocetus Chonecetus Fucaia Morawanocetus Niparajacetus Salishicetus Willungacetus Clade Chaeomysticeti Atlanticetus Family incertae sedis Horopeta Maiabalaena Sitsqwayk Tlaxcallicetus Toipahautea Whakakai Superfamily Eomysticetoidea Family Cetotheriopsidae (Oligocene to Miocene) Cetotheriopsis Family Eomysticetidae (Oligocene to early Miocene) Echericetus Eomysticetus Matapanui Micromysticetus Tohoraata Tokarahia Waharoa Yamatocetus Family Aglaocetidae (Miocene) Aglaocetus Superfamily Balaenoidea Family Balaenidae (Miocene to Recent) Antwerpibalaena Archaeobalaena Balaena Balaena affinis Balaena arcuata Balaena larteti Balaena macrocephalus Balaena montalionis Balaena ricei Balaenella Balaenotus Balaenula Charadrobalaena Eubalaena (extant) Eubalaena ianitrix Eubalaena shinshuensis Idiocetus Peripolocetus Protobalaena Family incertae sedis Morenocetus Clade Thalassotherii Cetotheriomorphus Heterocetus Imerocetus Isocetus Notiocetus Otradnocetus Palaeobalaena Rhegnopsis Family Cetotheriidae (Miocene - Pliocene) Classification follows Steeman (2007) unless otherwise noted. 
Adicetus Brandtocetus Cephalotropis Cetotherium Ciuciulea Eucetotherium Halicetus Herentalia Herpetocetus Hibacetus Joumocetus Kurdalagonus Metopocetus Mithridatocetus Nannocetus Piscobalaena Thinocetus Titanocetus Tiucetus Vampalus Zygiocetus Family Diorocetidae (Miocene to Pliocene) Amphicetus Diorocetus Plesiocetopsis Uranocetus Family Neobalaenidae (Miocene to Recent) Miocaperea Family Pelocetidae (Miocene) Cophocetus Parietobalaena Pelocetus Family incertae sedis Isanacetus Pinocetus Mauicetus Taikicetus Tiphyocetus Superfamily Balaenopteroidea Eobalaenoptera Family Balaenopteridae (Miocene to Recent) Archaebalaenoptera Balaenoptera (extant) Balaenoptera bertae Balaenoptera cephalus Balaenoptera colcloughi Balaenoptera davidsonii Balaenoptera siberi Balaenoptera sursiplana Balaenoptera taiwanica "Balaenoptera" cortesii "Balaenoptera" portisi "Balaenoptera" ryani Burtinopsis Cetotheriophanes Fragilicetus Incakujira Marzanoptera Miobalaenoptera Norrisanima Nehalaennia Parabalaenoptera Plesiobalaenoptera Plesiocetus Praemegaptera Protororqualus Family Eschrichtiidae (Miocene to Recent) Archaeschrichtius Eschrichtioides Gricetoides Megapteropsis Family Tranatocetidae Mesocetus Mixocetus Tranatocetus Family incertae sedis Mioceta (nomen dubium) Piscocetus Siphonocetus (nomen dubium) Tretulias (nomen dubium) Ulias (nomen dubium) Suborder Odontoceti Basal forms Family Agorophiidae (Early Oligocene) Agorophius Family Ashleycetidae (Early Oligocene) Ashleycetus Family Patriocetidae (Oligocene to Early Miocene) Patriocetus Family Simocetidae (Early Oligocene) Simocetus Family Xenorophidae (Late Oligocene) Albertocetus Archaeodelphis Cotylocara Echovenator Inermorostrum Mirocetus Xenorophus Family Inticetidae Inticetus Phococetus Family Microzeuglodontidae Microzeuglodon Family Squaloziphiidae (Late Oligocene to Early Miocene) Squaloziphius Yaquinacetus Family incertae sedis Ankylorhiza Agriocetus Atropatenocetus Ediscetus Olympicetus Saurocetus Superfamily Squalodontoidea Family Dalpiazinidae (Late Oligocene to Miocene) Dalpiazina Family Prosqualodontidae (Late Oligocene-Middle Miocene) Parasqualodon Prosqualodon Superfamily Physeteroidea Family Kogiidae (Miocene to recent) Aprixokogia Kogia (extant) Kogia pusilla Kogia danomurai Kogiopsis Koristocetus Nanokogia Platyscaphokogia Pliokogia Praekogia Scaphokogia Thalassocetus Family Physeteridae Cozzuoliphyseter Ferecetotherium Idiophyseter Idiorophus Orycterocetus Physeterula Placoziphius Preaulophyseter Scaldicetus Family incertae sedis Acrophyseter Albicetus Aulophyseter Brygmophyseter Diaphorocetus Eudelphis Hoplocetus (nomen dubium) Livyatan Miokogia (nomen dubium) Paleophoca (nomen dubium) Prophyseter (nomen dubium) Rhaphicetus Zygophyseter Superfamily "Eurhinodelphinoidea" Family Argyrocetidae (Late Oligocene to Early Miocene) Argyrocetus Chilcacetus Macrodelphinus Family Eoplatanistidae (Miocene) Eoplatanista Family Eurhinodelphinidae (Late Oligocene to Late Miocene) Ceterhinops Eurhinodelphis Iniopsis Mycteriacetus Phocaenopsis Schizodelphis Vanbreenia Xiphiacetus Ziphiodelphis Superfamily Platanistoidea Aondelphis Awamokoa Dolgopolis Ensidelphis Perditicetus Urkudelphis Family Allodelphinidae (Late Oligocene to Middle Miocene) Allodelphis Arktocara Goedertius Ninjadelphis Zarhinocetus Family Platanistidae (Early Miocene to Recent) Araeodelphis Dilophodelphis Pachyacanthus Pomatodelphis Prepomatodelphis Zarhachis Family Squalodelphinidae (Late Oligocene to Middle Miocene) Furcacetus Furcadelphis Huaridelphis Macrosqualodelphis Medocinia 
Notocetus Phocageneus Squalodelphis Family Squalodontidae (Late Oligocene to Middle Miocene) Austrosqualodon Eosqualodon Macrophoca Neosqualodon Pachyodon Phoberodon Squalodon (syn. Kelloggia, Rhizoprion, Crenidelphinus, Arionius, Phocodon) Smilocamptus Tangaroasaurus Family Waipatiidae (Late Oligocene to Early Miocene) ?Microcetus Otekaikea Papahu Sachalinocetus Waipatia Superfamily Ziphioidea Family Ziphiidae (Miocene to Recent) Basal forms Aporotus Beneziphius Chavinziphius Chimuziphius Choneziphius Dagonodum Globicetus Imocetus Messapicetus Ninoziphius Notoziphius Tusciziphius Ziphirostrum Subfamily Berardiinae Archaeoziphius Microberardius Subfamily Hyperoodontinae Africanacetus Belemnoziphius Ihlengesi Khoikhoicetus Mesoplodon (extant) Mesoplodon posti Mesoplodon slangkopi Mesoplodon tumidirostris Nenga Pterocetus Xhosacetus Subfamily Ziphiinae Caviziphius Izikoziphius Nazcacetus Subfamily incertae sedis Anoplonassa Cetorhynchus Eboroziphius Pelycorhamphus Clade Delphinida Family incertae sedis Anacharsis Belonodelphis Delphinavus Graamocetus Hadrodelphis Lamprolithax Leptodelphis Liolithax Lophocetus Loxolithax Macrokentriodon Microphocaena Miodelphis Nannolithax Oedolithax Oligodelphis Palaeophocaena Pithanodelphis Platylithax Prionodelphis Protodelphinus Sarmatodelphis Sophianacetus Tagicetus Superfamily Delphinoidea Family Albireonidae (Miocene to Pliocene) Albireo Family Delphinidae (Oligocene to Recent) Arimidelphis Astadelphis Australodelphis Delphinus (extant) Delphinus domeykoi Eodelphinus Etruridelphis Hemisyntrachelus Lagenorhynchus (extant) Lagenorhynchus harmatuki Norisdelphis Globicephala (extant) Globicephala baereckeii Globicephala etruriae Orcinus (extant) Orcinus citoniensis Orcinus meyeri Orcinus paleorca Platalearostrum Protoglobicephala Pseudorca (extant) Pseudorca yokoyamai Pseudorca yuanliensis Septidelphis Sinanodelphis Stenella (extant) Stenella rayi Tursiops (extant) Tursiops miocaenus Tursiops osennae Family Kentriodontidae (Oligocene to Pliocene) Kampholophos Kentriodon Rudicetus Sophianaecetus Wimahl Belonodelphis? Liolithax? Family Monodontidae (Miocene to Recent) Bohaskaia Casatia Denebola Haborodelphis Family Odobenocetopsidae (Late Miocene to Early Pliocene) Odobenocetops Family Phocoenidae (Miocene to Recent) Archaeophocaena Australithax Brabocetus Harborophocoena Lomacetus Miophocaena Numataphocoena Piscolithax Pterophocaena Salumiphocaena Semirostrum Septemtriocetus Superfamily Inioidea Awadelphis Brujadelphis Incacetus Meherrinia Family Iniidae (Miocene to Recent) Goniodelphis Hesperoinia Ischyrorhynchus Isthminia Kwanzacetus Saurocetes Family Pontoporiidae (Middle Miocene to Recent) Atocetus Auroracetus Brachydelphis Pliopontos Pontistes Protophocaena Samaydelphis Scaldiporia Stenasodelphis Superfamily Lipotoidea Family Lipotidae (Late Miocene to Recent) Parapontoporia Superfamily incertae sedis Delphinodon Heterodelphis Family incertae sedis Acrodelphis Champsodelphis Hesperocetus Imerodelphis (Miocene) Kharthlidelphis Lonchodelphis Macrochirifer Microsqualodon Pelodelphis Rhabdosteus (nomen dubium) Sulakocetus See also Evolution of cetaceans List of cetaceans List of prehistoric mammals Lists of extinct species References Extinct Cetaceans, extinct Cetaceans Cetaceans, extinct
List of extinct cetaceans
[ "Biology" ]
3,081
[ "Lists of biota", "Lists of animals", "Animals" ]
8,614,012
https://en.wikipedia.org/wiki/Ultra%201
The Ultra 1 is a family of Sun Microsystems workstations based on the 64-bit UltraSPARC microprocessor. It was the first model in the Ultra series of Sun computers, which succeeded the SPARCstation series. It launched in November 1995 alongside the MP-capable Ultra 2 and shipped with Solaris 2.5. It is capable of running other operating systems such as Linux and BSD. Specifications The Ultra 1 was available in a variety of specifications. The Ultra 1 Creator3D 170E launched alongside the Ultra 1 Model 140 and the Ultra 1 Creator 170E. CPU Three different CPU speeds were available: 143 MHz (Model 140), 167 MHz (Model 170) and 200 MHz (Model 200). Models Model numbers with an E suffix (Sun service code A12, code-named Electron) had two instead of three SBus slots, and added a UPA slot to allow the use of an optional Creator framebuffer. In addition, the E models had Wide SCSI and Fast Ethernet interfaces, in place of the narrow SCSI and 10BASE-T Ethernet of the standard Ultra 1 (service code A11, code-named Neutron). Memory The Ultra 1 uses 200-pin 5V ECC 60 ns SIMMs in pairs, the same memory used in the SPARCstation 20. Similar machines Similar Sun machines were the Netra i 1 servers, which had the same chassis, and the UltraServer 1/Ultra Enterprise 1 servers. See also Ultra series References External links Ultra 1 Series Reference Manual Ultra 1 Series Service Manual Ultra 1 Creator Series Reference Manual Ultra 1 Creator Series Service Manual Workstations Product Library Documentation Sun workstations SPARC microprocessor products
Ultra 1
[ "Technology" ]
352
[ "Computing stubs", "Computer hardware stubs" ]
8,614,032
https://en.wikipedia.org/wiki/Sporterising
Sporterising, sporterisation or sporterization is the practice of modifying military-type firearms, either to make them more suitable for civilian hunting or sporting use, or to make them legal under gun law. Modifying for sporting use Modifying for sporting use can involve the addition of a commercial, variable-power telescopic sight, the shortening of the fore-end, and (in some cases) the fitting of a new stock. Sporterised rifles may be re-finished or otherwise customized to the tastes or requirements of the individual owner, for example by shortening the barrel or rechambering the firearm in a different caliber. Integrated bayonets, if present, are removed, as are muzzle devices, sometimes for legal reasons. Large numbers of military surplus rifles were sporterised in the 1950s and 1960s, especially Lee–Enfield, M1903 Springfield, and Mauser K98 rifles, which were in abundant supply after WWII and therefore cheaper to acquire than a newly manufactured commercial hunting rifle. SMLE Mk III rifles, in particular, were popular for sporterisation in Australia, New Zealand, and South Africa, with many being converted to wildcat calibers such as .303/25, owing both to the difficulty of importing foreign-made rifles (due largely to economic factors) and to restrictions in the state of New South Wales on the ownership of firearms "of a military caliber", interpreted to mean the .303 British cartridge then in use by the British and Commonwealth militaries. Even in states and countries where there were no such restrictions, many sporting shooters at the time found it expedient to cut down their ex-military SMLEs in the interests of reducing weight or improving handling. The practice of sporterising is frowned upon by most collectors and firearms enthusiasts because many military surplus rifles are highly collectible in original condition. Permanently altered sporterised firearms often sell for less money than military firearms in original condition. A number of "Commercial" sporting conversions of military surplus arms were undertaken in the 1950s by Interarms, Golden State Arms, the Gibbs Rifle Co. and Navy Arms in the United States. These rifles are often considered to be collectible in their own right, and are not generally regarded as being "sporterised" in the usual sense of the word. Modifying for compliance with legislation Semi-automatic, civilian versions of assault rifles are marketed as Sporter or S models. The term "sporterising" is also used by some to describe the practice by gun manufacturers of producing civilian models of military-style weapons by removing legally restricted features. For example, a manufacturer might have replaced a pistol grip with a thumb-hole stock, or a flash suppressor with a muzzle brake, in order to comply with legislation such as the 1994–2004 US Federal Assault Weapons Ban. Similarly, the design of a rifle may be altered to prevent it being fired in automatic or burst mode in order to comply with a region's statutes, with some models having entirely different receivers that prevent the fitting of military select-fire trigger groups. Many manufacturers simply settle for semi-automatic-only trigger groups without undergoing extensive modification; select-fire trigger groups are often considered to be the actual machine-gun part and are thus heavily restricted. Some gun-control advocates consider these civilian models an attempt to circumvent the intent of the laws.
See also Glossary of firearms terminology Modern sporting rifle Featureless rifles References Firearm components
Sporterising
[ "Technology" ]
703
[ "Firearm components", "Components" ]
8,614,958
https://en.wikipedia.org/wiki/Social%20connection
Social connection is the experience of feeling close and connected to others. It involves feeling loved, cared for, and valued, and forms the basis of interpersonal relationships. "Connection is the energy that exists between people when they feel seen, heard and valued; when they can give and receive without judgement; and when they derive sustenance and strength from the relationship." —Brené Brown, Professor of social work at the University of Houston. Increasingly, social connection is understood as a core human need, and the desire to connect as a fundamental drive. It is crucial to development; without it, social animals experience distress and face severe developmental consequences. In humans, one of the most social species, social connection is essential to nearly every aspect of health and well-being. Lack of connection, or loneliness, has been linked to inflammation, accelerated aging and cardiovascular health risk, suicide, and all-cause mortality. Feeling socially connected depends on the quality and number of meaningful relationships one has with family, friends, and acquaintances. Going beyond the individual level, it also involves a feeling of connecting to a larger community. Connectedness on a community level has profound benefits for both individuals and society. Related terms Social support is the help, advice, and comfort that we receive from those with whom we have stable, positive relationships. Importantly, it is the perception, or feeling, of being supported, rather than the objective number of connections, that appears to buffer stress and affect our health and psychology most strongly. Close relationships refer to those relationships between friends or romantic partners that are characterized by love, caring, commitment, and intimacy. Attachment is a deep emotional bond between two or more people, a "lasting psychological connectedness between human beings." Attachment theory, developed by John Bowlby during the 1950s, remains influential in psychology today. Conviviality has many different interpretations and understandings, one of which denotes the idea of living together and enjoying each other's company. This understanding of the term is derived from the French convivialité, which can be traced back to Jean Anthelme Brillat-Savarin in the 19th century. Other interpretations of conviviality include the art of living in the company of others; everyday experiences of community cohesion and togetherness in diverse settings; and the capacity of individuals to interact creatively and autonomously with one another and their environment for the satisfaction of their needs. This third interpretation is rooted in the work of Ivan Illich from the 1970s onwards. Social connection is fundamental to all of these interpretations of conviviality. A basic need In his influential theory on the hierarchy of needs, Abraham Maslow proposed that our physiological needs are the most basic and necessary to our survival, and must be satisfied before we can move on to satisfying more complex social needs like love and belonging. However, research over the past few decades has begun to shift our understanding of this hierarchy. Social connection and belonging may in fact be a basic need, as powerful as our need for food or water. Mammals are born relatively helpless, and rely on their caregivers not only for affection, but for survival.
This may be the evolutionary reason why mammals need and seek connection, and why they suffer prolonged distress and health consequences when that need is not met. In 1965, Harry Harlow conducted his landmark monkey studies. He separated baby monkeys from their mothers, and observed which surrogate mothers the baby monkeys bonded with: a wire "mother" that provided food, or a cloth "mother" that was soft and warm. Overwhelmingly, the baby monkeys preferred to spend time clinging to the cloth mother, only reaching over to the wire mother when they became too hungry to continue without food. This study questioned the idea that food is the most powerful primary reinforcement for learning. Instead, Harlow's studies suggested that warmth, comfort, and affection (as perceived from the soft embrace of the cloth mother) are crucial to the mother-child bond, and may be a powerful reward that mammals seek in and of itself. Although historically significant, this study does not meet current research standards for the ethical treatment of animals. In 1995, Roy Baumeister proposed his influential belongingness hypothesis: that human beings have a fundamental drive to form lasting relationships, to belong. He provided substantial evidence that, indeed, the need to belong and form close bonds with others is itself a motivating force in human behavior. This theory is supported by evidence that people form social bonds relatively easily, are reluctant to break social bonds, and keep the effect on their relationships in mind when they interpret situations. He also contends that our emotions are so deeply linked to our relationships that one of the primary functions of emotion may be to form and maintain social bonds, and that both partial and complete deprivation of relationships leads to not only painful but pathological consequences. Satisfying or disrupting our need to belong, our need for connection, has been found to influence cognition, emotion, and behavior. In 2011, Roy Baumeister furthered this notion of belongingness by proposing the Need to Belong Theory, which asserts that humans have an inherent drive to maintain a minimum number of social relationships to foster a sense of belonging. Baumeister highlights the importance of satiation and substitution in driving human behavior and social connection. Motivational satiation is a phenomenon in which an individual may desire something but eventually reaches a point where they have had enough and no longer want or need any more of it. This concept can be applied to the formation of friendships, where an individual may desire social connections but reach a point where they have enough friends and do not seek any more. However, Baumeister suggests that people still require a certain minimum amount of social connection, and that, to some extent, these bonds can substitute for each other. The Need to Belong Theory treats belonging as a primary motivator of human behavior, providing a framework for understanding social relationships as a basic, fundamental need for psychological health and well-being. Neurobiology Brain areas While it appears that social isolation triggers a "neural alarm system" of threat-related regions of the brain (including the amygdala, dorsal anterior cingulate cortex (dACC), anterior insula, and periaqueductal gray (PAG)), separate regions may process social connection.
Two brain areas that are part of the brain's reward system are also involved in processing social connection and attention to loved ones: the ventromedial prefrontal cortex (VMPFC), a region that also responds to safety and inhibits threat responding, and the ventral striatum (VS) and septal area (SA), part of a neural system that is activated by taking care of one's own young. Key neurochemicals Opioids In 1978, neuroscientist Jaak Panksepp observed that small doses of opiates reduced the distressed cries of puppies that were separated from their mothers. As a result, he developed the brain opioid theory of attachment, which posits that endogenous (internally produced) opioids underlie the pleasure that social animals derive from social connection, especially within close relationships. Extensive animal research supports this theory. Mice that have been genetically modified to lack mu-opioid receptors (mu-opioid receptor knockout mice), as well as sheep with their mu-receptors blocked temporarily following birth, do not recognize or bond with their mother. When separated from their mother and conspecifics, rats, chicks, puppies, guinea pigs, sheep, dogs, and primates emit distress vocalizations; however, giving them morphine (i.e., activating their opioid receptors) quiets this distress. Endogenous opioids appear to be produced when animals engage in bonding behavior, while inhibiting the release of these opioids results in signs of social disconnection. In humans, blocking mu-opioid receptors with the opioid antagonist naltrexone has been found to reduce feelings of warmth and affection in response to a film clip about a moment of bonding, and to increase feelings of social disconnection towards loved ones in daily life as well as in the lab in response to a task designed to elicit feelings of connection. Although the human research on opioids and bonding behavior is mixed and ongoing, this suggests that opioids may underlie feelings of social connection and bonding in humans as well. Oxytocin In mammals, oxytocin has been found to be released during childbirth, breastfeeding, sexual stimulation, bonding, and in some cases stress. In 1992, Sue Carter discovered that administering oxytocin to prairie voles would accelerate their monogamous pair-bonding behavior. Oxytocin has also been found to play many roles in the bonding between mother and child. In addition to pair-bonding and motherhood, oxytocin has been found to play a role in prosocial behavior and bonding in humans. Oxytocin, nicknamed the "love drug" or "cuddle chemical," increases in plasma following physical affection, and is linked to more trusting and generous social behavior, positively biased social memory, attraction, and anxiety and hormonal responses. Further supporting a nuanced role in adult human bonding, greater circulating oxytocin over a 24-hour period was associated with greater love and perceptions of partner responsiveness and gratitude, but was also linked to perceptions of a relationship being vulnerable and in danger. Thus oxytocin may play a flexible role in relationship maintenance, supporting both the feelings that bring us closer and the distress and instinct to fight for an intimate bond in peril. Health Consequences of disconnection A wide range of mammals, including rats, prairie voles, guinea pigs, cattle, sheep, primates, and humans, experience distress and long-term deficits when separated from their parent.
In humans, long-lasting health consequences result from early experiences of disconnection. In 1958, John Bowlby observed profound distress and developmental consequences when orphans lacked the warmth and love of their first and most important attachments: their parents. Loss of a parent during childhood was found to lead to altered cortisol and sympathetic nervous system reactivity even a decade later, and to affect stress response and vulnerability to conflict as a young adult. In addition to the health consequences of lacking connection in childhood, chronic loneliness at any age has been linked to a host of negative health outcomes. A meta-analytic review conducted in 2010, drawing on 308,849 participants across 148 studies, found that people with strong social relationships had a 50% greater chance of survival. This effect on mortality is not only on par with that of one of the greatest risk factors, smoking, but exceeds that of many others, such as obesity and physical inactivity. Loneliness has been found to negatively affect the healthy function of nearly every system in the body: the brain, immune system, circulatory and cardiovascular systems, endocrine system, and genetic expression. Not only is social isolation harmful to health, but it is more and more common. As many as 80% of young people under 18 years old, and 40% of adults over the age of 65, report being lonely sometimes, and 15–30% of the general population feel chronic loneliness. These numbers appear to be on the rise, and researchers have called for social connection to be made a public health priority. Social immune system One of the main ways social connection may affect our health is through the immune system. The immune system's primary activity, inflammation, is the body's first line of defense against injury and infection. However, chronic inflammation has been tied to atherosclerosis, Type II diabetes, neurodegeneration, and cancer, as well as compromised regulation of inflammatory gene expression by the brain. Research over the past few decades has revealed that the immune system not only responds to physical threats, but social ones as well. It has become clear that there is a bidirectional relationship between circulating biomarkers of inflammation (e.g. the cytokine IL-6) and feelings of social connection and disconnection; not only are feelings of social isolation linked to increased inflammation, but experimentally induced inflammation alters social behavior and induces feelings of social isolation. This has important health implications. Feelings of chronic loneliness appear to trigger chronic inflammation. However, social connection appears to inhibit inflammatory gene expression and increase antiviral responses. Performing acts of kindness for others was also found to have this effect, suggesting that helping others provides similar health benefits. Why might our immune system respond to our perceptions of our social world? One theory is that it may have been evolutionarily adaptive for our immune system to "listen in" to our social world to anticipate the kinds of bacterial or microbial threats we face. In our evolutionary past, feeling socially isolated may have meant we were separated from our tribe, and therefore more likely to experience physical injury or wounds, requiring an inflammatory response to heal. On the other hand, feeling connected may have meant we were in the relative physical safety of community, but at greater risk of socially transmitted viruses.
To meet these threats with greater efficiency, the immune system responds with anticipatory changes. A genetic profile was discovered to initiate this pattern of immune response to social adversity and stress — up-regulation of inflammation, down-regulation of antiviral activity — known as the Conserved Transcriptional Response to Adversity. The inverse of this pattern, associated with social connection, has been linked to positive health outcomes as well as eudaemonic well-being. Positive pathways Social connection and support have been found to reduce the physiological burden of stress and contribute to health and well-being through several other pathways as well, although this remains a subject of ongoing research. One way social connection reduces our stress response is by inhibiting activity in our pain and alarm neural systems. Brain areas that respond to social warmth and connection (notably, the septal area) have inhibitory connections to the amygdala, which have the structural capacity to reduce threat responding. Another pathway by which social connection positively affects health is through the parasympathetic nervous system (PNS), the "rest and digest" system which parallels and offsets the "fight or flight" sympathetic nervous system (SNS). Flexible PNS activity, indexed by vagal tone, helps regulate the heart rate and has been linked to a healthy stress response as well as numerous positive health outcomes. Vagal tone has been found to predict both positive emotions and social connectedness, which in turn result in increased vagal tone, in an "upward spiral" of well-being. Social connection often occurs along with, and causes, positive emotions, which themselves benefit our health. Measures Social Connectedness Scale This scale was designed to measure general feelings of social connectedness as an essential component of belongingness. Items on the Social Connectedness Scale reflect feelings of emotional distance between the self and others, and higher scores reflect more social connectedness. UCLA Loneliness Scale Measuring feelings of social isolation or disconnection can be helpful as an indirect measure of feelings of connectedness. This scale is designed to measure loneliness, defined as the distress that results when one feels disconnected from others. Relationship Closeness Inventory (RCI) This measure conceptualizes closeness in a relationship as a high level of interdependence in two people's activities, or how much influence they have over one another. It correlates moderately with self-reports of closeness, measured using the Subjective Closeness Index (SCI). Liking and Loving Scales These scales were developed to measure the difference between liking and loving another person—critical aspects of closeness and connection. Good friends were found to score highly on the liking scale, and only romantic partners scored highly on the loving scale. They support Zick Rubin's conceptualization of love as containing three main components: attachment, caring, and intimacy.
Personal Acquaintance Measure (PAM) This measure identifies six components that can help determine the quality of a person's interactions and feelings of social connectedness with others: Duration of relationship Frequency of interaction with the other person Knowledge of the other person's goals Physical intimacy or closeness with the other person Self-disclosure to the other person Social network familiarity—how familiar is the other person with the rest of your social circle Experimental manipulations Social connection is a unique, elusive, person-specific quality of our social world. Yet, can it be manipulated? This is a crucial question for how it can be studied, and whether it can be intervened on in a public health context. There are at least two approaches that researchers have taken to manipulate social connection in the lab: Social connection task This task was developed at UCLA by Tristen Inagaki and Naomi Eisenberger to elicit feelings of social connection in the laboratory. It consists of collecting positive and neutral messages from 6 loved ones of a participant, and presenting them to the participant in the laboratory. Feelings of connection and neural activity in response to this task have been found to rely on endogenous opioid activity. Closeness-generating procedure Arthur Aron at the State University of New York at Stony Brook and collaborators designed a series of questions designed to generate interpersonal closeness between two individuals who have never met. It consists of 36 questions that subject pairs ask each other over a 45-minute period. It was found to generate a degree of closeness in the lab, and can be more carefully controlled than connection within existing relationships. See also Affection Attachment theory Friendship Interpersonal relationships Interpersonal ties Interpersonal emotion regulation Intimate relationships Human bonding Love Social isolation Social robot Social support References Emotion Interpersonal relationships Social psychology concepts
Social connection
[ "Biology" ]
3,613
[ "Emotion", "Behavior", "Interpersonal relationships", "Human behavior" ]
8,616,368
https://en.wikipedia.org/wiki/Trench%20shield
Trench shields (also called trench boxes or trench sheets) are steel or aluminum structures used to avoid cave-ins and protect utility workers while performing their duties within a trench. They are customarily constructed with sidewalls of varying thicknesses held apart by steel or aluminum spreaders. Spreaders can be interchanged to match the width of the trench. The different materials and designs lead to a variety of depth ratings: the maximum depth of trench in which the shield can withstand a collapse without buckling. Depth ratings are determined by registered professional engineers. A shield should not be confused with a shore. While the two may serve a similar function, trench shoring is a different physical application, one that holds up the walls of a trench to prevent collapse in the first place. In the US, use of a trench shield is governed by OSHA 29 CFR Part 1926.650–.652, Subpart P (Excavations). External links National Utility Contractors Association References Building engineering Protective barriers
Trench shield
[ "Engineering" ]
193
[ "Building engineering", "Civil engineering", "Architecture" ]
8,616,527
https://en.wikipedia.org/wiki/Low%20voltage
In electrical engineering, low voltage is a relative term, the definition varying by context. Different definitions are used in electric power transmission and distribution, compared with electronics design. Electrical safety codes define "low voltage" circuits that are exempt from the protection required at higher voltages. These definitions vary by country and specific codes or regulations. IEC definition The International Electrotechnical Commission (IEC) standard IEC 61140:2016 defines low voltage as 0 to 1000 V AC RMS or 0 to 1500 V DC. Another standard, IEC 60038 (IEC Standard Voltages), which defines power distribution system voltages around the world, defines supply system low voltage as voltage in the range 50 to 1000 V AC or 120 to 1500 V DC. In electrical power systems, low voltage most commonly refers to the mains voltages as used by domestic and light industrial and commercial consumers. "Low voltage" in this context still presents a risk of electric shock, but only a minor risk of electric arcs through the air. United Kingdom British Standard BS 7671, Requirements for Electrical Installations. IET Wiring Regulations, defines supply system low voltage as exceeding 50 V AC or 120 V ripple-free DC, but not exceeding 1000 V AC or 1500 V DC between conductors, or 600 V AC or 900 V DC between conductors and Earth. The ripple-free direct current requirement only applies to 120 V DC, not to any DC voltage above that. For example, a direct current that exceeds 1500 V during voltage fluctuations is not categorized as low voltage. United States In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V. The NFPA standard 79 defines protected extra-low voltage (PELV) as a nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases. Standard NFPA 70E, Article 130, 2021 Edition, omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements for work involving electrical hazards when an electrically safe work condition cannot be established. UL standard 508A, article 43 (table 43.1), defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. See also High voltage Low Voltage Directive References Further reading Defining Low Voltage Circuits Electricity Electrical engineering Electrical safety
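The IEC 61140 thresholds quoted above can be captured in a small classifier. This is a sketch only: the function name and the simple AC/DC split are assumptions made for illustration, and the standard itself contains qualifications not modelled here.

```python
def iec_61140_band(volts: float, current_type: str) -> str:
    """Classify a nominal voltage per the IEC 61140 ranges quoted above.

    current_type: "AC" (RMS volts) or "DC".
    Illustrative simplification only; not a substitute for the standard.
    """
    if volts < 0:
        raise ValueError("nominal voltage must be non-negative")
    limit = 1000 if current_type.upper() == "AC" else 1500
    return "low voltage" if volts <= limit else "above low voltage"

# 230 V AC mains is "low voltage" in the IEC sense, even though it
# can still deliver a fatal electric shock.
print(iec_61140_band(230, "AC"))   # low voltage
print(iec_61140_band(1200, "DC"))  # low voltage
print(iec_61140_band(1200, "AC"))  # above low voltage
```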
Low voltage
[ "Engineering" ]
477
[ "Electrical engineering" ]
970,936
https://en.wikipedia.org/wiki/Matchstick%20model
Matchstick models are scale models made from matches as a hobby. Regular matches are not used, however; instead, modelers use a special modeling type that does not have a combustible head and can be bought from art and craft shops. Before such modeling matches were mass-produced, though, actual matches were used, with the heads trimmed off or kept on to add coloured detail. History Originally, matchstick models were a pastime of prisoners (especially naval prisoners of war) during the 18th century. At the time, better-funded modelers preferred to use more replicated parts for their models, as professionals do today, and the poor could not afford to use up so many matches. An early pioneer in matchstick models as an art form was Australian artist Len Hughes, whose first large-scale piece was a recreation of the Battle of the Spanish Armada that included 331 replica ships. Hughes went on to open the World of Matchcraft Museum in Caloundra, Queensland, which later closed. Construction The matches are cut by means of a sharp knife and fixed together using glue, often being held in place by paperboard "formers" until the glue is dry. While the smallest gaps can be filled with glue, larger ones can be filled with specially carved matches. A number of hobbyists prefer to build their models from scratch. Many kits are available, consisting of instructions, pre-cut card formers and sufficient modeling matches for the project. An exceptionally large matchstick model was a scratch-built replica of Notre Dame Cathedral, which included electric lights and measured over six feet in length. Exhibitions Gladbrook, Iowa, is home to the Matchstick Marvels Museum, which includes numerous models by matchstick artist Patrick Acton. His work includes a 13-foot scale model of the USS Iowa. References External links Scale modeling
Matchstick model
[ "Physics" ]
361
[ "Scale modeling" ]
971,177
https://en.wikipedia.org/wiki/Staffroom
A staffroom or teachers' lounge is a room in a school or college. It may refer to a communal work area where teachers have their desk and prepare lessons if they do not have a personal office, or may be a common room where teachers and/or school staff can relax, discuss work, eat, drink and socialise while not in class. See also Mailroom Staff room bullying School terminology Rooms
Staffroom
[ "Engineering" ]
83
[ "Rooms", "Architecture" ]
971,210
https://en.wikipedia.org/wiki/Michael%20Drew
Michael Drew is a professor emeritus of chemistry at the University of Reading. He formerly held the position of head of physical chemistry. His main area of study centres on computational chemistry. External links British physical chemists Academics of the University of Reading Living people Year of birth missing (living people) Computational chemists
Michael Drew
[ "Chemistry" ]
63
[ "Computational chemistry", "Theoretical chemists", "Computational chemists" ]
971,278
https://en.wikipedia.org/wiki/Aspic
Aspic () or meat jelly is a savory gelatin made with a meat stock or broth, set in a mold to encase other ingredients. These often include pieces of meat, seafood, vegetables, or eggs. Aspic is also sometimes referred to as aspic gelée or aspic jelly. In its simplest form, aspic is essentially a gelatinous version of conventional soup. History According to one poetic reference by Ibrahim ibn al-Mahdi, who described a version of a dish prepared with Iraqi carp, it was "like ruby on the platter, set in a pearl ... steeped in saffron thus, like garnet it looks, vibrantly red, shimmering on silver". Historically, meat aspics were made even before fruit- and vegetable-flavoured aspics. By the Middle Ages, cooks had discovered that a thickened meat broth could be made into a jelly. A detailed recipe for aspic is found in Le Viandier, written in or around 1375. In the early 19th century, the French chef Marie-Antoine Carême created chaudfroid. The term chaudfroid means "hot cold" in French, referring to foods that were prepared hot and served cold. Aspic was used as a chaudfroid sauce in many cold fish and poultry meals, where it added moisture and flavour to the food. Carême also invented various types of aspic and ways of preparing it. Aspic came into prominence in America in the early 20th century. By the 1950s, meat aspic was a popular dinner staple, as were other gelatin-based dishes such as tomato aspic. Cooks showed off their aesthetic skills by creating inventive aspics. Uses Aspic jelly may be colorless (white aspic) or contain various shades of amber. Aspic can be used to protect food from the air, to give food more flavor, or as a decoration. It can also be used to encase meats, preventing them from becoming spoiled. The gelatin keeps out air and bacteria, keeping the cooked meat or other ingredients fresh for longer. There are three types of aspic: delicate, sliceable, and inedible. The delicate aspic is soft. The sliceable aspic must be made in a terrine or in an aspic mold; it is firmer than the delicate aspic. The inedible aspic is never for consumption and is usually for decoration. Aspic is often used to glaze food pieces in food competitions to make the food glisten and make it more appealing to the eye. Foods dipped in aspic have a lacquered finish for a fancy presentation. Aspic can be cut into various shapes and be used as a garnish for deli meats or pâtés. Preparation The preparation of pork jelly includes placing lean pork meat, trotters, rind, ears, and snout in a pot of cold water and letting it cook over a slow fire for three hours. The broth is allowed to cool, and any undesirable fats are removed. Subsequently, white vinegar and the juice of half an orange or lemon can be added to the meat so that it is covered. The entire mixture is then allowed to cool and gel. Bay leaves or chili can be added to the broth for added taste (the Romanian variety is based on garlic and includes no vinegar, orange, lemon, chili, bay leaves, etc.). However, there are many alternate ways of preparing pork jelly, such as the use of celery, beef, and even pig bones. Poultry jellies are made the same way as pork jelly, but less water is added to compensate for their lower natural gelatin content. Almost any type of food can be set into aspics, and almost any type of meat (poultry or fish included) can be used to make gelatin, although in some cases, additional gelatin may be needed for the aspic to set properly.
Stock can be clarified with egg whites and then filled and flavored just before the aspic sets. The most common are pieces of meat, seafood, eggs, fruits, or vegetables. Veal stock (in particular, stock from a boiled calf's foot) provides a great deal of gelatin, so other types of meat are often included when making stock. Fish consommés usually have too little natural gelatin, so fish stock may be double-cooked or supplemented. Since fish gelatin melts at a lower temperature than the gelatins of other meats, fish aspic is more delicate and melts more readily in the mouth. Most fish stocks usually do not maintain a molded shape with their natural gelatin alone, so additional gelatin is added. Vegetables have no natural gelatin. However, pectin serves a similar purpose in culinary applications such as jams and jellies. Global variations of aspic Pork jelly Pork jelly is an aspic made from low-grade cuts of pig meat, such as trotters, that contain a significant proportion of connective tissue. Pork jelly is a popular appetizer and, nowadays, is sometimes prepared in a more modern version using lean meat, with or without pig leftovers (which are substituted with store-bought gelatin). It is very popular in Croatia, Serbia, Poland, Czech Republic, Romania, Moldova, Estonia, Latvia, Lithuania, Slovakia (called ), Hungary, Greece, and Ukraine. In Russia, Belarus, Georgia and Ukraine, it is known as , during Christmas or Easter. In Russia, is a traditional winter and especially Christmas and New Year's dish, which is eaten with (horseradish paste) or mustard. It is also eaten in Vietnam () during Lunar New Year. The meat in pork pies is preserved using pork jelly. (), (), () is an aspic-like dish, generally made from lamb, chicken or pork meat, such as the head, shank, or hock, made into a semi-consistent gelatinous cake-like form. In some varieties, chicken is used instead of pork. Some recipes also include smoked meat and are well spiced. is commonly just one component of the traditional meal (or an appetizer), although it can be served as a main dish. It is usually accompanied by cold mastika or rakija (grape brandy) and turšija (pickled tomatoes, peppers, olives, cauliflower, cucumber). The recipe calls for the meat to be cleaned, washed, and then boiled for a short time, no longer than 10 minutes. Then the water is changed, and vegetables and spices are added. This is cooked until the meat begins to separate from the bones, then the bones are removed, the meat stock is filtered, and the meat and stock are poured into shallow bowls. Garlic is added as well as thin slices of tomatoes or green peppers (or something similar for decoration). It is left to sit in a cold spot, such as a fridge or outside if the weather is cold enough. It congeals into jelly and can be cut into cubes (it is often said that good are "cut like glass"). These cubes can be sprinkled with various spices or herbs as desired before serving. is usually cut and served in equal sized cubes. are frequently used in slavas and other celebratory occasions with Serbs. Romanian and Moldovan Romanian and Moldovan is also called (plural ), derived from the Romanian , meaning cold. has a different method of preparation. It is usually made with pig's trotter (but turkey or chicken meat can also be used), carrots and other vegetables, boiled to make a soup with high gelatin content. The broth containing gelatin is poured over the boiled meat and mashed garlic in bowls, the mixture being then cooled to become a jelly. 
is traditionally served for Epiphany.

Korea

() is a dish prepared by boiling beef and pork cuts with high collagen content such as the head, skin, tail, cow's trotters, or other cuts in water for a long time. The resulting stewing liquid sets to form a jelly-like substance when cooled.

Nepal

Among the Newars of Kathmandu Valley in Nepal, buffalo meat jelly, known as , is a major component of winter festive fare. It is eaten in combination with fish aspic (), which is made from dried fish and buffalo meat stock, soured, and contains a heavy mix of spices and condiments.

Poland

In Central, Eastern, and Northern Europe, aspic often takes the form of pork jelly and is popular around the Christmas and Easter holidays. In Poland, certain meats, fish and vegetables are set in aspic, creating a dish called .

Eastern Europe

In Belarusian, Russian, and Ukrainian cuisine, a meat aspic dish is called ( ; ; ; also written as holodetz outside these countries), derived from the word meaning "cold". In some areas it is called () or (), derived from a different root with a similar meaning. The dish is part of winter holiday celebrations such as the traditional Russian New Year (novy god) or Christmas meal. However, modern refrigeration allows for its year-round production, and it is not uncommon to see it on a Russian table in summer. It is usually made by boiling bones and meat rich in collagen for about 5–8 hours to produce a thick and fatty broth, with the collagen hydrolyzing into natural gelatin, mixed with salt, pepper, and other spices. The meat is then separated from the bones, minced, recombined with the broth, dressed with slices of boiled egg and herbs like parsley, and cooled until it solidifies into a jelly. It is usually eaten with or mustard.

Croatia

The Croatian version of this dish is called ( meaning cold). Variants range from one served in a dish with rather delicate gelatin to one more resembling the German sulze, a kind of head cheese.

Slovenia

In Slovenia, aspic is known as (derived from the German , meaning head cheese) or in Slovene. It is traditionally served at Easter.

Denmark

In Denmark, aspic is called and is made from meat juices, gelatin, and sometimes mushrooms. Sky is almost solely eaten as a topping for cold cuts or on Danish open-faced sandwiches called . It is a key ingredient in , a dish combining , sliced salt beef and onions. Sky, with or without mushrooms, is an easy-to-find product in most supermarkets.

Georgia

or () is a traditional Georgian dish of cold jellied pork. Its ingredients include pork meat, tails, ears, feet, carrots, vinegar, garlic, herbs, onions, roots, bay leaves, allspice, and cinnamon. In some recipes, the dish is cooked in two separate processes, slightly pickled with wine vinegar and spiced with tarragon and basil. One part contains pork feet, tails and ears; the other contains the lean meat of piglets. They are combined into one dish, chilled and served with green onions and spicy herbs.

Belgium

Rog in 't zuur or rog in zure gelei is a traditional Flemish recipe to preserve ray wings, which are otherwise notoriously quick to spoil. Ray wings are poached in a fish stock with vinegar, spices and onions, then preserved by adding gelatin to the stock and covering the fish with the gelatin stock. In this manner the fish would keep 2–4 days without refrigeration. The dish is served cold with bread for breakfast or as a snack, or can be served as an appetizer.
China

In Northern China, () is a traditional dish served in winter, especially during the Chinese New Year. This Chinese dish of aspic is usually made by boiling pork rind in water. The dishes cooled without pork rind are called (), while those containing pork rind in the aspic are called ().

In Zhenjiang, aspic made with pig trotters is called Salted Pork in Jelly (). The dish has two layers of meat. The upper layer, about half an inch thick, is 'pigskin aspic', while the lower layer is half red and half white, made by boiling pig's trotter and pigskin until gelled, forming 'meat aspic'. The traditional method of preparing the dish involves boiling the trotter with saltpeter, resulting in a crimson hue. However, because the use of saltpeter in food has been banned, the modern approach uses German pork knuckles instead.

Vietnam

Giò thủ or giò tai, also known by the popular name giò xào, is a traditional Vietnamese sausage dish whose main ingredient is stir-fried meat combined with other ingredients, then wrapped and compressed. It originated in Northern Vietnam and is now popular throughout the country; broadly similar preparations exist in many other cuisines around the world. The preparation is relatively easy, the ingredients are easy to find, and the finished product is tasty and pleasantly chewy, which has made the dish a familiar food across the region. Giò thủ is often made by families for the traditional Lunar New Year and is sold at sausage shops in most Vietnamese markets nationwide. A closer Vietnamese counterpart of aspic is called thịt đông, or Vietnamese pork aspic.

Health benefits

Aspic is a source of various nutrients, including iron, vitamin A, vitamin K, fatty acids, selenium, zinc, magnesium and phosphorus. An amino acid called glutamine in aspic may enhance the integrity of the intestinal barrier, which may be beneficial for inflammatory bowel disease and other digestive problems. Glycine from aspic can improve sleep and reduce fatigue during the day.

See also

Chaudfroid sauce
Cretons
Garde manger
Galantine
Head cheese
Jell-O
Kalvsylta
Khash
Meat-jelly Festival
Pâté
P'tcha
Pig's trotters
Terrine
Larks' Tongues in Aspic (King Crimson album)

References

Notes

Bibliography

Allen, Gary; Ken Albala. The Business of Food: Encyclopedia of the Food and Drink Industries. Westport, Connecticut: Greenwood Publishing Group, October 2007.
Gisslen, Wayne. Professional Cooking, 6th edition. Hoboken, New Jersey: John Wiley and Sons, March 2006.
Nenes, Michael. American Regional Cuisine, 2nd edition. Hoboken, New Jersey: Art Institute, March 2006.
Ruhlman, Michael; Anthony Bourdain. The Elements of Cooking: Translating the Chef's Craft for Every Kitchen. New York, New York: Simon and Schuster, November 2007.
Smith, Andrew. The Oxford Companion to American Food and Drink. New York, New York: Oxford University Press, March 2007.

External links

Latvian pork aspic
Russian Meat Aspic

Cuisine of the Southern United States
Russian cuisine
Ukrainian cuisine
Polish cuisine
Lithuanian cuisine
Romanian cuisine
Brazilian cuisine
Colombian cuisine
Food ingredients
Meat
Garde manger
Culinary terminology
Gelatin dishes
Romani cuisine
Aspic
[ "Technology" ]
3,122
[ "Food ingredients", "Components" ]
971,289
https://en.wikipedia.org/wiki/Dropsonde
A dropsonde is an expendable weather reconnaissance device created by the National Center for Atmospheric Research (NCAR), designed to be dropped from an aircraft at altitude over water to measure (and therefore track) storm conditions as the device falls to the surface. The sonde contains a GPS receiver, along with pressure, temperature, and humidity (PTH) sensors to capture atmospheric profiles and thermodynamic data. It typically relays this data to a computer in the aircraft by radio transmission. Usage Dropsonde instruments are typically the only current method to measure the winds and barometric pressure through the atmosphere and down to the sea surface within the core of tropical cyclones far from land-based weather radar. The data obtained is usually fed via radio into supercomputers for numerical weather prediction, enabling forecasters to better predict the effects and intensity, based on computer-generated models using data gathered from previous storms under similar conditions. This helps meteorologists to more reliably establish a storm's potential damage, based on those factors. Since the early 1970s, United States Air Force Reserves Hurricane Hunters of the 53rd Weather Reconnaissance Squadron based at Keesler Air Force Base in Biloxi, Mississippi, have employed dropsondes while flying over the ocean to obtain meteorological data on the structure of hurricanes deemed to be of possible concern to coastal and inland locations in the northern Atlantic ocean, northeastern Pacific ocean, and the Gulf of Mexico. During a typical hurricane season, Hurricane Hunters deploys 1,000 to 1,500 sondes on training and storm missions. Aircraft reconnaissance missions are also sometimes requested to investigate the broader atmospheric structure over the ocean when cyclones may pose a significant threat to the United States. These interests include not only potential hurricanes, but also possible snow events (like nor'easters) or significant tornado outbreaks. The dropsondes are used to supplement the large gaps over oceans within the global network of daily radiosonde launches. Typically satellite data provides an estimate of conditions in such areas, but the increased precision of sondes can improve forecasts, particularly of the storm path. Dropsondes may also be employed during meteorological research projects. Device and launch details The sonde is a lightweight system designed to be operated by one person and is launched through a chute installed in the measuring aircraft. The device's descent is slowed and stabilized by a small square-cone parachute, allowing for more readings to be taken before it reaches the ocean surface. The parachute is designed to immediately deploy after release so as to reduce or eliminate any pendulum effect, and the device typically drops for three to five minutes. The sonde has a casing of stiff cardboard to protect electronics and form a more stable aerodynamic profile. To obtain data in a tropical cyclone, an aircraft (in the US, operated either by NOAA or the U.S. Air Force) flies into the system. A series of dropsondes are typically released as the plane passes through the storm, typically launched with greatest frequency near the center of the storm, including into the eyewall and eye (center), if one exists. Most drops are performed at a flight level of around 10,000 feet (approx. 3,000 meters). The dropsonde sends back coded data, which includes: The date and time of the drop. Time is always in UTC. 
Location of the drop, indicated by the latitude, longitude, and Marsden square. The height, temperature, dewpoint depression, wind speed, and wind direction recorded at any standard isobaric surfaces encountered as the dropsonde descends, which are from the set of: 1000, 925, 850, 700, 500, 400, 300, 250 hectopascals (hPa), and at the sea surface. The temperature and dewpoint depression at all other atmospheric pressure deemed significant due to important changes or values in the atmospheric conditions found Air pressure, temperature, dewpoint depression, wind speed and wind direction of the tropopause. Also included in the report is information on the aircraft, the mission, the dropsonde itself, and other remarks. Driftsondes A driftsonde is a high altitude, durable weather balloon holding a transmitter and a bank (35 in the first models) of miniature dropsonde capsules which can then be dropped at automatic intervals or remotely. The water-bottle-sized transmitters in the dropsondes have enough power to send information to the balloon during their parachute-controlled fall. The balloon carries a larger transmitter powerful enough to relay readings to a satellite. The single-use sensor packages cost US$300 to $400 each. After being introduced in April 2007, around a thousand a year are expected to be used to track winds in hurricane breeding grounds off of West Africa, which are outside the operating region of hurricane hunter planes. See also Atmospheric science Radiosonde References External links Data from Expendable Probes (NOAA site) NCAR GPS Dropsonde System (AVAPS) Vaisala Dropsonde RD94 Meteomodem Dropsonde Meteorological instrumentation and equipment Atmospheric sounding
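As a rough illustration of how the reported quantities described above fit together, here is a small Python sketch (Python 3.10+). The class and field names are invented for this example and do not correspond to any official dropsonde message format:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Level:
    pressure_hpa: float           # e.g. one of 1000, 925, 850, 700, ...
    height_m: float | None        # geopotential height, if available
    temperature_c: float
    dewpoint_depression_c: float
    wind_dir_deg: float
    wind_speed_kt: float

@dataclass
class DropsondeSounding:
    drop_time_utc: datetime       # time is always reported in UTC
    latitude: float
    longitude: float
    levels: list[Level] = field(default_factory=list)

# Hypothetical sounding with one standard isobaric level recorded.
sounding = DropsondeSounding(
    drop_time_utc=datetime(2023, 9, 1, 18, 45, tzinfo=timezone.utc),
    latitude=25.3,
    longitude=-71.8,
)
sounding.levels.append(Level(850.0, 1457.0, 17.2, 3.5, 95.0, 62.0))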
Dropsonde
[ "Technology", "Engineering" ]
1,033
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
971,437
https://en.wikipedia.org/wiki/Total%20least%20squares
In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models.

The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix.

Linear model

Background

In the least squares method of data modeling, the objective function to be minimized, S, is a quadratic form:

S = \mathbf{r}^T W \mathbf{r},

where r is the vector of residuals and W is a weighting matrix. In linear least squares the model contains equations which are linear in the parameters appearing in the parameter vector \boldsymbol\beta, so the residuals are given by

\mathbf{r} = \mathbf{y} - X\boldsymbol\beta.

There are m observations in y and n parameters in β with m > n. X is an m×n matrix whose elements are either constants or functions of the independent variables, x. The weight matrix W is, ideally, the inverse of the variance-covariance matrix of the observations y. The independent variables are assumed to be error-free. The parameter estimates are found by setting the gradient equations to zero, which results in the normal equations

X^T W X \boldsymbol\beta = X^T W \mathbf{y}.

Allowing observation errors in all variables

Now, suppose that both x and y are observed subject to error, with variance-covariance matrices M_x and M_y respectively. In this case the objective function can be written as

S = \mathbf{r}_x^T M_x^{-1} \mathbf{r}_x + \mathbf{r}_y^T M_y^{-1} \mathbf{r}_y,

where r_x and r_y are the residuals in x and y respectively. Clearly these residuals cannot be independent of each other, but they must be constrained by some kind of relationship. Writing the model function as f(\mathbf{x}, \boldsymbol\beta), the constraints are expressed by m condition equations. Thus, the problem is to minimize the objective function subject to the m constraints. It is solved by the use of Lagrange multipliers. After some algebraic manipulations, the result

X^T M^{-1} X \boldsymbol\beta = X^T M^{-1} \mathbf{y}

is obtained, or alternatively

X^T M^{-1} (\mathbf{y} - X\boldsymbol\beta) = \mathbf{0},

where M is the variance-covariance matrix relative to both independent and dependent variables.

Example

When the data errors are uncorrelated, all matrices M and W are diagonal. Then, take the example of straight line fitting,

f(x_i, \boldsymbol\beta) = \alpha + \beta x_i;

in this case

M_{ii} = \sigma^2_{y,i} + \beta^2 \sigma^2_{x,i},

showing how the variance at the ith point is determined by the variances of both independent and dependent variables and by the model being used to fit the data. The expression may be generalized by noting that the parameter \beta is the slope of the line,

M_{ii} = \sigma^2_{y,i} + \left(\frac{dy}{dx}\right)_i^2 \sigma^2_{x,i}.

An expression of this type is used in fitting pH titration data where a small error on x translates to a large error on y when the slope is large.

Algebraic point of view

As was shown in 1980 by Golub and Van Loan, the TLS problem does not have a solution in general. The following considers the simple case where a unique solution exists without making any particular assumptions.

The computation of the TLS using singular value decomposition (SVD) is described in standard texts. We can solve the equation

X B \approx Y

for B, where X is m-by-n and Y is m-by-k. That is, we seek to find B that minimizes error matrices E and F for X and Y respectively. That is,

\mathrm{argmin}_{B,E,F} \| [E\ F] \|_F, \qquad (X + E) B = Y + F,

where [E\ F] is the augmented matrix with E and F side by side and \|\cdot\|_F is the Frobenius norm, the square root of the sum of the squares of all entries in a matrix and so equivalently the square root of the sum of squares of the lengths of the rows or columns of the matrix.

This can be rewritten as

[(X+E)\ (Y+F)] \begin{bmatrix} B \\ -I_k \end{bmatrix} = 0,

where I_k is the k×k identity matrix. The goal is then to find [E\ F] that reduces the rank of [X\ Y] by k.
Define the singular value decomposition of the augmented matrix [X\ Y],

[X\ Y] = [U_X\ U_Y] \begin{bmatrix} \Sigma_X & 0 \\ 0 & \Sigma_Y \end{bmatrix} \begin{bmatrix} V_{XX} & V_{XY} \\ V_{YX} & V_{YY} \end{bmatrix}^*,

where V is partitioned into blocks corresponding to the shape of X and Y.

Using the Eckart–Young theorem, the approximation minimising the norm of the error is such that matrices U and V are unchanged, while the k smallest singular values are replaced with zeroes. That is, we want

[(X+E)\ (Y+F)] = [U_X\ U_Y] \begin{bmatrix} \Sigma_X & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_{XX} & V_{XY} \\ V_{YX} & V_{YY} \end{bmatrix}^*,

so by linearity,

[E\ F] = -[U_X\ U_Y] \begin{bmatrix} 0 & 0 \\ 0 & \Sigma_Y \end{bmatrix} \begin{bmatrix} V_{XX} & V_{XY} \\ V_{YX} & V_{YY} \end{bmatrix}^*.

We can then remove blocks from the U and Σ matrices, simplifying to

[E\ F] = -U_Y \Sigma_Y \begin{bmatrix} V_{XY} \\ V_{YY} \end{bmatrix}^*.

This provides E and F so that

[(X+E)\ (Y+F)] \begin{bmatrix} V_{XY} \\ V_{YY} \end{bmatrix} = 0.

Now if V_{YY} is nonsingular, which is not always the case (note that the behavior of TLS when V_{YY} is singular is not well understood yet), we can then right multiply both sides by -V_{YY}^{-1} to bring the bottom block of the right matrix to the negative identity, giving

[(X+E)\ (Y+F)] \begin{bmatrix} -V_{XY} V_{YY}^{-1} \\ -I_k \end{bmatrix} = 0,

and so

B = -V_{XY} V_{YY}^{-1}.

A naive GNU Octave implementation of this is:

function B = tls(X, Y)
  [m n]   = size(X);             % n is the width of X (X is m by n)
  Z       = [X Y];               % Z is X augmented with Y.
  [U S V] = svd(Z, 0);           % find the SVD of Z.
  VXY     = V(1:n, 1+n:end);     % Take the block of V consisting of the first n rows and the n+1 to last column
  VYY     = V(1+n:end, 1+n:end); % Take the bottom-right block of V.
  B       = -VXY / VYY;
end

The way described above of solving the problem, which requires that the matrix V_{YY} is nonsingular, can be slightly extended by the so-called classical TLS algorithm.

Computation

The standard implementation of the classical TLS algorithm is available through Netlib (see also the references below). All modern implementations, based, for example, on solving a sequence of ordinary least squares problems, approximate the matrix introduced by Van Huffel and Vandewalle (the notation for it varies in the literature). It is worth noting, however, that this is not the TLS solution in many cases.

Non-linear model

For non-linear systems similar reasoning shows that the normal equations for an iteration cycle can be written as

J^T M^{-1} J\, \Delta\boldsymbol\beta = J^T M^{-1}\, \Delta\mathbf{y},

where J is the Jacobian matrix.

Geometrical interpretation

When the independent variable is error-free a residual represents the "vertical" distance between the observed data point and the fitted curve (or surface). In total least squares a residual represents the distance between a data point and the fitted curve measured along some direction. In fact, if both variables are measured in the same units and the errors on both variables are the same, then the residual represents the shortest distance between the data point and the fitted curve, that is, the residual vector is perpendicular to the tangent of the curve. For this reason, this type of regression is sometimes called two dimensional Euclidean regression (Stein, 1983) or orthogonal regression.

Scale invariant methods

A serious difficulty arises if the variables are not measured in the same units. First consider measuring distance between a data point and the line: what are the measurement units for this distance? If we consider measuring distance based on Pythagoras' Theorem then it is clear that we shall be adding quantities measured in different units, which is meaningless. Secondly, if we rescale one of the variables e.g., measure in grams rather than kilograms, then we shall end up with different results (a different line). To avoid these problems it is sometimes suggested that we convert to dimensionless variables—this may be called normalization or standardization. However, there are various ways of doing this, and these lead to fitted models which are not equivalent to each other.
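To make both the SVD recipe and the units problem concrete, the following NumPy sketch implements B = -V_XY V_YY^{-1} and refits a straight line after rescaling the independent variable. The synthetic data, the helper name, and the naive handling of the intercept column (the perturbation is applied to the constant column along with the rest of X) are assumptions of this example:

import numpy as np

def tls_fit(X, Y):
    """Total least squares via SVD of the augmented matrix [X Y].
    Returns B minimising ||[E F]||_F subject to (X + E) B = Y + F."""
    n = X.shape[1]
    V = np.linalg.svd(np.hstack([X, Y]))[2].T   # columns: right singular vectors
    Vxy, Vyy = V[:n, n:], V[n:, n:]
    return -Vxy @ np.linalg.inv(Vyy)

rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x_true + 0.05 * rng.standard_normal(50)   # noisy dependent variable
x = x_true + 0.05 * rng.standard_normal(50)               # independent variable also noisy

# Fit y = a + b*x, then refit with x rescaled by 1000 (kilograms -> grams).
B_kg = tls_fit(np.column_stack([np.ones(50), x]), y[:, None])
B_g  = tls_fit(np.column_stack([np.ones(50), 1000.0 * x]), y[:, None])

# Generally not equal: the TLS fit is unit-dependent.
print(B_kg[1, 0], 1000.0 * B_g[1, 0])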
One approach is to normalize by known (or estimated) measurement precision thereby minimizing the Mahalanobis distance from the points to the line, providing a maximum-likelihood solution; the unknown precisions could be found via analysis of variance. In short, total least squares does not have the property of units-invariance—i.e. it is not scale invariant. For a meaningful model we require this property to hold. A way forward is to realise that residuals (distances) measured in different units can be combined if multiplication is used instead of addition. Consider fitting a line: for each data point the product of the vertical and horizontal residuals equals twice the area of the triangle formed by the residual lines and the fitted line. We choose the line which minimizes the sum of these areas. Nobel laureate Paul Samuelson proved in 1942 that, in two dimensions, it is the only line expressible solely in terms of the ratios of standard deviations and the correlation coefficient which (1) fits the correct equation when the observations fall on a straight line, (2) exhibits scale invariance, and (3) exhibits invariance under interchange of variables. This solution has been rediscovered in different disciplines and is variously known as standardised major axis (Ricker 1975, Warton et al., 2006), the reduced major axis, the geometric mean functional relationship (Draper and Smith, 1998), least products regression, diagonal regression, line of organic correlation, and the least areas line (Tofallis, 2002). Tofallis (2015, 2023) has extended this approach to deal with multiple variables. The calculations are simpler than for total least squares as they only require knowledge of covariances, and can be computed using standard spreadsheet functions. See also Regression dilution Deming regression, a special case with two predictors and independent errors. Errors-in-variables model Gauss-Helmert model Linear regression Least squares Principal component analysis Principal component regression Notes References Others I. Hnětynková, M. Plešinger, D. M. Sima, Z. Strakoš, and S. Van Huffel, The total least squares problem in AX ≈ B. A new classification with the relationship to the classical works. SIMAX vol. 32 issue 3 (2011), pp. 748–770. Available as a preprint. M. Plešinger, The Total Least Squares Problem and Reduction of Data in AX ≈ B. Doctoral Thesis, TU of Liberec and Institute of Computer Science, AS CR Prague, 2008. Ph.D. Thesis C. C. Paige, Z. Strakoš, Core problems in linear algebraic systems. SIAM J. Matrix Anal. Appl. 27, 2006, pp. 861–875. S. Van Huffel and P. Lemmerling, Total Least Squares and Errors-in-Variables Modeling: Analysis, Algorithms and Applications. Dordrecht, The Netherlands: Kluwer Academic Publishers, 2002. S. Jo and S. W. Kim, Consistent normalized least mean square filtering with noisy data matrix. IEEE Trans. Signal Process., vol. 53, no. 6, pp. 2112–2123, Jun. 2005. R. D. DeGroat and E. M. Dowling, The data least squares problem and channel equalization. IEEE Trans. Signal Process., vol. 41, no. 1, pp. 407–411, Jan. 1993. S. Van Huffel and J. Vandewalle, The Total Least Squares Problems: Computational Aspects and Analysis. SIAM Publications, Philadelphia PA, 1991. T. Abatzoglou and J. Mendel, Constrained total least squares, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP’87), Apr. 1987, vol. 12, pp. 1485–1488. P. de Groen An introduction to total least squares, in Nieuw Archief voor Wiskunde, Vierde serie, deel 14, 1996, pp. 237–253 arxiv.org. 
G. H. Golub and C. F. Van Loan, An analysis of the total least squares problem. SIAM J. on Numer. Anal., 17, 1980, pp. 883–893. Perpendicular Regression Of A Line at MathPages A. R. Amiri-Simkooei and S. Jazaeri Weighted total least squares formulated by standard least squares theory, in Journal of Geodetic Science, 2 (2): 113–124, 2012 . Applied mathematics Curve fitting Least squares Regression models
Total least squares
[ "Mathematics" ]
2,431
[ "Applied mathematics" ]
971,549
https://en.wikipedia.org/wiki/Mathematical%20Alphanumeric%20Symbols
Mathematical Alphanumeric Symbols is a Unicode block comprising styled forms of Latin and Greek letters and decimal digits that enable mathematicians to denote different notions with different letter styles. The letters in various fonts often have specific, fixed meanings in particular areas of mathematics. By providing uniformity over numerous mathematical articles and books, these conventions help to read mathematical formulas. They may also be used to differentiate between concepts that share a letter in a single problem.

Unicode now includes many such symbols (in the range U+1D400–U+1D7FF). The rationale behind this is that it enables design and usage of special mathematical characters (fonts) that include all necessary properties to differentiate them from other alphanumerics, e.g. in mathematics an italic "𝐴" can have a different meaning from a roman letter "A". Unicode originally included a limited set of such letter forms in its Letterlike Symbols block before completing the set of Latin and Greek letter forms in this block beginning in version 3.1.

Unicode expressly recommends that these characters not be used in general text as a substitute for presentational markup; the letters are specifically designed to be semantically different from each other. Unicode does include a set of normal serif letters in the set. Still, they have found some usage on social media, for example by people who want a stylized user name, and in email spam, in an attempt to bypass filters.

All these letter shapes may be manipulated with MathML's attribute mathvariant.

The introduction date of some of the more commonly used symbols can be found in the Table of mathematical symbols by introduction date.

Tables of styled letters and digits

These tables show all styled forms of Latin and Greek letters, symbols and digits in the Unicode Standard, with the normal unstyled forms of these characters shown with a cyan background (the basic unstyled letters may be serif or sans-serif depending upon the font). The styled characters are mostly located in the Mathematical Alphanumeric Symbols block, but the 24 characters in cells with a pink background are located in the Letterlike Symbols block; for example, ℛ (script capital R) is at U+211B rather than the expected U+1D4AD, which is reserved. In the code charts for the Unicode Standard, the reserved code points corresponding to the pink cells are annotated with the name and code point of the correct character.

There are a few characters which have names that suggest that they should belong in the tables below, but in fact do not, because their official character names are misnomers:

U+2113 ℓ SCRIPT SMALL L: "despite its character name, this symbol is derived from a special italicized version of the small letter l". It has various other specialized uses, such as a liter symbol and as the azimuthal quantum number symbol.
U+2118 ℘ SCRIPT CAPITAL P is a symbol for Weierstrass's elliptic function. It is officially aliased as WEIERSTRASS ELLIPTIC FUNCTION.

Latin letters

The Unicode values of the characters in the tables below, except those shown with a pink background or with index values of '–', are obtained by adding the base values from the "U+" header row to the index values in the left column (both values are hexadecimal).

Greek letters and symbols

The Unicode values of the characters in the tables below, except those shown with a pink background or with index values of '–', are obtained by adding the base values from the "U+" header row to the index values in the left column (both values are hexadecimal).
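To make the base-plus-index scheme concrete, here is a minimal Python sketch. It is illustrative only: the function name and its limited coverage are assumptions of this example, not an official API. It handles the Mathematical Bold range, which has no reserved gaps; a complete converter would also need an exceptions table for code points that live in the Letterlike Symbols block, such as the ℛ case noted above.

MATH_BOLD_CAPITAL = 0x1D400   # 𝐀 .. 𝐙
MATH_BOLD_SMALL   = 0x1D41A   # 𝐚 .. 𝐳 (26 code points after the capitals)
MATH_BOLD_DIGIT   = 0x1D7CE   # 𝟎 .. 𝟗

def to_math_bold(text: str) -> str:
    """Map ASCII letters and digits onto the Mathematical Bold alphabet."""
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(MATH_BOLD_CAPITAL + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(MATH_BOLD_SMALL + ord(ch) - ord("a")))
        elif "0" <= ch <= "9":
            out.append(chr(MATH_BOLD_DIGIT + ord(ch) - ord("0")))
        else:
            out.append(ch)  # leave everything else untouched
    return "".join(out)

# The script alphabet, by contrast, is NOT contiguous: U+1D4AD is reserved
# and the actual character lives in Letterlike Symbols, as the text notes.
SCRIPT_EXCEPTIONS = {0x1D4AD: 0x211B}  # expected code point -> ℛ

print(to_math_bold("Example 42"))  # 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝟒𝟐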
Digits The Unicode values of the characters in the tables below are obtained by adding the hexadecimal base values from the "U+" header row to the index values in the left column. Glyph variants Variation selectors may be used to specify chancery (U+FE00) vs roundhand (U+FE01) forms, if a computer font is available that supports them: The remainder of the set is at Letterlike Symbols. Chart for the Mathematical Alphanumeric Symbols block History The following Unicode-related documents record the purpose and process of defining specific characters in the Mathematical Alphanumeric Symbols block: See also Greek letters used in mathematics, science, and engineering List of mathematical uses of Latin letters Mathematical operators and symbols in Unicode OpenType fonts feature mgrk Mathematical notation References Mathematical notation Unicode blocks Mathematical symbols
Mathematical Alphanumeric Symbols
[ "Mathematics" ]
880
[ "Symbols", "Mathematical symbols", "nan" ]
971,594
https://en.wikipedia.org/wiki/Reductive%20dechlorination
In organochlorine chemistry, reductive dechlorination describes any chemical reaction which cleaves the covalent bond between carbon and chlorine via reductants, to release chloride ions. Many modalities have been implemented, depending on the application. Reductive dechlorination is often applied to remediation of chlorinated pesticides or dry cleaning solvents. It is also used occasionally in the synthesis of organic compounds, e.g. as pharmaceuticals.

Chemical

Dechlorination is a well-researched reaction in organic synthesis, although it is not often used. Usually stoichiometric amounts of dechlorinating agent are required. In one classic application, the Ullmann reaction, chloroarenes are coupled to give biphenyls. For example, the activated substrate 2-chloronitrobenzene is converted into 2,2'-dinitrobiphenyl with a copper-bronze alloy. Zerovalent iron effects similar reactions. Organophosphorus(III) compounds effect gentle dechlorinations; the products are alkenes and phosphorus(V) compounds. Alkaline earth metals and zinc are used for more difficult dechlorinations; with zinc, the side product is zinc chloride.

Biological

Vicinal reduction involves the removal of two halogen atoms that are adjacent on the same alkane or alkene, leading to the formation of an additional carbon-carbon bond.

Biological reductive dechlorination is often effected by certain species of bacteria. Sometimes the bacterial species are highly specialized for organochlorine respiration and even for a particular electron donor, as in the case of Dehalococcoides and Dehalobacter. In other examples, such as Anaeromyxobacter, bacteria have been isolated that are capable of using a variety of electron donors and acceptors, with a subset of the possible electron acceptors being organochlorines. These reactions depend on vitamin B12, a molecule that tends to be very aggressively sought after by some microbes.

Bioremediation using reductive dechlorination

Reductive dechlorination of chlorinated organic molecules is relevant to bioremediation of polluted groundwater. One example is the organochloride respiration of the dry-cleaning solvent tetrachloroethylene and the engine-degreasing solvent trichloroethylene by anaerobic bacteria, often members of the candidate genus Dehalococcoides. Bioremediation of these chloroethenes can occur when other microorganisms at the contaminated site provide H2 as a natural byproduct of various fermentation reactions. The dechlorinating bacteria use this H2 as their electron donor, ultimately replacing chlorine atoms in the chloroethenes with hydrogen atoms via hydrogenolytic reductive dechlorination. This process can proceed in the soil provided the availability of organic electron donors and the appropriate strains of Dehalococcoides. Trichloroethylene is dechlorinated via dichloroethene and vinyl chloride to ethylene.

A chloroform-degrading reductive dehalogenase enzyme has been reported in a Dehalobacter member. The chloroform reductive dehalogenase, termed TmrA, was found to be transcriptionally up-regulated in response to chloroform respiration, and the enzyme can be obtained both in native and recombinant forms.

Reductive dechlorination has been investigated for bioremediation of polychlorinated biphenyls (PCBs) and chlorofluorocarbons (CFCs). The reductive dechlorination of PCBs is performed by anaerobic microorganisms that utilize the PCB as an electron sink. The result of this is the reduction of the "meta" site, followed by the "para" site, and finally the "ortho" site, leading to a dechlorinated product.
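The stepwise hydrogenolysis described above for the chloroethenes can be summarized with formal reaction equations. This is standard stoichiometry rather than a mechanism specific to any one organism; each step replaces one carbon-chlorine bond with a carbon-hydrogen bond, consuming H2 and releasing HCl:

\begin{align}
\text{C}_2\text{Cl}_4 + \text{H}_2 &\longrightarrow \text{C}_2\text{HCl}_3 + \text{HCl} && \text{(tetrachloroethylene to trichloroethylene)}\\
\text{C}_2\text{HCl}_3 + \text{H}_2 &\longrightarrow \text{C}_2\text{H}_2\text{Cl}_2 + \text{HCl} && \text{(trichloroethylene to dichloroethene)}\\
\text{C}_2\text{H}_2\text{Cl}_2 + \text{H}_2 &\longrightarrow \text{C}_2\text{H}_3\text{Cl} + \text{HCl} && \text{(dichloroethene to vinyl chloride)}\\
\text{C}_2\text{H}_3\text{Cl} + \text{H}_2 &\longrightarrow \text{C}_2\text{H}_4 + \text{HCl} && \text{(vinyl chloride to ethylene)}
\end{align}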
In the Hudson River, microorganisms effect dechlorination over the course of weeks. The resulting monochlorobiphenyls and dichlorobiphenyls are less toxic and more easily degradable by aerobic organisms than their more highly chlorinated counterparts. The prominent drawback that has prevented widespread use of reductive dechlorination for PCB detoxification, and has limited its feasibility, is that dechlorination rates are slower than desired. It has been suggested that bioaugmentation with DF-1 can lead to enhanced reductive dechlorination rates of PCBs through stimulation of dechlorination. Additionally, high inorganic carbon levels do not affect dechlorination rates in low-PCB-concentration environments.

Reductive dechlorination also applies to CFCs. Reductive dechlorination of CFCs, including CFC-11, CFC-113, chlorotrifluoroethene, CFC-12, HCFC-141b, and tetrachloroethene, occurs through hydrogenolysis. Reduction rates of CFCs mirror theoretical rates calculated from the Marcus theory of electron transfer.

Electrochemical

The electrochemical reduction of chlorinated chemicals such as chlorinated hydrocarbons and chlorofluorocarbons can be carried out by electrolysis in appropriate solvents, such as mixtures of water and alcohol. Key components of an electrolytic cell include the electrode types, the electrolyte medium, and the use of mediators. The cathode transfers electrons to the molecule, which decomposes to produce the corresponding hydrocarbon (hydrogen atoms substitute the original chlorine atoms) and free chloride ions. For instance, the reductive dechlorination of CFCs is complete and produces several hydrofluorocarbons (HFCs) plus chloride.

Hydrodechlorination (HDC) is a type of reductive dechlorination that is useful due to its high reaction rate. It uses H2 as the reducing agent over a range of potential electrode reactors and catalysts. Amongst the catalysts studied, including precious metals (platinum, palladium, rhodium), transition metals (niobium and molybdenum), and metal oxides, precious metals are generally preferred. For example, palladium often adopts a lattice structure that can easily embed hydrogen gas, making the hydrogen more readily available for reaction. However, a common issue for HDC is catalyst deactivation and the need for regeneration. As catalysts are depleted, chlorine poisoning of surfaces can sometimes be observed, and on rare occasions metal sintering and leaching occur as a result.

Electrochemical reduction can be performed at ambient pressure and temperature, which does not disrupt microbial environments or raise extra costs for remediation. The process of dechlorination can be highly controlled to avoid toxic chlorinated intermediates and byproducts such as dioxins from incineration. Trichloroethylene and perchloroethylene are common treatment targets, which can be converted directly to environmentally benign products. The chlorine in chlorinated alkenes and alkanes is converted to hydrogen chloride, which is then neutralized with a base. However, even though this method has many potential benefits, research has mainly been conducted in laboratory settings, with only a few field studies, so the approach is not yet well established.

References

Environmental chemistry
Green chemistry
Biological engineering
Reductive dechlorination
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
1,583
[ "Green chemistry", "Biological engineering", "Chemical engineering", "Environmental chemistry", "nan" ]
971,656
https://en.wikipedia.org/wiki/Tpoint
TPoint is computer software that implements a mathematical model of conditions leading to errors in telescope pointing and tracking. The model can then be used in a telescope control system to correct the pointing and tracking. Such errors are typically caused by mechanical or structural defects. For example, TPoint can analyze and compensate for systematic errors such as polar misalignment, mechanical and optical non-orthogonality, lack of roundness in telescope mounting drive gears, as well as for flexure of the mounting caused by gravity. TPoint is in use on the majority of professional telescopes worldwide, including among many others the Anglo-Australian Telescope, Keck Observatory, Gemini Observatory and the Large Binocular Telescope. It has significantly improved the performance and efficiency of telescope operation and has had an especially strong impact on the development of automated and robotic telescopes. TPoint is also widely used by amateur astronomers. Software Bisque distributes TPoint as an add-on to TheSkyX Serious Astronomer Edition and TheSkyX Professional; this version is used to improve the pointing on amateur telescopes. History TPoint was invented and developed by Patrick Wallace. It grew out of work he and John Straede performed at the Anglo-Australian Telescope (AAT) between 1974 and 1980 using Interdata 70 computers. In the early 1980s, it was ported to the Digital Equipment Corporation VAX running under the VMS operating system and between 1990 and 1992 was also ported to run on the PC/MS-DOS platform as well as various UNIX platforms. A TPoint add-on is available for TheSkyX Serious Astronomer Edition and TheSkyX Professional Edition from Software Bisque, and it runs under Linux, macOS and Microsoft Windows. External links TPoint official webpage Software Bisque TPoint page Use of TPoint on Atacama Large Millimeter/submillimeter Array antenna prototypes Use of TPoint on the Green Bank 100m radio telescope References Telescopes Numerical software
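As an illustration of the kind of model such software fits, a classical six-term equatorial pointing model can be written as below. The term names follow the widely used TPoint-style conventions (IH/ID index errors, CH collimation error, NP axis non-perpendicularity, MA/ME polar-axis misalignment components), but the exact term set and the sign conventions shown here are assumptions of this sketch rather than a specification of TPoint itself:

\begin{align}
\Delta h &= IH + CH \sec\delta + NP \tan\delta - MA \cos h \tan\delta + ME \sin h \tan\delta,\\
\Delta\delta &= ID + MA \sin h + ME \cos h,
\end{align}

where h is the hour angle and \delta the declination. Flexure, gear, and harmonic terms for mount irregularities are then added on top of these purely geometric terms.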
Tpoint
[ "Astronomy", "Mathematics" ]
390
[ "Numerical software", "Mathematical software", "Telescopes", "Astronomical instruments" ]
971,682
https://en.wikipedia.org/wiki/Iptables
iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules. The filters are organized in a set of tables, which contain chains of rules for how to treat network traffic packets. Different kernel modules and programs are currently used for different protocols; iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames.

iptables requires elevated privileges to operate and must be executed by user root, otherwise it fails to function. On most Linux systems, iptables is installed as /usr/sbin/iptables and documented in its man pages, which can be opened using man iptables when installed. It may also be found in /sbin/iptables, but since iptables is more like a service rather than an "essential binary", the preferred location remains /usr/sbin/iptables.

The term iptables is also commonly used to inclusively refer to the kernel-level components. x_tables is the name of the kernel module carrying the shared code portion used by all four modules that also provides the API used for extensions; subsequently, Xtables is more or less used to refer to the entire firewall (v4, v6, arp, and eb) architecture.

iptables superseded ipchains, and the successor of iptables is nftables, which was released on 19 January 2014 and was merged into the Linux kernel mainline in kernel version 3.13.

Overview

iptables allows the system administrator to define tables containing chains of rules for the treatment of packets. Each table is associated with a different kind of packet processing. Packets are processed by sequentially traversing the rules in chains. A rule in a chain can cause a goto or jump to another chain, and this can be repeated to whatever level of nesting is desired. (A jump is like a "call", i.e. the point that was jumped from is remembered.) Every network packet arriving at or leaving from the computer traverses at least one chain.

There are five predefined chains (mapping to the five available Netfilter hooks), though a table may not have all chains. Predefined chains have a policy, for example DROP, which is applied to the packet if it reaches the end of the chain. The system administrator can create as many other chains as desired. These chains have no policy; if a packet reaches the end of the chain it is returned to the chain which called it. A chain may be empty.

PREROUTING: Packets will enter this chain before a routing decision is made.
INPUT: Packet is going to be locally delivered. It does not have anything to do with processes having an opened socket; local delivery is controlled by the "local-delivery" routing table: ip route show table local.
FORWARD: All packets that have been routed and were not for local delivery will traverse this chain.
OUTPUT: Packets sent from the machine itself will visit this chain.
POSTROUTING: Routing decision has been made. Packets enter this chain just before handing them off to the hardware.

A chain does not exist by itself; it belongs to a table. There are three tables: nat, filter, and mangle. Unless preceded by the option -t, an iptables command concerns the filter table by default. For example, the command iptables -L -v -n, which shows some chains and their rules, is equivalent to iptables -t filter -L -v -n. To show the chains of table nat, use the command iptables -t nat -L -v -n.

Each rule in a chain contains the specification of which packets it matches.
It may also contain a target (used for extensions) or verdict (one of the built-in decisions). As a packet traverses a chain, each rule in turn is examined. If a rule does not match the packet, the packet is passed to the next rule. If a rule does match the packet, the rule takes the action indicated by the target/verdict, which may result in the packet being allowed to continue along the chain or may not. Matches make up the large part of rulesets, as they contain the conditions packets are tested for. These can happen for about any layer in the OSI model, as with e.g. the --mac-source and -p tcp --dport parameters, and there are also protocol-independent matches, such as -m time. The packet continues to traverse the chain until either a rule matches the packet and decides the ultimate fate of the packet, for example by calling one of the ACCEPT or DROP, or a module returning such an ultimate fate; or a rule calls the RETURN verdict, in which case processing returns to the calling chain; or the end of the chain is reached; traversal either continues in the parent chain (as if RETURN was used), or the base chain policy, which is an ultimate fate, is used. Targets also return a verdict like ACCEPT (NAT modules will do this) or DROP (e.g. the REJECT module), but may also imply CONTINUE (e.g. the LOG module; CONTINUE is an internal name) to continue with the next rule as if no target/verdict was specified at all. Userspace utilities Front-ends There are numerous third-party software applications for iptables that try to facilitate setting up rules. Front-ends in textual or graphical fashion allow users to click-generate simple rulesets; scripts usually refer to shell scripts (but other scripting languages are possible too) that call iptables or (the faster) iptables-restore with a set of predefined rules, or rules expanded from a template with the help of a simple configuration file. Linux distributions commonly employ the latter scheme of using templates. Such a template-based approach is practically a limited form of a rule generator, and such generators also exist in standalone fashion, for example, as PHP web pages. Such front-ends, generators and scripts are often limited by their built-in template systems and where the templates offer substitution spots for user-defined rules. Also, the generated rules are generally not optimized for the particular firewalling effect the user wishes, as doing so will likely increase the maintenance cost for the developer. Users who reasonably understand iptables and want their ruleset optimized are advised to construct their own ruleset. 
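To make the script/template approach concrete, here is a minimal Python sketch that emits a ruleset in iptables-restore format. The rule options used (chain policies, -m conntrack --ctstate, --dport) are standard iptables syntax, but the script itself, its port list, and its policy choices are illustrative assumptions rather than recommended defaults:

#!/usr/bin/env python3
"""Emit a minimal stateful IPv4 ruleset in iptables-restore format."""

ALLOWED_TCP_PORTS = [22, 80, 443]   # hypothetical services to expose

rules = [
    "*filter",
    ":INPUT DROP [0:0]",            # default policy: drop unsolicited input
    ":FORWARD DROP [0:0]",
    ":OUTPUT ACCEPT [0:0]",
    "-A INPUT -i lo -j ACCEPT",     # always allow loopback traffic
    "-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT",
]
rules += [
    f"-A INPUT -p tcp --dport {port} -m conntrack --ctstate NEW -j ACCEPT"
    for port in ALLOWED_TCP_PORTS
]
rules.append("COMMIT")

print("\n".join(rules))             # pipe into iptables-restore to load atomically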
Other notable tools FireHOL – a shell script wrapping iptables with an easy-to-understand plain-text configuration file NuFW – an authenticating firewall extension to Netfilter Shorewall – a gateway/firewall configuration tool, making it possible to use easier rules and have them mapped to iptables See also nftables NPF (firewall) PF (firewall) ipfirewall (ipfw) ipfilter XDP ipchains Uncomplicated Firewall (firewall) References Literature External links The netfilter/iptables project Web page The netfilter/iptables documentation page (outdated) Detecting and deceiving network scans countermeasures against nmap The IPTables ManPage for syntax help Iptables Tutorial 1.2.2 by Oskar Andreasson IPTABLES: The Default Linux Firewall Acceleration of iptables Linux Packet Filtering using GPGPU Command-line software Firewall software Linux security software Linux kernel features Linux-only free software Free software programmed in C
Iptables
[ "Technology" ]
1,569
[ "Command-line software", "Computing commands" ]
971,691
https://en.wikipedia.org/wiki/Plane%20curve
In mathematics, a plane curve is a curve in a plane that may be a Euclidean plane, an affine plane or a projective plane. The most frequently studied cases are smooth plane curves (including piecewise smooth plane curves), and algebraic plane curves. Plane curves also include the Jordan curves (curves that enclose a region of the plane but need not be smooth) and the graphs of continuous functions.

Symbolic representation

A plane curve can often be represented in Cartesian coordinates by an implicit equation of the form

f(x, y) = 0

for some specific function f. If this equation can be solved explicitly for y or x, that is, rewritten as

y = g(x) or x = h(y)

for specific functions g or h, then this provides an alternative, explicit, form of the representation. A plane curve can also often be represented in Cartesian coordinates by a parametric equation of the form

x = x(t), y = y(t)

for specific functions x(t) and y(t).

Plane curves can sometimes also be represented in alternative coordinate systems, such as polar coordinates that express the location of each point in terms of an angle and a distance from the origin.

Smooth plane curve

A smooth plane curve is a curve in a real Euclidean plane and is a one-dimensional smooth manifold. This means that a smooth plane curve is a plane curve which "locally looks like a line", in the sense that near every point, it may be mapped to a line by a smooth function. Equivalently, a smooth plane curve can be given locally by an equation

f(x, y) = 0,

where f is a smooth function, and the partial derivatives ∂f/∂x and ∂f/∂y are never both 0 at a point of the curve.

Algebraic plane curve

An algebraic plane curve is a curve in an affine or projective plane given by one polynomial equation

f(x, y) = 0

(or F(x, y, z) = 0, where F is a homogeneous polynomial, in the projective case).

Algebraic curves have been studied extensively since the 18th century.

Every algebraic plane curve has a degree, the degree of the defining equation, which is equal, in case of an algebraically closed field, to the number of intersections of the curve with a line in general position. For example, the circle given by the equation x^2 + y^2 = 1 has degree 2.

The non-singular plane algebraic curves of degree 2 are called conic sections, and their projective completions are all isomorphic to the projective completion of the circle x^2 + y^2 = 1 (that is the projective curve of equation x^2 + y^2 - z^2 = 0). The plane curves of degree 3 are called cubic plane curves and, if they are non-singular, elliptic curves. Those of degree 4 are called quartic plane curves.

Examples

Numerous examples of plane curves are shown in Gallery of curves and listed at List of curves. The algebraic curves of degree 1 or 2 are shown here (an algebraic curve of degree less than 3 is always contained in a plane):

See also

Algebraic geometry
Convex curve
Differential geometry
Osgood curve
Plane curve fitting
Projective varieties
Skew curve

References

External links

Euclidean geometry
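As a small worked check of the smoothness criterion above (illustrative; it assumes the SymPy library and is not part of the article), one can verify that the circle x^2 + y^2 = 1 has no points where both partial derivatives vanish:

import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**2 + y**2 - 1
fx, fy = sp.diff(f, x), sp.diff(f, y)

# Points where both partial derivatives vanish:
singular = sp.solve([sp.Eq(fx, 0), sp.Eq(fy, 0)], [x, y], dict=True)
print(singular)                    # [{x: 0, y: 0}]

# The only candidate (0, 0) does not lie on the curve, so the circle
# satisfies the smoothness condition everywhere on the curve:
print(f.subs(singular[0]) == 0)    # False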
Plane curve
[ "Mathematics" ]
565
[ "Planes (geometry)", "Euclidean plane geometry", "Plane curves" ]
971,922
https://en.wikipedia.org/wiki/Contrast%20effect
A contrast effect is the enhancement or diminishment, relative to normal, of perception, cognition or related performance as a result of successive (immediately previous) or simultaneous exposure to a stimulus of lesser or greater value in the same dimension. (Here, normal perception, cognition or performance is that which would be obtained in the absence of the comparison stimulus—i.e., one based on all previous experience.)

Perception example: A neutral gray target will appear lighter or darker than it does in isolation when immediately preceded by, or simultaneously compared to, respectively, a dark gray or light gray target.

Cognition example: A person will appear more or less attractive than that person does in isolation when immediately preceded by, or simultaneously compared to, respectively, a less or more attractive person.

Performance example: A laboratory rat will work faster, or slower, during a stimulus predicting a given amount of reward when that stimulus and reward are immediately preceded by, or alternated with, respectively, different stimuli associated with either a lesser or greater amount of reward.

Types

Simultaneous contrast

The oldest reference to simultaneous contrast in the scientific literature is by the 11th-century physicist Ibn al-Haytham, who describes spots of paint on a white background appearing almost black and, conversely, paler than their true colour on black. He also describes that a leaf-green paint may appear clearer and younger on dark blue and darker and older on yellow.

Johann Wolfgang von Goethe writes in 1810 that a grey image on a black background appears much brighter than the same on white, and Johannes Peter Müller notes the same in 1838, and also that a strip of grey on a brightly coloured field appears to be tinted ever so slightly in the contrasting colour.

The impact of the surrounding field on colour perception has been a subject of ongoing research since. It has been found that the size of the surrounding field has an impact, as does the separation between colour and surround, similarity of chromaticity, luminance difference and the structure of the surround.

There has been some debate over the degree to which simultaneous contrast is a physiological process caused by the connections of neurons in the visual cortex, or whether it is a psychological effect. Both appear to have some effect. A possible source of the effect is neurons in the V4 area that have inhibitory connections to neighboring cells. The most likely evolutionary rationale for this effect is that it enhances edges in the visual field, thus facilitating the recognition of shapes and objects.

Successive contrast

Successive contrast occurs when the perception of currently viewed stimuli is modulated by previously viewed stimuli. In a classic demonstration, a red disk and a green disk are shown above two identical orange disks. Staring at the dot in the centre of one of the top two coloured disks and then looking at the dot in the centre of the corresponding lower disk makes the two lower disks briefly appear to have different colours, though in reality their colour is identical.

Metacontrast and paracontrast

Metacontrast and paracontrast involve both time and space. When one half of a circle is lit for 10 milliseconds (ms), it is at its maximal intensity.
If the other half is displayed at the same location (but 20–50 ms later), there is a mutual inhibition: the left side is darkened by the right half (metacontrast), and the center may be completely obliterated. At the same time, there is a slight darkening of the right side due to the first stimulus (paracontrast).

Domains

The contrast effect was noted by the 17th century philosopher John Locke, who observed that lukewarm water can feel hot or cold depending on whether the hand touching it was previously in hot or cold water. In the early 20th century, Wilhelm Wundt identified contrast as a fundamental principle of perception, and since then the effect has been confirmed in many different areas. Contrast effects can shape not only visual qualities like color and brightness, but other kinds of perception, including the perception of weight. Whether a piece of music is perceived as good or bad can depend on whether the music heard before it was unpleasant or pleasant. For the effect to work, the objects being compared need to be similar to each other: a television reporter can seem to shrink when interviewing a tall basketball player, but not when standing next to a tall building.

Furthermore, the contrast effect has been argued to apply to the foreign policies of states. For example, African countries have increasingly looked to China and India as opposed to the US, the EU and the World Bank, because these Asian states have highlighted their lack of "interference" and "conditionality" in exchange for foreign aid and FDI.

See also

Assimilation and contrast effects
Checker shadow illusion
Chubb illusion
Less-is-better effect and distinction bias
Negative (Positive) contrast effect
List of cognitive biases

References

External links

WebExhibits - Simultaneous Contrast
Example of simultaneous contrast with simple gray objects
Interactive Classic Black and White example of simultaneous contrast
Pioneer article which explicates the relevance of the contrast effect to foreign policies of countries
Case Examples of the Contrast Effect

Perception
Cognition
Cognitive biases
Vision
Psychophysics
Contrast effect
[ "Physics" ]
1,058
[ "Psychophysics", "Applied and interdisciplinary physics" ]
971,961
https://en.wikipedia.org/wiki/Plant%20reproductive%20morphology
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction. Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators. Use of sexual terminology Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm). In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (including cycads, conifers, flowering plants, etc.) the sporophyte is the dominant generation; the obvious visible plant, whether a small herb or a large tree, is the sporophyte, and the gametophyte is very small. In bryophytes and ferns, the gametophytes are independent, free-living plants, while in seed plants, each female megagametophyte, and the megaspore that gives rise to it, is hidden within the sporophyte and is entirely dependent on it for nutrition. Each male gametophyte typically consists of two to four cells enclosed within the protective wall of a pollen grain. The sporophyte of a flowering plant is often described using sexual terms (e.g. "female" or "male") . For example, a sporophyte that produces spores that give rise only to male gametophytes may be described as "male", even though the sporophyte itself is asexual, producing only spores. Similarly, flowers produced by the sporophyte may be described as "unisexual" or "bisexual", meaning that they give rise to either one sex of gametophyte or both sexes of the gametophyte. Flowering plants Basic flower morphology The flower is the characteristic structure concerned with sexual reproduction in flowering plants (angiosperms). Flowers vary enormously in their structure (morphology). A perfect flower, like that of Ranunculus glaberrimus shown in the figure, has a calyx of outer sepals and a corolla of inner petals and both male and female sex organs. The sepals and petals together form the perianth. 
Next inwards there are numerous stamens, which produce pollen grains, each containing a microscopic male gametophyte. Stamens may be called the "male" parts of a flower and collectively form the androecium. Finally in the middle there are carpels, which at maturity contain one or more ovules, and within each ovule is a tiny female gametophyte. Carpels may be called the "female" parts of a flower and collectively form the gynoecium. Each carpel in Ranunculus species is an achene that produces one ovule, which when fertilized becomes a seed. If the carpel contains more than one seed, as in Eranthis hyemalis, it is called a follicle. Two or more carpels may be fused together to varying degrees and the entire structure, including the fused styles and stigmas may be called a pistil. The lower part of the pistil, where the ovules are produced, is called the ovary. It may be divided into chambers (locules) corresponding to the separate carpels. Variations A perfect flower has both stamens and carpels, and is described as "bisexual" or "hermaphroditic". A unisexual flower is one in which either the stamens or the carpels are missing, vestigial or otherwise non-functional. Each flower is either staminate (having only functional stamens and thus male), or carpellate or pistillate (having only functional carpels and thus female). If separate staminate and carpellate flowers are always found on the same plant, the species is described as monoecious. If separate staminate and carpellate flowers are always found on different plants, the species is described as dioecious. A 1995 study found that about 6% of angiosperm species are dioecious, and that 7% of genera contain some dioecious species. Members of the birch family (Betulaceae) are examples of monoecious plants with unisexual flowers. A mature alder tree (Alnus species) produces long catkins containing only male flowers, each with four stamens and a minute perianth, and separate stalked groups of female flowers, each without a perianth. (See the illustration of Alnus serrulata.) Most hollies (members of the genus Ilex) are dioecious. Each plant produces either functionally male flowers or functionally female flowers. In Ilex aquifolium (see the illustration), the common European holly, both kinds of flower have four sepals and four white petals; male flowers have four stamens, female flowers usually have four non-functional reduced stamens and a four-celled ovary. Since only female plants are able to set fruit and produce berries, this has consequences for gardeners. Amborella represents the first known group of flowering plants to separate from their common ancestor. It too is dioecious; at any one time, each plant produces either flowers with functional stamens but no carpels, or flowers with a few non-functional stamens and a number of fully functional carpels. However, Amborella plants may change their "sex" over time. In one study, five cuttings from a male plant produced only male flowers when they first flowered, but at their second flowering three switched to producing female flowers. In extreme cases, almost all of the parts present in a complete flower may be missing, so long as at least one carpel or one stamen is present. This situation is reached in the female flowers of duckweeds (Lemna), which consist of a single carpel, and in the male flowers of spurges (Euphorbia) which consist of a single stamen. A species such as Fraxinus excelsior, the common ash of Europe, demonstrates one possible kind of variation. 
Ash flowers are wind-pollinated and lack petals and sepals. Structurally, the flowers may be bisexual, consisting of two stamens and an ovary, or may be male (staminate), lacking a functional ovary, or female (carpellate), lacking functional stamens. Different forms may occur on the same tree, or on different trees.
The Asteraceae (sunflower family), with close to 22,000 species worldwide, have highly modified inflorescences made up of flowers (florets) collected together into tightly packed heads. Heads may have florets of one sexual morphology – all bisexual, all carpellate or all staminate (when they are called homogamous) – or may have mixtures of two or more sexual forms (heterogamous). Thus goatsbeards (Tragopogon species) have heads of bisexual florets, like other members of the tribe Cichorieae, whereas marigolds (Calendula species) generally have heads with the outer florets bisexual and the inner florets staminate (male).
Like Amborella, some plants undergo sex-switching. For example, Arisaema triphyllum (Jack-in-the-pulpit) expresses sexual differences at different stages of growth: smaller plants produce all or mostly male flowers; as plants grow larger over the years, the male flowers are replaced by more female flowers on the same plant. Arisaema triphyllum thus covers a multitude of sexual conditions in its lifetime: nonsexual juvenile plants, young plants that are all male, larger plants with a mix of both male and female flowers, and large plants that have mostly female flowers. In other plant populations, plants produce more male flowers early in the year, and more female flowers as they bloom later in the growing season.
Terminology
The complexity of the morphology of flowers and its variation within populations has led to a rich terminology.
Androdioecious: having male flowers on some plants, bisexual ones on others.
Androecious: having only male flowers (the male of a dioecious population); producing pollen but no seed.
Androgynous: see bisexual.
Androgynomonoecious: having male, female, and bisexual flowers on the same plant; also called trimonoecious.
Andromonoecious: having both bisexual and male flowers on the same plant.
Bisexual: each flower of each individual has both male and female structures, i.e. it combines both sexes in one structure. Flowers of this kind are called perfect, having both stamens and carpels. Other terms used for this condition are androgynous, hermaphroditic, monoclinous and synoecious.
Dichogamous: having sexes developing at different times; producing pollen when the stigmas are not receptive, either protandrous or protogynous. This promotes outcrossing by limiting self-pollination. Some dichogamous plants have bisexual flowers, others have unisexual flowers.
Diclinous: see unisexual.
Dioecious: having either only male or only female flowers. No individual plant of the population produces both pollen and ovules. (From the Greek for "two households".)
Gynodioecious: having hermaphrodite flowers and female flowers on separate plants.
Gynoecious: having only female flowers (the female of a dioecious population); producing seed but not pollen.
Gynomonoecious: having both bisexual and female flowers on the same plant.
Hermaphroditic: see bisexual.
Homogamous: male and female sexes reaching maturity in synchrony; producing mature pollen while the stigma is receptive.
Imperfect: (of flowers) having some normally present parts undeveloped, e.g. lacking stamens. See also unisexual.
Monoclinous: see bisexual.
Monoecious: in the commoner narrow sense of the term, referring to plants with unisexual flowers which occur on the same individual; in the broad sense of the term, also including plants with bisexual flowers. Individuals bearing separate flowers of both sexes at the same time are called simultaneously or synchronously monoecious, and individuals that bear flowers of one sex at one time are called consecutively monoecious. (From the Greek monos "single" + oikia "house".)
Perfect: (of flowers) see bisexual.
Polygamodioecious: mostly dioecious, but with either a few flowers of the opposite sex or a few bisexual flowers on the same plant.
Polygamomonoecious: see polygamous; or, mostly monoecious, but also partly polygamous.
Polygamous: having male, female, and bisexual flowers on the same plant (also called polygamomonoecious or trimonoecious); or, having bisexual and at least one of male and female flowers on the same plant.
Protandrous: (of dichogamous plants) having male parts of flowers developed before female parts, e.g. having flowers that function first as male and then change to female, or producing pollen before the stigmas of the same plant are receptive. (Protoandrous is also used.)
Protogynous: (of dichogamous plants) having female parts of flowers developed before male parts, e.g. having flowers that function first as female and then change to male, or producing pollen after the stigmas of the same plant are receptive.
Subandroecious: having mostly male flowers, with a few female or bisexual flowers.
Subdioecious: having some individuals in otherwise dioecious populations with flowers that are not clearly male or female. The population produces normally male or female plants with unisexual flowers, but some plants may have bisexual flowers, some both male and female flowers, and others some combination thereof, such as female and bisexual flowers. The condition is thought to represent a transition between bisexuality and dioecy.
Subgynoecious: having mostly female flowers, with a few male or bisexual flowers.
Synoecious: see bisexual.
Trimonoecious: see polygamous and androgynomonoecious.
Trioecious: having male, female and bisexual flowers on different plants.
Unisexual: having either functionally male or functionally female flowers. This condition is also called diclinous, incomplete or imperfect.
Outcrossing
Outcrossing, cross-fertilization or allogamy, in which offspring are formed by the fusion of the gametes of two different plants, is the most common mode of reproduction among higher plants. About 55% of higher plant species reproduce in this way. An additional 7% are partially cross-fertilizing and partially self-fertilizing (autogamy). About 15% produce gametes but are principally self-fertilizing, lacking significant outcrossing. Only about 8% of higher plant species reproduce exclusively by non-sexual means. These include plants that reproduce vegetatively by runners or bulbils, or which produce seeds without embryo fertilization (apomixis). The selective advantage of outcrossing appears to be the masking of deleterious recessive mutations. The primary mechanism flowering plants use to ensure outcrossing is a genetic one known as self-incompatibility. Various aspects of floral morphology promote allogamy.
In plants with bisexual flowers, the anthers and carpels may mature at different times, plants being protandrous (with the anthers maturing first) or protogynous (with the carpels maturing first). Monoecious species, with unisexual flowers on the same plant, may produce male and female flowers at different times.
Dioecy, the condition of having unisexual flowers on different plants, necessarily results in outcrossing, and probably evolved for this purpose. However, "dioecy has proven difficult to explain simply as an outbreeding mechanism in plants that lack self-incompatibility". Resource-allocation constraints may be important in the evolution of dioecy: for example, with wind-pollination, separate male flowers arranged in a catkin that vibrates in the wind may provide better pollen dispersal. In climbing plants, rapid upward growth may be essential, and resource allocation to fruit production may be incompatible with rapid growth, thus giving an advantage to delayed production of female flowers. Dioecy has evolved separately in many different lineages, and monoecy in a plant lineage correlates with the evolution of dioecy, suggesting that dioecy can evolve more readily from plants that already produce separate male and female flowers.
See also
Apomixis
Vegetative reproduction
Botany
Evolution of sexual reproduction
Flower
Evolutionary history of plants: Flowers
Flower: Development
Meiosis
References
Citations
Sources
Further reading
External links
Images of sexual systems in flowering plants at bioimages.vanderbilt.edu
Plant morphology
Plant reproductive morphology
[ "Biology" ]
3,506
[ "Behavior", "Plants", "Plant sexuality", "Plant morphology", "Sexuality" ]
971,997
https://en.wikipedia.org/wiki/Primordial%20sandwich
The concept of the primordial sandwich was proposed by the chemist Günter Wächtershäuser to describe the possible origins of the first cell membranes, and, therefore, the first cell. According to the two main models of abiogenesis, the RNA world and the iron-sulfur world, prebiotic processes existed before the development of the cell membrane. The difficulty with this idea, however, is that it is almost impossible to create a complex molecule such as RNA (or even its molecular precursor, pre-RNA) directly from simple organic molecules dissolved in a global ocean (Joyce, 1991): without some mechanism to concentrate these organic molecules, they would be too dilute to undergo the chemical reactions needed to transform them from simple organic molecules into genuine prebiotic molecules. To address this problem, Wächtershäuser proposed that concentration might occur by adsorption onto the surfaces of minerals. With the accumulation of enough amphipathic molecules (such as phospholipids), a bilayer will self-organize, and any molecules caught inside become the contents of a liposome, concentrated enough to allow chemical reactions to transform organic molecules into prebiotic molecules. Although developed for his own iron-sulfur world model, the idea of the primordial sandwich has also been adopted by some adherents of the RNA world model.
See also
Primordial sea
Primordial soup
Notes
External links
Minerals and the Origin of Life
Astrobiology, Volume 2, Number 4, "The First Cell Membranes"
Origin of life
Primordial sandwich
[ "Biology" ]
316
[ "Biological hypotheses", "Origin of life" ]
972,005
https://en.wikipedia.org/wiki/Autofrettage
Autofrettage is a work-hardening process in which a thick-walled pressure vessel is subjected to enormous pressure, causing internal portions of the part to yield plastically, resulting in internal compressive residual stresses once the pressure is released. The goal of autofrettage is to increase the pressure-carrying capacity of the final product. Inducing residual compressive stresses into materials can also increase their resistance to stress corrosion cracking; that is, non-mechanically assisted cracking that occurs when a material is placed in a corrosive environment in the presence of tensile stress. The technique is commonly used in the manufacture of high-pressure pump cylinders, warship and other gun barrels, and fuel injection systems for diesel engines. Because of the work hardening it induces, the process also marginally enhances the wear life of the barrel. While autofrettage will induce some work hardening, that is not the primary mechanism of strengthening.
The start point is a single steel tube of internal diameter slightly less than the desired calibre. The tube is subjected to internal pressure of sufficient magnitude to enlarge the bore, and in the process the inner layers of the metal are stretched in tension beyond their elastic limit. This means that the inner layers have been stretched to a point where the steel is no longer able to return to its original shape once the internal pressure has been removed. Although the outer layers of the tube are also stretched, the degree of internal pressure applied during the process is such that they are not stretched beyond their elastic limit. This is possible because the stress distribution through the walls of the tube is non-uniform: its maximum value occurs in the metal adjacent to the source of pressure, decreasing markedly towards the outer layers of the tube. The strain is proportional to the stress applied within the elastic limit; therefore the expansion at the outer layers is less than at the bore. Because the outer layers remain elastic, they attempt to return to their original shape; however, they are prevented from doing so completely by the new permanently stretched inner layers. The effect is that the inner layers of the metal are put under compression by the outer layers in much the same way as though an outer layer of metal had been shrunk on, as with a built-up gun. This can be better understood by treating the thick-walled tube as a multilayer tube.
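The non-uniform elastic stress distribution described above can be made concrete with the classical Lamé formulas for a linear-elastic thick-walled cylinder (a textbook idealization, valid only below yield, so it describes the early, elastic stage of pressurization; the symbols a, b and p_i are introduced here purely for illustration). For inner radius a, outer radius b and internal pressure p_i, the radial and hoop stresses at radius r are

$$\sigma_r(r) = \frac{p_i\,a^2}{b^2 - a^2}\left(1 - \frac{b^2}{r^2}\right), \qquad \sigma_\theta(r) = \frac{p_i\,a^2}{b^2 - a^2}\left(1 + \frac{b^2}{r^2}\right).$$

The hoop stress $\sigma_\theta$ is largest at the bore (r = a) and falls off as $1/r^2$ towards the outer surface, which is why yielding in autofrettage begins at the inner layers while the outer layers remain elastic.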
The next step is to subject the compressively strained inner layers to a low-temperature treatment (LTT), which results in the elastic limit being raised to at least the autofrettage pressure employed in the first stage of the process. Finally, the elasticity of the barrel can be tested by applying internal pressure once more, but this time care is taken to ensure that the inner layers are not stretched beyond their new elastic limit. The end result is an inner surface of the gun barrel with a residual compressive stress able to counterbalance the tensile stress that would be induced when the gun is discharged. In addition, the material has a higher tensile strength due to work hardening.
Early in the history of artillery, it was observed that, after firing a small number of rounds, the bore of a new gun slightly enlarges and hardens. Historically, the first type of autofrettage avant la lettre was the mandrelling of bronze gun barrels, invented and patented in 1869 by Samuel B. Dean of the South Boston Iron Company. It found no use on the American continent, however, and was copied without a license by Franz von Uchatius in the mid-1870s. It found some use in several European countries lacking a steel industry, but was quickly displaced by cast steel everywhere except Austria-Hungary, which stuck to the obsolete technology until WWI and therefore had its artillery handicapped.
The problem of strengthening steel gun barrels using the same principle was tackled by the French colonial artillery colonel Louis Frédéric Gustave Jacob, who suggested in 1907 that they be pressurized hydraulically, and who coined the term "autofrettage". In 1913, Schneider-Creusot made a 14 cm L/50 naval gun by such a method and applied for a patent. However, implementing such a technique on an industrial scale required numerical methods to approximate the solutions of transcendental equations of plastic deformation, which were developed in France during WWI by the mathematics professor Maurice d'Ocagne and the Schneider engineer Louis Potin.
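The transcendental character of these equations can be sketched with the standard textbook model of an elastic-perfectly plastic cylinder obeying the Tresca yield criterion, for which the pressure needed to push the plastic front to radius c is p(c) = σ_Y[ln(c/a) + (b² − c²)/(2b²)]. The Python sketch below inverts this relation numerically; the simplified model, the function names and the example numbers are illustrative assumptions, not the historical Schneider/Potin procedure.

import math

def autofrettage_pressure(c, a, b, sigma_y):
    # Pressure needed to advance the elastic-plastic boundary to radius c
    # in an elastic-perfectly plastic, Tresca-yielding thick-walled cylinder
    # (classical textbook result; returns the same units as sigma_y).
    return sigma_y * (math.log(c / a) + (b ** 2 - c ** 2) / (2 * b ** 2))

def plastic_radius(p, a, b, sigma_y, tol=1e-9):
    # Invert p(c) = p by bisection. p(c) increases monotonically on [a, b],
    # so bisection converges for any pressure between first yield (c = a)
    # and full plasticity (c = b).
    if not autofrettage_pressure(a, a, b, sigma_y) <= p <= autofrettage_pressure(b, a, b, sigma_y):
        raise ValueError("pressure outside the partial-autofrettage range")
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if autofrettage_pressure(mid, a, b, sigma_y) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: 60 mm bore, 120 mm outside diameter, 900 MPa yield
# strength, autofrettaged at 550 MPa internal pressure.
c = plastic_radius(550e6, a=0.030, b=0.060, sigma_y=900e6)
print(f"plastic front at r = {c * 1000:.2f} mm")  # roughly 44 mm

Because the pressure is a monotonic function of the plastic radius, this simple bisection stands in for the kind of hand-computed approximations that d'Ocagne and Potin developed.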
In modern practice, a slightly oversized die is pushed slowly through the barrel by a hydraulically driven ram. The amounts of initial underbore and die oversize are calculated to strain the material around the bore past its elastic limit into plastic deformation. A residual compressive stress remains on the barrel's inner surface, even after final honing and rifling.
The technique has also been applied to the expansion of tubular components downhole in oil and gas wells. The method has been patented by the Norwegian oil service company Meta, which uses it to connect concentric tubular components with the sealing and strength properties outlined above.
The term autofrettage is also used to describe a step in the manufacture of composite overwrapped pressure vessels (COPVs), in which the liner is expanded by plastic deformation inside the composite overwrap.
See also
Shot peening, which also induces compressive residual stresses
Built-up gun, an older method for strengthening gun barrels
References
External links
White Paper Autofrettage
Metalworking
Firearm construction
Autofrettage
[ "Engineering" ]
1,078
[ "Firearm construction", "Mechanical engineering" ]
972,019
https://en.wikipedia.org/wiki/Zigzag
A zigzag is a pattern made up of small corners at variable angles (though constant within a given zigzag), tracing a path between two parallel lines; it can be described as both jagged and fairly regular. In geometry, this pattern is described as a skew apeirogon. From the point of view of symmetry, a regular zigzag can be generated from a simple motif, like a line segment, by repeated application of a glide reflection. Although the origin of the word is unclear, its first printed appearances were in French-language books and ephemera of the late 17th century.
Examples of zigzags
The trace of a triangle wave or a sawtooth wave is a zigzag (a worked formula is sketched at the end of this section). Pinking shears are designed to cut cloth or paper with a zigzag edge, to lessen fraying. In sewing, a zigzag stitch is a machine stitch in a zigzag pattern. The zigzag arch is an architectural embellishment used in Islamic, Byzantine, Norman and Romanesque architecture. In seismology, earthquakes are recorded as a zigzag line traced by a seismograph.
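As a concrete illustration of the triangle-wave example above, one closed form for a unit-amplitude zigzag of period p is (a sketch only; the period and amplitude are arbitrary normalization choices, and several equivalent formulas exist):

$$f(x) = 2\left|\frac{x}{p} - \left\lfloor \frac{x}{p} + \frac{1}{2} \right\rfloor\right|$$

The graph rises and falls linearly between the parallel lines y = 0 and y = 1, with a corner at every half-period. It also makes the glide-reflection symmetry mentioned above explicit: f(x + p/2) = 1 − f(x), i.e. translating by half a period and reflecting across the mid-line y = 1/2 maps the zigzag onto itself.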
See also
Serpentine shape
Infinite skew polygon
References
Bibliography
Patterns
Line (geometry)
Zigzag
[ "Mathematics" ]
258
[ "Line (geometry)" ]
972,026
https://en.wikipedia.org/wiki/Gunn%E2%80%93Peterson%20trough
In astronomical spectroscopy, the Gunn–Peterson trough is a feature of the spectra of quasars due to the presence of neutral hydrogen in the intergalactic medium (IGM). The trough is characterized by suppression of electromagnetic emission from the quasar at wavelengths less than that of the Lyman-alpha line at the redshift of the emitted light. This effect was originally predicted in 1965 by James E. Gunn and Bruce Peterson, and independently by Peter Scheuer.
First detection
For over three decades after the prediction, no objects had been found distant enough to show the Gunn–Peterson trough. It was not until 2001, with the discovery of a quasar with a redshift z = 6.28 by Robert Becker and others using data from the Sloan Digital Sky Survey, that a Gunn–Peterson trough was finally observed. The article also included quasars at redshifts of z = 5.82 and z = 5.99, and, while each of these exhibited absorption at wavelengths on the blue side of the Lyman-alpha transition, there were numerous spikes in flux as well. The flux of the quasar at z = 6.28, however, was effectively zero beyond the Lyman-alpha limit, meaning that the neutral hydrogen fraction in the IGM must have been larger than ~10⁻³.
Evidence for reionization
The discovery of the trough in the z = 6.28 quasar, and the absence of the trough in quasars detected at redshifts just below z = 6, presented strong evidence that the hydrogen in the universe underwent a transition from neutral to ionized around z = 6. After recombination, the universe was expected to be neutral until the first objects in the universe started emitting light and energy which would reionize the surrounding IGM. Because the scattering cross section with neutral hydrogen of photons with energies near the Lyman-alpha limit is very high, even a small fraction of neutral hydrogen makes the optical depth of the IGM high enough to cause the observed suppression of emission (a quantitative sketch is given below). Although the ratio of neutral to ionized hydrogen may not have been particularly high, the low flux observed past the Lyman-alpha limit indicates that the universe was in the final stages of reionization.
Following the first release of data from the WMAP spacecraft in 2003, the determination by Becker that the end of reionization occurred at z ≈ 6 appeared to conflict with estimates made from the WMAP measurement of the electron column density. However, the WMAP III data released in 2006 seem to be in much better agreement with the limits on reionization placed by observation of the Gunn–Peterson trough.
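The extreme sensitivity of the trough to trace amounts of neutral hydrogen can be quantified with the standard Gunn–Peterson optical depth (a sketch in Gaussian units; the numerical evaluation quoted below depends on the adopted cosmological parameters):

$$\tau_{\mathrm{GP}}(z) = \frac{\pi e^{2}}{m_{e}\,c}\, f_{\alpha}\,\lambda_{\alpha}\, \frac{n_{\mathrm{HI}}(z)}{H(z)}$$

where e and m_e are the electron charge and mass, $f_\alpha$ and $\lambda_\alpha$ are the oscillator strength and rest wavelength of the Lyman-alpha transition, $n_{\mathrm{HI}}$ is the proper number density of neutral hydrogen, and H(z) is the Hubble parameter. For commonly assumed cosmologies this evaluates to roughly $\tau_{\mathrm{GP}} \sim 10^{5}\, x_{\mathrm{HI}}\,[(1+z)/7]^{3/2}$, where $x_{\mathrm{HI}}$ is the neutral fraction; even $x_{\mathrm{HI}} \sim 10^{-4}$ gives $\tau \gg 1$ and saturated absorption, which is why a fully dark trough yields only a lower limit on the neutral fraction.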
See also
Lyman-alpha forest
References
Concepts in astrophysics
Physical cosmology
Space plasmas
Gunn–Peterson trough
[ "Physics", "Astronomy" ]
574
[ "Space plasmas", "Astronomical sub-disciplines", "Concepts in astrophysics", "Theoretical physics", "Astrophysics", "Physical cosmology" ]