Dataset columns:
id: int64 (range 39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (range 3 to 71.8k)
subcategories: list (length 0 to 30)
9,205,037
https://en.wikipedia.org/wiki/Plant%20Resources%20of%20Tropical%20Africa
Plant Resources of Tropical Africa, known by its acronym PROTA, is a retired NGO and interdisciplinary documentation programme active between 2000 and 2013. PROTA produced a large database and various publications about Africa's useful plants. Purpose PROTA was concerned with increasing accessibility to traditional knowledge and scientific information about many types of African plants including: dyes & tannins, fibers, medicinal plants, stimulants, tropical timbers, vegetables, tubers (carbohydrates), oil seeds, ornamental plants, forage plants, and cereals. PROTA supported the sustainable use of these useful plants to preserve culture, reduce poverty and hunger, and respond to climate change. To this end, PROTA's overall goal was to synthesize diverse, published information for approximately 8,000 plants used in tropical Africa, then make it widely accessible through an online database and various book publications. In other words, PROTA was dedicated to making the useful plant biodiversity of tropical Africa better known and respected. PROTA's database and various publications are considered unique in their epistemological approach because they were compiled as much from obscure publications as from peer-reviewed and popular literature, gathered throughout Africa and Europe. In this way PROTA publications include Africa-centered references and perspectives, which is a major focus of the broader discipline of African studies. PROTA was also an international NGO, registered in Nairobi, Kenya, that used information from its publications to structure a number of community projects involving over 800 farmers in Benin, Botswana, Burkina Faso, Kenya, and Madagascar. Some of PROTA's other goals included: to promote the sustainable use of plants to the public and private sectors; to facilitate socially inclusive, collaborative research about African plants from experts in Africa and elsewhere; to make research about African plants more accessible; to support intellectual property rights related to the commercial use of African plants; to help graduate students and researchers identify research gaps; and to provide research-driven educational materials to vocational and farmer education programs in Africa. Current status Funding PROTA retired in 2013 while facing large operational costs after its funding expired. At the point of its retirement, about 50% of PROTA's encyclopedia series was complete. During its operation, PROTA received funds from the European Union's Directorate-General for International Partnerships, the Netherlands Ministry of Foreign Affairs, the Netherlands Ministry of Agriculture, the Netherlands Organization for Scientific Research, Wageningen University, the COFRA Foundation, the International Tropical Timber Organization, and the Bill & Melinda Gates Foundation. Since the program's retirement there have been ongoing efforts to fundraise and preserve PROTA's various publications and online database. Preservation As of 2022, the PROTA database Prota4U is still online in an archive-like capacity at Wageningen University, with articles written in English and French. Information in the PROTA database can also be accessed at the website Pl@ntUse, though in a different format. As of 2019, Prota4U had about 1,500 daily visitors and 500,000 unique visitors each year. All of PROTA's encyclopedia volumes have been digitized and are available for free as open access publications from the Wageningen University library. It is uncertain how much of the PROTA Recommends Series has been digitized. 
Partners The programme operated through an international network of institutional partners and collaborators of the PROTA Foundation. PROTA had representatives in 20 African countries and dual headquarters in Wageningen, Netherlands and Nairobi, Kenya. PROTA also had regional offices with institutional partners in Burkina Faso, France, Gabon, Ghana, Madagascar, Malawi, Uganda, and the United Kingdom. In Wageningen, PROTA also partnered with the EU-funded Technical Centre for Agricultural and Rural Cooperation (CTA) and the now-retired Agromisa Foundation to help distribute its various publications. Agromisa and PROTA were considered suitable partners because they were both committed to bridging the gap between scientific knowledge and traditional knowledge and were open access publishers of books with practical information about sustainable agriculture for small farmers in Africa. Publications PROTA Handbook Encyclopedia Series Description The PROTA Handbook Series is a large illustrated encyclopedia series of utility plant species found in Tropical Africa. PROTA's retirement in 2013 made it unfeasible to complete the encyclopedia series; therefore, only 9 volumes were ever published. In 2002, the series was projected to contain 16 volumes with entries for 7,000-8,000 species. It was estimated that the series would include 2,500 botanical line drawings and 2,500 species distribution maps in about 11,000 pages. The existing PROTA encyclopedia volumes have been described metaphorically in the Kew Bulletin as a treasure trove of information. The Food and Agriculture Organization and Biodiversity International described PROTA 2: Vegetables as a detailed collection of ethnobotanical knowledge. Some PROTA encyclopedias have received more than 376 citations. PROTA Encyclopedia editors included individuals such as G.J. Grubben, who had led projects commissioned by the United Nations International Board for Plant Genetic Resources; and Ameenah Gurib-Fakim, a biodiversity scientist who later became the President of Mauritius. Though organized by species according to conventional botanical nomenclature, PROTA encyclopedias also include vernacular names in major African languages such as Swahili where information was available. PROTA continued to distribute its encyclopedias after the organization's retirement. As of 2019, more than 30,000 PROTA encyclopedias had been printed in English and French and were distributed widely with the help of the Technical Centre for Agricultural and Rural Cooperation (CTA) and the now-retired Agromisa Foundation. Several PROTA encyclopedias are also available at the International Union for Conservation of Nature (IUCN) Headquarters' Library in Switzerland. Content Species articles in the PROTA encyclopedia series were written by hundreds of authors from around the world and in Africa, and cover a range of information including: plant uses geographic distribution by African country cultivation information wild-collection data production and international trade data chemical properties botanical characteristics ecological information conservation status Digitization status Currently, all published PROTA encyclopedia volumes have been digitized and are available as open access publications from the Wageningen University library. Several encyclopedias in the series were planned but not started at the time of PROTA's retirement in 2013. 
PROTA Recommends Series Other PROTA Publications Reception PROTA 2: Vegetables According to Google Scholar, PROTA 2: Vegetables has been widely cited, receiving more than 367 citations as of October 2022. Nigerian ethnobotanists reported in 2004 that PROTA 2: Vegetables included contributions from over 100 authors and detailed cultivation practices for 280 indigenous vegetables. A 2004 report from the University of Ile-Ife in Nigeria referenced PROTA 2: Vegetables to emphasize the importance of indigenous vegetables such as Solanum macrocarpon and Telfairia occidentalis in providing employment opportunities in informal economies and in incorporating indigenous vegetables into plant breeding programs. A 2004 book review in the Kew Bulletin regarded PROTA 2: Vegetables as being well cited, with over 1500 references. A 2004 book review in the Nordic Journal of Botany commented that PROTA 2: Vegetables "should be found on the bookshelves of every institution dealing with tropical botany, nutrition, health, and agriculture". A 2004 book review from the Food and Agriculture Organization (FAO) and Biodiversity International said that PROTA 2: Vegetables brought a needed addition to the literature about vegetable resources in Africa, and that many of the vegetables described in the volume are unique to Africa. The book review also commented that PROTA 2: Vegetables was particularly useful for its detailed collection of ethnobotanical knowledge about both domesticated and wild-harvested vegetables in Africa. PROTA 3: Dyes and Tannins A 2006 book review of PROTA 3: Dyes and tannins published in Economic Botany noted that "the information contained in this volume highlights a number of lesser known species, and is a rich source of interesting information for anyone working at the interface of ethnobotany and domestication, and as such is a must have." About 64% of the 24 authors of PROTA 3: Dyes and tannins were from Africa. PROTA 11: Medicinal Plants A 2014 book review of PROTA 11(2): Medicinal Plants noted that about 30% of the contributions were written by African ethnobotanists. PROTA4U Database The PROTA4U Database was conceived to improve access to information in PROTA's printed publications. The PROTA web database PROTA4U is a combination of PROTA’s highly standardized expert-validated review articles (PROTAbase) and yet-to-be-validated ‘starter kits’ for all other useful plants. These ‘starter kits’ are pre-filled with basic information from PROTA’s databases SPECIESLIST (important synonyms, uses, basic sources of information) and AFRIREFS (‘grey’ literature). Furthermore, the records contain the results of a meta-analysis from a large collection of agricultural and botanical databases, conducted successfully in cooperation with the ICON Group International. The websites, which allowed their databases to be harvested, are properly acknowledged in the ‘starter kits’. Debate Some believe that the 2010–2012 world food price crisis and 2011 East Africa drought led to widespread interest in supporting research for intensive farming of popular food crops instead of traditional, diversified local plant resources which were the focus of PROTA. During this time, responses to these large crises in the international finance and philanthropy communities may have shifted interest away from ethnobotanical research programs like PROTA. 
This raises questions about the role of traditional, diversified local plant resources in the study of food security, economic development, biodiversity conservation, and the preservation of cultural heritage and traditional knowledge. See also Afrotropical realm Ecology of Africa International Tropical Timber Organization International Center for Ethnobotanical Education, Research and Service Neglected and underutilized crops African Languages Traditional African Medicine Pharmacopeia Bioprospecting Convention on Biological Diversity Hamilton's Pharmacopeia The useful plants of the Dutch East Indies Others Traditional Knowledge Digital Library References External Resources Refer to the Wageningen University library for open access versions of some PROTA publications Refer to the Agromisa Foundation for open access publications about sustainable agriculture with a focus on small-farmers in Africa Flora of Africa Ethnobotany African studies Biodiversity
Plant Resources of Tropical Africa
[ "Biology" ]
2,108
[ "Biodiversity" ]
9,205,662
https://en.wikipedia.org/wiki/Crepidoma
In classical Greek architecture, crepidoma is the foundation of one or more steps on which the superstructure of a building is erected. Usually the crepidoma has three levels, especially in Doric temples. However, exceptions are common: For example, the Heraion at Olympia features only two steps, and the Olympeion at Agrigento, Sicily has four. Each level of crepidoma typically decreases in size incrementally going upwards, forming a series of steps along all or some sides of the building. The crepidoma rests on the euthynteria or foundation, which historically was constructed of locally available stone for the sake of economy. The topmost level of the crepidoma is called the stylobate and it is the platform for the columns. The lower levels of the crepidoma are called the stereobates. The step-like arrangement of the crepidoma may extend around all four sides of a structure like a temple, for example, on the Parthenon. On some temples, the steps extend only across the front façade, or they may wrap around the sides for a short distance, a detail that is called a return, as seen at the Sanctuary of Despoina at Lycosoura. It is common for the hidden portions of each level of the stereobate to be of a lower grade of material than the exposed elements of the steps and the stylobate; each higher level of the crepidoma typically covers the clamps used to hold the stones of the lower level together. The lower margins of each level of the crepidoma blocks are often cut back in a series of two or three steps to create shadow lines; this decorative technique is termed a reveal. References External links Architectural elements Ancient Greek architecture
Crepidoma
[ "Technology", "Engineering" ]
382
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
9,206,360
https://en.wikipedia.org/wiki/Orthostates
In the context of classical Greek architecture, orthostates are squared stone blocks much greater in height than depth that are usually built into the lower portion of a wall. They are so called because they seem to "stand upright" rather than to lie on their sides. In other contexts the English term is usually orthostat. It is typical in Greek architecture for pairs of orthostates to form the thickness of a wall, one serving as the inner and the other serving as the outer face of the wall. Above a course of orthostates, it is common to lay a course of stones spanning the width of the wall and joining its two faces (a binder course). The term has been generalized for use in the description of the architecture of many cultures. In Hittite and Assyrian sculpture, orthostats are often intricately carved. The term may be used more generally of other upright-standing stones, including megalithic menhirs. See also Glossary of architecture References Robertson, D. S. (1929) Handbook of Greek and Roman Architecture. Cambridge: Cambridge University Press. Architectural elements Ancient Greek architecture
Orthostates
[ "Technology", "Engineering" ]
232
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
9,206,417
https://en.wikipedia.org/wiki/Eagle%20Academy%20%28Belle%20Glade%29
Eagle Academy is a behavior modification facility. The facility is located at 33800 State Road 80, Belle Glade, Florida. The academy was featured in the show High School Boot Camp. The target group is "at-risk" girls and boys between 13 and 16 years of age. They have to be residents of Palm Beach County. They also need to have no felonies on their police record. Discipline Eagle Academy strictly enforces discipline: a button not done up, a word out of place or any non-conformity could result in 10 push-ups or 20 jumping jacks or more, depending on the infraction. The program claims to have helped about 80 percent of the detainees to lead a productive life. Under the 2013 budget, the Eagle Academy was closed, saving $4.5M a year. In the news One of the drill instructors was arrested on the charge of 'battery touch or strike' and 'fraud, false statement.' Another staff member was arrested and charged with trying to cover up the incident. In February 2008, another staff member was charged with abuse. In August 2008, a January 2008 graduate who was suspected of stealing a car was shot. Between January and June 2010, recruits DeLaney, Gousse and Dew were the first ever to perform music that was written at the Academy, DeLaney being the first ever to have a guitar there. References External links Homepage of the facility Secret prisons for teens (Watch organization) about the facility. High School Boot Camp on IMDB Behavior modification Public high schools in Florida Boarding schools in Florida Schools in Palm Beach County, Florida Special schools in the United States
Eagle Academy (Belle Glade)
[ "Biology" ]
326
[ "Behavior modification", "Human behavior", "Behavior", "Behaviorism" ]
9,206,499
https://en.wikipedia.org/wiki/Metal%E2%80%93insulator%20transition
Metal–insulator transitions are transitions of a material from a metal (material with good electrical conductivity of electric charges) to an insulator (material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature, pressure or, in the case of a semiconductor, doping. History The basic distinction between metals and insulators was proposed by Hans Bethe, Arnold Sommerfeld and Felix Bloch in 1928-1929. It distinguished between conducting metals (with partially filled bands) and nonconducting insulators. However, in 1937 Jan Hendrik de Boer and Evert Verwey reported that many transition-metal oxides (such as NiO) with a partially filled d-band were poor conductors, often insulating. In the same year, the importance of the electron-electron correlation was stated by Rudolf Peierls. Since then, these materials as well as others exhibiting a transition between a metal and an insulator have been extensively studied, e.g. by Sir Nevill Mott, after whom the insulating state is named the Mott insulator. The first metal-insulator transition to be found was the Verwey transition of magnetite in the 1940s. Theoretical description The classical band structure of solid state physics predicts the Fermi level to lie in a band gap for insulators and in the conduction band for metals, which means metallic behavior is seen for compounds with partially filled bands. However, some compounds have been found which show insulating behavior even for partially filled bands. This is due to the electron-electron correlation, since electrons cannot be seen as noninteracting. Mott considers a lattice model with just one electron per site. Without taking the interaction into account, each site could be occupied by two electrons, one with spin up and one with spin down. Due to the interaction the electrons would then feel a strong Coulomb repulsion, which Mott argued splits the band in two. Having one electron per site fills the lower band while the upper band remains empty, which suggests the system becomes an insulator. This interaction-driven insulating state is referred to as a Mott insulator. The Hubbard model is one simple model commonly used to describe metal-insulator transitions and the formation of a Mott insulator. Elementary mechanisms Metal–insulator transitions (MIT) and models for approximating them can be classified based on the origin of their transition. Mott transition: The most common transition, arising from intense electron-electron correlation. Mott-Hubbard transition: An extension incorporating the Hubbard model, approaching the transition from the correlated paramagnetic state. Brinkman-Rice transition: Approaching the transition from the non-interacting metallic state, where each orbital is half-filled. Dynamical mean-field theory: A theory that accommodates both Mott-Hubbard and Brinkman-Rice models of the transition. Peierls transition: On some occasions, the lattice itself through electron-phonon interactions can give rise to a transition. An example of a Peierls insulator is the blue bronze K0.3MoO3, which undergoes a transition at T = 180 K. Anderson transition: When insulating behavior in metals arises from distortions and lattice defects. Polarization catastrophe The polarization catastrophe model describes the transition of a material from an insulator to a metal. 
This model considers the electrons in a solid to act as oscillators, and the condition for this transition to occur is determined by the number of oscillators per unit volume of the material. Since every oscillator has a frequency (ω0), we can describe the dielectric function of a solid by an oscillator expression, equation (1) (a reconstruction is sketched below), where ε(ω) is the dielectric function, N is the number of oscillators per unit volume, ω0 is the fundamental oscillation frequency, m is the oscillator mass, and ω is the excitation frequency. For a material to be a metal, the excitation frequency (ω) must be zero by definition, which then gives the static dielectric constant εs. If we rearrange equation (1) to isolate the number of oscillators per unit volume, we get the critical concentration of oscillators (Nc) at which εs becomes infinite, indicating a metallic solid and the transition from an insulator to a metal. This expression creates a boundary that defines the transition of a material from an insulator to a metal. This phenomenon is known as the polarization catastrophe. The polarization catastrophe model also theorizes that, with a high enough density, and thus a low enough molar volume, any solid could become metallic in character. Predicting whether a material will be metallic or insulating can be done by taking the ratio R/V, where R is the molar refractivity, sometimes represented by A, and V is the molar volume. In cases where R/V is less than 1, the material will have non-metallic, or insulating, properties, while an R/V value greater than one yields metallic character. See also References Further reading http://rmp.aps.org/abstract/RMP/v70/i4/p1039_1 Condensed matter physics Phase transitions
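The dielectric-function expressions referred to above as equation (1) and its static limit did not survive extraction. A minimal reconstruction is sketched here in the standard Lorentz-oscillator form with the local-field (Clausius–Mossotti) correction, written with the symbols defined in the text plus the electron charge e and vacuum permittivity ε0, which the text does not define; this is an assumed standard form, not a quotation of the original equations.

\varepsilon(\omega) \;=\; 1 + \frac{N e^{2}/(m\varepsilon_{0})}{\omega_{0}^{2} - \omega^{2} - N e^{2}/(3 m \varepsilon_{0})} \qquad (1)

\varepsilon_{s} \;=\; \varepsilon(0) \;=\; 1 + \frac{N e^{2}/(m\varepsilon_{0})}{\omega_{0}^{2} - N e^{2}/(3 m \varepsilon_{0})}

Setting the denominator of the static expression to zero, so that εs diverges, gives the critical oscillator concentration

N_{c} \;=\; \frac{3 m \varepsilon_{0}\,\omega_{0}^{2}}{e^{2}},

above which the solid is metallic in this model.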
Metal–insulator transition
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,100
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
9,206,525
https://en.wikipedia.org/wiki/Hydron%20%28chemistry%29
In chemistry, the hydron, informally called proton, is the cationic form of atomic hydrogen, represented with the symbol H+. The general term "hydron", endorsed by IUPAC, encompasses cations of hydrogen regardless of isotope: thus it refers collectively to protons (1H+) for the protium isotope, deuterons (2H+ or D+) for the deuterium isotope, and tritons (3H+ or T+) for the tritium isotope. Unlike most other ions, the hydron consists only of a bare atomic nucleus. The negatively charged counterpart of the hydron is the hydride anion, H−. Properties Solute properties Other things being equal, compounds that readily donate hydrons (Brønsted acids, see below) are generally polar, hydrophilic solutes and are often soluble in solvents with high relative static permittivity (dielectric constants). Examples include organic acids like acetic acid (CH3COOH) or methanesulfonic acid (CH3SO3H). However, large nonpolar portions of the molecule may attenuate these properties. Thus, as a result of its alkyl chain, octanoic acid (CH3(CH2)6COOH) is considerably less hydrophilic than acetic acid. The unsolvated hydron (a completely free or "naked" hydrogen atomic nucleus) does not exist in the condensed (liquid or solid) phase. Because the electric field strength at its surface is inversely proportional to its radius, a tiny bare nucleus interacts thousands of times more strongly with nearby electrons than any partly ionized atom. Although superacids are sometimes said to owe their extraordinary hydron-donating power to the presence of "free hydrons", such a statement is misleading: even for a source of "free hydrons" like H2F+, one of the superacidic cations present in the superacid fluoroantimonic acid (HF:SbF5), detachment of a free H+ still comes at an enormous energetic penalty on the order of several hundred kcal/mol. This effectively rules out the possibility of the free hydron being present in solution. For this reason, in liquid strong acids, hydrons are believed to diffuse by sequential transfer from one molecule to the next along a network of hydrogen bonds through what is known as the Grotthuss mechanism. Acidity The hydron ion can incorporate an electron pair from a Lewis base into the molecule by adduction: H+ + :L → HL+ Because of this capture of the Lewis base (L), the hydron ion has Lewis acidic character. In terms of Hard/Soft Acid Base (HSAB) theory, the bare hydron is an infinitely hard Lewis acid. The hydron plays a central role in Brønsted–Lowry acid–base theory: a species that behaves as a hydron donor in a reaction is known as the Brønsted acid, while the species accepting the hydron is known as the Brønsted base. In the generic acid–base reaction shown below, HA is the acid, while B (shown with a lone pair) is the base: HA + :B → HB+ + :A− The hydrated form of the hydrogen cation, the hydronium (hydroxonium) ion H3O+(aq), is a key object of Arrhenius' definition of acid. Other hydrated forms, the Zundel cation H5O2+, which is formed from a proton and two water molecules, and the Eigen cation H9O4+, which is formed from a hydronium ion and three water molecules, are theorized to play an important role in the diffusion of protons through an aqueous solution according to the Grotthuss mechanism. 
Although the ion H3O+(aq) is often shown in introductory textbooks to emphasize that the hydron is never present as an unsolvated species in aqueous solution, it is somewhat misleading, as it oversimplifies the infamously complex speciation of the solvated proton in water; the notation H+(aq) is often preferred, since it conveys aqueous solvation while remaining noncommittal with respect to the number of water molecules involved. Isotopes of hydron Proton, having the symbol p or 1H+, is the +1 ion of protium, 1H. Deuteron, having the symbol 2H+ or D+, is the +1 ion of deuterium, 2H or D. Triton, having the symbol 3H+ or T+, is the +1 ion of tritium, 3H or T. Other isotopes of hydrogen are too unstable to be relevant in chemistry. History of the term The term "hydron" is recommended by IUPAC to be used instead of "proton" if no distinction is made between the isotopes proton, deuteron and triton, all found in naturally occurring isotope mixtures. The name "proton" refers to isotopically pure 1H+. On the other hand, calling the hydron simply hydrogen ion is not recommended because hydrogen anions also exist. The term "hydron" was defined by IUPAC in 1988. Traditionally, the term "proton" was and is used in place of "hydron". The latter term is generally only used in the context where comparison between the various isotopes of hydrogen is important (as in the kinetic isotope effect or hydrogen isotopic labeling). Otherwise, referring to hydrons as protons is still considered acceptable, for example in such terms as protonation, deprotonation, proton pump, or proton channel. The transfer of H+ in an acid-base reaction is usually referred to as proton transfer. Acids and bases are referred to as proton donors and acceptors, respectively. 99.9844% of natural hydrons (hydrogen nuclei) are protons, and the remainder (about 156 per million in sea water) are deuterons (see deuterium), except for some very rare natural tritons (see tritium). See also Deprotonation Dihydrogen cation Hydrogen ion cluster Solvated electron Superacid Trihydrogen cation References Cations Hydrogen Proton Deuterium Tritium
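A quick consistency check on the abundance figures just quoted (my arithmetic, not part of the article): if 99.9844% of natural hydrogen nuclei are protons, the deuteron fraction is

1 - 0.999844 \;=\; 1.56 \times 10^{-4} \;\approx\; 156 \text{ per million},

which matches the "about 156 per million in sea water" figure.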
Hydron (chemistry)
[ "Physics", "Chemistry" ]
1,255
[ "Cations", "Ions", "Matter" ]
9,208,413
https://en.wikipedia.org/wiki/QuarkNet
QuarkNet is a long-term, research-based teacher professional development program in the United States jointly funded by the National Science Foundation and the US Department of Energy. Since 1999, QuarkNet has established centers at universities and national laboratories conducting research in particle physics (also called high-energy physics) across the United States, and has been bringing such physics to high school classrooms. QuarkNet programs are described in the National Research Council National Science Education Standards report (1995) and support the Next Generation Science Standards (2013). Overview Boot Camp The summer Boot Camp is an annual national activity allowing teachers to see detectors and colliders, as well as form research groups to process experimental data. Teachers have been working in separate groups investigating triggers released by CMS since early 2011. The groups search the data for evidence of the J/Psi, Z and W bosons. They used Excel to reconstruct the invariant mass of a particle when given the four-vector of that particle's decay products. In addition, participants attend several talks and tours of technical areas. Cosmic ray studies The main QuarkNet student investigations supported at the national level are cosmic ray studies. Working with Fermilab technicians and research physicists, QuarkNet staff have developed a classroom cosmic ray muon detector that uses the same technologies as the largest detectors at Fermilab and CERN. To support inter school collaboration, QuarkNet collaborates with the Interactions in Understanding the Universe Project (I2U2) to develop and support the Cosmic Ray e-Lab. An e-Lab is a student-led, teacher-guided investigation using experimental data. Students have an opportunity to organize and conduct authentic research and experience the environment of a scientific collaboration. Participating schools set up a detector somewhere at the school. Students collect and upload the data to a central server located at Argonne National Laboratory. Students can access data from detectors in the cluster for use in studies, such as determining the (mean) lifetime of muons, the overall flux of muons in cosmic rays, or a study of extended air showers. Fellowships & programs In summer 2007, QuarkNet inaugurated the QuarkNet Fellows Program to develop the leadership potential of teachers who would work with staff to provide professional development activities and support for centers. Three groups of fellows in the areas of cosmic ray studies, LHC and teaching and learning share responsibilities for offering workshops and sessions, developing workshop materials, supporting e-Labs and masterclasses, giving presentations at AAPT and more. In 2009, a new group of fellows joined the program. Leadership fellows work with staff to support centers and gather data about center performance. Masterclass Since 2007, QuarkNet has hosted a one-day national program for students called Masterclass, initially studying Large Electron–Positron Collider-era CERN data, and now studying ALICE, ATLAS or CMS data. In addition to analysis of data, the day offers lectures and the opportunity to discuss results. Summer Student Research Program Based on a model at the University of Notre Dame, QuarkNet has offered a summer student research program since 2004. Typically, teams of four high school students supervised by one teacher spend six weeks involved in various physics research projects. 
Some centers choose to modify this model, involving more students and/or less time. The research is associated with ATLAS and CMS, the International Linear Collider R&D, cosmic ray muon detectors, optical fiber R&D and more. Teams are supported at up to 25 centers each summer. Examples of recent research titles include: Search and Identification of Comparing the Amount of Muon Events to Daily Weather Changes, Cosmic Ray Signals in Radar Echo, Fibers for Forward Calorimeter, The Effects of Impurities on Radio Signal Detection in Ice, Quartz Plate Calorimetry, Galactic Asymmetry of the Milky Way and RF Magnet Design, and Weak Lensing Mass Estimates of the Elliot Arc Cluster. References External links QuarkNet Fermilab CERN Cosmic Ray e-Lab Interactions in Understanding the Universe More On DataCamp Physics education 1999 establishments in the United States
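The Boot Camp description above mentions reconstructing a particle's invariant mass from the four-vectors of its decay products (the teachers used Excel). The short Python sketch below shows the same computation for a two-body decay; the function name and the back-to-back muon four-vectors are illustrative assumptions, not taken from QuarkNet materials.

import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz) four-vectors (natural units, GeV)."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    m2 = E**2 - (px**2 + py**2 + pz**2)   # m^2 = E^2 - |p|^2
    return math.sqrt(max(m2, 0.0))        # guard against tiny negative values from rounding

# Hypothetical back-to-back muon pair whose combined invariant mass is about 90 GeV,
# i.e. roughly what a Z boson candidate would look like in such a study.
mu_plus  = (45.0,  27.0, 0.0,  36.0)   # (E, px, py, pz) in GeV
mu_minus = (45.0, -27.0, 0.0, -36.0)
print(round(invariant_mass(mu_plus, mu_minus), 1))   # prints 90.0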
QuarkNet
[ "Physics" ]
832
[ "Applied and interdisciplinary physics", "Physics education" ]
9,208,663
https://en.wikipedia.org/wiki/Alpha%20Eta%20Mu%20Beta
Alpha Eta Mu Beta (AEMB) is a biomedical engineering honor society founded at Louisiana Tech University in 1979. History Alpha Eta Mu Beta was founded by Daniel Reneau of Louisiana Tech University in 1979. It is an honor society for students studying biomedical engineering. It has since chartered 40 chapters across the United States. Symbols The society's colors are red and gold. Membership Membership of AEMB is offered to the top fifth of juniors and top third of seniors in biomedical engineering, who have completed at least six semester credit hours (or the equivalent) of biomedical engineering courses. Chapters Following is a list of the chapters of Alpha Eta Mu Beta. Notable members Dan Reneau, president of Louisiana Tech University See also Honor society Professional fraternities and sororities References External links ACHS Alpha Eta Mu Beta entry Student organizations established in 1979 Association of College Honor Societies 1979 establishments in Louisiana Engineering honor societies
Alpha Eta Mu Beta
[ "Engineering" ]
179
[ "Engineering societies", "Engineering honor societies" ]
9,209,184
https://en.wikipedia.org/wiki/Solder%20mask
Solder mask, solder stop mask or solder resist is a thin lacquer-like layer of polymer that is usually applied to the copper traces of a printed circuit board (PCB) for protection against oxidation and to prevent solder bridges from forming between closely spaced solder pads. Applying solder mask is a PCB manufacturing step in which a chemical or thermosetting resin is coated as a thin film on the board, forming a reliable protective layer that prevents unwanted short circuits and leakage and improves the board's reliability and electrical performance. A solder bridge is an unintended electrical connection between two conductors by means of a small blob of solder. PCBs use solder masks to prevent this from happening. Solder mask is not always used for hand-soldered assemblies, but is essential for mass-produced boards that are soldered automatically using reflow or wave soldering techniques. Once applied, openings must be made in the solder mask wherever components are soldered, which is accomplished using photolithography. Solder mask is traditionally green, but is also available in many other colors. Solder mask comes in different media depending upon the demands of the application. The lowest-cost solder mask is epoxy liquid that is silkscreened through the pattern onto the PCB. Other types are the liquid photoimageable solder mask (LPSM or LPI) inks and dry-film photoimageable solder mask (DFSM). LPSM can be silkscreened or sprayed on the PCB, exposed to the pattern and developed to provide openings in the pattern for parts to be soldered to the copper pads. DFSM is vacuum-laminated on the PCB then exposed and developed. All three processes typically go through a thermal cure of some type after the pattern is defined, although LPI solder masks are also available in ultraviolet (UV) cure. The solder stop layer on a flexible board is also called coverlay or coverfilm. In electronic design automation, the solder mask is treated as part of the layer stack of the printed circuit board, and is described in individual Gerber files for the top and bottom side of the PCB like any other layer (such as the copper and silk-screen layers). Typical names for these layers include tStop/bStop aka STC/STS or TSM/BSM (EAGLE), F.Mask/B.Mask (KiCad), StopTop/StopBot (TARGET), maskTop/maskBottom (Fritzing), SMT/SMB (OrCAD), MT.PHO/MB.PHO (PADS), LSMVS/LSMRS (WEdirekt) or GTS/GBS (Gerber and many others). Notes References Further reading Printed circuit board manufacturing
Solder mask
[ "Engineering" ]
606
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
9,209,693
https://en.wikipedia.org/wiki/Helically%20Symmetric%20Experiment
The Helically Symmetric Experiment (HSX, stylized as Helically Symmetric eXperiment) is an experimental plasma confinement device at the University of Wisconsin–Madison, with design principles that are intended to be incorporated into a fusion reactor. The HSX is a modular coil stellarator, a toroid-shaped pressure vessel with external electromagnets that generate a magnetic field for the purpose of containing a plasma. It began operation in 1999. Background A stellarator is a magnetic confinement fusion device that uses external magnetic coils to generate all of the magnetic fields needed to confine the high temperature plasma. In contrast, in tokamaks and reversed field pinches, the magnetic field is created by the interaction of external magnets and an electrical current flowing through the plasma. The lack of this large externally driven plasma current makes stellarators suitable for steady-state fusion power plants. However, due to the non-axisymmetric nature of the fields, older stellarators have a combination of toroidal and helical modulation of the magnetic field lines, which leads to high transport of plasma out of the confinement volume at fusion-relevant conditions. This problem has been largely solved in the Wendelstein 7-X, which has better particle confinement than that expected in ITER and achieves plasma durations of 30 minutes. This large transport in old stellarators can limit their performance as fusion reactors, but it can be largely reduced by tailoring the magnetic field geometry. The dramatic improvements in computer modeling capability in the last two decades have helped to "optimize" the magnetic geometry to reduce this transport, resulting in a new class of stellarators called "quasi-symmetric stellarators". Computer-modeled, odd-looking electromagnets directly produce the needed magnetic field configuration. These devices combine the good confinement properties of tokamaks and the steady-state nature of conventional stellarators. The Helically Symmetric Experiment (HSX) at the University of Wisconsin-Madison is such a quasi-helically symmetric stellarator (helical axis of symmetry). Device The magnetic field in HSX is generated by a set of 48 twisted coils arranged in four field periods. HSX typically operates at a magnetic field of 1 Tesla at the center of the plasma column. A set of auxiliary coils is used to deliberately break the symmetry to mimic conventional stellarator properties for comparison. The HSX vacuum vessel is made of stainless steel, and is helically shaped to follow the magnetic geometry. Plasma formation and heating are achieved using 28 GHz, 100 kW electron cyclotron resonance heating (ECRH). A second 100 kW gyrotron has recently been installed on HSX to perform heat pulse modulation studies. Operations Plasmas as high as 3 kiloelectronvolts in temperature and about 8/cc in density are routinely formed for various experiments. Experiments have shown that edge magnetic islands affect particle fueling and exhaust. In HSX, the presence of a magnetic island chain at the plasma edge increases the plasma sourcing-to-exhaust ratio but reduces fueling efficiency by 25%. Moving the island radially inward decreases both the effective and global particle confinement times. This process is effective for controlling plasma fueling and helium exhaust times. Subsystems, diagnostics HSX has a large set of diagnostics to measure properties of plasma and magnetic fields. The following gives a list of major diagnostics and subsystems. 
Thomson scattering Diagnostic neutral beam Electron cyclotron resonance heating system Electron cyclotron emission radiometers Charge exchange recombination spectroscopy Interferometer Motional Stark effect Heavy ion beam probe (coming soon) Laser blow-off Hard and soft-X-ray detectors Mirnov coils Rogowski coils Passive spectroscopy Goals and major achievements HSX has made and continues to make fundamental contributions to the physics of quasisymmetric stellarators that show significant improvement over the conventional stellarator concept. These include: Measuring large ion flows in the direction of quasisymmetry Reduced flow damping in the direction of quasisymmetry Reduced passing particle deviation from a flux surface Reduced direct loss orbits Reduced neoclassical transport Reduced equilibrium parallel currents because of the high effective transform Ongoing experiments A large number of experimental and computational research works are being done in HSX by students, staff and faculties. Some of them are in collaboration with other universities and national laboratories, both in the USA and abroad. Major research projects at present are listed below: Effect of quasi-symmetry on plasma flows Impurity transport Radio frequency heating Supersonic plasma fueling and the neutral population Heat pulse propagation experiments to study thermal transport Interaction of turbulence and flows in HSX and the effects of quasi-symmetry on the determination of the radial electric field Equilibrium reconstruction of the plasma density, pressure and current profiles Effects of viscosity and symmetry on the determination of the flows and the radial electric field Divertor flows, particle edge fluxes Effect of radial electric field on the bootstrap current Effect of quasi-symmetry on fast ion confinement References Additional resources External links Experimental Tests of Quasisymmetry in HSX. Talmadge Slide 4 compares with tokamak Stellarators Plasma physics facilities University of Wisconsin–Madison
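The Device section above notes a 1 Tesla on-axis field and 28 GHz electron cyclotron resonance heating. As a rough consistency check (my own illustration, not from the article), the fundamental electron cyclotron frequency f_ce = eB/(2π m_e) at 1 T is about 28 GHz, which is why a 28 GHz gyrotron matches the fundamental resonance at the plasma center:

import math

ELECTRON_CHARGE = 1.602176634e-19   # C
ELECTRON_MASS   = 9.1093837015e-31  # kg

def electron_cyclotron_frequency(b_tesla):
    """Fundamental electron cyclotron frequency f_ce = eB / (2*pi*m_e), in Hz."""
    return ELECTRON_CHARGE * b_tesla / (2.0 * math.pi * ELECTRON_MASS)

print(f"{electron_cyclotron_frequency(1.0) / 1e9:.1f} GHz")   # prints about 28.0 GHz at 1 T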
Helically Symmetric Experiment
[ "Physics" ]
1,042
[ "Plasma physics facilities", "Plasma physics" ]
9,209,712
https://en.wikipedia.org/wiki/Thermoelectric%20generator
A thermoelectric generator (TEG), also called a Seebeck generator, is a solid state device that converts heat (driven by temperature differences) directly into electrical energy through a phenomenon called the Seebeck effect (a form of thermoelectric effect). Thermoelectric generators function like heat engines, but are less bulky and have no moving parts. However, TEGs are typically more expensive and less efficient. When the same principle is used in reverse to create a temperature difference from an electric current, it is called a thermoelectric (or Peltier) cooler. Thermoelectric generators could be used in power plants and factories to convert waste heat into additional electrical power and in automobiles as automotive thermoelectric generators (ATGs) to increase fuel efficiency. Radioisotope thermoelectric generators use radioisotopes to generate the required temperature difference to power space probes. Thermoelectric generators can also be used alongside solar panels. History In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two different conductors can produce electricity. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this in turn results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered the reverse effect, that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. In 1909, George Cove accidentally invented a photovoltaic panel while intending to build a thermoelectric generator with thermocouples. He noted that heat alone did not produce any power, only incident light, but he had no explanation for how it could be working. The operational principle is now understood to have been a very simple form of Schottky junction. Efficiency The typical efficiency of TEGs is around 5–8%, although it can be higher. Older devices used bimetallic junctions and were bulky. More recent devices use highly doped semiconductors made from bismuth telluride (Bi2Te3), lead telluride (PbTe), calcium manganese oxide (Ca2Mn3O8), or combinations thereof, depending on application temperature. These are solid-state devices and unlike dynamos have no moving parts, with the occasional exception of a fan or pump to improve heat transfer. If the hot region is around 1273 K and ZT values of 3-4 are achieved, the efficiency is approximately 33-37%, allowing TEGs to compete with certain heat engine efficiencies. As of 2021, there are materials (some containing widely available and inexpensive arsenic and tin) reaching a ZT value > 3: monolayer AsP3 (ZT = 3.36 on the armchair axis), n-type doped InP3 (ZT = 3.23), p-type doped SnP3 (ZT = 3.46), and p-type doped SbP3 (ZT = 3.5). Construction Thermoelectric power generators consist of three major components: thermoelectric materials, thermoelectric modules and thermoelectric systems that interface with the heat source. Thermoelectric materials Thermoelectric materials generate power directly from the heat by converting temperature differences into electric voltage. These materials must have both high electrical conductivity (σ) and low thermal conductivity (κ) to be good thermoelectric materials. 
Having low thermal conductivity ensures that when one side is made hot, the other side stays cold, which helps to generate a large voltage while in a temperature gradient. The magnitude of the electron flow in response to a temperature difference across the material is measured by the Seebeck coefficient (S). The efficiency of a given material to produce a thermoelectric power is simply estimated by its “figure of merit” zT = S²σT/κ. For many years, the main three semiconductors known to have both low thermal conductivity and high power factor were bismuth telluride (Bi2Te3), lead telluride (PbTe), and silicon germanium (SiGe). Some of these materials contain somewhat rare elements, which makes them expensive. Today, the thermal conductivity of semiconductors can be lowered without affecting their high electrical properties using nanotechnology. This can be achieved by creating nanoscale features such as particles, wires or interfaces in bulk semiconductor materials. However, the manufacturing processes of nano-materials are still challenging. Thermoelectric advantages Thermoelectric generators are all-solid-state devices that do not require any fluids for fuel or cooling, making them orientation-independent and allowing for use in zero-gravity or deep-sea applications. The solid-state design allows for operation in severe environments. Thermoelectric generators have no moving parts, which produces a more reliable device that does not require maintenance for long periods. The durability and environmental stability have made thermoelectrics a favorite for NASA's deep space explorers among other applications. One of the key advantages of thermoelectric generators outside of such specialized applications is that they can potentially be integrated into existing technologies to boost efficiency and reduce environmental impact by producing usable power from waste heat. Thermoelectric module A thermoelectric module is a circuit containing thermoelectric materials which generate electricity from heat directly. A thermoelectric module consists of two dissimilar thermoelectric materials joined at their ends: an n-type (with negative charge carriers) and a p-type (with positive charge carriers) semiconductor. Direct electric current will flow in the circuit when there is a temperature difference between the ends of the materials. Generally, the current magnitude is directly proportional to the temperature difference: J = −σS∇T, where σ is the local conductivity, S is the Seebeck coefficient (also known as thermopower), a property of the local material, and ∇T is the temperature gradient. In application, thermoelectric modules in power generation work in very tough mechanical and thermal conditions. Because they operate in a very high-temperature gradient, the modules are subject to large thermally induced stresses and strains for long periods. They also are subject to mechanical fatigue caused by a large number of thermal cycles. Thus, the junctions and materials must be selected so that they survive these tough mechanical and thermal conditions. Also, the module must be designed such that the two thermoelectric materials are thermally in parallel, but electrically in series. The efficiency of a thermoelectric module is greatly affected by the geometry of its design. Thermoelectric design Thermoelectric generators are made of several thermopiles, each consisting of many thermocouples made of a connected n-type and p-type material. 
The arrangement of the thermocouples is typically in three main designs: planar, vertical, and mixed. Planar design involves thermocouples put onto a substrate horizontally between the heat source and cool side, resulting in the ability to create longer and thinner thermocouples, thereby increasing the thermal resistance and temperature gradient and eventually increasing voltage output. Vertical design has thermocouples arranged vertically between the hot and cool plates, leading to high integration of thermocouples as well as a high output voltage, making this design the most widely-used design commercially. The mixed design has the thermocouples arranged laterally on the substrate while the heat flow is vertical between plates. Microcavities under the hot contacts of the device allow for a temperature gradient, which allows for the substrate’s thermal conductivity to affect the gradient and efficiency of the device. For microelectromechanical systems, TEGs can be designed on the scale of handheld devices to use body heat in the form of thin films. Flexible TEGs for wearable electronics are able to be made with novel polymers through additive manufacturing or thermal spraying processes. Cylindrical TEGs for using heat from vehicle exhaust pipes can also be made using circular thermocouples arranged in a cylinder. Many designs for TEGs can be made for the different devices they are applied to. Thermoelectric systems Using thermoelectric modules, a thermoelectric system generates power by taking in heat from a source such as a hot exhaust flue. To operate, the system needs a large temperature gradient, which is not easy in real-world applications. The cold side must be cooled by air or water. Heat exchangers are used on both sides of the modules to supply this heating and cooling. There are many challenges in designing a reliable TEG system that operates at high temperatures. Achieving high efficiency in the system requires extensive engineering design to balance between the heat flow through the modules and maximizing the temperature gradient across them. To do this, designing heat exchanger technologies in the system is one of the most important aspects of TEG engineering. In addition, the system requires to minimize the thermal losses due to the interfaces between materials at several places. Another challenging constraint is avoiding large pressure drops between the heating and cooling sources. If AC power is required (such as for powering equipment designed to run from AC mains power), the DC power from the TE modules must be passed through an inverter, which lowers efficiency and adds to the cost and complexity of the system. Materials for TEG Only a few known materials to date are identified as thermoelectric materials. Most thermoelectric materials today have a zT, the figure of merit, value of around 1, such as in bismuth telluride (Bi2Te3) at room temperature and lead telluride (PbTe) at 500–700 K. However, in order to be competitive with other power generation systems, TEG materials should have a zT of 2–3. Most research in thermoelectric materials has focused on increasing the Seebeck coefficient (S) and reducing the thermal conductivity, especially by manipulating the nanostructure of the thermoelectric materials. Because both the thermal and electrical conductivity correlate with the charge carriers, new means must be introduced in order to conciliate the contradiction between high electrical conductivity and low thermal conductivity, as is needed. 
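The Efficiency section earlier quotes roughly 33-37% conversion efficiency for ZT values of 3-4 with a hot side near 1273 K. The short Python sketch below brackets that range using the standard textbook maximum-efficiency expression for a thermoelectric generator; the formula itself and the assumed 300 K cold side are my additions and are not given in the article.

import math

def teg_max_efficiency(z_t, t_hot, t_cold):
    """Ideal maximum TEG efficiency for an average figure of merit ZT (standard textbook expression)."""
    carnot = 1.0 - t_cold / t_hot
    root = math.sqrt(1.0 + z_t)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

for zt in (3.0, 4.0):
    eta = teg_max_efficiency(zt, t_hot=1273.0, t_cold=300.0)   # 300 K cold side assumed
    print(f"ZT = {zt}: {eta:.0%}")   # prints roughly 34% and 38%, bracketing the quoted 33-37%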
When selecting materials for thermoelectric generation, a number of other factors need to be considered. During operation, ideally, the thermoelectric generator has a large temperature gradient across it. Thermal expansion will then introduce stress in the device, which may cause fracture of the thermoelectric legs or separation from the coupling material. The mechanical properties of the materials must be considered and the coefficient of thermal expansion of the n- and p-type materials must be matched reasonably well. In segmented thermoelectric generators, the material's compatibility must also be considered to avoid incompatibility of relative current, defined as the ratio of electrical current to diffusion heat current, between segment layers. A material's compatibility factor is defined as s = (√(1 + zT) − 1)/(ST). When the compatibility factor from one segment to the next differs by more than a factor of about two, the device will not operate efficiently. The material parameters determining s (as well as zT) are temperature-dependent, so the compatibility factor may change from the hot side to the cold side of the device, even in one segment. This behavior is referred to as self-compatibility and may become important in devices designed for wide-temperature applications. In general, thermoelectric materials can be categorized into conventional and new materials: Conventional materials Many TEG materials are employed in commercial applications today. These materials can be divided into three groups based on the temperature range of operation: Low temperature materials (up to around 450 K): Alloys based on bismuth (Bi) in combinations with antimony (Sb), tellurium (Te) or selenium (Se). Intermediate temperature materials (up to 850 K): such as materials based on alloys of lead (Pb) Highest temperature materials (up to 1300 K): materials fabricated from silicon-germanium (SiGe) alloys. Although these materials still remain the cornerstone for commercial and practical applications in thermoelectric power generation, significant advances have been made in synthesizing new materials and fabricating material structures with improved thermoelectric performance. Recent research has focused on improving the material’s figure-of-merit (zT), and hence the conversion efficiency, by reducing the lattice thermal conductivity. 
As a result, it has initiated a search for materials with high power output rather than conversion efficiency. For example, the rare earth compounds YbAl3 has a low figure-of-merit, but it has a power output of at least double that of any other material, and can operate over the temperature range of a waste heat source. Novel processing To increase the figure of merit (zT), a material’s thermal conductivity should be minimized while its electrical conductivity and Seebeck coefficient is maximized. In most cases, methods to increase or decrease one property result in the same effect on other properties due to their interdependence. A novel processing technique exploits the scattering of different phonon frequencies to selectively reduce lattice thermal conductivity without the typical negative effects on electrical conductivity from the simultaneous increased scattering of electrons. In a bismuth antimony tellurium ternary system, liquid-phase sintering is used to produce low-energy semicoherent grain boundaries, which do not have a significant scattering effect on electrons. The breakthrough is then applying a pressure to the liquid in the sintering process, which creates a transient flow of the Te rich liquid and facilitates the formation of dislocations that greatly reduce the lattice conductivity. The ability to selectively decrease the lattice conductivity results in reported zT value of 1.86, which is a significant improvement over the current commercial thermoelectric generators with zT ~ 0.3–0.6. These improvements highlight the fact that in addition to the development of novel materials for thermoelectric applications, using different processing techniques to design microstructure is a viable and worthwhile effort. In fact, it often makes sense to work to optimize both composition and microstructure. Uses Thermoelectric generators (TEG) have a variety of applications. Frequently, thermoelectric generators are used for low power remote applications or where bulkier but more efficient heat engines such as Stirling engines would not be possible. Unlike heat engines, the solid state electrical components typically used to perform thermal to electric energy conversion have no moving parts. The thermal to electric energy conversion can be performed using components that require no maintenance, have inherently high reliability, and can be used to construct generators with long service-free lifetimes. This makes thermoelectric generators well suited for equipment with low to modest power needs in remote uninhabited or inaccessible locations such as mountaintops, the vacuum of space, or the deep ocean. The main uses of thermoelectric generators are: Space probes, including the Mars Curiosity rover, generate electricity using a radioisotope thermoelectric generator whose heat source is a radioactive element. Waste heat recovery. Every human activity, transport and industrial process generates waste heat, being possible to harvest residual energy from cars, aircraft, ships, industries and the human body. From cars the main source of energy is the exhaust gas. Harvesting that heat energy using a thermoelectric generator can increase the fuel efficiency of the car. Thermoelectric generators have been investigated to replace the alternators in cars demonstrating a 3.45% reduction in fuel consumption. Projections for future improvements are up to a 10% increase in mileage for hybrid vehicles. 
It has been stated that the potential energy savings could be higher for gasoline engines than for diesel engines. For more details, see the article: Automotive thermoelectric generator. For aircraft, engine nozzles have been identified as the best place to recover energy, but heat from engine bearings and the temperature gradient in the aircraft skin have also been proposed. Solar cells use only the high-frequency part of the radiation, while the low-frequency heat energy is wasted. Several patents about the use of thermoelectric devices in parallel or cascade configuration with solar cells have been filed. The idea is to increase the efficiency of the combined solar/thermoelectric system at converting solar radiation into useful electricity. Thermoelectric generators are primarily used as remote and off-grid power generators for unmanned sites. They are the most reliable power generator in such situations, as they have no moving parts (and are thus virtually maintenance-free), work day and night, perform under all weather conditions and can work without battery backup. Although solar photovoltaic systems are also implemented in remote sites, solar PV may not be a suitable solution where solar radiation is low, i.e., areas at higher latitudes with snow or little sunshine, areas with much cloud or tree canopy cover, dusty deserts, forests, etc. Thermoelectric generators are commonly used on gas pipelines, for example, for cathodic protection, radio communication, and telemetry. On gas pipelines with power consumption of up to 5 kW, thermal generators are preferable to other power sources. Manufacturers of generators for gas pipelines include Global Power Technologies (formerly Global Thermoelectric) (Calgary, Canada) and TELGEN (Russia). Microprocessors generate waste heat. Researchers have considered whether some of that energy could be recycled. (However, see below for problems that can arise.) Thermoelectric generators have also been investigated as standalone solar-thermal cells; thermoelectric generators have been directly integrated into a solar-thermal cell with an efficiency of 4.6%. The Maritime Applied Physics Corporation in Baltimore, Maryland is developing a thermoelectric generator to produce electric power on the deep-ocean offshore seabed using the temperature difference between cold seawater and hot fluids released by hydrothermal vents, hot seeps, or drilled geothermal wells. A high-reliability source of seafloor electric power is needed for ocean observatories and sensors used in the geological, environmental, and ocean sciences, by seafloor mineral and energy resource developers, and by the military. Recent studies have found that deep-sea thermoelectric generators for large-scale energy plants are also economically viable. Ann Makosinski from British Columbia, Canada has developed several devices using Peltier tiles to harvest heat from a human hand, the forehead, or a hot beverage; she claims they generate enough electricity to power an LED light or charge a mobile device, although she admits that the brightness of the LED light is not competitive with those on the market. Thermoelectric generators are used in stove fans. They are put on top of a wood- or coal-burning stove. The TEG is sandwiched between two heat sinks, and the temperature difference powers a slow-moving fan that helps circulate the stove's heat into the room. 
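Electrically, a TEG behaves as a temperature-dependent voltage source in series with an internal resistance. The sketch below is a minimal model of this behaviour, not a description of any particular commercial module; the couple count, Seebeck coefficient and resistance values are illustrative assumptions. It also previews the load-matching issue taken up in the next section.

```python
# Minimal electrical model of a thermoelectric generator (TEG).
# All parameter values below are illustrative assumptions, not device data.

def teg_power(n_couples, seebeck, r_internal, delta_t, r_load):
    """Power delivered to a resistive load by an idealized TEG module.

    n_couples  -- number of thermocouples wired in series
    seebeck    -- Seebeck coefficient per couple, in V/K
    r_internal -- total internal resistance of the module, in ohms
    delta_t    -- hot-side minus cold-side temperature, in K
    r_load     -- load resistance, in ohms
    """
    v_open = n_couples * seebeck * delta_t      # open-circuit voltage
    current = v_open / (r_internal + r_load)    # simple series circuit
    return current ** 2 * r_load                # power dissipated in the load

# Sweeping the load shows the delivered power peaking when the load
# matches the internal resistance (the maximum power transfer theorem).
for r_load in (0.5, 1.0, 2.0, 4.0, 8.0):
    p = teg_power(n_couples=127, seebeck=400e-6, r_internal=2.0,
                  delta_t=100.0, r_load=r_load)
    print(f"R_load = {r_load:4.1f} ohm -> {1000 * p:7.1f} mW")
```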
Practical limitations Besides low efficiency and relatively high cost, practical problems arise in certain applications from a relatively high electrical output resistance, which increases self-heating, and from a relatively low thermal conductivity, which makes thermoelectric devices unsuitable for applications where heat removal is critical, as with heat removal from an electrical device such as a microprocessor. High generator output resistance: To get voltage output levels in the range required by digital electrical devices, a common approach is to place many thermoelectric elements in series within a generator module. The output voltage increases, but so does the output resistance. The maximum power transfer theorem dictates that maximum power is delivered to a load when the source and load resistances are matched. For low-impedance loads near zero ohms, the power delivered to the load decreases as the generator resistance rises. To lower the output resistance, some commercial devices place more individual elements in parallel and fewer in series, and employ a boost regulator to raise the output to the voltage needed by the load. Low thermal conductivity: Because a very high thermal conductivity is required to transport thermal energy away from a heat source such as a digital microprocessor, the low thermal conductivity of thermoelectric generators makes them unsuitable for recovering this heat. Cold-side heat removal with air: In air-cooled thermoelectric applications, such as when harvesting thermal energy from a motor vehicle's crankcase, the large amount of thermal energy that must be dissipated into ambient air presents a significant challenge. As a thermoelectric generator's cold-side temperature rises, the device's differential working temperature decreases. As the temperature rises, the device's electrical resistance increases, causing greater parasitic generator self-heating. In motor vehicle applications a supplementary radiator is sometimes used for improved heat removal, though the use of an electric water pump to circulate a coolant adds a parasitic loss to the total generator output power. Water cooling the thermoelectric generator's cold side, as when generating thermoelectric power from the hot crankcase of an inboard boat motor, would not suffer from this disadvantage, since water is a far more effective coolant than air. Future market While TEG technology has been used in military and aerospace applications for decades, new TE materials and systems are being developed to generate power using low- or high-temperature waste heat, and that could provide a significant opportunity in the near future. These systems can also be scaled to any size and have lower operation and maintenance costs. The global market for thermoelectric generators was estimated at US$320 million in 2015 and US$472 million in 2021, and is projected to reach US$1.44 billion by 2030, a CAGR of 11.8%. Today, North America captures 66% of the market share, and it will continue to be the biggest market in the near future. However, Asia-Pacific and European countries are projected to grow at relatively higher rates. A study found that the Asia-Pacific market would grow at a compound annual growth rate (CAGR) of 18.3% in the period from 2015 to 2020 due to the high demand for thermoelectric generators from the automotive industry to increase overall fuel efficiency, as well as the growing industrialization in the region. 
Small-scale thermoelectric generators are also in the early stages of investigation in wearable technologies, to reduce or replace charging and to boost charge duration. Recent studies have focused on the novel development of a flexible inorganic thermoelectric, silver selenide, on a nylon substrate. Thermoelectrics show particular synergy with wearables by harvesting energy directly from the human body, creating a self-powered device. One project used n-type silver selenide on a nylon membrane. Silver selenide is a narrow-bandgap semiconductor with high electrical conductivity and low thermal conductivity, making it well suited to thermoelectric applications. The low-power or "sub-watt" TEG market (i.e., devices generating up to 1 watt peak) is a growing part of the TEG market, capitalizing on the latest technologies. The main applications are sensors, other low-power devices and, more broadly, Internet of Things applications. A specialized market research company indicated that 100,000 units were shipped in 2014 and expected 9 million units per year by 2020. See also Bismuth telluride Electrical generator Energy harvesting devices: Thermoelectrics Gentherm Incorporated Mária Telkes Stirling engine Thermal power station Thermoelectric battery Thermionic converter Thermoelectric cooling or Peltier cooler Thermoelectric effect Thermoelectric materials References External links Small Thermoelectric Generators by G. Jeffrey Snyder Kanellos, M. (2008, November 24). Tapping America's Secret Power Source. Greentech Media. Retrieved October 30, 2009. LT Journal October 2010: Ultralow Voltage Energy Harvester Uses Thermoelectric Generator for Battery-Free Wireless Sensors DIY: How to Build a Thermoelectric Energy Generator With a Cheap Peltier Unit Gentherm Inc. This device harnesses the cold night sky to generate electricity in the dark Electrical generators Energy harvesting Thermoelectricity
Thermoelectric generator
[ "Physics", "Technology" ]
5,398
[ "Physical systems", "Electrical generators", "Machines" ]
9,210,048
https://en.wikipedia.org/wiki/Richard%20Crandall
Richard E. Crandall (December 29, 1947 – December 20, 2012) was an American physicist and computer scientist who made contributions to computational number theory. Background Crandall was born in Ann Arbor, Michigan, and spent two years at Caltech before transferring to Reed College in Portland, Oregon, where he graduated in physics and wrote his undergraduate thesis on randomness. He earned his Ph.D. in theoretical physics from the Massachusetts Institute of Technology. Career In 1978, he became a physics professor at Reed College, where he taught courses in experimental physics and computational physics for many years, ultimately becoming Vollum Professor of Science and director of the Center for Advanced Computation. He was also, at various times, Chief Scientist at NeXT, Inc., Chief Cryptographer and Distinguished Scientist at Apple, and head of Apple's Advanced Computation Group. He was a pioneer in experimental mathematics. He developed the irrational base discrete weighted transform, a method used in the search for very large primes. He wrote several books and many scholarly papers on scientific programming and computation. Crandall was awarded numerous patents for his work in the field of cryptography. He also wrote a poker program that could bluff. He owned and operated PSI Press, an online publishing company. Personal life Crandall was part Cherokee and proud of his Native heritage. He fronted a band called the Chameleons in 1981. He was working on an intellectual biography of Steve Jobs when he collapsed at his home in Portland, Oregon, from acute leukemia. He died 10 days later, on December 20, 2012, at the age of 64. Books Pascal Applications for the Sciences. John Wiley & Sons, New York 1983. with M. M. Colgrove: Scientific Programming with Macintosh Pascal. John Wiley & Sons, New York 1986. Mathematica for the Sciences. Addison-Wesley, Reading, Mass., 1991. Projects in Scientific Computation. Springer 1994. Topics in Advanced Scientific Computation. Springer 1996. with M. Levich: A Network Orange. Springer 1997. with C. Pomerance: Prime Numbers: A Computational Perspective. Springer 2001. References External links Professor Richard E. Crandall; many of Crandall's papers can be found here Nicholas Wheeler, Remembering Prof. Crandall Stephen Wolfram, Remembering Richard Crandall (1947–2012) David Bailey and Jonathan Borwein, Mathematician/physicist/inventor Richard Crandall dies at 64 David Broadhurst, A prime puzzle in honor of Richard Crandall 1947 births 2012 deaths Scientists from Ann Arbor, Michigan Scientists from Portland, Oregon 20th-century American inventors 21st-century American inventors American atheists American computer scientists Apple Inc. employees Computational physicists Deaths from leukemia in Oregon Deaths from acute leukemia Reed College faculty Reed College alumni
Richard Crandall
[ "Physics" ]
561
[ "Computational physicists", "Computational physics" ]
9,210,114
https://en.wikipedia.org/wiki/Rhind%20Mathematical%20Papyrus
The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057, pBM 10058, and Brooklyn Museum 37.1784Ea-b) is one of the best known examples of ancient Egyptian mathematics. It is one of two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is the larger, but younger, of the two. In the papyrus's opening paragraphs Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues: This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made in the time of the King of Upper and Lower Egypt Nimaatre. The scribe Ahmose writes this copy. Several books and articles about the Rhind Mathematical Papyrus have been published, and a handful of these stand out. The Rhind Papyrus was published in 1923 by the English Egyptologist T. Eric Peet and contains a discussion of the text that followed Francis Llewellyn Griffith's Book I, II and III outline. Chace published a compendium in 1927–29 which included photographs of the text. A more recent overview of the Rhind Papyrus was published in 1987 by Robins and Shute. History The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of the 12th-dynasty king Amenemhat III. It dates to around 1550 BC. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso, likely dating from "Year 11" of his successor, Khamudi. Alexander Henry Rhind, a Scottish antiquarian, purchased two parts of the papyrus in 1858 in Luxor, Egypt; it was stated to have been found in "one of the small buildings near the Ramesseum", near Luxor. The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind. Fragments of the text were independently purchased in Luxor by the American Egyptologist Edwin Smith in the mid 1860s, were donated by his daughter in 1906 to the New York Historical Society, and are now held by the Brooklyn Museum. A central section is missing. The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical-translation aspect remains incomplete in several respects. Books Book I – Arithmetic and Algebra The first part of the Rhind papyrus consists of reference tables and a collection of 21 arithmetic and 20 algebraic problems. The problems start out with simple fractional expressions, followed by completion (sekem) problems and more involved linear equations (aha problems). The first part of the papyrus is taken up by the 2/n table. The fractions 2/n for odd n ranging from 3 to 101 are expressed as sums of unit fractions. For example, 2/15 = 1/10 + 1/30. The decomposition of 2/n into unit fractions is never more than 4 terms long, as in, for example, 2/101 = 1/101 + 1/202 + 1/303 + 1/606. This table is followed by a much smaller table of fractional expressions for the numbers 1 through 9 divided by 10. 
For instance the division of 7 by 10 is recorded as: 7 divided by 10 yields 2/3 + 1/30. After these two tables, the papyrus records 91 problems altogether, which have been designated by moderns as problems (or numbers) 1–87, including four other items which have been designated as problems 7B, 59B, 61B and 82B. Problems 1–7, 7B and 8–40 are concerned with arithmetic and elementary algebra. Problems 1–6 compute divisions of a certain number of loaves of bread by 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4 and 1 + 2/3 + 1/3 = 2 by different fractions. Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are "aha" problems; these are linear equations. Problem 32, for instance, corresponds (in modern notation) to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, an ancient Egyptian unit of volume. Beginning at this point, assorted units of measurement become much more important throughout the remainder of the papyrus, and indeed a major consideration throughout the rest of the papyrus is dimensional analysis. Problems 39 and 40 compute the division of loaves and use arithmetic progressions. Book II – Geometry The second part of the Rhind papyrus, being problems 41–59, 59B and 60, consists of geometry problems. Peet referred to these problems as "mensuration problems". Volumes Problems 41–46 show how to find the volume of both cylindrical and rectangular granaries. In problem 41 Ahmes computes the volume of a cylindrical granary. Given the diameter d and the height h, the volume V is given by: V = [(1 − 1/9) d]² h = [(8/9) d]² h. In modern mathematical notation (and using d = 2r) this gives V = (64/81) d² h = (256/81) r² h. The fractional term 256/81 approximates the value of π as being 3.1605..., an error of less than one percent. Problem 47 is a table with fractional equalities which represent the ten situations where the physical volume quantity of "100 quadruple heqats" is divided by each of the multiples of ten, from ten through one hundred. The quotients are expressed in terms of Horus eye fractions, sometimes also using a much smaller unit of volume known as a "quadruple ro". The quadruple heqat and the quadruple ro are units of volume derived from the simpler heqat and ro, such that these four units of volume satisfy the following relationships: 1 quadruple heqat = 4 heqat = 1280 ro = 320 quadruple ro. Thus, 100/10 quadruple heqat = 10 quadruple heqat 100/20 quadruple heqat = 5 quadruple heqat 100/30 quadruple heqat = (3 + 1/4 + 1/16 + 1/64) quadruple heqat + (1 + 2/3) quadruple ro 100/40 quadruple heqat = (2 + 1/2) quadruple heqat 100/50 quadruple heqat = 2 quadruple heqat 100/60 quadruple heqat = (1 + 1/2 + 1/8 + 1/32) quadruple heqat + (3 + 1/3) quadruple ro 100/70 quadruple heqat = (1 + 1/4 + 1/8 + 1/32 + 1/64) quadruple heqat + (2 + 1/14 + 1/21 + 1/42) quadruple ro 100/80 quadruple heqat = (1 + 1/4) quadruple heqat 100/90 quadruple heqat = (1 + 1/16 + 1/32 + 1/64) quadruple heqat + (1/2 + 1/18) quadruple ro 100/100 quadruple heqat = 1 quadruple heqat Areas Problems 48–55 show how to compute an assortment of areas. Problem 48 is notable in that it succinctly computes the area of a circle by approximating π. Specifically, problem 48 explicitly reinforces the convention (used throughout the geometry section) that "a circle's area stands to that of its circumscribing square in the ratio 64/81." 
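The circle rule behind problems 41 and 48 is easy to check numerically. The following sketch is a modern illustration, not drawn from the papyrus literature: it applies the Egyptian rule of taking away one ninth of the diameter and squaring the remainder, and recovers the implied approximation to π.

```python
from math import pi

def rmp_circle_area(d):
    """Circle area by the Rhind papyrus rule: take away 1/9 of the
    diameter and square the remainder, i.e. ((8/9) * d) ** 2."""
    return ((8 / 9) * d) ** 2

d = 9  # problem 50 applies the rule to a round field of diameter 9 khet
print(rmp_circle_area(d))             # 64.0, i.e. 64/81 of the 9 x 9 square
print(4 * rmp_circle_area(d) / d**2)  # implied pi: 256/81 = 3.16049...
print(pi)                             # 3.14159..., an error under one percent
```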
Equivalently, the papyrus approximates π as 256/81, as was already noted above in the explanation of problem 41. Other problems show how to find the area of rectangles, triangles and trapezoids. Pyramids The final six problems are related to the slopes of pyramids. A seked problem is reported as follows: "If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?" The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity found for the seked is the cotangent of the angle between the base of the pyramid and its face. Book III – Miscellany The third part of the Rhind papyrus consists of the remainder of the 91 problems, being 61, 61B, 62–82, 82B, 83–84, and "numbers" 85–87, which are items that are not mathematical in nature. This final section contains more complicated tables of data (which frequently involve Horus eye fractions), several pefsu problems which are elementary algebraic problems concerning food preparation, and even an amusing problem (79) which is suggestive of geometric progressions, geometric series, and certain later problems and riddles in history. Problem 79 explicitly cites "seven houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats." In particular, problem 79 concerns a situation in which 7 houses each contain seven cats, each of which eats seven mice, each of which would have eaten seven ears of grain, each of which would have produced seven measures of grain. The third part of the Rhind papyrus is therefore a kind of miscellany, building on what has already been presented. Problem 61 is concerned with multiplications of fractions. Problem 61B, meanwhile, gives a general expression for computing 2/3 of 1/n, where n is odd. In modern notation the formula given is 2/(3n) = 1/(2n) + 1/(6n). The technique given in 61B is closely related to the derivation of the 2/n table. Problems 62–68 are general problems of an algebraic nature. Problems 69–78 are all pefsu problems in some form or another. They involve computations regarding the strength of bread and beer, with respect to certain raw materials used in their production. Problem 79 sums five terms in a geometric progression. Its language is strongly suggestive of the more modern riddle and nursery rhyme "As I was going to St Ives". Problems 80 and 81 compute Horus eye fractions of hinu (or heqats). The last four mathematical items, problems 82, 82B and 83–84, compute the amount of feed necessary for various animals, such as fowl and oxen. However, these problems, especially 84, are plagued by pervasive ambiguity, confusion, and simple inaccuracy. The final three items on the Rhind papyrus are designated as "numbers" 85–87, as opposed to "problems", and they are scattered widely across the papyrus's back side, or verso. They are, respectively: a small phrase which ends the document (and has a few possibilities for translation, given below), a piece of scrap paper unrelated to the body of the document, used to hold it together (yet containing words and Egyptian fractions which are by now familiar to a reader of the document), and a small historical note which is thought to have been written some time after the completion of the body of the papyrus's writing. This note is thought to describe events during the "Hyksos domination", a period of external interruption in ancient Egyptian society which is closely related to its Second Intermediate Period. 
With these non-mathematical yet historically and philologically intriguing errata, the papyrus's writing comes to an end. Unit concordance Much of the Rhind Papyrus's material is concerned with Ancient Egyptian units of measurement and especially the dimensional analysis used to convert between them. A concordance of units of measurement used in the papyrus is given in the image. Content This table summarizes the content of the Rhind Papyrus by means of a concise modern paraphrase. It is based upon the two-volume exposition of the papyrus published by Arnold Buffum Chace in 1927 and 1929. In general, the papyrus consists of four sections: a title page, the 2/n table, a small "1–9/10 table", and 91 problems, or "numbers". The latter are numbered from 1 through 87 and include four mathematical items which have been designated by moderns as problems 7B, 59B, 61B, and 82B. Numbers 85–87, meanwhile, are not mathematical items forming part of the body of the document, but instead are respectively: a small phrase ending the document, a piece of "scrap-paper" used to hold the document together (having already contained unrelated writing), and a historical note which is thought to describe a time period shortly after the completion of the body of the papyrus. These three latter items are written on disparate areas of the papyrus's verso (back side), far away from the mathematical content. Chace therefore differentiates them by styling them as numbers as opposed to problems, like the other 88 numbered items. See also List of ancient Egyptian papyri Akhmim wooden tablet Ancient Egyptian units of measurement As I was going to St. Ives Berlin Papyrus 6619 History of mathematics Lahun Mathematical Papyri Bibliography References External links British Museum webpage on the first section of the Papyrus British Museum webpage on the second section of the Papyrus Williams, Scott W. Mathematicians of the African Diaspora, containing a page on Egyptian Mathematics Papyri. 16th-century BC literature 1858 archaeological discoveries Egyptian mathematics Egyptian fractions Papyri from ancient Egypt Papyrus Mathematics manuscripts Pi Hyksos Ancient Egyptian objects in the British Museum Luxor Amenemhat III Mathematics literature
Rhind Mathematical Papyrus
[ "Mathematics" ]
2,938
[ "Pi" ]
9,210,186
https://en.wikipedia.org/wiki/Suillus%20grevillei
Suillus grevillei, commonly known as Greville's bolete, tamarack jack, or larch bolete, is a mycorrhizal mushroom with a tight, brilliantly coloured cap, shiny and wet-looking owing to its mucous slime layer. The hymenium separates easily from the flesh of the cap, and the central stalk is quite slender. The species has a ring or a tight-fitting annular zone. Etymology The specific epithet is derived from Robert Kaye Greville. Description Suillus grevillei is a mushroom with a 5–10 cm (2–4 in) cap, colored citrus yellow to burnt orange, that is at first hemispherical, then bell-shaped, and finally flattened. It has a sticky skin, often with veil remnants on the edge, and short yellow tubes (possibly staining brownish) which descend the cylindrical stalk (6–10 x 1–2 cm); the stalk is yellowish above the ring area, with streaks of reddish brown below. The flesh is yellow, staining brown. The thin flesh is firm at first but quickly becomes soft. It has an odor reminiscent of crushed Pelargonium (geranium) leaves. It grows in the soil of mixed forests, not always at the foot of the larch with which it lives in symbiosis (it can be quite some distance away). Habitat and distribution It grows only under larch trees and is widespread in North America and Europe (July–November). In Asia, it has been recorded from Taiwan. Edibility Suillus grevillei can be cooked as an edible mushroom (though with little texture or flavor) if the slimy cuticle is removed from the cap. This mucous skin layer is known to cause intestinal issues, as with several other Suillus species such as slippery jack (S. luteus) and slippery jill (S. salmonicolor); the species is often considered not worth the work. Chemistry The fungus produces grevillin, a pigment characteristic of this species. The genetic and enzymatic basis for atromentin, the precursor to various pulvinic acid-type pigments, has been characterized (an atromentin synthetase named GreA). A cosmid library (31,249 bp in total) has been made from the genome. The estimated gene density based on the cosmid library is 1 gene per 3,900 bp of genomic DNA. The genome has a GC content of 49.8%. See also List of North American boletes Larch bolete, other species of fungi associated with larch References External links Baura G, Szaro TM, Bruns TD. 1992. Gastrosuillus laricinius is a recent derivative of Suillus grevillei: molecular evidence. Mycologia 84(4): 592–597. grevillei Fungi of Asia Fungi of Europe Edible fungi Fungi described in 1945 Fungus species
Suillus grevillei
[ "Biology" ]
865
[ "Fungi", "Fungus species" ]
9,210,345
https://en.wikipedia.org/wiki/Gaussian%20adaptation
Gaussian adaptation (GA), also called normal or natural adaptation (NA), is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems. In short, GA is a stochastic adaptive process where a number of samples of an n-dimensional vector x [xT = (x1, x2, ..., xn)] are taken from a multivariate Gaussian distribution, N(m, M), having mean m and moment matrix M. The samples are tested for fail or pass. The first- and second-order moments of the Gaussian restricted to the pass samples are m* and M*. The outcome of x as a pass sample is determined by a function s(x), 0 < s(x) < q ≤ 1, such that s(x) is the probability that x will be selected as a pass sample. The average probability of finding pass samples (the yield) is P(m) = ∫ s(x) N(x − m) dx. Then the theorem of GA states: For any s(x) and for any value of P < q, there always exists a Gaussian p.d.f. (probability density function) that is adapted for maximum dispersion. The necessary conditions for a local optimum are m = m* and M proportional to M*. The dual problem is also solved: P is maximized while keeping the dispersion constant (Kjellström, 1991). Proofs of the theorem may be found in the papers by Kjellström, 1970, and Kjellström & Taxén, 1981. Since dispersion is defined as the exponential of entropy/disorder/average information, it immediately follows that the theorem is valid also for those concepts. Altogether, this means that Gaussian adaptation may carry out a simultaneous maximisation of yield and average information (without any need for the yield or the average information to be defined as criterion functions). The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be used by cyclic repetition of random variation and selection (like the natural evolution). In every cycle a sufficiently large number of Gaussian-distributed points are sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian, m, is then moved to the centre of gravity of the approved (selected) points, m*. Thus, the process converges to a state of equilibrium fulfilling the theorem. A solution is always approximate because the centre of gravity is always determined for a limited number of points. It was used for the first time in 1969 as a pure optimization algorithm making the regions of acceptability smaller and smaller (in analogy to simulated annealing, Kirkpatrick 1983). Since 1970 it has been used for both ordinary optimization and yield maximization. Natural evolution and Gaussian adaptation It has also been compared to the natural evolution of populations of living organisms. In this case s(x) is the probability that the individual having an array x of phenotypes will survive by giving offspring to the next generation; a definition of individual fitness given by Hartl 1981. The yield, P, is replaced by the mean fitness determined as a mean over the set of individuals in a large population. Phenotypes are often Gaussian distributed in a large population, and a necessary condition for the natural evolution to be able to fulfill the theorem of Gaussian adaptation, with respect to all Gaussian quantitative characters, is that it may push the centre of gravity of the Gaussian to the centre of gravity of the selected individuals. This may be accomplished by the Hardy–Weinberg law. 
This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability independent of the structure (Kjellström, 1996). In this case the rules of genetic variation such as crossover, inversion, transposition et cetera may be seen as random number generators for the phenotypes. In this sense Gaussian adaptation may thus be regarded as a genetic algorithm. How to climb a mountain Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape are known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure. It is obtained by letting the bell curve slide along the x-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain, and the mutation rate is too small. If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Then the climbing will take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides, depending on the landscape, the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may as well look like a punctuated equilibrium as suggested by Gould (see Ridley). Computer simulation of Gaussian adaptation Thus far the theory only considers mean values of continuous distributions corresponding to an infinite number of individuals. In reality, however, the number of individuals is always limited, which gives rise to an uncertainty in the estimation of m and M (the moment matrix of the Gaussian). And this may also affect the efficiency of the process. Unfortunately very little is known about this, at least theoretically. The implementation of normal adaptation on a computer is a fairly simple task. The adaptation of m may be done by one sample (individual) at a time, for example m(i + 1) = (1 – a) m(i) + ax, where x is a pass sample and a < 1 is a suitable constant, chosen so that the inverse of a represents the number of individuals in the population. M may in principle be updated after every step y leading to a feasible point x = m + y according to: M(i + 1) = (1 – 2b) M(i) + 2byyT, where yT is the transpose of y and b << 1 is another suitable constant. In order to guarantee a suitable increase of average information, y should be normally distributed with moment matrix μ²M, where the scalar μ > 1 is used to increase average information (information entropy, disorder, diversity) at a suitable rate. But M will never be used in the calculations. Instead we use the matrix W, defined by WWT = M. Thus, we have y = Wg, where g is normally distributed with the moment matrix μ²U, and U is the unit matrix. 
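As a minimal sketch of the sampling and centring step just described: the region of acceptability s(x) below (a disc) and all constants are illustrative assumptions, and W is held fixed for simplicity; the adaptation of W itself uses the update formulas given next.

```python
import numpy as np

rng = np.random.default_rng(0)
CENTRE = np.array([3.0, 1.0])  # assumed centre of the region of acceptability

def s(x):
    """Illustrative region of acceptability: pass iff x lies in a disc of radius 2."""
    return float(np.sum((x - CENTRE) ** 2)) < 4.0

a = 0.05           # 1/a plays the role of the population size
m = np.zeros(2)    # centre of gravity of the Gaussian
W = np.eye(2)      # M = W W^T is the moment matrix (held fixed here)

for _ in range(10_000):
    g = rng.standard_normal(2)      # g ~ N(0, U)
    x = m + W @ g                   # sample from N(m, M)
    if s(x):                        # keep only pass samples
        m = (1 - a) * m + a * x     # m(i+1) = (1 - a) m(i) + a x

print(m)  # m has moved toward the centre of gravity of the pass samples
```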
W and WT may be updated by the formulas W = (1 – b)W + bygT and WT = (1 – b)WT + bgyT, because multiplication gives M = (1 – 2b)M + 2byyT, where terms including b² have been neglected. Thus, M will be indirectly adapted with good approximation. In practice it will suffice to update W only: W(i + 1) = (1 – b)W(i) + bygT. This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999). The figure below illustrates the effect of increased average information in a Gaussian p.d.f. used to climb a mountain crest (the two lines represent contour lines). Both the red and the green cluster have equal mean fitness, about 65%, but the green cluster has a much higher average information, making the green process much more efficient. The effect of this adaptation is not very salient in a 2-dimensional case, but in a high-dimensional case the efficiency of the search process may be increased by many orders of magnitude. The evolution in the brain In the brain the evolution of DNA messages is supposed to be replaced by an evolution of signal patterns, and the phenotypic landscape is replaced by a mental landscape, the complexity of which will hardly be second to the former. The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to a better well-being or performance. For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music. In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values. A nerve cell kernel may add signal values, a synapse may multiply by a constant, and an axon may delay values. This is a basis of the theory of digital filters and neural networks consisting of components that may add, multiply and delay signal values, and also of many brain models (Levine 1991). In the figure below the brain stem is supposed to deliver Gaussian-distributed signal patterns. This may be possible since certain neurons fire at random (Kandel et al.). The stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and according to the central limit theorem the sum of signals from many neurons may be Gaussian distributed. The triangular boxes represent synapses and the boxes with the + sign are cell kernels. In the cortex signals are supposed to be tested for feasibility. When a signal is accepted, the contact areas in the synapses are updated according to the formulas below, in agreement with the Hebbian theory. The figure shows a 2-dimensional computer simulation of Gaussian adaptation according to the last formula in the preceding section. m and W are updated according to: m1 = 0.9 m1 + 0.1 x1; m2 = 0.9 m2 + 0.1 x2; w11 = 0.9 w11 + 0.1 y1g1; w12 = 0.9 w12 + 0.1 y1g2; w21 = 0.9 w21 + 0.1 y2g1; w22 = 0.9 w22 + 0.1 y2g2; As can be seen, this is very much like a small brain ruled by the theory of Hebbian learning (Kjellström, 1996, 1999 and 2002). Gaussian adaptation and free will Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will, due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution. Such a random process gives us much freedom of choice, but hardly any will. 
An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal-seeking. That is, it prefers higher peaks in the landscape to lower ones, and better alternatives to worse. In this way an illusory will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999. A theorem of efficiency for random search The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon (see information content). When an event occurs with probability P, the information −log(P) may be achieved. For instance, if the mean fitness is P, the information gained for each individual selected for survival will be −log(P) on average, and the work/time needed to get the information is proportional to 1/P. Thus, if efficiency, E, is defined as information divided by the work/time needed to get it, we have: E = −P log(P). This function attains its maximum when P = 1/e = 0.37. The same result has been obtained by Gaines with a different method. E = 0 if P = 0, for a process with infinite mutation rate, and if P = 1, for a process with mutation rate = 0 (provided that the process is alive). This measure of efficiency is valid for a large class of random search processes provided that certain conditions are at hand. 1. The search should be statistically independent and equally efficient in different parameter directions. This condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency. 2. All individuals have equal cost and the derivative at P = 1 is < 0. Then, the following theorem may be proved: All measures of efficiency that satisfy the conditions above are asymptotically proportional to −P log(P/q) when the number of dimensions increases, and are maximized by P = q exp(−1) (Kjellström, 1996 and 1999). The figure above shows a possible efficiency function for a random search process such as Gaussian adaptation. To the left the process is most chaotic when P = 0, while there is perfect order to the right where P = 1. In an example by Rechenberg, 1971, 1973, a random walk is pushed through a corridor maximizing the parameter x1. In this case the region of acceptability is defined as a (n − 1)-dimensional interval in the parameters x2, x3, ..., xn, but an x1-value below the last accepted will never be accepted. Since P can never exceed 0.5 in this case, the maximum speed towards higher x1-values is reached for P = 0.5/e = 0.18, in agreement with the findings of Rechenberg. A point of view that may also be of interest in this context is that no definition of information (other than that sampled points inside some region of acceptability give information about the extension of the region) is needed for the proof of the theorem. Then, because the formula may be interpreted as information divided by the work needed to get it, this is also an indication that −log(P) is a good candidate for a measure of information. The Stauffer and Grimson algorithm Gaussian adaptation has also been used for other purposes, for instance shadow removal by the "Stauffer–Grimson algorithm", which is equivalent to Gaussian adaptation as used in the section "Computer simulation of Gaussian adaptation" above. In both cases the maximum likelihood method is used for estimation of mean values by adaptation at one sample at a time. 
But there are differences. In the Stauffer–Grimson case the information is not used for the control of a random number generator for centering, maximization of mean fitness, average information or manufacturing yield. The adaptation of the moment matrix also differs greatly compared to "the evolution in the brain" above. See also Entropy in thermodynamics and information theory Fisher's fundamental theorem of natural selection Free will Genetic algorithm Hebbian learning Information content Simulated annealing Stochastic optimization Covariance matrix adaptation evolution strategy (CMA-ES) Unit of selection References Bergström, R. M. An Entropy Model of the Developing Brain. Developmental Psychobiology, 2(3): 139–152, 1969. Brooks, D. R. & Wiley, E. O. Evolution as Entropy, Towards a unified theory of Biology. The University of Chicago Press, 1986. Brooks, D. R. Evolution in the Information Age: Rediscovering the Nature of the Organism. Semiosis, Evolution, Energy, Development, Volume 1, Number 1, March 2001 Gaines, Brian R. Knowledge Management in Societies of Intelligent Adaptive Agents. Journal of intelligent Information systems 9, 277–298 (1997). Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981. Hamilton, WD. 1963. The evolution of altruistic behavior. American Naturalist 97:354–356 Kandel, E. R., Schwartz, J. H., Jessel, T. M. Essentials of Neural Science and Behavior. Prentice Hall International, London, 1995. S. Kirkpatrick and C. D. Gelatt and M. P. Vecchi, Optimization by Simulated Annealing, Science, Vol 220, Number 4598, pages 671–680, 1983. Kjellström, G. Network Optimization by Random Variation of component values. Ericsson Technics, vol. 25, no. 3, pp. 133–151, 1969. Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157–175, 1970. Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981. Kjellström, G., Taxén, L. and Lindberg, P. O. Discrete Optimization of Digital Filters Using Gaussian Adaptation and Quadratic Function Minimization. IEEE Trans. on Circ. and Syst., vol. CAS-34, no 10, October 1987. Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, December 1991. Kjellström, G. & Taxén, L. Gaussian Adaptation, an evolution-based efficient global optimizer; Computational and Applied Mathematics, In, C. Brezinski & U. Kulish (Editors), Elsevier Science Publishers B. V., pp 267–276, 1992. Kjellström, G. Evolution as a statistical optimization algorithm. Evolutionary Theory 11:105–117 (January, 1996). Kjellström, G. The evolution in the brain. Applied Mathematics and Computation, 98(2–3):293–300, February, 1999. Kjellström, G. Evolution in a nutshell and some consequences concerning valuations. EVOLVE, Stockholm, 2002. Levine, D. S. Introduction to Neural & Cognitive Modeling. Laurence Erlbaum Associates, Inc., Publishers, 1991. MacLean, P. D. A Triune Concept of the Brain and Behavior. Toronto, Univ. Toronto Press, 1973. Maynard Smith, J. 1964. Group Selection and Kin Selection, Nature 201:1145–1147. Maynard Smith, J. Evolutionary Genetics. Oxford University Press, 1998. Mayr, E. What Evolution is. Basic Books, New York, 2001. Müller, Christian L. and Sbalzarini, Ivo F. Gaussian Adaptation revisited - an entropic view on Covariance Matrix Adaptation. 
Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, CH-8092 Zurich, Switzerland. Pinel, J. F. and Singhal, K. Statistical Design Centering and Tolerancing Using Parametric Sampling. IEEE Transactions on Circuits and Systems, Vol. CAS-28, No. 7, July 1981. Rechenberg, I. (1971): Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973). Ridley, M. Evolution. Blackwell Science, 1996. Stauffer, C. & Grimson, W.E.L. Learning Patterns of Activity Using Real-Time Tracking, IEEE Trans. on PAMI, 22(8), 2000. Stehr, G. On the Performance Space Exploration of Analog Integrated Circuits. Technische Universität München, Dissertation 2005. Taxén, L. A Framework for the Coordination of Complex Systems' Development. Institute of Technology, Linköping University, Dissertation, 2003. Zohar, D. The quantum self: a revolutionary view of human nature and consciousness rooted in the new physics. London, Bloomsbury, 1990. Evolutionary algorithms Creationism Free will
Gaussian adaptation
[ "Biology" ]
4,264
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
9,210,858
https://en.wikipedia.org/wiki/Marine%20architecture
Marine architecture is the design of architectural and engineering structures which support coastal design and near-shore, off-shore or deep-water planning for projects such as shipyards, ship transport, coastal management and other marine or hydroscape activities. These structures include harbors, lighthouses, marinas, oil platforms, offshore drilling rigs, accommodation platforms, offshore wind farms, floating engineering structures, and building architectures or civil seascape developments. Floating structures in deep water may use suction caissons for anchoring. See also the cofferdam, a temporary water-excluding structure built in place, sometimes surrounding a working area as does an open caisson. References External links Water and the environment Offshore engineering
Marine architecture
[ "Engineering" ]
143
[ "Construction", "Marine architecture", "Architecture", "Offshore engineering" ]
13,188,978
https://en.wikipedia.org/wiki/Doe%20Triple-D
The Doe Triple-D or Doe Dual Drive is a make of tractor produced by Ernest Doe & Sons in the 1950s and 1960s in Ulting, Essex. Its two engines and 90-degree articulation made it one of the most unorthodox tractors ever built. Development During the 1950s farmers in the United Kingdom in need of high-power tractors had few options. Essex farmer George Pryor developed an ingenious solution to the problem by creating his own tractor. He did this by purchasing two Fordson tractors, removing the front wheels and axles and linking the two by means of a turntable which provided the steering action, powered by hydraulic rams. This left him with a double-engined four-wheel-drive tractor capable of producing more power and outperforming any of the conventional tractors on the UK market at the time. Commercial production Local Fordson dealers Ernest Doe & Sons agreed to build an improved version; the first one was completed in 1958 and called the Doe Dual Power, a name later changed to Doe Dual Drive and abbreviated to Triple-D. The first Doe Triple-D used two Fordson Power Major units producing , the later Triple-D 130 used two Ford 5000 tractors, increasing the power output to over , and the Triple-D 150 was based on Ford Force 5000 tractors producing . The vast majority of Triple-Ds were sold in the UK, but a number were exported to the United States and elsewhere. Disadvantages The main disadvantage of the Triple-D was the lack of suitable implements for such a powerful tractor; this meant that Ernest Doe & Sons also had to develop and build a range of implements to sell with the tractors. Other disadvantages stemmed from the use of two engines: controlling the tractor was more difficult because of the need for two gearboxes, there were two engines and gearboxes to maintain and repair, and the probability of breakdowns was increased. End of production By the late 1960s several companies had developed single-engined tractors capable of producing over ; this competition, and the need for Doe to develop and test an approved safety roll-over cab, put the Doe out of production after over 300 had been built. Legacy The Triple-D often makes appearances at agricultural fairs, such as the Epworth Festival of the Plough in Epworth, Lincolnshire, and the Pioneer Power Days show in Le Sueur, Minnesota, where it is always a crowd favourite, popular due to its unorthodox build. Triple-Ds are worth a great deal due to their relative rarity; even unrestored Does can command extremely high prices at auction. The Triple-D is also available as a 1:16 scale model produced by Universal Hobbies. References External links Ernest Doe homepage Tractorshed profile YouTube footage of Does ploughing at Epworth Tractors
Doe Triple-D
[ "Engineering" ]
549
[ "Engineering vehicles", "Tractors" ]
13,190,594
https://en.wikipedia.org/wiki/Harry%20H.%20Goode
Harry H. Goode (June 30, 1909 – October 30, 1960) was an American computer engineer and systems engineer and professor at the University of Michigan. He is known as co-author of the book Systems Engineering from 1957, which is one of the earliest significant books directly related to systems engineering. Biography Harry H. Goode (né Goodstein) was born in New York City in 1909. He received his B.A. in history from New York University in 1931, when the country was in the depths of the Depression. While studying chemical engineering at Cooper Union, Goode earned his living playing the clarinet and saxophone in New York jazz bands. He received his second bachelor's degree in 1940. During the war he attended Columbia University and received a master's degree in mathematics in 1945. In 1941 Goode started working as a statistician for the New York City Department of Health. From 1946 to 1949 Goode worked for the U.S. Navy in Sands Point, Long Island, where he became head of the Special Projects Branch. Here he contributed to flight control simulation training, aircraft instrumentation, antisubmarine warfare, weapons systems design, and computer research, and initiated computer-based simulation projects. In the 1950s Goode became a professor at the University of Michigan. Until his death in 1960 he was president of the National Joint Computer Committee (NJCC). He was the principal architect of what was to become AFIPS (the American Federation of Information Processing Societies). Had he lived, Goode undoubtedly would have become the first president of AFIPS, for he was the prime mover in organizing the three American constituent societies that were members of NJCC into one federation. Work Harry Goode worked on the research frontiers of management science, operations research and systems engineering in connection with organisms as systems, the reactions of groups, models of human preference, the experimental exploration of human observation, detection, and decision making, and the analysis and synthesis of speech. Harry H. Goode Memorial Award The IEEE Computer Society annually presents the Harry H. Goode Memorial Award for achievements in the information processing field which are considered either a single contribution of theory, design, or technique of outstanding significance, or the accumulation of important contributions on theory or practice over an extended time period, the total of which represents an outstanding contribution. Publications Goode wrote several books and articles. Books: 1944 Mathematical Analysis of Ordinary and Deviated Pursuit Curves, with Leonard Gillman, Special Devices Section, Training Division, Bureau of Aeronautics, Navy Department, 264 pp. 1944. 1957 Systems Engineering: An Introduction to the Design of Large-Scale Systems, with Robert Engel Machol, McGraw-Hill, 551 pp. Articles, a selection: 1945 "Service Records and Their Administrative Uses", with Abraham H. Kantrow, Leona Baumgartner, in: Am J Public Health Nations Health. 1945 October; 35(10): 1063–1069. 1956 "The Use of a Digital Computer to Model a Signalized Intersection", with C.H. Pollmar and J.B. Wright, in: Proceedings of Highway Research Board, vol. 35, 1956, pp. 548–557. 1957 "Survey of Operations Research and Systems Engineering", Paper presented at Conference of Engineering Deans on Science and Technology, Purdue University, September 1957. 1958 "Greenhouses of Science for Management", in: Management Science, Vol. 4, No. 4 (Jul. 1958), pp. 365–381. 
1958 "Simulation: Simulation and display of four inter-related vehicular traffic intersections", with C. True Wendell, Paper presented at the 13th national meeting of the Association for Computing Machinery ACM '58. About Harry H. Goode: Isaac L. Auerbach, "Harry H. Goode, June 30, 1909-October 30, 1960", IEEE Annals of the History of Computing, vol. 08, no. 3, pp. 257–260, Jul-Sept 1986. Robert E. Machol, Harry H. Goode, System Engineer, in: Science, Volume 133, Issue 3456, pp. 864–866, 03/1961. References External links Harry H. Goode Memorial Award, IEEE Computer Society. The McGraw-Hill Series in Control Systems Engineering overview. by Kent H Lundberg, January 2004. 1909 births 1960 deaths American engineering writers Systems engineers Columbia Graduate School of Arts and Sciences alumni University of Michigan faculty New York University alumni Cooper Union alumni 20th-century American writers
Harry H. Goode
[ "Engineering" ]
922
[]
13,191,396
https://en.wikipedia.org/wiki/Canadian%20Association%20of%20Rocketry
The Canadian Association of Rocketry - L'Association Canadienne De Fuséologie (CAR-ACF) is a Canadian federal not-for-profit, self-supporting association and governing body representing amateur/model rocketeers across Canada. The history of amateur/model rocketry in Canada goes back to 1965, with its approval by the Canadian federal government with the assistance of the Canadian Aeronautics and Space Institute (CASI), the Royal Canadian Flying Clubs (RCFCA) and the new Canadian Association of Rocketry (CAR), and then with the help of the Youth Aeronautic and Aerospace of Canada (YAAC). CAR-ACF was incorporated in 2009 from the then-existing Canadian Association of Rocketry (CAR). Among its many duties, CAR-ACF: promotes the development of amateur aerospace as a recognized sport and worthwhile amateur activity; is the official national body for amateur aerospace in Canada; is a chartering organization for model rocket clubs across the country; offers its chartered clubs contest sanction and assistance in getting and keeping flying sites; is the voice of its membership, providing liaison and certification programs with Transport Canada, Natural Resources Canada (Explosives Regulatory Division), and other government agencies; also works with local governments, zoning boards and parks departments to promote the interests of local chartered clubs; is the principal stakeholder representing non-military, non-commercial aerospace on the Transport Canada Canadian Aviation Regulatory Advisory Council (CARAC), which is responsible for maintaining and developing the Canadian Aviation Regulations (CARs); and is a rocketry association whose rules and regulations are formally acceptable to the Minister of Transport. External links Canadian Association of Rocketry - L'Association Canadienne De Fuséologie Clubs and societies in Canada Model rocketry
Canadian Association of Rocketry
[ "Astronomy" ]
349
[ "Rocketry stubs", "Astronomy stubs" ]
13,191,721
https://en.wikipedia.org/wiki/Vulnerability%20and%20susceptibility%20in%20conservation%20biology
In conservation biology, susceptibility is the extent to which an organism or ecological community would suffer from a threatening process or factor if exposed, without regard to the likelihood of exposure. It should not be confused with vulnerability, which takes into account both the effect of exposure and the likelihood of exposure. For example, a plant species may be highly susceptible to a particular plant disease, meaning that exposed populations invariably become extinct or decline heavily. However, that species may not be vulnerable if it occurs only in areas where exposure to the disease is unlikely, or if it occurs over such a wide distribution that exposure of all populations is unlikely. Conversely, a plant species may show low susceptibility to a disease, yet may be considered vulnerable if the disease is present in every population. References Conservation biology
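The distinction can be made concrete with a toy calculation. The sketch below is only an illustration, not an established conservation metric: the populations, susceptibility scores and exposure probabilities are all made up, and vulnerability is crudely modelled as requiring both a high effect if exposed and a high likelihood of exposure.

```python
# Toy illustration: susceptibility measures the effect *if* exposed;
# vulnerability also weighs the likelihood of exposure.
populations = {
    # name: (susceptibility 0-1, probability of exposure 0-1) -- invented values
    "valley":  (0.9, 0.05),  # highly susceptible, but exposure unlikely
    "coast":   (0.9, 0.80),  # highly susceptible and likely to be exposed
    "plateau": (0.2, 0.90),  # low susceptibility despite near-certain exposure
}

for name, (susceptibility, p_exposure) in populations.items():
    # One crude way to combine the two: expected impact if the threat spreads.
    vulnerability = susceptibility * p_exposure
    print(f"{name:8s} susceptibility={susceptibility:.2f} "
          f"exposure={p_exposure:.2f} vulnerability={vulnerability:.2f}")
```

Only the coastal population comes out as both susceptible and vulnerable, which mirrors the plant-disease example given in the text.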
Vulnerability and susceptibility in conservation biology
[ "Biology" ]
161
[ "Conservation biology" ]
13,192,026
https://en.wikipedia.org/wiki/Serviceability%20%28computer%29
In software engineering and hardware engineering, serviceability (also known as supportability) is one of the -ilities or aspects (from IBM's RAS(U) (Reliability, Availability, Serviceability, and Usability)). It refers to the ability of technical support personnel to install, configure, and monitor computer products, identify exceptions or faults, debug or isolate faults to root cause analysis, and provide hardware or software maintenance in pursuit of solving a problem and restoring the product into service. Incorporating serviceability facilitating features typically results in more efficient product maintenance and reduces operational costs and maintains business continuity. Examples of features that facilitate serviceability include: Help desk notification of exceptional events (e.g., by electronic mail or by sending text to a pager) Network monitoring Documentation Event logging / Tracing (software) Logging of program state, such as Execution path and/or local and global variables Procedure entry and exit, optionally with incoming and return variable values (see: subroutine) Exception block entry, optionally with local state (see: exception handling) Software upgrade Graceful degradation, where the product is designed to allow recovery from exceptional events without intervention by technical support staff Hardware replacement or upgrade planning, where the product is designed to allow efficient hardware upgrades with minimal computer system downtime (e.g., hotswap components.) Serviceability engineering may also incorporate some routine system maintenance related features (see: Operations, Administration and Maintenance (OA&M.)) A service tool is defined as a facility or feature, closely tied to a product, that provides capabilities and data so as to service (analyze, monitor, debug, repair, etc.) that product. Service tools can provide broad ranges of capabilities. Regarding diagnosis, a proposed taxonomy of service tools is as follows: Level 1: Service tool that indicates if a product is functional or not functional. Describing computer servers, the states are often referred to as ‘up’ or ‘down’. This is a binary value. Level 2: Service tool that provides some detailed diagnostic data. Often the diagnostic data is referred to as a problem ‘signature’, a representation of key values such as system environment, running program name, etc. This level of data is used to compare one problem’s signature to another problem’s signature: the ability to match the new problem to an old one allows one to use the solution already created for the prior problem. The ability to screen problems is valuable when a problem does match a pre-existing problem, but it is not sufficient to debug a new problem. Level 3: Provides detailed diagnostic data sufficient to debug a new and unique problem. As a rough rule of thumb for these taxonomies, there are multiple ‘orders of magnitude’ of diagnostic data in level 1 vs. level 2 vs. level 3 service tools. Additional characteristics and capabilities that have been observed in service tools: Time of data collection: some tools can collect data immediately, as soon as problem occurs, others are delayed in collecting data. Pre-analyzed, or not-yet-analyzed data: some tools collect ‘external’ data, while others collect ‘internal’ data. This is seen when comparing system messages (natural language-like statements in the user’s native language) vs. ‘binary’ storage dumps. Partial or full set of system state data: some tools collect a complete system state vs. 
a partial system state (user or partial ‘binary’ storage dump vs. complete system dump). Raw or analyzed data: some tools display raw data, while others analyze it (examples storage dump formatters that format data, vs. ‘intelligent’ data formatters (“ANALYZE” is a common verb) that combine product knowledge with analysis of state variables to indicate the ‘meaning’ of the data. Programmable tools vs. ‘fixed function’ tools. Some tools can be altered to get varying amounts of data, at varying times. Some tools have only a fixed function. Automatic or manual? Some tools are built into a product, to automatically collect data when a fault or failure occurs. Other tools have to be specifically invoked to start the data collection process. Repair or non-repair? Some tools collect data as a fore-runner to an automatic repair process (self-healing/fault tolerant). These tools have the challenge of quickly obtaining unaltered data before the desired repair process starts. See also FURPS Maintainability External links Excellent example of Serviceability Feature Requirements: Sun Gathering Debug Data (Sun GDD). This is a set of tools developed by the Sun's support guys aimed to provide the right approach to problem resolution by leveraging proactive actions and best practices to gather the debug data needed for further analysis. "Carrier Grade Linux Serviceability Requirements Definition Version 4," Copyright (c) 2005-2007 by Open Source Development Labs, Inc. Beaverton, OR 97005 USA Design for X
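Several of the serviceability features listed earlier in this article (event logging, and tracing of procedure entry and exit with argument and return values) can be illustrated with a short sketch. The Python snippet below is only an illustration of the idea, not code from any particular product; the function names and log format are invented for the example.

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("serviceability-demo")

def traced(func):
    """Log procedure entry/exit with arguments, return value and exceptions."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("ENTER %s args=%r kwargs=%r", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
        except Exception:
            # Exception block entry, logged with a stack trace for later diagnosis.
            log.exception("FAULT in %s", func.__name__)
            raise
        log.debug("EXIT  %s result=%r", func.__name__, result)
        return result
    return wrapper

@traced
def divide(a, b):
    return a / b

divide(10, 2)   # normal path: entry and exit are traced in the log
```

A level-2 or level-3 service tool in the taxonomy above would collect and correlate exactly this kind of data, either continuously or when a fault occurs.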
Serviceability (computer)
[ "Engineering" ]
1,008
[ "Design", "Design for X" ]
13,192,028
https://en.wikipedia.org/wiki/Serviceability%20%28structure%29
In civil engineering and structural engineering, serviceability refers to the conditions under which a building is still considered useful. Should these limit states be exceeded, a structure that may still be structurally sound would nevertheless be considered unfit. It refers to conditions other than the building strength that render the building unusable. Serviceability limit state design of structures includes factors such as durability, overall stability, fire resistance, deflection, cracking and excessive vibration. For example, a skyscraper could sway severely and cause the occupants to be sick (much like sea-sickness), yet be perfectly sound structurally. This building is in no danger of collapsing, yet since it is obviously no longer fit for human occupation, it is considered to have exceeded its serviceability limit state. Serviceability limit A serviceability limit defines the performance criterion for serviceability and corresponds to a condition beyond which specified service requirements resulting from the planned use are no longer met. In limit state design, a structure fails its serviceability if the criteria of the serviceability limit state are not met during the specified service life and with the required reliability. Hence, the serviceability limit state identifies a civil engineering structure which fails to meet technical requirements for use even though it may be strong enough to remain standing. A structure that fails serviceability has exceeded a defined limit for one of the following properties: Excessive deflection Vibration Local deformation Serviceability limits are not always defined by a building code developer, government or regulatory agency. Building codes tend to be restricted to ultimate limits related to public and occupant safety. Global geopolitical variations are likely to exist. Structural engineering Building engineering
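As a concrete illustration of a serviceability (rather than strength) check, the sketch below compares a computed beam deflection against a span-over-250 limit. The limit value, the beam dimensions and the loading are all illustrative assumptions; actual limits depend on the applicable design code and on the intended use of the structure.

```python
# Illustrative serviceability check: a simply supported beam under a uniform load.
# Midspan deflection from standard beam theory: delta = 5 w L^4 / (384 E I)
L = 6.0           # span in metres (assumed)
w = 5_000.0       # uniformly distributed load in N/m (assumed)
E = 200e9         # Young's modulus for steel in Pa
I = 8.0e-5        # second moment of area in m^4 (assumed section)

deflection = 5 * w * L**4 / (384 * E * I)
limit = L / 250   # an often-quoted serviceability limit; codes differ

print(f"deflection = {deflection*1000:.1f} mm, limit = {limit*1000:.1f} mm")
print("serviceability OK" if deflection <= limit else "serviceability limit exceeded")
```

Note that the check says nothing about whether the beam is strong enough; a member can pass an ultimate (strength) check and still fail a deflection or vibration check such as this one.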
Serviceability (structure)
[ "Engineering" ]
331
[ "Structural engineering", "Building engineering", "Construction", "Civil engineering", "Architecture" ]
13,192,647
https://en.wikipedia.org/wiki/List%20of%20freshwater%20ecoregions%20in%20Africa%20and%20Madagascar
This is a list of freshwater ecoregions in Africa and Madagascar as identified by the World Wildlife Fund (WWF). The WWF categorizes the Earth's land surface into ecoregions, which are defined as "large area[s] of land or water containing a distinct assemblage of natural communities and species." These ecoregions are further grouped into bioregions, "a complex of ecoregions that share a similar biogeographic history, and thus often have strong affinities at higher taxonomic levels (e.g. genera, families)." The Earth's land surface is divided into eight biogeographic realms. While most of Africa falls within the Afrotropical realm, the freshwater ecoregions of North Africa share similarities with the Palearctic realm. Each ecoregion is also classified into major habitat types or biomes. Many view this classification as decisive, and some propose using these boundaries as stable borders for bioregional democracy initiatives. by Bioregion North African Canary Islands Horn (Djibouti, Ethiopia, Somalia) Permanent Maghreb (Algeria, Mauritania, Morocco, Tunisia, Western Sahara) Temporary Maghreb (Algeria, Egypt, Libya, Mauritania, Morocco, Tunisia, Western Sahara) Red Sea Coastal (Egypt, Eritrea, Ethiopia, Sudan) Socotra (Yemen) Nilo-Sudan Ashanti (Côte d'Ivoire, Ghana) Bight Coastal (Benin, Ghana, Nigeria, Togo) Bijagos (Guinea Bissau) Cape Verde Dry Sahel (Algeria, Chad, Egypt, Libya, Mali, Mauritania, Niger, Sudan, Western Sahara) Eburneo (Burkina Faso, Côte d'Ivoire, Mali) Ethiopian Highlands (Ethiopia) Lake Chad Catchment (Cameroon, Central African Republic, Chad, Nigeria, Sudan) Yaéré (seasonal wetland) Niger Upper Niger (Côte d'Ivoire, Guinea, Mali) Inner Niger Delta (Mali) Lower Niger-Benue (Benin, Burkina Faso, Mali, Niger, Nigeria) Niger Delta (Nigeria) Nile Lake Tana (Ethiopia) Upper Nile (Sudan, Uganda) Lower Nile (Egypt, Sudan) Nile Delta (Egypt) Northern Eastern Rift (Ethiopia) Senegal-Gambia Catchments (Gambia, Guinea, Guinea Bissau, Mali, Mauritania, Senegal) Shebele-Juba Catchments (Ethiopia, Kenya, Somalia) Lake Turkana (Ethiopia, Kenya) Volta (Burkina Faso, Côte d'Ivoire, Ghana, Togo) Upper Guinea Fouta-Djalon (Guinea) Mount Nimba (Côte d'Ivoire, Guinea, Liberia) Northern Upper Guinea (Guinea, Guinea Bissau, Liberia, Sierra Leone) Southern Upper Guinea (Côte d'Ivoire, Guinea, Liberia) West Coast Equatorial Central West Coastal Equatorial (Cameroon, Republic of the Congo, Equatorial Guinea, Gabon) Northern West Coast Equatorial (Cameroon, Equatorial Guinea, Nigeria) Southern West Coast Equatorial (Angola, Democratic Republic of the Congo, Republic of the Congo, Gabon) São Tomé, Príncipe, and Annobón (Equatorial Guinea, São Tomé and Príncipe) Western Equatorial Crater Lakes (Cameroon) Congo Albertine Highlands (Democratic Republic of the Congo) Bangweulu-Mweru (Democratic Republic of the Congo, Zambia) Cuvette Centrale (Democratic Republic of the Congo) Kasai (Angola, Democratic Republic of the Congo) Lower Congo (Angola, Democratic Republic of the Congo, Republic of the Congo) Lower Congo Rapids (Democratic Republic of the Congo, Republic of the Congo) Mai-Ndombe (Democratic Republic of the Congo) Malebo Pool (Democratic Republic of the Congo) Sangha (Cameroon, Central African Republic, Republic of the Congo) Sudanic Congo (Oubangi) (Central African Republic, Democratic Republic of the Congo, Republic of the Congo) Uele (Democratic Republic of the Congo) Upper Congo (Democratic Republic of the Congo) Upper Congo Rapids (Democratic Republic of the Congo) Upper Luluaba (Democratic 
Republic of the Congo) Thysville Caves (Democratic Republic of the Congo) Tumba (Democratic Republic of the Congo) Great Lakes Lake Malawi (Malawi, Mozambique, Tanzania) Lake Rukwa (Tanzania) Lake Tanganyika (Burundi, Democratic Republic of the Congo, Rwanda, Tanzania, Zambia) Lakes Kivu, Edward, George & Victoria (Burundi, Democratic Republic of the Congo, Kenya, Tanzania, Uganda) Eastern and Coastal Southern Eastern Rift (Kenya, Tanzania) Kenyan Coastal Rivers (Kenya) Pangani (Kenya, Tanzania) Malagarasi-Moyowosi (Tanzania) Eastern Coastal Basins (Mozambique, Tanzania) Lakes Chilwa and Chiuta (Malawi, Mozambique) Cuanza Cuanza (Angola) Zambezi Etosha (Angola, Namibia) Kalahari (Botswana, Namibia, South Africa) Karstveld Sink Holes (Namibia) Namib Coastal (Angola, Namibia) Okavango Floodplains (Angola, Botswana, Namibia) Zambezian Lowveld (Mozambique, South Africa, Eswatini, Zimbabwe) Zambezi Zambezian Headwaters (Angola, Zambia) Kafue (Zambia) Upper Zambezi Floodplains (Angola, Botswana, Namibia, Zambia) Mulanje (Malawi, Mozambique) Eastern Zimbabwe Highlands (Mozambique, Zimbabwe) Zambezian (Plateau) Highveld (Zimbabwe) Middle Zambezi Luangwa (Mozambique, Zambia, Zimbabwe) Lower Zambezi (Malawi, Mozambique) Madagascar and the Indian Ocean Islands Comoros Madagascar Eastern Lowlands (Madagascar) Madagascar Eastern Highlands (Madagascar) Madagascar Northwestern Basins (Madagascar) Madagascar Southern Basins (Madagascar) Madagascar Western Basins (Madagascar) Mascarenes (Mauritius, Réunion) Coralline Seychelles (Seychelles) Granitic Seychelles (Seychelles) Southern Temperate Amatole-Winterberg Highlands (South Africa) Cape Fold (South Africa) Drakensberg-Maloti Highlands (Lesotho, South Africa) Karoo (South Africa) Southern Kalahari (South Africa) Southern Temperate Highveld (South Africa, Eswatini) Western Orange (Botswana, South Africa) by Major Habitat type Closed basins and small lakes Lakes Chilwa and Chiuta Southern Eastern Rift Lake Tana Northern Eastern Rift Western Equatorial Crater Lakes Floodplains, swamps, and lakes Bangweulu-Mweru Inner Niger Delta Kafue Lake Chad Catchment Mai Ndombe Malagarasi-Moyowosi Okavango Floodplains Tumba Upper Luluaba Upper Nile Upper Zambezi Floodplains Yaéré Moist forest rivers Ashanti Cuvette Centrale Central West Coastal Equatorial Eburneo Kasai Lower Congo Madagascar Eastern Lowlands Malebo Pool Northern Upper Guinea Northern West Coast Equatorial Sangha Southern Upper Guinea Northern West Coastal Equatorial Sudanic Congo (Oubangi) Upper Congo Upper Niger Mediterranean systems Cape Fold Permanent Maghreb Highland and mountain systems Albertine Highlands Amatole-Winterberg Highlands Drakensberg-Maloti Highlands Eastern Zimbabwe Highlands Ethiopian Highlands Fouta-Djalon Madagascar Eastern Highlands Mount Nimba Mulanje Island rivers and lakes Bijagos Canary Islands Cape Verde Comoros Coralline Seychelles Granitic Seychelles São Tomé, Príncipe, and Annobón Mascarenes Socotra Large lakes Lake Malawi Lake Rukwa Lake Tanganyika Lake Turkana Lakes Kivu, Edward, George & Victoria Large river deltas Niger Delta Nile Delta Large river rapids Lower Congo Rapids Upper Congo Rapids Savanna-dry forest rivers Bight Coastal Cuanza Kenyan Coastal Rivers Lower Niger-Benue Lower Zambezi Madagascar Northwestern Basins Madagascar Western Basins Middle Zambezi Luangwa Pangani Senegal-Gambia Catchments Eastern Coastal Basins Southern Temperate Highveld Uele Volta Zambezian Headwaters Zambezian Lowveld Zamebzian (Plateau) Highveld Subterranean and spring systems 
Karstveld Sink Holes Thysville Caves Xeric systems Dry Sahel Etosha Horn Kalahari Karoo Lower Nile Madagascar Southern Basins Namib Coastal Red Sea Coastal Shebele-Juba Catchments Southern Kalahari Temporary Maghreb Western Orange References Thieme, Michelle L. (2005). Freshwater Ecoregions of Africa and Madagascar: A Conservation Assessment. Island Press, Washington DC. Aquatic ecology Freshwater ecoregions
List of freshwater ecoregions in Africa and Madagascar
[ "Biology" ]
1,709
[ "Aquatic ecology", "Ecosystems" ]
13,192,836
https://en.wikipedia.org/wiki/Fountaineer
Fountaineer is a portmanteau of "Fountain" and "Engineer" – a hydraulic engineer. Fountaineer describes one who designs, explores, or is passionate about fountains and their design, operation, and use. Historically, fontainiers also made water pipes from lead; restoring these pipes, from the standpoint of monument preservation, is the responsibility of today's fountain masters. Fountaineers Notable fountaineers include: André Le Nôtre: Gardens of Versailles; others. Lawrence Halprin: Keller Fountain Park; Freeway Park; Franklin Delano Roosevelt Memorial; others. Dan Euser: Dundas Square; proposed World Trade Center memorial fountains [world's largest human-made waterfall] Jeff Chapman: fountaineer-ing as subject. Jean Tinguely + Niki de Saint Phalle: Stravinsky Fountain; others. WET Design: Fountain of Nations; Salt Lake 2002 Olympic Cauldron Park; Fountains of Bellagio; others. See also Fountain (Duchamp) Fountains in Paris List of fountains in Rome Category: Fountains Category: Outdoor sculptures References Liz Clayton "Fountaineer-ing", Spacing, special issue on Water, Summer 2007, pp. 42–43 Jeff Chapman, The man behind YIP? and Infiltration zines, urban explorer, and fountaineer. Fountains Landscape and garden designers Landscape architecture Hydraulic engineers
Fountaineer
[ "Engineering" ]
276
[ "Architecture stubs", "Landscape architecture", "Architecture" ]
13,193,455
https://en.wikipedia.org/wiki/Irina%20Beletskaya
Irina Petrovna Beletskaya (; born 10 March 1933) is a Soviet and Russian professor of chemistry at Moscow State University. She specializes in organometallic chemistry and its application to problems in organic chemistry. She is best known for her studies on aromatic reaction mechanisms, as well as work on carbanion acidity and reactivity. She developed some of the first methods for carbon-carbon bond formation using palladium or nickel catalysts, and extended these reactions to work in aqueous media. She also helped to open up the chemistry of organolanthanides. Academic career Beletskaya was born in Leningrad (St. Petersburg, Russia) in 1933. She graduated from the Department of Chemistry of Lomonosov Moscow State University in 1955, where she focused her undergraduate research on organoarsenic chemistry. She obtained the Candidate of Chemistry (analogous to Ph.D.) degree in 1958. For this degree she investigated electrophilic substitution reactions. More specifically, she explored the influence of ammonia on α-bromomercurophenylacetic acid reactions. In 1963 she received her Dr.Sci. degree from the same institution. In 1970 she became a Full Professor of Chemistry at Moscow State University, where she currently serves as head of the Organoelement Chemistry Laboratory. Beletskaya was elected a corresponding member of the Academy of Sciences of the USSR in 1974. In 1992 she became a full member (academician) of the Russian Academy of Sciences. Between 1991 and 1993 she served as president of the Division of Organic Chemistry of IUPAC. Until 2001 she served on the IUPAC Committee on Chemical Weapons Destruction Technology (CWDT). She is editor-in-chief of the Russian Journal of Organic Chemistry. Beletskaya initially researched the reaction mechanisms of organic reactions, focusing on compounds with metal-carbon bonds. Her research included Grignard-like reactions and lanthanide complexes in the context of catalysts. She and Prof. O. Reutov worked on electrophilic reactions at saturated carbon. She also investigated the reaction mechanisms of organometallic compounds. She also researched carbanion reactivity, emphasizing the reactivity and structure of ion pairs. Later in her career, Beletskaya focused more on transition metal catalysts and on developing economically favorable catalysts. Currently, she serves as the head of the Laboratory of Organoelement Compounds within the Department of Chemistry at Moscow State University, where she has concentrated her research on carbon dioxide utilization and its utility in renewable energy and reactions with epoxides. Research contributions Beletskaya is known for her foundational contributions to organometallic chemistry and as one of the first prominent female chemists. Her work helped pave the way for women in Russia to participate in the scientific community. Her pioneering role in organometallic synthesis has laid an essential foundation for future organic chemists. Her work advocating for rare-earth elements in organic chemistry led to the publication of many new textbooks, influencing how organic chemistry is taught. Because organic chemistry is nominally carbon-based, many organic chemists have not seen the need to involve other elements in its study; Beletskaya’s work helps to expand the use of precious metals in organic reactions.
External links Publications Protolysis mechanism of cis- and trans-β-chlorovinylmercury chlorides when acted upon by HCl and DCl Pd-Catalyzed amination of dibromobiphenyls in the synthesis of macrocycles comprising two biphenyl and two polyamine moieties The influence of the substituents in the electrofilic bimolecular reaction New trends in the cross-coupling and other catalytic reactions Honors and awards Source: Lomonosov Prize, 1974. Mendeleev Prize, 1979. Nesmeyanov Prize, 1991. Demidov Prize, 2003. State Prize, 2004. IUPAC 2013 Distinguished Women in Chemistry or Chemical Engineering Award, 2013. References 1933 births Living people 20th-century Russian inventors 20th-century Russian women Corresponding Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Academic staff of Moscow State University Demidov Prize laureates Honoured Scientists of the Russian Federation Recipients of the Order of the Red Banner of Labour State Prize of the Russian Federation laureates Russian women chemists 20th-century Russian chemists Soviet women chemists Women inventors Organometallic chemistry
Irina Beletskaya
[ "Chemistry" ]
909
[ "Organometallic chemistry" ]
13,193,620
https://en.wikipedia.org/wiki/Bound%20graph
In graph theory, a bound graph expresses which pairs of elements of some partially ordered set have an upper bound. Rigorously, any graph G is a bound graph if there exists a partial order ≤ on the vertices of G with the property that for any vertices u and v of G, uv is an edge of G if and only if u ≠ v and there is a vertex w such that u ≤ w and v ≤ w. The bound graphs are exactly the graphs that have a clique edge cover, a family of cliques that cover all edges, with the additional property that each clique includes a vertex that does not belong to any other clique in the family. For the bound graph of a given partial order, each clique can be taken to be the subset of elements less than or equal to some given element. A graph that is covered by cliques in this way is the bound graph of a partial order on its vertices, obtained by ordering the unique vertices in each clique as a chain, above all other vertices in that clique. Bound graphs are sometimes referred to as upper bound graphs, but the analogously defined lower bound graphs comprise exactly the same class—any lower bound for ≤ is easily seen to be an upper bound for the dual partial order ≥. References Graph families Order theory
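To make the definition concrete, the short sketch below builds the bound graph of a small partial order: two distinct vertices u and v are joined exactly when some element w lies above both. The poset used (divisibility on a handful of integers) is just an example chosen for the illustration.

```python
from itertools import combinations

# Example poset: divisibility on a small set of integers.
elements = [2, 3, 4, 6, 12, 5]
leq = lambda a, b: b % a == 0   # a <= b  iff  a divides b

def bound_graph(elements, leq):
    """Edges uv such that u != v and some w satisfies u <= w and v <= w."""
    edges = set()
    for u, v in combinations(elements, 2):
        if any(leq(u, w) and leq(v, w) for w in elements):
            edges.add((u, v))
    return edges

print(sorted(bound_graph(elements, leq)))
# 2 and 3 are joined (both divide 6), but 5 and 12 are not: they have no common
# upper bound in this set, so 5 ends up isolated in the bound graph.
```

Each edge produced here lies inside the clique of elements below some maximal element (6, 12 or 5), which is exactly the clique edge cover described in the text.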
Bound graph
[ "Mathematics" ]
264
[ "Order theory" ]
13,193,766
https://en.wikipedia.org/wiki/Stanford%20Dish
The Stanford Dish, known locally as the Dish, is a radio antenna in the Stanford foothills. The dish was built in 1961 by the Stanford Research Institute (now SRI International). Construction of the antenna cost $4.5 million and was funded by the United States Air Force. In the 1960s the Dish was used to provide information on Soviet radar installations by detecting radio signals bounced off the moon. Later on, the Dish was used to communicate with satellites and spacecraft. With its bistatic-range radio capability, in which the transmitter and receiver are separate units, the powerful radar antenna was well suited to communicating with spacecraft in regions where conventional radio signals may be disrupted. At one point, the Dish transmitted signals to each of the Voyager craft that NASA dispatched into the outer reaches of the solar system. In 1982 it was used to rescue the amateur radio satellite UoSAT-1. Today The dish is still actively used today for academic and research purposes. It is owned by the U.S. Government and operated by SRI International. It is used for commanding and calibrating spacecraft and for radio astronomy measurements. Recreational route The area around the Dish offers a popular 3.5-mile recreational trail, visited by an average of 1,500–1,800 people daily. The trail around the dish is known for its rolling hills and beautiful views, which on a clear day extend to San Jose, San Francisco, and the East Bay. The Stanford Running Club hosts an annual Dish Race and fun run that forms a 3.25-mile loop around the Dish trail. While hikers, walkers, and runners are welcome, bicycles and dogs are not allowed on the trail. Opening hours roughly match daylight hours. As of June 2018, 360 cows were grazing on the grounds of the Stanford Dish. Stanford leases the land to farmers who own the cows. References External links Stanford Dish Area - official web page SRI Dish page (archive link) Radio telescopes Stanford University campus Astronomical imaging Astronomical instruments Buildings and structures in Santa Clara County, California SRI International Trails in the San Francisco Bay Area Buildings and structures completed in 1966
Stanford Dish
[ "Astronomy" ]
435
[ "Astronomical instruments" ]
13,195,175
https://en.wikipedia.org/wiki/Ceramic%20tile%20cutter
Ceramic tile cutters are used to cut ceramic tiles to a required size or shape. They come in a number of different forms, from basic manual devices to complex attachments for power tools. Hand tools Beam score cutters, cutter boards The ceramic tile cutter works by first scratching a straight line across the surface of the tile with a hardened metal wheel and then applying pressure directly below the line and on each side of the line on top. The snapping force varies widely, with some mass-produced models exerting over 750 kg. The cutting wheel and breaking jig are combined in a carriage that travels along one or two beams to keep the carriage angled correctly and the cut straight. The beam(s) may be height adjustable to handle different thicknesses of tiles. The base of the tool may have adjustable fences for angled cuts and square cuts and fence stops for multiple cuts of exactly the same size. The scoring wheel is easily replaceable. History The first tile cutter was designed to ease the work and solve the problems that masons had when cutting a mosaic of encaustic tiles (a type of decorative pigmented tile widely used in the 1950s); cutting these tiles demanded considerable force because of their hardness and thickness. Over time the tool evolved, incorporating elements that made it more accurate and productive. The first cutter had an iron point to scratch the tiles. It was later replaced by the current tungsten carbide scratching wheel. Another built-in device, introduced in 1960, was the snapping element. It allowed users to snap the tiles easily, rather than with the bench, the cutter handle, or a blow of the knee as had been done before. This was a revolution in the cutting process of the ceramic world. Tile nippers Tile nippers are similar to small pairs of pincers, with part of the width of the tool removed so that they can fit into small holes. They can be used to break off small edges of tiles that have been scored, or to nibble out small chips, for example to enlarge holes. Glass cutter A simple hand-held glass cutter is capable of scoring a smooth ceramic glaze surface, allowing the tile to be snapped. Power tools The harder grades of ceramic tiles, like fully vitrified porcelain tiles, stone tiles, and some clay tiles with textured surfaces, have to be cut with a diamond blade. The diamond blades are mounted in: Angle grinders An angle grinder can be used for short, sometimes curved cuts. It can also be used for L-shaped cuts and for making holes. It can be used dry and, more rarely, wet. Tile saws Dedicated tile saws are designed to be used with water as a coolant for the diamond blade. They are available in different sizes, with adjustable fences for angled cuts and square cuts and fence stops for multiple cuts of exactly the same size. References See also Hand tool Power tool Diamond tool Encaustic tile Porcelain tile Dimension stone Glass tiles Quarry tile Mosaic Mechanical hand tools Hand-held power tools Cutting tools Grinding machines
Ceramic tile cutter
[ "Physics" ]
610
[ "Mechanics", "Mechanical hand tools" ]
13,195,281
https://en.wikipedia.org/wiki/Pain%20model%20of%20behaviour%20management
The pain model of behaviour management, which acknowledges that physical pain and psychological pain may inhibit learning, is a model developed for teachers who work with students who have extremely challenging behaviours, social problems and a lack of social skills. The model's strategies may also be used by teachers to prevent the development of challenging behaviours in the classroom. The model was developed in Queensland, Australia early this decade by a team of behaviour support teachers led by Patrick Connor, an applied psychologist working as a guidance officer within this team. The teachers, who work within a Behaviour Management Unit work with children who can no longer attend school due to exclusions or suspension from school. The pain model is grounded in the work they have done with these students identified as high-risk; students whose behaviour has resulted in a referral to the Behaviour Management Unit – a service supplied to schools by some states in Australia. Basis Connor drew on the work of Eric Berne and Harris who researched the influences of past experiences on later behaviour, and O’Reilly (1994) and accepted the proposition of the neuro-physiological link between the brain and behaviour. Connor recognised, as far as learning was concerned, that there was little difference between the effect of physical pain and psychological pain. Both types of pain were debilitating and inhibited learning. The pain model recognises that social problems such as homelessness, skill-lessness, meaninglessness, domestic violence, abuse, addiction or chemical or organic problems such as autism spectrum disorder (ASD) or attention deficit hyperactivity disorder (ADHD) cause psychological pain. When high-risk students (students that are experiencing one or more of these problems) are fearful, stressed and experiencing psychological pain teachers need to calm the student and relieve the pain before participation within the school environment can begin. The model also allows the teacher to understand that the student’s behaviour is due to the pain they are experiencing making a less stressful classroom environment and allowing teachers to be more patient with students. Assumptions If students ‘feel good’ they will ‘act good’; if students ‘feel bad’ they will ‘act bad’. Behaviour is a type of communication and, because it is a type of communication schools may misinterpret the intended meaning of the message the student is sending through ‘bad’ behaviour. Students who act ‘bad’ may be unhappy and experiencing pain; inflicting punishment will only make this worse. Listening to students is more appropriate than punishing them. When young people are abused they cannot build primary relationships and often do not have the skills to participate in the class environment. They need to be taught these skills prior to gradual reintegration to the school. Traditional models of discipline are not effective with high-risk students. Some students ‘act bad’ in order to be punished and noticed. As a result, they are noticed for their behaviour not for who they are. Principles Acknowledge the pain Value the person Preventative strategies Develop relationships Give clear instructions Care for teachers – support provided to teachers with ‘high-risk’ students. 
Corrective strategies Relieve the pain and calm the student – teach relaxation techniques, assess and address physical needs Re-skill the student – teach personal skills, interpersonal skills, academic skills and problem solving skills Reconstruct self-esteem – use slogans; set up for success; encourage Use related strategies - agreements; self-managing log; adjunctive therapies; collaboration with parents Refer on - deeper therapy. School-wide strategies Make school a welcoming place Create a welfare centre Advantages Less stress for teachers Better outcomes for high-risk students Long-term advantages for teachers and society Actively involves parents in process Disadvantages Resource intensive Change to whole school culture needed It is difficult for some teachers to relinquish power Some teachers expect naughty students to be punished Some aspects of the model are not suitable for use as general behaviour management for the majority of classes Relies upon all aspects of the child’s life supporting the basis of this model in order for it to be successful References Edwards, C. H., & Watts, V. (2004). Classroom discipline and management: An Australian perspective. Queensland: John Wiley and Sons Australia Ltd. Behavior modification Pain School and classroom behaviour
Pain model of behaviour management
[ "Biology" ]
852
[ "Behavior modification", "Human behavior", "Behavior", "Behaviorism" ]
13,195,536
https://en.wikipedia.org/wiki/Motorola%20Single%20Board%20Computers
Motorola Single Board Computers is Motorola's production line of computer boards for embedded systems. There are three different lines: mvme68k, mvmeppc and mvme88k. The first boards in the line appeared in 1988. Motorola still makes these boards, the most recent being the MVME3100. NetBSD supports the MVME147, MVME162, MVME167, MVME172 and MVME177 boards from the mvme68k family, as well as the MVME160x line of mvmeppc boards. OpenBSD supported the MVME141, MVME147, MVME162, MVME165, MVME167, MVME172, MVME177, MVME180, MVME181, MVME187, MVME188, and MVME197 boards. Both the OpenBSD/mvme68k and OpenBSD/mvme88k ports were discontinued following the 5.5 release. References Motorola products PowerPC mainboards 68k architecture
Motorola Single Board Computers
[ "Technology" ]
231
[ "Computing stubs" ]
13,195,856
https://en.wikipedia.org/wiki/2007%20Bombardier%20Dash%208%20landing%20gear%20accidents
In September 2007, two separate accidents due to similar landing gear failures occurred within three days of each other on Bombardier Dash 8 Q400 aircraft operated by Scandinavian Airlines System (SAS). A third accident, again with a SAS aircraft, occurred in 27 October 2007, leading to the withdrawal of the type from the airline's fleet. Scandinavian Airlines System Flight 1209 Scandinavian Airlines System Flight 1209, a Bombardier Dash 8 Q400 registered as LN-RDK, took off from Copenhagen Airport, Denmark, on 9 September 2007. It was on a domestic flight to Aalborg Airport. Prior to landing, the right main landing gear failed to lock and the crew circled for an hour while trying to fix the problem then preparing for an emergency landing. After the aircraft touched down, the right landing gear collapsed, the right wing hit the ground, and a fire broke out. The fire went out before the aircraft came to rest and all passengers and crew were evacuated. Five people had minor injuries, some from parts of the propeller entering the cabin and others from the evacuation. Investigation When the handle for lowering the landing gear was activated, the indicator showed two green and one red light. The red light indicated that the right main gear was not locked in position. The landing was aborted. Attempts at lowering the gear manually were also unsuccessful. An investigation into the cause of the failure to deploy revealed that the right main gear hydraulic actuator eyebolt had broken away from the actuator. A further analysis of the actuator showed corrosion of the threads on both the inside threads of the piston rod and the outside threads of the rod end, leading to reduced mechanical strength of the actuator and eventual failure. On 19 September 2007, the prosecutor of Stockholm commenced a preliminary investigation regarding suspicion of endangering another person. Maintenance procedures Scandinavian Airlines System (SAS) was accused of cutting corners in the maintenance of its Q400 aircraft. As the Swedish Civil Aviation Administration began an investigation of the accident, it brought renewed focus on SAS maintenance procedures. (Only two weeks previously, Swedish authorities had levelled a scathing critique at the airline after an aircraft of the same model nearly crashed because its engine accelerated unexpectedly during landing.) The outcome of the investigation was that the cause was not a lack of maintenance but over-cleaning of the landing gear, with pressure washers being used that washed out the corrosion preventative coatings between the eyebolt and the actuator rod end. The airline reportedly made 2,300 flights in which safety equipment was not up to standard, although the airline denied this. AIB Denmark (Havarikommissionen) noted that the use of different alloys in the bolt and surrounding construction was most probably a contributing factor: Scandinavian Airlines System Flight 2748 A second accident occurred when a Bombardier Q400, operating as Scandinavian Airlines System Flight 2748, took off from Copenhagen Airport, Denmark, on 12 September 2007. It was headed to Palanga, Lithuania, but was diverted to Vilnius International Airport when landing gear problems were discovered before landing. Again, the right landing gear collapsed immediately after the aircraft touched down. All passengers and crew were evacuated safely. The local officials at Vilnius International Airport noted that this was the most serious accident in recent years. 
This accident was also caused by corroded threads in the piston rod and rod end. Scandinavian Airlines System Flight 2867 On 27 October 2007, a Q400 registered as LN-RDI was operating SAS Flight 2867 from Bergen, Norway to Copenhagen, Denmark with 40 passengers and 4 crew members when problems with the main landing gear were discovered. After waiting about two hours in the air to burn fuel and troubleshoot, the pilots attempted a prepared emergency landing. The pilots were forced to land the aircraft with the right main landing gear up. The right engine was shut down prior to the landing, because in the previous landings the propeller had hit the ground and shards of it ripped into the fuselage. This was not on the emergency checklist, rather it was the pilots making a safety-based decision. The aircraft stopped on the runway at 16:53 local time with the right wing touching the surface. It did not catch fire and the passengers and the crew were evacuated quickly. There were no serious injuries. The aircraft in question was one of six that had been cleared to fly just a month before, following the grounding of the entire Scandinavian Airlines Q400 fleet due to similar landing gear issues. The entire fleet was grounded again following the accident. The preliminary Danish investigation determined this latest Q400 accident was unrelated to the airline's earlier corrosion problems; in this particular case being caused by a misplaced o-ring found blocking the orifice in a hydraulic restrictor valve. Accordingly, the European Aviation Safety Agency announced that "...the Scandinavian airworthiness authorities will reissue the Certificates of Airworthiness relevant to this aircraft type in the coming days". The final report stated: Aftermath After the second accident in Vilnius, SAS grounded its entire Q400 fleet consisting of 27 aircraft, and a few hours later the manufacturer Bombardier Aerospace recommended that all Q400 aircraft with more than 10,000 flights stay grounded until further notice, affecting about 60 of the 160 Q400 aircraft then in service worldwide. As a result, several hundred flights were cancelled around the world. Horizon Air grounded nineteen of its aircraft and Austrian Airlines grounded eight. On 13 September 2007, Transport Canada issued an Airworthiness Directive applicable to Bombardier Q400 turboprop aircraft instructing all Q400 aircraft operators to conduct a general visual inspection of the left and right main landing gear systems and main landing gear retract actuator jam nuts. This effectively grounded all Q400 aircraft until the inspection had been carried out. On 14 September 2007, Bombardier issued an All-Operators Message (AOM) recommending new procedures concerning the landing gear inspection for all aircraft with more than 8,000 flights. Bombardier acknowledged the likelihood of corrosion developing inside the retract actuator. Previous maintenance procedures mandated checking this component after 15,000 landings. The new maintenance schedule affected about 85 of the 165 Q400 aircraft worldwide. Some operators found that spare parts for this unexpected actuator replacement program were not available, grounding their aircraft indefinitely. Investigators detected corrosion inside actuators on 25 of 27 aircraft they checked. Accordingly, SAS decided to continue the grounding of its Q400 fleet until all the affected parts were replaced. 
On 28 October 2007, SAS announced that it would retire its entire fleet of Q400 aircraft after a third accident involving the landing gear occurred the day prior. On 10 March 2008, a multi-party agreement was announced, attempting to finalize the roles of maintenance and manufacture in causing the SAS accidents; as settlement the airline and its partners ordered a replacement set of short-haul aircraft from Bombardier, and in turn received a US$164 million discount. It has been speculated that a November 2007 shakeup of Bombardier management was spurred by the Q400 landing gear issues. References External links VG Newspaper article Video of the SAS Dash8-Q400 accident at AAL Havarikommissionen (Danish Accident Investigation Board) report on the accident of the aircraft LN-RDK Havarikommissionen (Danish Accident Investigation Board) report on the accident of the aircraft LN-RDI even though it says Danish 2007 in Denmark Accidents and incidents involving the De Havilland Canada Dash 8 Aviation accidents and incidents in 2007 Aviation accidents and incidents in Denmark Aviation accidents and incidents in Lithuania Scandinavian Airlines accidents and incidents September 2007 events in Europe October 2007 events in Europe 2007 disasters in Denmark Airliner accidents and incidents caused by mechanical failure
2007 Bombardier Dash 8 landing gear accidents
[ "Materials_science" ]
1,545
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
13,196,068
https://en.wikipedia.org/wiki/Fractional%20vortices
In a standard superconductor, described by a complex field, the fermionic condensate wave function (denoted $\psi$), vortices carry quantized magnetic fields because the condensate wave function is invariant to increments of the phase by $2\pi$. There a winding of the phase by $2\pi$ creates a vortex which carries one flux quantum. See quantum vortex. The term fractional vortex is used for two kinds of very different quantum vortices which occur when: (i) A physical system allows phase windings different from integer multiples of $2\pi$, i.e. non-integer or fractional phase winding. Quantum mechanics prohibits it in a uniform ordinary superconductor, but it becomes possible in an inhomogeneous system, for example, if a vortex is placed on a boundary between two superconductors which are connected only by an extremely weak link (also called a Josephson junction); such a situation also occurs on grain boundaries etc. At such superconducting boundaries the phase can have a discontinuous jump. Correspondingly, a vortex placed onto such a boundary acquires a fractional phase winding, hence the term fractional vortex. A similar situation occurs in a spin-1 Bose condensate, where a vortex with $\pi$ phase winding can exist if it is combined with a domain of overturned spins. (ii) A different situation occurs in uniform multicomponent superconductors, which allow stable vortex solutions with integer phase winding $2\pi n$, where $n$ is an integer, which however carry arbitrarily fractionally quantized magnetic flux. Observation of fractional-flux vortices was reported in a multiband iron-based superconductor. (i) Vortices with non-integer phase winding Josephson vortices Fractional vortices at phase discontinuities Josephson phase discontinuities may appear in specially designed long Josephson junctions (LJJ). For example, so-called 0-π LJJ have a discontinuity of the Josephson phase at the point where the 0 and π parts join. Physically, such LJJ can be fabricated using a tailored ferromagnetic barrier or using d-wave superconductors. The Josephson phase discontinuities can also be introduced using artificial tricks, e.g., a pair of tiny current injectors attached to one of the superconducting electrodes of the LJJ. The value of the phase discontinuity is denoted by κ and, without losing generality, it is assumed that κ lies within one $2\pi$ period, because the phase is $2\pi$-periodic. An LJJ reacts to the phase discontinuity by bending the Josephson phase in the vicinity of the discontinuity point, so that far away there are no traces of this perturbation. The bending of the Josephson phase inevitably results in the appearance of a local magnetic field localized around the discontinuity (the 0-π boundary). It also results in the appearance of a supercurrent circulating around the discontinuity. The total magnetic flux Φ carried by the localized magnetic field is proportional to the value of the discontinuity κ, namely $\Phi = \frac{\kappa}{2\pi}\Phi_0$, where $\Phi_0$ is the magnetic flux quantum. For a π-discontinuity, $\Phi = \Phi_0/2$, and the vortex of the supercurrent is called a semifluxon. When $\kappa \neq \pi$, one speaks about arbitrary fractional Josephson vortices. This type of vortex is pinned at the phase discontinuity point, but may have two polarities, positive and negative, distinguished by the direction of the fractional flux and direction of the supercurrent (clockwise or counterclockwise) circulating around its center (discontinuity point). The semifluxon is a particular case of such a fractional vortex pinned at the phase discontinuity point.
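As a worked numerical illustration of the flux relation just stated (using the standard value of the magnetic flux quantum; the second value of κ is just an arbitrary example):

```latex
% Flux carried by a fractional Josephson vortex at a phase discontinuity kappa
\Phi = \frac{\kappa}{2\pi}\,\Phi_0, \qquad \Phi_0 = \frac{h}{2e} \approx 2.07\times10^{-15}\,\text{Wb}.
% Semifluxon: kappa = pi
\kappa = \pi \;\Rightarrow\; \Phi = \tfrac{1}{2}\Phi_0 \approx 1.03\times10^{-15}\,\text{Wb}.
% An arbitrary fractional vortex: kappa = pi/3
\kappa = \tfrac{\pi}{3} \;\Rightarrow\; \Phi = \tfrac{1}{6}\Phi_0 \approx 3.4\times10^{-16}\,\text{Wb}.
```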
Although such fractional Josephson vortices are pinned, if perturbed they may perform small oscillations around the phase discontinuity point with an eigenfrequency that depends on the value of κ. Splintered vortices (double sine-Gordon solitons) In the context of d-wave superconductivity, a fractional vortex (also known as a splintered vortex) is a vortex of supercurrent carrying an unquantized magnetic flux, whose value depends on the parameters of the system. Physically, such vortices may appear at the grain boundary between two d-wave superconductors, which often looks like a regular or irregular sequence of 0 and π facets. One can also construct an artificial array of short 0 and π facets to achieve the same effect. These splintered vortices are solitons. They are able to move and preserve their shape similarly to conventional integer Josephson vortices (fluxons). This is in contrast to the fractional vortices pinned at a phase discontinuity, e.g. semifluxons, which are pinned at the discontinuity and cannot move far from it. Theoretically, one can describe a grain boundary between d-wave superconductors (or an array of tiny 0 and π facets) by an effective equation for a large-scale phase ψ. Large scale means that the scale is much larger than the facet size. This equation is a double sine-Gordon equation; in normalized units it contains a dimensionless constant resulting from averaging over the tiny facets. The detailed mathematical procedure of averaging is similar to the one done for a parametrically driven pendulum, and can be extended to time-dependent phenomena. In essence, the equation describes an extended φ Josephson junction. In a certain parameter range, the equation has two stable equilibrium values of the phase (in each 2π interval), $\psi = \pm\phi$, with the value of φ set by the dimensionless constant. They correspond to two energy minima. Correspondingly, there are two fractional vortices (topological solitons): one with the phase going from $-\phi$ to $+\phi$, while the other has the phase changing from $+\phi$ to $2\pi-\phi$. The first vortex has a topological change of 2φ and carries the magnetic flux $\Phi_1 = \frac{\phi}{\pi}\Phi_0$. The second vortex has a topological change of $2\pi-2\phi$ and carries the flux $\Phi_2 = \Phi_0 - \Phi_1$. Splintered vortices were first observed at the asymmetric 45° grain boundaries between two d-wave superconductors of YBa2Cu3O7−δ. Spin-triplet Superfluidity In certain states of spin-1 superfluids or Bose condensates, the condensate wavefunction is invariant if the superfluid phase changes by $\pi$, along with a $\pi$ rotation of the spin angle. This is in contrast to a spin-0 superfluid, where the condensate wavefunction is invariant only under phase changes of $2\pi$. A vortex resulting from such phase windings is called a fractional or half-quantum vortex, in contrast to a one-quantum vortex where the phase changes by $2\pi$. (ii) Vortices with integer phase winding and fractional flux in multicomponent superconductivity Different kinds of "fractional vortices" appear in a different context in multi-component superconductivity, where several independent charged condensates or superconducting components interact with each other electromagnetically. Such a situation occurs for example in the theories of the projected quantum states of liquid metallic hydrogen, where two order parameters originate from the theoretically anticipated coexistence of electronic and protonic Cooper pairs. There, topological defects with a $2\pi$ (i.e. "integer") phase winding only in the electronic or only in the protonic condensate carry fractionally quantized magnetic flux: a consequence of electromagnetic interaction with the second condensate.
Also, these fractional vortices carry a superfluid momentum which does not obey Onsager–Feynman quantization. Despite the integer phase winding, the basic properties of these kinds of fractional vortices are very different from the Abrikosov vortex solutions. For example, in contrast to the Abrikosov vortex, their magnetic field generically is not exponentially localized in space. Also, in some cases the magnetic flux inverts its direction at a certain distance from the vortex center. See also Josephson junction π Josephson junction magnetic flux quantum Semifluxon Quantum vortex References Josephson effect Superfluidity
Fractional vortices
[ "Physics", "Chemistry", "Materials_science" ]
1,662
[ "Physical phenomena", "Phase transitions", "Josephson effect", "Superconductivity", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
13,197,882
https://en.wikipedia.org/wiki/Derek%20Hitchins
Derek K. Hitchins (born 1935) is a British systems engineer and was professor in engineering management, in command & control and in systems science at Cranfield University at Cranfield, Bedfordshire, England. Biography Hitchins joined the Royal Air Force in 1952 as an apprentice and retired as a wing commander in 1973 to join industry. From 1975 to 1976 he worked as the system design manager of the Tornado ADV aviation company and technical coordinator for UKAIR CCIS. From 1975 to 1979 he was head of Integrated sciences in a grammar school, teaching physics, integrated science, mathematics, electronics, biology and astronomy to advanced level, with music, gymnastics and athletics as additional subjects. In 1980 he returned to industry and held posts at two leading systems engineering companies as Marketing Director, Business Development Director and Technical Director. He also worked as UK Technical Director for the NATO Air Command and Control System (ACCS) project in Brussels before becoming an academic in 1988. In 1988 he became professor in Engineering Management at City University, London. From 1990 to 1994 he held the British Aerospace Chairs in Systems Science and in Command and Control, Cranfield University at RMCS Shrivenham. After his retirement in 1994 he continued as a part-time consultant, teacher, visiting professor and international lecturer. He was the inaugural president of the UK chapter of INCOSE, and also the inaugural chairman of the Institution of Electrical Engineers’ (IEE’s) Professional Group on Systems Engineering. For many years he was an independent member of the UK Defence Scientific Advisory Board. Work His current research is into system thinking, system requirements, social psychology & anthropology, Egyptology, command & control, system design and world-class systems engineering. Publications Hitchins wrote several books and article. A selection: 1990. Conceiving Systems. Thesis (Ph.D.) City University, 1990. 1992. Draft Guide to the Practice of System Engineering. With John C. Boarder and Patrick D. R. Moore. Institution of Electrical Engineers. 1993. Putting Systems to Work, 2000. Getting to Grips with Complexity or... A Theory of Everything Else... 2003, Advanced Systems Thinking, Engineering and Management, Norwood MA: Artech House. 2003. The Pyramid Builder's Handbook 2003. The Secret Diaries of Hemiunu 2007. Systems Engineering: A 21st Century Systems Methodology. Articles, a selection: 2003, Systems Methodology, paper. 2003, What’s in a System-of-Systems?, paper. References External links Hitchins homepage. Derek Hitchins, INCOSE 2007. Hitchins CV, Feb 2005. 1935 births Academics of City, University of London Academics of Cranfield University British non-fiction writers Living people Systems engineers British systems scientists British male writers British male non-fiction writers
Derek Hitchins
[ "Engineering" ]
565
[ "Systems engineers", "Systems engineering" ]
13,197,969
https://en.wikipedia.org/wiki/Vitali%20convergence%20theorem
In real analysis and measure theory, the Vitali convergence theorem, named after the Italian mathematician Giuseppe Vitali, is a generalization of the better-known dominated convergence theorem of Henri Lebesgue. It is a characterization of the convergence in Lp in terms of convergence in measure and a condition related to uniform integrability. Preliminary definitions Let $(X,\mathcal{A},\mu)$ be a measure space, i.e. $\mu : \mathcal{A}\to[0,\infty]$ is a set function such that $\mu(\emptyset)=0$ and $\mu$ is countably-additive. All functions considered in the sequel will be functions $f:X\to\mathbb{K}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. We adopt the following definitions according to Bogachev's terminology. A set of functions $\mathcal{F}\subset L^1(X,\mathcal{A},\mu)$ is called uniformly integrable if $\lim_{M\to+\infty}\sup_{f\in\mathcal{F}}\int_{\{|f|>M\}}|f|\,d\mu=0$, i.e. for every $\varepsilon>0$ there exists $M_\varepsilon>0$ such that $\sup_{f\in\mathcal{F}}\int_{\{|f|\ge M_\varepsilon\}}|f|\,d\mu\le\varepsilon$. A set of functions $\mathcal{F}\subset L^1(X,\mathcal{A},\mu)$ is said to have uniformly absolutely continuous integrals if $\lim_{\mu(A)\to 0}\sup_{f\in\mathcal{F}}\int_A|f|\,d\mu=0$, i.e. for every $\varepsilon>0$ there exists $\delta_\varepsilon>0$ such that $\sup_{f\in\mathcal{F}}\int_A|f|\,d\mu\le\varepsilon$ whenever $A\in\mathcal{A}$ and $\mu(A)\le\delta_\varepsilon$. This definition is sometimes used as a definition of uniform integrability. However, it differs from the definition of uniform integrability given above. When $\mu(X)<\infty$, a set of functions $\mathcal{F}\subset L^1(X,\mathcal{A},\mu)$ is uniformly integrable if and only if it is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. If, in addition, $\mu$ is atomless, then the uniform integrability is equivalent to the uniform absolute continuity of integrals. Finite measure case Let $(X,\mathcal{A},\mu)$ be a measure space with $\mu(X)<\infty$, and let $1\le p<\infty$. Let $(f_n)_{n\ge1}\subseteq L^p(X,\mathcal{A},\mu)$ and let $f$ be an $\mathcal{A}$-measurable function. Then, the following are equivalent: $f\in L^p(X,\mathcal{A},\mu)$ and $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$; the sequence of functions $(f_n)$ converges in $\mu$-measure to $f$ and $(|f_n|^p)_{n\ge1}$ is uniformly integrable. For a proof, see Bogachev's monograph "Measure Theory, Volume I". Infinite measure case Let $(X,\mathcal{A},\mu)$ be a measure space and $1\le p<\infty$. Let $(f_n)_{n\ge1}\subseteq L^p(X,\mathcal{A},\mu)$ and $f\in L^p(X,\mathcal{A},\mu)$. Then, $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$ if and only if the following holds: the sequence of functions $(f_n)$ converges in $\mu$-measure to $f$; $(|f_n|^p)_{n\ge1}$ has uniformly absolutely continuous integrals; for every $\varepsilon>0$ there exists $X_\varepsilon\in\mathcal{A}$ such that $\mu(X_\varepsilon)<\infty$ and $\sup_{n\ge1}\int_{X\setminus X_\varepsilon}|f_n|^p\,d\mu\le\varepsilon$. When $\mu(X)<\infty$, the third condition becomes superfluous (one can simply take $X_\varepsilon=X$) and the first two conditions give the usual form of Lebesgue-Vitali's convergence theorem originally stated for measure spaces with finite measure. In this case, one can show that conditions 1 and 2 imply that the sequence $(|f_n|^p)_{n\ge1}$ is uniformly integrable. Converse of the theorem Let $(X,\mathcal{A},\mu)$ be a measure space. Let $(f_n)_{n\ge1}\subseteq L^1(X,\mathcal{A},\mu)$ and assume that $\lim_{n\to\infty}\int_A f_n\,d\mu$ exists for every $A\in\mathcal{A}$. Then, the sequence $(f_n)$ is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. In addition, there exists $f\in L^1(X,\mathcal{A},\mu)$ such that $\lim_{n\to\infty}\int_A f_n\,d\mu=\int_A f\,d\mu$ for every $A\in\mathcal{A}$. When $\mu(X)<\infty$, this implies that $(f_n)$ is uniformly integrable. For a proof, see Bogachev's monograph "Measure Theory, Volume I". Citations Theorems in measure theory
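A standard textbook example (not taken from the article) shows why a uniform-integrability-type condition is needed in addition to convergence in measure: on $[0,1]$ with Lebesgue measure, take

```latex
f_n = n\,\mathbf{1}_{(0,\,1/n)}, \qquad n \ge 1.
% f_n -> 0 in measure, since  mu({ |f_n| > eps }) = 1/n -> 0  for every eps in (0,1),
% but f_n does not converge to 0 in L^1:
\int_0^1 |f_n - 0|\, d\mu = n\cdot\tfrac{1}{n} = 1 \quad\text{for every } n .
% The family {f_n} is not uniformly integrable: for any M > 0, once n > M,
\int_{\{|f_n| > M\}} |f_n|\, d\mu = 1 .
```

So convergence in measure holds while uniform integrability fails, and indeed there is no $L^1$ convergence, consistent with the equivalence stated in the finite measure case.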
Vitali convergence theorem
[ "Mathematics" ]
506
[ "Theorems in mathematical analysis", "Theorems in measure theory" ]
13,199,350
https://en.wikipedia.org/wiki/MPSolve
MPSolve (Multiprecision Polynomial Solver) is a package for the approximation of the roots of a univariate polynomial. It uses the Aberth method, combined with a careful use of multiprecision. "Mpsolve takes advantage of sparsity, and has special hooks for polynomials that can be evaluated efficiently by straight-line programs" Implementation The program is written mostly in ANSI C and makes use of the GNU Multi-Precision Library. It uses a command-line interface (CLI) and, starting from version 3.1.0, also has a GUI and interfaces for MATLAB and GNU/Octave. Usage The executable program of the package is called mpsolve. It can be run from the command line in a console. The executable file for the graphical user interface is called xmpsolve, and the MATLAB and Octave functions are called mps_roots. They behave similarly to the function roots that is already included in these software packages. Output Typically, output is displayed on the screen. It may also be saved as a text file (with a .res extension) and plotted in gnuplot. Direct plotting in gnuplot is also supported on Unix systems. See also Polynomial root-finding algorithms References External links Home page C (programming language) software Free mathematics software Free software programmed in C Numerical software Software using the GNU General Public License
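For illustration, the Aberth method at the heart of MPSolve can be sketched in a few lines of double-precision Python; this is a minimal toy, not MPSolve's actual multiprecision implementation, and the starting radius and offsets are common heuristic choices.

```python
import numpy as np

def aberth_roots(coeffs, tol=1e-12, max_iter=100):
    """All roots of a polynomial by the Aberth-Ehrlich simultaneous iteration.

    coeffs: coefficients, highest degree first (same convention as numpy.roots).
    """
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = len(coeffs) - 1
    # Start from points on a circle of radius given by the Cauchy root bound.
    radius = 1.0 + max(abs(c) for c in coeffs[1:]) / abs(coeffs[0])
    z = radius * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(max_iter):
        w = p(z) / dp(z)                       # Newton corrections
        diff = z[:, None] - z[None, :]         # pairwise differences
        np.fill_diagonal(diff, np.inf)         # exclude the i = j terms
        repulsion = (1.0 / diff).sum(axis=1)   # sum over the other iterates
        step = w / (1.0 - w * repulsion)       # Aberth correction
        z -= step
        if np.abs(step).max() < tol:
            break
    return z

print(np.sort_complex(aberth_roots([1, 0, -1])))   # x^2 - 1 -> [-1.+0.j, 1.+0.j]
```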
MPSolve
[ "Mathematics" ]
279
[ "Numerical software", "Free mathematics software", "Mathematical software" ]
13,199,359
https://en.wikipedia.org/wiki/Partitioning%20cryptanalysis
In cryptography, partitioning cryptanalysis is a form of cryptanalysis for block ciphers. Developed by Carlo Harpes in 1995, the attack is a generalization of linear cryptanalysis. Harpes originally replaced the bit sums (affine transformations) of linear cryptanalysis with more general balanced Boolean functions. He demonstrated a toy cipher that exhibits resistance against ordinary linear cryptanalysis but is susceptible to this sort of partitioning cryptanalysis. In its full generality, partitioning cryptanalysis works by dividing the sets of possible plaintexts and ciphertexts into efficiently computable partitions such that the distribution of ciphertexts is significantly non-uniform when the plaintexts are chosen uniformly from a given block of the partition. Partitioning cryptanalysis has been shown to be more effective than linear cryptanalysis against variants of DES and CRYPTON. A specific partitioning attack called mod n cryptanalysis uses the congruence classes modulo some integer for partitions. References Cryptographic attacks
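A toy sketch of the idea (the cipher, key, and parameters below are hypothetical and deliberately weak; this is not Harpes's construction): partition the plaintexts and ciphertexts into residue classes and test the ciphertext classes for non-uniformity with a chi-squared statistic.

```python
import random
from collections import Counter

# Hypothetical 16-bit toy cipher: a single modular addition. Addition mod 2**16
# maps each plaintext residue class mod 8 onto a single ciphertext class
# (no carry enters the low three bits), so this partition is maximally leaky.
def toy_cipher(p, key):
    return (p + key) & 0xFFFF

def partition_statistic(key, block=3, n=8, trials=20000):
    """Chi-squared imbalance of ciphertext classes {c mod n} when plaintexts
    are drawn uniformly from one plaintext block {p : p mod n == block}."""
    counts = Counter()
    for _ in range(trials):
        p = random.randrange(block, 1 << 16, n)
        counts[toy_cipher(p, key) % n] += 1
    expected = trials / n
    return sum((counts[c] - expected) ** 2 / expected for c in range(n))

random.seed(0)
# A value far above ~n-1 (the expectation for a uniform distribution) flags
# the non-uniformity that partitioning cryptanalysis exploits.
print(partition_statistic(key=0xBEEF))
```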
Partitioning cryptanalysis
[ "Technology" ]
200
[ "Cryptographic attacks", "Computer security exploits" ]
13,199,362
https://en.wikipedia.org/wiki/ARGUS%20%28experiment%29
ARGUS (A Russian-German-United States-Swedish Collaboration; later joined by Canada and the former Yugoslavia) was a particle physics experiment that ran at the electron–positron collider ring DORIS II at the German national laboratory DESY. Its aim was to explore the properties of charm and bottom quarks. Its construction started in 1979; the detector was commissioned in 1982 and operated until 1992. The ARGUS detector was a hermetic detector with 90% coverage of the full solid angle. It had drift chambers, a time-of-flight system, an electromagnetic calorimeter and a muon chamber system. The ARGUS experiment was the first to observe the mixing of the B meson into its antiparticle, the anti-B meson; this was done in 1987. This observation led to the conclusion that the second-heaviest quark – the bottom quark – could under certain circumstances convert into a different, hitherto unknown quark, which had to have a huge mass. This quark, the top quark, was discovered in 1995 at Fermilab. The ARGUS distribution is named after the experiment. In 2010, the former site of ARGUS at DORIS became the location of the OLYMPUS experiment. External links Webpage of ARGUS Fest, a symposium to commemorate the 20th anniversary of the discovery of B-meson oscillations. (Last accessed on Sept. 10, 2007) Record for ARGUS on INSPIRE-HEP References Particle experiments
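As an aside on the distribution named after the experiment, SciPy ships it as scipy.stats.argus; a minimal sketch follows (the parameter values are arbitrary examples).

```python
import numpy as np
from scipy.stats import argus

# chi is the curvature parameter; the support is 0 < x < 1, with x loosely
# playing the role of the scaled invariant mass in the original B-physics fits.
chi = 3.0
x = np.linspace(0.05, 0.95, 4)
print(argus.pdf(x, chi))                       # density at a few points
print(argus.rvs(chi, size=3, random_state=0))  # random variates
```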
ARGUS (experiment)
[ "Physics" ]
300
[ "Particle physics stubs", "Particle physics" ]
13,199,685
https://en.wikipedia.org/wiki/PSfrag
PSfrag is a LaTeX package that allows one to overlay Encapsulated PostScript (EPS) figures with arbitrary LaTeX constructions, properly aligned, scaled, and rotated. The user has to place a text tag into the EPS file and the corresponding LaTeX construction into the LaTeX file that will include the EPS file. PSfrag will remove the tag and replace it with the specified LaTeX construction. The authors of PSfrag are Craig Barratt, Michael Grant and David Carlisle. Basic usage Insert a simple tag into the EPS file. The tag must be a single word: alphanumeric and unaccented. Add to the LaTeX document a \psfrag command for replacing the tag as follows. \psfrag{tag}[position][psposition][scale][rotation]{LaTeX construction} Include the EPS file into the LaTeX document using \includegraphics. Load psfrag.sty using \usepackage. PDF compatibility PSfrag is not PDF-compatible, but there exist external solutions, like pstool, pst-pdf or pdfrack. External links PSfrag page at CTAN. PSfrag documentation. Use of PSfrag with gnuplot and LaTeX pstool at CTAN. pdfrack at CTAN. pst-pdf at CTAN. Free TeX software
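A minimal usage sketch tying those steps together (the file name figure.eps and the tag xlabel are hypothetical placeholders; the document must be compiled via the latex/dvips route, since, as noted above, PSfrag is not PDF-compatible):

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{psfrag}
\begin{document}
% "figure.eps" and the tag "xlabel" are hypothetical placeholders.
% PSfrag swaps the tag in the EPS file for the LaTeX construction below,
% centered both horizontally and vertically ([c][c]):
\psfrag{xlabel}[c][c]{$\omega/2\pi$ (Hz)}
\includegraphics[width=0.8\textwidth]{figure.eps}
\end{document}
```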
PSfrag
[ "Technology" ]
287
[ "Computing stubs", "Digital typography stubs" ]
7,078,310
https://en.wikipedia.org/wiki/Angle-resolved%20photoemission%20spectroscopy
Angle-resolved photoemission spectroscopy (ARPES) is an experimental technique used in condensed matter physics to probe the allowed energies and momenta of the electrons in a material, usually a crystalline solid. It is based on the photoelectric effect, in which an incoming photon of sufficient energy ejects an electron from the surface of a material. By directly measuring the kinetic energy and emission angle distributions of the emitted photoelectrons, the technique can map the electronic band structure and Fermi surfaces. ARPES is best suited for the study of one- or two-dimensional materials. It has been used by physicists to investigate high-temperature superconductors, graphene, topological materials, quantum well states, and materials exhibiting charge density waves. ARPES systems consist of a monochromatic light source to deliver a narrow beam of photons, a sample holder connected to a manipulator used to position the sample of a material, and an electron spectrometer. The equipment is contained within an ultra-high vacuum (UHV) environment, which protects the sample and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions with respect to kinetic energy and emission angle, the electrons are directed to a detector and counted to provide ARPES spectra—slices of the band structure along one momentum direction. Some ARPES instruments can extract a portion of the electrons alongside the detector to measure the polarization of their spin. Principle Electrons in crystalline solids can only populate states of certain energies and momenta, others being forbidden by quantum mechanics. They form a continuum of states known as the band structure of the solid. The band structure determines if a material is an insulator, a semiconductor, or a metal, how it conducts electricity and in which directions it conducts best, or how it behaves in a magnetic field. Angle-resolved photoemission spectroscopy determines the band structure and helps understand the scattering processes and interactions of electrons with other constituents of a material. It does so by observing the electrons ejected by photons from their initial energy and momentum state into the state whose energy is by the energy of the photon higher than the initial energy, and higher than the binding energy of the electron in the solid. In the process, the electron's momentum remains virtually intact, except for its component perpendicular to the material's surface. The band structure is thus translated from energies at which the electrons are bound within the material, to energies that free them from the crystal binding and enable their detection outside of the material. By measuring the freed electron's kinetic energy, its velocity and absolute momentum can be calculated. By measuring the emission angle with respect to the surface normal, ARPES can also determine the two in-plane components of momentum that are in the photoemission process preserved. In many cases, if needed, the third component can be reconstructed as well. Instrumentation A typical instrument for angle-resolved photoemission consists of a light source, a sample holder attached to a manipulator, and an electron spectrometer. These are all part of an ultra-high vacuum system that provides the necessary protection from adsorbates for the sample surface and eliminates scattering of the electrons on their way to the analyzer. 
The light source delivers to the sample a monochromatic, usually polarized, focused, high-intensity beam of ~1012 photons/s with a few meV energy spread. Light sources range from compact noble-gas discharge UV lamps and radio-frequency plasma sources (10–⁠40 eV), ultraviolet lasers (5–⁠11 eV) to synchrotron insertion devices that are optimized for different parts of the electromagnetic spectrum (from 10 eV in the ultraviolet to 1000 eV X-rays). The sample holder accommodates samples of crystalline materials, the electronic properties of which are to be investigated. It facilitates their insertion into the vacuum, cleavage to expose clean surfaces, and precise positioning. The holder works as the extension of a manipulator that makes translations along three axes, and rotations to adjust the sample's polar, azimuth and tilt angles possible. The holder has sensors or thermocouples for precise temperature measurement and control. Cooling to temperatures as low as 1 kelvin is provided by cryogenic liquefied gases, cryocoolers, and dilution refrigerators. Resistive heaters attached to the holder provide heating up to a few hundred °C, whereas miniature backside electron-beam bombardment devices can yield sample temperatures as high as 2000 °C. Some holders can also have attachments for light beam focusing and calibration. The electron spectrometer disperses the electrons along two spatial directions in accordance with their kinetic energy and their emission angle when exiting the sample; in other words, it provides mapping of different energies and emission angles to different positions on the detector. In the type most commonly used, the hemispherical electron energy analyzer, the electrons first pass through an electrostatic lens. The lens has a narrow focal spot that is located some 40 mm from the entrance to the lens. It further enhances the angular spread of the electron plume, and serves it with adjusted energy to the narrow entrance slit of the energy dispersing part. The energy dispersion is carried out for a narrow range of energies around the so-called pass energy in the direction perpendicular to the direction of angular dispersion, that is perpendicular to the cut of a ~25 mm long and ⪆0.1 mm wide slit. The angular dispersion previously achieved around the axis of the cylindrical lens is only preserved along the slit, and depending on the lens mode and the desired angular resolution is usually set to amount to ±3°, ±7° or ±15°. The hemispheres of the energy analyzer are kept at constant voltages so that the central trajectory is followed by electrons that have the kinetic energy equal to the set pass energy; those with higher or lower energies end up closer to the outer or the inner hemisphere at the other end of the analyzer. This is where an electron detector is mounted, usually in the form of a 40 mm microchannel plate paired with a fluorescent screen. Electron detection events are recorded using an outside camera and are counted in hundreds of thousands of separate angle vs. kinetic energy channels. Some instruments are additionally equipped with an electron extraction tube at one side of the detector to enable the measurement of the electrons' spin polarization. Modern analyzers are capable of resolving the electron emission angles as low as 0.1°. 
Energy resolution is pass-energy and slit-width dependent, so the operator chooses between measurements with ultrahigh resolution and low intensity (< 1 meV at 1 eV pass energy) or poorer energy resolutions of 10 meV or more at higher pass energies and with wider slits, resulting in higher signal intensity. The instrument's resolution shows up as artificial broadening of the spectral features: a Fermi energy cutoff wider than expected from the sample's temperature alone, and the theoretical electron's spectral function convolved with the instrument's resolution function in both energy and momentum/angle. Sometimes, instead of hemispherical analyzers, time-of-flight analyzers are used. These, however, require pulsed photon sources and are most common in laser-based ARPES labs. Basic relations Angle-resolved photoemission spectroscopy is a potent refinement of ordinary photoemission spectroscopy. Light of frequency $\nu$, made up of photons of energy $h\nu$, where $h$ is the Planck constant, is used to stimulate the transitions of electrons from occupied to unoccupied electronic states of the solid. If a photon's energy is greater than the binding energy of an electron $E_\mathrm{B}$, the electron will eventually leave the solid without being scattered, and be observed with kinetic energy $E_\mathrm{k}$ at angle $\vartheta$ relative to the surface normal, both characteristic of the studied material. Electron emission intensity maps measured by ARPES as a function of $E_\mathrm{k}$ and $\vartheta$ are representative of the intrinsic distribution of electrons in the solid expressed in terms of their binding energy $E_\mathrm{B}$ and the Bloch wave vector $\mathbf{k}$, which is related to the electrons' crystal momentum and group velocity. In the photoemission process, the Bloch wave vector is linked to the measured electron's momentum $\mathbf{p}$, where the magnitude of the momentum is given by the equation $p = \sqrt{2 m_e E_\mathrm{k}}$. As the electron crosses the surface barrier, losing part of its energy due to the surface work function, only the component of $\mathbf{p}$ that is parallel to the surface, $\mathbf{p}_\parallel$, is preserved. From ARPES, therefore, only $\mathbf{k}_\parallel = \mathbf{p}_\parallel / \hbar$ is known for certain and its magnitude is given by $k_\parallel = \tfrac{1}{\hbar} \sqrt{2 m_e E_\mathrm{k}} \, \sin\vartheta$. Here, $\hbar$ is the reduced Planck constant. Because of incomplete determination of the three-dimensional wave vector, and the pronounced surface sensitivity of the elastic photoemission process, ARPES is best suited to the complete characterization of the band structure in ordered low-dimensional systems such as two-dimensional materials, ultrathin films, and nanowires. When it is used for three-dimensional materials, the perpendicular component of the wave vector is usually approximated, with the assumption of a parabolic, free-electron-like final state with the bottom at energy $E_0 = -V_0$. This gives: $k_\perp = \tfrac{1}{\hbar} \sqrt{2 m_e \left( E_\mathrm{k} \cos^2\vartheta + V_0 \right)}$. The inner potential $V_0$ is an unknown parameter a priori. For d-electron systems, suitable values have been suggested by experiment; in general, the inner potential is estimated through a series of photon energy-dependent experiments, especially in photoemission band mapping experiments. Fermi surface mapping Electron analyzers that use a slit to prevent the mixing of momentum and energy channels are only capable of taking angular maps along one direction. To take maps over energy and two-dimensional momentum space, either the sample is rotated in the proper direction so that the slit receives electrons from adjacent emission angles, or the electron plume is steered inside the electrostatic lens with the sample fixed. The slit width will determine the step size of the angular scans.
For example, when a ±15° plume dispersed around the axis of the lens is served to a 30 mm long and 1 mm wide slit, each millimeter of the slit receives a 1° portion in both directions; but at the detector the other direction is interpreted as the electron's kinetic energy, and the emission angle information is lost. This averaging determines the maximal angular resolution of the scan in the direction perpendicular to the slit: with a 1 mm slit, steps coarser than 1° lead to missing data, and finer steps to overlaps. Modern analyzers have slits as narrow as 0.05 mm. The energy–angle–angle maps are usually further processed to give energy–kx–ky maps, and sliced in such a way to display constant energy surfaces in the band structure and, most importantly, the Fermi surface map when they are cut near the Fermi level. Emission angle to momentum conversion An ARPES spectrometer measures angular dispersion in a slice $\alpha$ along its slit. Modern analyzers record these angles simultaneously, in their reference frame, typically in the range of ±15°. To map the band structure over a two-dimensional momentum space, the sample is rotated while keeping the light spot on the surface fixed. The most common choice is to change the polar angle θ around the axis that is parallel to the slit and adjust the tilt τ or azimuth φ so emission from a particular region of the Brillouin zone can be reached. In the reference frame of the analyzer, the momentum components of the electrons can be expressed in terms of the measured quantities as $\mathbf{p} = p \, (\sin\alpha, \, 0, \, \cos\alpha)$, where $p = \sqrt{2 m_e E_\mathrm{k}}$. These components can be transformed into the appropriate components of momentum in the reference frame of the sample by using rotation matrices. When the sample is rotated around the y-axis by θ, the momentum there has components $p \, (\sin(\alpha + \theta), \, 0, \, \cos(\alpha + \theta))$. If the sample is also tilted around x by τ, the momentum acquires a nonzero y component, and the components of the electron's crystal momentum determined by ARPES in this mapping geometry follow from composing the two rotations, with the sign of each angle fixed by the chosen sense of rotation. If high-symmetry axes of the sample are known and need to be aligned, a correction by azimuth φ can be applied either by rotating around z or by rotating the transformed map around the origin in the two-dimensional momentum planes. Theory of photoemission intensity relations The theory of photoemission is that of direct optical transitions between the states $|i\rangle$ and $|f\rangle$ of an N-electron system. Light excitation is introduced as the magnetic vector potential $\mathbf{A}$ through the minimal substitution in the kinetic part of the quantum-mechanical Hamiltonian for the electrons in the crystal. The perturbation part of the Hamiltonian comes out to be: $H' = \tfrac{e}{2mc} \left( \mathbf{A} \cdot \mathbf{p} + \mathbf{p} \cdot \mathbf{A} \right) + \tfrac{e^2}{2mc^2} \mathbf{A}^2$. In this treatment, the electron's spin coupling to the electromagnetic field is neglected. The scalar potential is set to zero either by imposing the Weyl gauge $\phi = 0$ or by working in the Coulomb gauge $\nabla \cdot \mathbf{A} = 0$, in which $\phi$ becomes negligibly small far from the sources. Either way, the commutator of $\mathbf{A}$ and $\mathbf{p}$ is taken to be zero. Specifically, $\nabla \cdot \mathbf{A} \approx 0$ in Weyl gauge because the period of $\mathbf{A}$ for ultraviolet light is about two orders of magnitude larger than the period of the electron's wave function. In both gauges it is assumed the electrons at the surface had little time to respond to the incoming perturbation and add nothing to either of the two potentials. It is for most practical uses safe to neglect the quadratic $\mathbf{A}^2$ term. Hence, $H' = \tfrac{e}{mc} \, \mathbf{A} \cdot \mathbf{p}$. The transition probability is calculated in time-dependent perturbation theory and is given by Fermi's golden rule: $\Gamma_{i \to f} = \tfrac{2\pi}{\hbar} \, \bigl| \langle f | H' | i \rangle \bigr|^2 \, \delta(E_f - E_i - h\nu)$. The delta distribution above is a way of saying that energy is conserved when a photon of energy $h\nu$ is absorbed: $E_f = E_i + h\nu$.
If the electric field of an electromagnetic wave is written as $\mathbf{E}(\mathbf{r}, t) = \mathbf{E}_0 \cos(\mathbf{q} \cdot \mathbf{r} - \omega t)$, where $\omega = 2\pi\nu$, the vector potential inherits its polarization and is equal to $\mathbf{A}(\mathbf{r}, t) = \tfrac{c}{\omega} \, \mathbf{E}_0 \sin(\mathbf{q} \cdot \mathbf{r} - \omega t)$. The transition probability is then given in terms of the electric field as $\Gamma_{i \to f} \propto \bigl| \langle f | \, \mathbf{E}_0 \cdot \mathbf{p} \, | i \rangle \bigr|^2 \, \delta(E_f - E_i - h\nu)$. In the sudden approximation, which assumes an electron is instantaneously removed from the system of N electrons, the final and initial states of the system are taken as properly antisymmetrized products of the single-particle states of the photoelectron and the states representing the remaining $(N-1)$-electron system. The photoemission current of electrons of energy $E$ and momentum $\mathbf{k}$ is then expressed as the product of the dipole matrix elements, known as the dipole selection rules for optical transitions, and $A(\mathbf{k}, E)$, the one-electron removal spectral function known from the many-body theory of condensed matter physics, summed over all allowed initial and final states leading to the energy and momentum being observed. Here, $E$ is measured with respect to the Fermi level $E_\mathrm{F}$, and $E_\mathrm{k}$ with respect to the vacuum level, so that $E_\mathrm{k} = h\nu - \varphi + E$, where $\varphi$, the work function, is the energy difference between the two referent levels. The work function is material, surface orientation, and surface condition dependent. Because the allowed initial states are only those that are occupied, the photoemission signal will reflect the Fermi-Dirac distribution function in the form of a temperature-dependent sigmoid-shaped drop of intensity in the vicinity of $E_\mathrm{F}$. In the case of a two-dimensional, one-band electronic system the intensity relation further reduces to $I(\mathbf{k}, E) = I_0(\mathbf{k}) \, f(E) \, A(\mathbf{k}, E)$, with $f(E)$ the Fermi-Dirac distribution. Selection rules The electronic states in crystals are organized in energy bands, which have associated energy-band dispersions that are energy eigenvalues for delocalized electrons according to Bloch's theorem. From the plane-wave factor in Bloch's decomposition of the wave functions, it follows that the only allowed transitions when no other particles are involved are between the states whose crystal momenta differ by a reciprocal lattice vector $\mathbf{G}$, i.e. those states that are in the reduced zone scheme one above another (thus the name direct optical transitions). Another set of selection rules comes from the matrix elements when the photon polarization contained in $\mathbf{A}$ (or $\mathbf{E}_0$) and the symmetries of the initial and final one-electron Bloch states are taken into account. Those can lead to the suppression of the photoemission signal in certain parts of the reciprocal space or can reveal the specific atomic-orbital origin of the initial and final states. Many-body effects The one-electron spectral function that is directly measured in ARPES maps the probability that the state of the system of N electrons from which one electron has been instantly removed is any of the eigenstates of the $(N-1)$-particle system. If the electrons were independent of one another, the N-electron state with the state removed would be exactly an eigenstate of the $(N-1)$-particle system and the spectral function would become an infinitely sharp delta function at the energy and momentum of the removed particle; it would trace the dispersion of the independent particles in energy-momentum space. In the case of increased electron correlations, the spectral function broadens and starts developing richer features that reflect the interactions in the underlying many-body system. These are customarily described by the complex correction to the single-particle energy dispersion that is called the quasiparticle self-energy, $\Sigma(\mathbf{k}, E) = \Sigma'(\mathbf{k}, E) + i \, \Sigma''(\mathbf{k}, E)$.
This function contains the full information about the renormalization of the electronic dispersion due to interactions and the lifetime of the hole created by the excitation. Both can be determined experimentally from the analysis of high-resolution ARPES spectra under a few reasonable assumptions. Namely, one can assume that the matrix-element part of the spectrum is nearly constant along high-symmetry directions in momentum space and that the only variable part comes from the spectral function, which in terms of $\Sigma$, where the two components $\Sigma'$ and $\Sigma''$ are usually taken to be only dependent on $E$, reads $A(\mathbf{k}, E) = -\tfrac{1}{\pi} \, \Sigma''(E) \, / \left( \left[ E - \varepsilon(\mathbf{k}) - \Sigma'(E) \right]^2 + \left[ \Sigma''(E) \right]^2 \right)$. This function is known from ARPES as a scan along a chosen direction in momentum space and is a two-dimensional map of the form $A(k, E)$. When cut at a constant energy $E$, a Lorentzian-like curve in $k$ is obtained whose renormalized peak position $k_m$ is given by $\Sigma'(E) = E - \varepsilon(k_m)$ (eq. 1) and whose width $w$ at half maximum is determined by $\Sigma''(E) = \tfrac{1}{2} \left[ \varepsilon(k_m - \tfrac{w}{2}) - \varepsilon(k_m + \tfrac{w}{2}) \right]$ (eq. 2). The only remaining unknown in the analysis is the bare band $\varepsilon(k)$. The bare band can be found in a self-consistent way by enforcing the Kramers-Kronig relation between the two components of the complex function $\Sigma(E)$ that is obtained from the previous two equations. The algorithm is as follows: start with an ansatz bare band, calculate $\Sigma''(E)$ by eq. (2), transform it into $\Sigma'(E)$ using the Kramers-Kronig relation, then use this function to calculate the bare band dispersion $\varepsilon(k)$ on a discrete set of points by eq. (1), and feed to the algorithm its fit to a suitable curve as a new ansatz bare band; convergence is usually achieved in a few quick iterations. From the self-energy obtained in this way one can assess the strength and shape of electron-electron correlations, electron-phonon (more generally, electron-boson) interaction, active phonon energies, and quasiparticle lifetimes. In simple cases of band flattening near the Fermi level because of the interaction with Debye phonons, the band mass is enhanced by a factor $(1 + \lambda)$ and the electron-phonon coupling factor λ can be determined from the linear dependence of the peak widths on temperature. For strongly correlated systems like cuprate superconductors, self-energy knowledge is unfortunately insufficient for a comprehensive understanding of the physical processes that lead to certain features in the spectrum. In fact, in the case of cuprate superconductors different theoretical treatments often lead to very different explanations of the origin of specific features in the spectrum. A typical example is the pseudogap in the cuprates, i.e., the momentum-selective suppression of spectral weight at the Fermi level, which has been related to spin, charge or (d-wave) pairing fluctuations by different authors. This ambiguity about the underlying physical mechanism at work can be overcome by considering two-particle correlation functions (such as those probed by Auger electron spectroscopy and appearance-potential spectroscopy), as they are able to describe the collective mode of the system and can also be related to certain ground-state properties. Uses ARPES has been used to map the occupied band structure of many metals and semiconductors, states appearing in the projected band gaps at their surfaces, quantum well states that arise in systems with reduced dimensionality, one-atom-thick materials like graphene, transition metal dichalcogenides, and many flavors of topological materials. It has also been used to map the underlying band structure, gaps, and quasiparticle dynamics in highly correlated materials like high-temperature superconductors and materials exhibiting charge density waves.
When the electron dynamics in the bound states just above the Fermi level need to be studied, two-photon excitation in pump-probe setups (2PPE) is used. There, the first photon of low-enough energy is used to excite electrons into unoccupied bands that are still below the energy necessary for photoemission (i.e. between the Fermi and vacuum levels). The second photon is used to kick these electrons out of the solid so they can be measured with ARPES. By precisely timing the second photon, usually by using frequency multiplication of the low-energy pulsed laser and delay between the pulses by changing their optical paths, the electron lifetime can be determined on the scale below picoseconds. Notes References External links Introduction to ARPES at Diamond Light Source i05 beamline Laboratory techniques in condensed matter physics Emission spectroscopy Electron spectroscopy de:Photoelektronenspektroskopie#Winkelaufgelöste Messungen (ARPES)
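To make the emission-angle-to-momentum conversion described above concrete, the free-electron relation for the in-plane wave vector can be evaluated directly (a sketch; the photon energy and work function below are example values, not tied to any particular instrument):

```python
import numpy as np

HBAR = 1.054571817e-34    # reduced Planck constant, J s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # electron volt, J

def k_parallel(e_kin_ev, theta_deg):
    """In-plane wave vector (1/angstrom) from the free-electron relation
    k_par = sqrt(2 m E_k) sin(theta) / hbar."""
    k = np.sqrt(2 * M_E * e_kin_ev * EV) / HBAR * np.sin(np.radians(theta_deg))
    return k * 1e-10      # convert from 1/m to 1/angstrom

# Numerically k_par ~ 0.5123 * sqrt(E_k [eV]) * sin(theta) in 1/angstrom.
# Example values: He-I alpha light (21.2 eV) and an assumed 4.3 eV work function.
print(k_parallel(21.2 - 4.3, 15.0))   # ~0.55 1/angstrom at 15 degrees
```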
Angle-resolved photoemission spectroscopy
[ "Physics", "Chemistry", "Materials_science" ]
4,291
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Emission spectroscopy", "Laboratory techniques in condensed matter physics", "Condensed matter physics", "Spectroscopy" ]
7,078,422
https://en.wikipedia.org/wiki/NGC%205474
NGC 5474 is a peculiar dwarf galaxy in the constellation Ursa Major. It is one of several companion galaxies of the Pinwheel Galaxy (M101), a grand-design spiral galaxy. Among the Pinwheel Galaxy's companions, this galaxy is the closest to the Pinwheel Galaxy itself. The gravitational interaction between NGC 5474 and the Pinwheel Galaxy has strongly distorted the former. As a result, the disk is offset relative to the nucleus. The star formation in this galaxy (as traced by hydrogen spectral line emission) is also offset from the nucleus. NGC 5474 shows some signs of a spiral structure. As a result, this galaxy is often classified as a dwarf spiral galaxy, a relatively rare group of dwarf galaxies. See also Peculiar galaxy Dwarf galaxy Ursa Major (constellation) References External links SEDS Unbarred spiral galaxies Peculiar galaxies Dwarf galaxies Dwarf spiral galaxies Interacting galaxies M101 Group Ursa Major 5474 09013 50216
NGC 5474
[ "Astronomy" ]
198
[ "Ursa Major", "Constellations" ]
7,078,772
https://en.wikipedia.org/wiki/Radcom%20Ltd.
RADCOM Ltd. is a provider of quality monitoring and service assurance software for telecommunications carriers, founded in 1991. RADCOM's U.S. headquarters is in Paramus, New Jersey and its international headquarters is in Tel Aviv, Israel. RADCOM is a member of the RAD Group of companies. The company is traded on the Nasdaq exchange. Products RADCOM provides service assurance and customer experience management for telecom operators and communications service providers. RADCOM provides software for telecommunications carriers to carry out customer experience monitoring and to manage their networks and services. RADCOM offers software for network function virtualization (NFV). In August 2020, RADCOM announced support for 5G networks. History RADCOM started as an internal project within the RAD Group in 1985 and incorporated in 1991. The company received initial funding from Star Venture, Evergreen and Pitango Venture Capital funds. In September 1997, the company had an initial public offering on Nasdaq. Net proceeds to the company were approximately $20.2 million. References External links Yahoo! Finance Radcom Ltd.(RDCM) Technology companies of Israel Computer hardware companies Telecommunications equipment vendors Networking hardware companies Electronics companies of Israel Electronics companies established in 1991 Israeli brands Israeli companies established in 1991 Companies based in Tel Aviv Companies listed on the Nasdaq
Radcom Ltd.
[ "Technology" ]
272
[ "Computer hardware companies", "Computers" ]
7,079,248
https://en.wikipedia.org/wiki/Centrosymmetric%20matrix
In mathematics, especially in linear algebra and matrix theory, a centrosymmetric matrix is a matrix which is symmetric about its center. Formal definition An $n \times n$ matrix $A = [a_{ij}]$ is centrosymmetric when its entries satisfy $a_{ij} = a_{n+1-i,\,n+1-j}$ for all $1 \le i, j \le n$. Alternatively, if $J$ denotes the $n \times n$ exchange matrix with 1 on the antidiagonal and 0 elsewhere (that is, $J_{i,\,n+1-i} = 1$ and all other entries are 0), then a matrix $A$ is centrosymmetric if and only if $AJ = JA$. Examples All 2 × 2 centrosymmetric matrices have the form $\begin{bmatrix} a & b \\ b & a \end{bmatrix}$. All 3 × 3 centrosymmetric matrices have the form $\begin{bmatrix} a & b & c \\ d & e & d \\ c & b & a \end{bmatrix}$. Symmetric Toeplitz matrices are centrosymmetric. Algebraic structure and properties If $A$ and $B$ are centrosymmetric matrices over a field $F$, then so are $A + B$ and $cA$ for any $c$ in $F$. Moreover, the matrix product $AB$ is centrosymmetric, since $(AB)J = A(JB) = (AJ)B = J(AB)$. Since the identity matrix is also centrosymmetric, it follows that the set of centrosymmetric matrices over $F$ forms a subalgebra of the associative algebra of all $n \times n$ matrices. If $A$ is a centrosymmetric matrix with an $n$-dimensional eigenbasis, then its eigenvectors can each be chosen so that they satisfy either $Jx = x$ or $Jx = -x$, where $J$ is the exchange matrix. If $A$ is a centrosymmetric matrix with distinct eigenvalues, then the matrices that commute with $A$ must be centrosymmetric. The maximum number of unique elements in an $n \times n$ centrosymmetric matrix is $\left\lceil n^2/2 \right\rceil$. Related structures An $n \times n$ matrix $A$ is said to be skew-centrosymmetric if its entries satisfy $a_{ij} = -a_{n+1-i,\,n+1-j}$ for all $1 \le i, j \le n$. Equivalently, $A$ is skew-centrosymmetric if $AJ = -JA$, where $J$ is the exchange matrix defined previously. The centrosymmetric relation $AJ = JA$ lends itself to a natural generalization, where $J$ is replaced with an involutory matrix $K$ (i.e., $K^2 = I$) or, more generally, a matrix $K$ satisfying $K^m = I$ for an integer $m > 1$. The inverse problem for the commutation relation $AK = KA$ of identifying all involutory $K$ that commute with a fixed matrix $A$ has also been studied. Symmetric centrosymmetric matrices are sometimes called bisymmetric matrices. When the ground field is the real numbers, it has been shown that bisymmetric matrices are precisely those symmetric matrices whose eigenvalues remain the same aside from possible sign changes following pre- or post-multiplication by the exchange matrix. A similar result holds for Hermitian centrosymmetric and skew-centrosymmetric matrices. References Further reading External links Centrosymmetric matrix on MathWorld. Linear algebra Matrices
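The characterization by commutation with the exchange matrix lends itself to a quick numerical check (a NumPy sketch; the example matrix is arbitrary):

```python
import numpy as np

def exchange_matrix(n):
    """J: ones on the antidiagonal, zeros elsewhere."""
    return np.fliplr(np.eye(n))

def is_centrosymmetric(a, tol=1e-12):
    """A is centrosymmetric iff AJ = JA."""
    j = exchange_matrix(a.shape[0])
    return np.allclose(a @ j, j @ a, atol=tol)

a = np.array([[1., 2., 3.],
              [4., 5., 4.],
              [3., 2., 1.]])   # matches the 3 x 3 form given above
b = a.copy()
b[0, 0] = 7.                   # break the symmetry about the center
print(is_centrosymmetric(a), is_centrosymmetric(b))   # True False
```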
Centrosymmetric matrix
[ "Mathematics" ]
480
[ "Matrices (mathematics)", "Linear algebra", "Mathematical objects", "Algebra" ]
7,079,787
https://en.wikipedia.org/wiki/PS210%20experiment
The PS210 experiment was the first experiment that led to the observation of antihydrogen atoms produced at the Low Energy Antiproton Ring (LEAR) at CERN in 1995. The antihydrogen atoms were produced in flight and moved at nearly the speed of light. They produced unique electrical signals in the detectors, which destroyed them by matter–antimatter annihilation almost immediately after they formed. Eleven signals were observed, of which two were attributed to other processes. In 1997 similar observations were announced at Fermilab from the E862 experiment. The first measurement demonstrated the existence of antihydrogen; the second (with improved setup and intensity monitoring) measured the production rate. Both experiments, one at each of the only two facilities with suitable antiprotons, were stimulated by calculations which suggested the possibility of making very fast antihydrogen within existing circular accelerators. References Further reading Particle experiments CERN experiments External links PS210 experiment record on INSPIRE-HEP
PS210 experiment
[ "Physics" ]
202
[ "Particle physics stubs", "Particle physics" ]
7,079,795
https://en.wikipedia.org/wiki/The%20Assayer
The Assayer () is a book by Galileo Galilei, published in Rome in October 1623. It is generally considered to be one of the pioneering works of the scientific method, first broaching the idea that the book of nature is to be read with mathematical tools rather than those of scholastic philosophy, as generally held at the time. Despite the retroactive acclaim given to Galileo's theory of knowledge, the empirical claims he made in the book—that comets are sublunary and their observed properties the product of optical phenomena—were incorrect. Background – Galileo vs. Grassi on comets In 1619, Galileo became embroiled in a controversy with Father Orazio Grassi, professor of mathematics at the Jesuit Collegio Romano. It began as a dispute over the nature of comets, but by the time Galileo had published The Assayer, his last salvo in the dispute, it had become a much wider controversy over the very nature of science itself. An Astronomical Disputation The debate between Galileo and Grassi started in early 1619, when Father Grassi anonymously published the pamphlet, An Astronomical Disputation on the Three Comets of the Year 1618 (Disputatio astronomica de tribus cometis anni MDCXVIII), which discussed the nature of a comet that had appeared late in November of the previous year. Grassi concluded that the comet was a fiery, celestial body that had moved along a segment of a great circle at a constant distance from the earth, and since it moved in the sky more slowly than the Moon, it must be farther away than the Moon. Tychonic system Grassi adopted Tycho Brahe's Tychonic system, in which the other planets of the Solar System orbit around the Sun, which, in turn, orbits around the Earth. In his Disputatio Grassi referenced many of Galileo's observations, such as the surface of the Moon and the phases of Venus, without mentioning him. Grassi argued from the apparent absence of observable parallax that comets move beyond the Moon. Galileo never explicitly stated that comets are an illusion, but merely wondered if they are real or an optical illusion. Discourse on Comets Grassi's arguments and conclusions were criticised in a subsequent pamphlet, Discourse on Comets, published under the name of one of Galileo's disciples, a Florentine lawyer named Mario Guiducci, although it had been largely written by Galileo himself. Galileo and Guiducci offered no definitive theory of their own on the nature of comets, although they did present some tentative conjectures that are now known to be mistaken. (The correct approach to the study of comets had been proposed at the time by Tycho Brahe.) In its opening passage, Galileo and Guiducci's Discourse gratuitously insulted the Jesuit Christoph Scheiner, and various uncomplimentary remarks about the professors of the Collegio Romano were scattered throughout the work. The Astronomical and Philosophical Balance The Jesuits were offended, and Grassi soon replied with a polemical tract of his own, The Astronomical and Philosophical Balance (Libra astronomica ac philosophica), under the pseudonym Lothario Sarsio Sigensano, purporting to be one of his own pupils. The Assayer The Assayer was Galileo's devastating reply to the Astronomical Balance. It has been widely recognized as a masterpiece of polemical literature, in which "Sarsi's" arguments are subjected to withering scorn. It was greeted with wide acclaim, and particularly pleased the new pope, Urban VIII, to whom it had been dedicated. 
In Rome, in the previous decade, Barberini, the future Urban VIII, had come down on the side of Galileo and the Lincean Academy. Galileo's dispute with Grassi permanently alienated many Jesuits, and Galileo and his friends were convinced that they were responsible for bringing about his later condemnation, although supporting evidence for this is not conclusive. Science, mathematics, and philosophy In 1616 Galileo may have been silenced on Copernicanism. In 1623 his supporter and friend, Cardinal Maffeo Barberini, a former patron of the Accademia dei Lincei and uncle of future Cardinal Francesco Barberini, became Pope Urban VIII. The election of Barberini seemed to assure Galileo of support at the highest level in the Church. A visit to Rome confirmed this. The Assayer is a milestone in the history of science: here Galileo describes the scientific method, which was quite a revolution at the time. The title page of The Assayer shows the crest of the Barberini family, featuring three busy bees. In The Assayer, Galileo weighs the astronomical views of a Jesuit, Orazio Grassi, and finds them wanting. The book was dedicated to the new pope. The title page also shows that Urban VIII employed a member of the Lynx, Cesarini, at a high level in the papal service. This book was edited and published by members of the Lynx. In The Assayer Galileo mainly criticized Grassi's method of inquiry, heavily biased by his religious belief and based on appeals to authority, rather than Grassi's hypothesis on comets itself. Furthermore, he insisted that natural philosophy (i.e. physics) should be mathematical. According to the title page, he was the philosopher (i.e. physicist) of the Grand Duke of Tuscany, not merely the mathematician. Natural philosophy (physics) spans the gamut from processes of generation and growth (represented by a plant) to the physical structure of the universe, represented by the cosmic cross-section. Mathematics, on the other hand, is symbolized by telescopes and an astrolabe. The language of science The Assayer contains Galileo's famous statement that mathematics is the language of science. Only through mathematics can one achieve lasting truth in physics. Those who neglect mathematics wander endlessly in a dark labyrinth. From the book: "Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth." Galileo used a sarcastic and witty tone throughout the essay. The book was read with delight at the dinner table by Urban VIII. In 1620 Maffeo Barberini wrote a poem entitled Adulatio Perniciosa in Galileo's honor. An official, Giovanni di Guevara, said that The Assayer was free from any unorthodoxy. Perceived vs. real phenomena In The Assayer Galileo described heat as an artifact of our minds. He wrote that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon. Galileo also theorized that senses such as smell and taste are made possible by the release of tiny particles from their host substances, which was correct but not proven until later. See also Book of Nature References Citations Pietro Redondi, Galileo eretico, 1983; Galileo: Heretic (transl. Raymond Rosenthal), Princeton University Press, 1987 (reprint 1989); Penguin, 1988 (reprint 1990). Wallace, William A. (1991). Galileo, the Jesuits and the Medieval Aristotle. External links PDF version of the abridged text of The Assayer - Stanford University Galileo, Selections from The Assayer - Princeton University 1623 books Astronomy books History of astronomy Books by Galileo Galilei
The Assayer
[ "Astronomy" ]
1,491
[ "Astronomy books", "Works about astronomy", "History of astronomy" ]
7,079,848
https://en.wikipedia.org/wiki/Scanning%20ion-conductance%20microscopy
Scanning ion-conductance microscopy (SICM) is a scanning probe microscopy technique that uses an electrode as the probe tip. SICM allows for the determination of the surface topography of micrometer and even nanometer-range structures in aqueous media conducting electrolytes. The samples can be hard or soft, are generally non-conducting, and the non-destructive nature of the measurement allows for the observation of living tissues and cells, and biological samples in general. It is able to detect steep profile changes in samples and can be used to map a living cell's stiffness in tandem with its detailed topography, or to determine the mobility of cells during their migrations. Working principle Scanning ion-conductance microscopy is a technique that exploits the increase of the access resistance of a micro-pipette in an electrolyte-containing aqueous medium when it approaches a poorly conducting surface. It monitors the ionic current flowing in and out of the micro/nano-pipette, which is hindered if the tip is very close to the sample surface since the gap through which ions can flow is reduced in size. The SICM setup is generally as follows: a voltage is applied between the two Ag/AgCl electrodes, one of which is in the glass micro-pipette, and the other in the bulk solution. The voltage will generate an ionic current between the two electrodes, flowing in and out of the micro-pipette. The conductance between the two electrodes is measured, and depends on the flux of ions. Movements of the pipette are regulated through piezoelectric actuators. The micro-pipette is lowered closer and closer to the sample until the ionic flux starts to be restricted. The conductance of the system will then decrease (and the resistance will increase). When this resistance reaches a certain threshold, the tip is stopped and the position recorded. The tip is then moved (in different ways depending on the mode used, see below) and another measurement is made in a different location, and so on. In the end, comparing the positions of all the measurements provides a detailed height profile of the sample. It is important to note that the tip is stopped before contacting the sample; thus it does not bend nor damage the surface observed, which is one of the major advantages of SICM. Equivalent circuit The total resistance of the setup is the sum of the three resistances: Rtot = Rb + Rm + Rt. Rb is the resistance of the electrolyte solution between the tip of the micro-pipette and the electrode in the bulk of the solution. Rm is the resistance of the electrolyte solution between the electrode in the micro-pipette and the tip. Rt is the resistance of the current flowing through the tip. Rb and Rm depend on the electrolyte conductivity, and the position and shape of the Ag/AgCl electrodes. Rt depends on the size and shape of the aperture, and on the distance between the tip and the sample. All the parameters except the distance between tip and sample are constant within a given SICM setup, thus it is the variation of Rt with the distance to the sample that is used to determine the topography of the sample. Usual approximations are: 1) the voltage drop at the surfaces of the Ag/AgCl electrodes is neglected, since it is assumed to be constant and negligible compared to the voltage drop at the tip; 2) the weak dependence of the bulk resistance Rb on the tip-sample distance is neglected, since it is governed by the much larger distance between the tip and the electrode in the bulk.
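A toy model of the approach curve described above can make the feedback principle concrete (all component values below are hypothetical, and the 1/d form of the tip resistance is an illustrative assumption, not a calibrated SICM model):

```python
# Toy SICM approach curve: I = V / (Rb + Rm + Rt(d)), with a hypothetical
# tip resistance Rt(d) = Rp * (1 + c/d) that grows as the gap d closes.
V = 0.2                   # bias between the Ag/AgCl electrodes, volts
R_B, R_M = 1e6, 1e6       # assumed bulk and pipette resistances, ohms
R_P, C = 5e7, 30e-9       # assumed open-gap tip resistance, geometry constant (m)

def current(d):
    r_t = R_P * (1 + C / d)
    return V / (R_B + R_M + r_t)

i_far = current(1e-3)     # tip effectively infinitely far from the surface
for d in (1e-6, 100e-9, 30e-9, 10e-9):
    drop = 1 - current(d) / i_far
    print(f"gap {d * 1e9:6.0f} nm: current reduced by {100 * drop:5.1f}%")
# Stopping at a small set-point drop (e.g. 1%) halts the tip before contact.
```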
Comparison with other scanning probe microscopy techniques SICM has a worse resolution than AFM or STM, which can routinely reach resolutions of about 0.1 nm. The resolution of SICM measurement is limited to 1.5 times the diameter of the tip opening in theory, but measurements taken with a 13 nm opening diameter managed a resolution of around 3–6 nm. SICM can be used to image poorly or non-conducting surfaces, which is impossible with STM. In SICM measurements, the tip of the micro-pipette does not touch the surface of the sample, which allows the imaging of soft samples (cells, biological samples, cell villi) without deformation. SICM is used in an electrolyte-containing solution, so it can be used in physiological media and image living cells and tissues, and monitor biological processes while they are taking place. In hopping mode, it is able to correctly determine profiles with steep slopes and grooves. Imaging modes There are four main imaging modes in SICM: constant-z mode, direct current (constant distance) mode, alternating current mode, and hopping/backstep/standing approach mode. Constant-z mode In constant-z mode, the micro-pipette is maintained at a constant z (height) while it is moved laterally and the resistance is monitored, its variations allowing for the reconstruction of the topography of the sample. This mode is fast but is barely used since it only works on very flat samples. If the sample has rugged surfaces, the pipette will either crash into it, or be too far for imaging most of the sample. Direct current mode In direct current (DC) mode (constant distance mode), the micro-pipette is lowered toward the sample until a predefined resistance is reached. The pipette is then moved laterally and a feedback loop maintains the distance to the sample (through the resistance value). The z-position of the pipette determines the topography of the sample. This mode does not detect steep slopes in samples, may contact the sample in such cases, and is prone to electrode drift. Alternating current mode In alternating current (AC) mode, the micro-pipette oscillates vertically in addition to its usual movement. While the pipette is still far from the surface the ionic current, and the resistance, is steady, so the pipette is lowered. Once the resistance starts oscillating, the amplitude serves as feedback to modulate the position until a predefined amplitude is reached. The response of the AC component increases much more steeply than the DC, and allows for the recording of more complex samples. Hopping mode In hopping (/backstep/standing approach) mode, the micro-pipette is lowered to the sample until a given resistance is reached, and the height is recorded. Then the pipette is drawn back, laterally moved, and another measurement is made, and the process repeats. The topography of the sample can then be reconstructed. Hopping mode is slower than the others, but is able to image complex topography and even entire cells, without distorting the sample surface. Combinations with other techniques, and alternative uses SICM was used to image a living neural cell from rat brain, determine the life cycle of microvilli, and observe the movement of protein complexes in spermatozoa. SICM has been combined with fluorescence microscopy and Förster resonance energy transfer. SICM has been used in a "smart patch-clamp" technique, clamping the pipette by suction to the surface of a cell and then monitoring the activity of the sodium channels in the cell membrane. 
A combination of AFM and SICM was able to obtain high-resolution images of synthetic membranes in ionic solutions. Scanning near-field optical microscopy has been used with SICM; the SICM measurement allowed the tip of the pipette to be placed very close to the surface of the sample. Fluorescent particles, coming from the inside of the micro-pipette, provide a light source for the SNOM that is continuously renewed and prevents photobleaching. FSICM (fast SICM), which notably improves the speed of hopping mode, has recently been developed. References Scanning probe microscopy
Scanning ion-conductance microscopy
[ "Chemistry", "Materials_science" ]
1,585
[ "Nanotechnology", "Scanning probe microscopy", "Microscopy" ]
7,079,877
https://en.wikipedia.org/wiki/Universal%20testing%20machine
A universal testing machine (UTM), also known as a universal tester, universal tensile machine, materials testing machine, or materials test frame, is used to test the tensile strength (pulling), compressive strength (pushing), flexural strength, bending, shear, hardness, and torsion of materials, providing valuable data for designing and ensuring the quality of materials. An earlier name for a tensile testing machine is a tensometer. The "universal" part of the name reflects that it can perform many standard test applications on materials, components, and structures (in other words, that it is versatile). Electromechanical and Hydraulic Testing Systems An electromechanical UTM utilizes an electric motor to apply a controlled force, while a hydraulic UTM uses hydraulic systems for force application. Electromechanical UTMs are favored for their precision, speed, and ease of use, making them suitable for a wide range of applications, including tensile, compression, and flexural testing. On the other hand, hydraulic UTMs are capable of generating higher forces and are often used for testing high-strength materials such as metals and alloys, where extreme force applications are required. Both types of UTMs play critical roles in various industries including aerospace, automotive, construction, and materials science, enabling engineers and researchers to accurately assess the mechanical properties of materials for design, quality control, and research purposes. Components Several variations are in use. Common components include: Load frame - Usually consisting of two strong supports for the machine. Some small machines have a single support. Load cell - A force transducer or other means of measuring the load is required. Periodic calibration is usually required by governing regulations or the quality system. Cross head - A movable cross head (crosshead) is controlled to move up or down. Usually this is at a constant speed: sometimes called a constant rate of extension (CRE) machine. Some machines can program the crosshead speed or conduct cyclical testing, testing at constant force, testing at constant deformation, etc. Electromechanical, servo-hydraulic, linear drive, and resonance drive are used. Means of measuring extension or deformation - Many tests require a measure of the response of the test specimen to the movement of the cross head. Extensometers are sometimes used. Control panel and software - Provides the test results and lets the user set the parameters for data acquisition and analysis. Some older machines have dial or digital displays and chart recorders. Many newer machines have a computer interface for analysis and printing. Conditioning - Many tests require controlled conditioning (temperature, humidity, pressure, etc.). The machine can be in a controlled room or a special environmental chamber can be placed around the test specimen for the test. Test fixtures, specimen-holding jaws, and related sample-making equipment are called for in many test methods. Use The set-up and usage are detailed in a test method, often published by a standards organization. This specifies the sample preparation, fixturing, gauge length (the length which is under study or observation), analysis, etc. The specimen is placed in the machine between the grips, and an extensometer, if required, can automatically record the change in gauge length during the test. If an extensometer is not fitted, the machine itself can record the displacement between its cross heads on which the specimen is held. 
However, this method not only records the change in length of the specimen but also the elastic deflection of all other components of the testing machine and its drive systems, including any slipping of the specimen in the grips. Once the machine is started it begins to apply an increasing load on the specimen. Throughout the test, the control system and its associated software record the load and extension or compression of the specimen. Machines range from very small table-top systems to ones with over 53 MN (12 million lbf) capacity. See also Modulus of elasticity Stress-strain curve Young's modulus Necking (engineering) Fatigue testing Hydraulic press References ASTM E74 - Practice for Calibration of Force Measuring Instruments for Verifying the Force Indication of Testing Machines ASTM E83 - Practice for Verification and Classification on Extensometer Systems ASTM E1012 - Practice for Verification of Test Frame and Specimen Alignment Under Tensile and Compressive Axial Force Application ASTM E1856 - Standard Guide for Evaluating Computerized Data Acquisition Systems Used to Acquire Data from Universal Testing Machines JIS K7171 - Standard for determining the flexural strength of plastic materials and products External links Materials science Tests Measuring instruments
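As an illustration of how the recorded load and extension are reduced to material properties (a sketch with hypothetical specimen geometry and fabricated readings):

```python
import numpy as np

# Reducing raw UTM output (load vs. extension) to engineering stress/strain
# and fitting Young's modulus on the linear region. Geometry and data are
# hypothetical.
AREA = 78.5e-6            # cross-section of a 10 mm diameter bar, m^2
GAUGE_LENGTH = 50e-3      # extensometer gauge length, m

load = np.array([0.0, 2e3, 4e3, 6e3, 8e3, 10e3])                         # N
extension = np.array([0.0, 6.4e-6, 12.7e-6, 19.1e-6, 25.5e-6, 31.8e-6])  # m

stress = load / AREA                  # Pa
strain = extension / GAUGE_LENGTH     # dimensionless
young_modulus = np.polyfit(strain, stress, 1)[0]
print(f"E ~ {young_modulus / 1e9:.0f} GPa")   # ~200 GPa for this fake data
```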
Universal testing machine
[ "Physics", "Materials_science", "Technology", "Engineering" ]
920
[ "Applied and interdisciplinary physics", "Materials science", "nan", "Measuring instruments" ]
7,080,378
https://en.wikipedia.org/wiki/Kolmogorov%20continuity%20theorem
In mathematics, the Kolmogorov continuity theorem is a theorem that guarantees that a stochastic process that satisfies certain constraints on the moments of its increments will be continuous (or, more precisely, have a "continuous version"). It is credited to the Soviet mathematician Andrey Nikolaevich Kolmogorov. Statement Let $(S, d)$ be some complete separable metric space, and let $X \colon [0, +\infty) \times \Omega \to S$ be a stochastic process. Suppose that for all times $T > 0$, there exist positive constants $\alpha, \beta, K$ such that $\mathbb{E}\bigl[ d(X_t, X_s)^\alpha \bigr] \le K \, |t - s|^{1 + \beta}$ for all $0 \le s, t \le T$. Then there exists a modification $\tilde{X}$ of $X$ that is a continuous process, i.e. a process $\tilde{X} \colon [0, +\infty) \times \Omega \to S$ such that $\tilde{X}$ is sample-continuous; for every time $t \ge 0$, $\mathbb{P}(X_t = \tilde{X}_t) = 1$. Furthermore, the paths of $\tilde{X}$ are locally $\gamma$-Hölder-continuous for every $0 < \gamma < \tfrac{\beta}{\alpha}$. Example In the case of Brownian motion on $\mathbb{R}^n$, the choice of constants $\alpha = 4$, $\beta = 1$, $K = n(n + 2)$ will work in the Kolmogorov continuity theorem. Moreover, for any positive integer $m$, the constants $\alpha = 2m$, $\beta = m - 1$ will work, for some positive value of $K$ that depends on $n$ and $m$. See also Kolmogorov extension theorem References p. 51 Theorems regarding stochastic processes
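The Brownian-motion example can be checked numerically (a Monte Carlo sketch, not part of the article's sources): increments of one-dimensional Brownian motion are Gaussian with variance |t − s|, so their fourth moment should come out as 3|t − s|².

```python
import numpy as np

rng = np.random.default_rng(42)

# For 1-D Brownian motion the increments are N(0, |t-s|), hence
# E[|B_t - B_s|^4] = 3 |t-s|^2: the theorem applies with alpha = 4, beta = 1,
# K = 3 (= n(n+2) with n = 1), giving gamma-Hoelder paths for gamma < 1/4.
for dt in (1.0, 0.1, 0.01):
    increments = rng.normal(0.0, np.sqrt(dt), size=2_000_000)
    print(f"dt={dt:5.2f}  E|dB|^4={np.mean(increments**4):.5f}  3*dt^2={3 * dt**2:.5f}")
```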
Kolmogorov continuity theorem
[ "Mathematics" ]
218
[ "Theorems about stochastic processes", "Theorems in probability theory" ]
7,081,475
https://en.wikipedia.org/wiki/Thompson%27s%20Island
Thompson's Island is an alluvial island in the upper Allegheny River. It is located in Pleasant Township, Pennsylvania, and is part of the Allegheny Islands Wilderness in Allegheny National Forest. The island's forests contain old-growth silver maple, sugar maple, American sycamore, and slippery elm. Thompson's Island is the site of the only American Revolutionary War battle in Northwest Pennsylvania: Colonel Daniel Brodhead defeated the Senecas there in 1779. See also List of old growth forests Possession Island (Namibia) References Nature Tourism Forest County Allegheny Islands Wilderness American Revolutionary War sites Old-growth forests Protected areas of Warren County, Pennsylvania Landforms of Warren County, Pennsylvania River islands of Pennsylvania Islands of the Allegheny River in Pennsylvania
Thompson's Island
[ "Biology" ]
146
[ "Old-growth forests", "Ecosystems" ]
7,081,524
https://en.wikipedia.org/wiki/Crull%27s%20Island
Crull's Island is an alluvial island in the upper Allegheny River. It is located in Pleasant Township, Pennsylvania, and is part of the Allegheny Islands Wilderness in Allegheny National Forest. The lower third of Crull's Island was briefly farmed, but was abandoned when farming proved to be unprofitable. It is now a prime location for old-growth, virgin, river-bottom forest. The forests contain silver maple, sugar maple, American sycamore, and slippery elm. See also List of old growth forests References Nature Tourism Allegheny Islands Wilderness Old-growth forests Landforms of Warren County, Pennsylvania River islands of Pennsylvania Islands of the Allegheny River in Pennsylvania
Crull's Island
[ "Biology" ]
138
[ "Old-growth forests", "Ecosystems" ]
7,081,715
https://en.wikipedia.org/wiki/Micropower
Micropower describes the use of very small electric generators and prime movers or devices to convert heat or motion to electricity, for use close to the generator. The generator is typically integrated with microelectronic devices and produces "several watts of power or less." These devices offer the promise of a power source for portable electronic devices which is lighter and has a longer operating time than batteries. Microturbine technology The components of any turbine engine — the gas compressor, the combustion chamber, and the turbine rotor — are fabricated from etched silicon, much like integrated circuits. The technology holds the promise of ten times the operating time of a battery of the same weight as the micropower unit, and similar efficiency to large utility gas turbines. Researchers at Massachusetts Institute of Technology have thus far succeeded in fabricating the parts for such a micro turbine out of six etched and stacked silicon wafers, and are working toward combining them into a functioning engine about the size of a U.S. quarter coin. Researchers at Georgia Tech have built a micro generator 10 mm wide, which spins a magnet above an array of coils fabricated on a silicon chip. The device spins at 100,000 revolutions per minute and produces 1.1 watts of electrical power, sufficient to operate a cell phone. Their goal is to produce 20 to 50 watts, sufficient to power a laptop computer. Scientists at Lehigh University are developing a hydrogen generator on a silicon chip that can convert methanol, diesel, or gasoline into fuel for a microengine or a miniature fuel cell. Professor Sanjeev Mukerjee of Northeastern University's chemistry department is developing fuel cells for the military that will burn hydrogen to power portable electronic equipment, such as night vision goggles, computers, and communication equipment. In his system, a cartridge of methanol would be used to produce hydrogen to run a small fuel cell for up to 5,000 hours. It would be lighter than the rechargeable batteries needed to provide the same power output, with a longer run time. Similar technology could be improved and expanded in future years to power automobiles. The National Academies' National Research Council recommended in a 2004 report that the U.S. Army should investigate such micropower sources for powering electronic equipment to be carried by soldiers in the future, since batteries sufficient to power the computers, sensors, and communications devices would add considerable weight to the burden of infantry soldiers. The Future Warrior Concept of the U.S. Army envisions a 2- to 20-watt micro turbine fueled by a liquid hydrocarbon being used to power communications and wearable heating/cooling equipment for up to six days on 10 ounces of fuel. Other microgenerator/nanogenerator technologies Professor Orest Symko of the University of Utah physics department and his students developed Thermal Acoustic Piezo Energy Conversion (TAPEC), devices of about a cubic inch (16 cubic centimeters) which convert waste heat into acoustic resonance and then into electricity. It would be used to power microelectromechanical systems, or MEMS. The research was funded by the U.S. Army. Symko was to present a paper at the Acoustical Society of America meeting on June 8, 2007. Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin-film PZT in 2005. 
Arman Hajati and Sang-Gook Kim invented the Ultra Wide-Bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam shows a nonlinear stiffness, which provides a passive feedback and results in amplitude-stiffened Duffing mode resonance. Professor Zhong Lin Wang of the Georgia Institute of Technology said his team of investigators had developed a "nanometer-scale generator ... based on arrays of vertically aligned zinc oxide nanowires that move inside a "zigzag" plate electrode." Built into shoes, it could generate electricity from walking to power small electronic devices. It could also be powered by blood flow to power biomedical devices. Per an account of the device which appeared in the journal Science, bending of the zinc oxide nanowire arrays produces an electric field by the piezoelectric properties of the material. The semiconductor properties of the device create a Schottky barrier with rectifying capabilities. The generator is estimated to be 17% to 30% efficient in converting mechanical motion into electricity. This could be used to power biomedical devices that have wireless transmission capabilities for data and control. A later development was to grow hundreds of such nanowires on a substrate that functioned as an electrode. On top of this was placed a silicon electrode covered with a series of platinum ridges. Vibration of the top electrode caused the generation of direct current. A report by Wang was to appear in the August 8, 2007 issue of the journal "Nano Letters," saying that such devices could power implantable biomedical devices. The device would be powered by flowing blood or a beating heart. It could function while immersed in body fluids, and would get its energy from ultrasonic vibrations. Wang expects that an array of the devices could produce 4 watts per cubic centimeter. Goals for further development are to increase the efficiency of the array of nanowires, and to increase the lifetime of the device, which as of April 2007 was only about one hour. By November 2010 Wang and his team were able to produce 3 volts of potential and as much as 300 nanoamperes of current, an output level 100 times greater than was possible a year earlier, from an array measuring about 2 cm by 1.5 cm. The windbelt is a micropower technology invented by Shawn Frayne. It is essentially an aeolian harp, except that it exploits the motion of the string produced by aeroelastic flutter to create a physical oscillation that can be converted to electricity. It avoids the losses inherent in rotating wind powered generators. Prototypes have produced 40 milliwatts in a 16 km/h wind. Magnets on the vibrating membrane generate currents in stationary coils. Piezoelectric nanofibers in clothing could generate enough electricity from the wearer's body movements to power small electronic devices, such as iPods or some of the electronic equipment used by soldiers on the battlefield, based on research by University of California, Berkeley Professor Liwei Lin and his team. One million such fibers could power an iPod, and would be altogether as large as a grain of sand. Researchers at Stanford University are developing "eTextiles" — batteries made of fabric — that might serve to store power generated by such technology. 
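To put these reported outputs in perspective, electrical power is simply voltage times current; multiplying the peak voltage and current quoted above gives only a rough upper-bound figure, since the two peaks need not occur simultaneously into a matched load. The sketch below is illustrative only, using the figures quoted in this article, and compares the resulting power with the other generators mentioned here.

```python
# Illustrative arithmetic only: P = V * I, using figures quoted in this article.

def power_watts(volts, amps):
    """Electrical power in watts from voltage and current."""
    return volts * amps

# ZnO nanowire array (November 2010 figures): 3 volts, 300 nanoamperes
nanowire_power = power_watts(3.0, 300e-9)   # roughly 0.9 microwatts

# Other outputs quoted in this article, for scale
windbelt_power = 40e-3        # windbelt prototype: 40 milliwatts in a 16 km/h wind
microgenerator_power = 1.1    # Georgia Tech spinning-magnet micro generator: 1.1 watts

for name, p in [("ZnO nanowire array", nanowire_power),
                ("Windbelt prototype", windbelt_power),
                ("Georgia Tech micro generator", microgenerator_power)]:
    print(f"{name}: {p:.2e} W")
```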
Thermal resonator technology allows generation of power from the daily change of temperature, even when there is no instantaneous temperature difference as needed for thermoelectric generation, and no sunlight as needed for photovoltaic generation. A phase-change material such as octadecane is selected that changes between solid and liquid as the ambient temperature changes by a few degrees Celsius. In a small demonstration device created by chemical engineering professor Michael Strano and seven others at MIT, a 10-degree-Celsius daily change produced 350 millivolts and 1.3 milliwatts. The power levels envisioned could power sensors and communication devices. See also Battery (electricity) Cell phone Electrical generator Electronics Fuel cell Gas turbine Hub dynamo Integrated circuits Laptop Microelectronics Microelectromechanical systems Portable fuel cell applications Windbelt Nanogenerator References External links MIT Gas Turbine Laboratory Z.L. Wang's lab at Georgia Institute of Technology Electrical generators Microtechnology
Micropower
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,568
[ "Electrical generators", "Machines", "Microtechnology", "Materials science", "Physical systems" ]
7,082,115
https://en.wikipedia.org/wiki/Roofer
A roofer, roof mechanic, or roofing contractor is a tradesman who specializes in roof construction. Roofers replace, repair, and install the roofs of buildings, using a variety of materials, including shingles, single-ply, bitumen, and metal. Roofing work includes the hoisting, storage, application, and removal of roofing materials and equipment, including related insulation, sheet metal, vapor barrier work, and green technologies rooftop jobs such as vegetative roofs, rainwater harvesting systems, and photovoltaic products, such as solar shingles and solar tiles. Roofing work can be physically demanding because it may involve heavy lifting, climbing, bending, and kneeling, often in extreme weather conditions. Roofers are also vulnerable to falls because they work at elevated heights. Various protective measures are required in many countries; in the United States these requirements are established by the Occupational Safety and Health Administration (OSHA). Several resources from occupational health agencies are available on implementing the required and other recommended interventions. Global usage According to data from the U.S. Bureau of Labor Statistics (BLS), there were 129,300 individuals working as roofers in the construction industry. Among that population, a majority of roofers (93%; 119,800) were contractors for Foundation, Structure, and Building Exterior projects. In terms of job outlook, job growth of only 2% is predicted from 2022 to 2032 in the United States. Approximately 12,200 openings are expected each year in this decade. Most of the new jobs are likely to be offered to replace roofers who retire or transition out of the trade. In Australia, a carpenter who builds the roof structure is called a roof carpenter, and the term roofer refers to someone who installs the roof cladding (tiles, tin, etc.). The number of roofers in Australia was estimated to be approximately 15,000. New South Wales is the largest state, with a 29% share of the Australian roofing industry (4,425 companies). Second is Victoria with 3,206 roofers (21%). In the United States and Canada, they are often referred to as roofing contractors or roofing professionals. The most common roofing material in the United States is asphalt shingles. In the past, 3-tab shingles were used, but recent trends show "architectural" or "dimensional" shingles becoming very popular. Depending on the region, other commonly applied roofing materials installed by roofers include concrete tiles, clay tiles, natural or synthetic slate, single-ply (primarily EPDM rubber, PVC, or TPO), rubber shingles (made from recycled tires), glass, metal panels or shingles, wood shakes or shingles, liquid-applied, hot asphalt/rubber, foam, thatch, and solar tiles. "Living roof" systems, or rooftop landscapes, have become increasingly common in recent years in both residential and commercial applications. 
Roles, responsibilities, and tasks Roles and responsibilities of roofing professionals include: Assessing the roof system and components (may include decking and structural components) Determining the proper roofing system for the building Installing roof system components according to manufacturer’s specifications Repairing the roof system Maintenance of the roof system Beyond having common duties such as replacing, repairing, or installing roofs for buildings, roofers can also be involved in other tasks, including but not limited to: Seal exposed heads of nails or screws using roofing cement or caulk to avert possible water infiltration Tailor roofing materials to accommodate architectural elements such as walls or vents Align the installed materials with the roof's edges to ensure a proper fit Apply various roofing materials such as shingles, asphalt, metal, etc., to render the roof impervious to weather conditions Establish roof ventilation mechanisms to regulate airflow and control temperature fluctuations Set up moisture barriers or insulation layers to improve the roof's thermal performance Dismantle the current roof systems to make way for repairs or new installations Substitute impaired or decaying joists or plywood to maintain the roof's structural integrity Measure roof dimensions to determine the amount of materials required Conduct evaluations on problematic roofs to determine the most effective repair approach Hazards Roofing is one of the most dangerous professions among construction occupations since it involves working at heights and exposes workers to dangerous weather conditions such as extreme heat. In the United States as of 2017, the rate of fatalities from falls among roofers was 36 deaths per 100,000 full-time employees, ten times the rate for all construction-related professions combined. In the United States, the fatal injury rate in 2021 was 59.0 per 100,000 full-time roofers, compared to the national average of 3.6 per 100,000 full-time employees. According to the U.S. Bureau of Labor Statistics, roofing has ranked among the five professions with the highest death rates for over 10 years in a row. For Hispanic roofers, data from 2001–2008 show that fatal injuries from falls account for nearly 80% of deaths in this population, the highest cause of death among Hispanics of any construction trade. A major contributing factor to the high fatality rates among roofers in the United States is the nature of the craft, which requires roofers to work on elevated, slanted roof surfaces. Qualitative interviews with Michigan roofing contractors also identified hand and finger injuries from handling heavy material and back injuries as some of the more common task/injury combinations. Ladder falls contribute to the rates of injury and mortality. More than half a million people per year are treated for ladder-fall injuries, and over 3,000 people die as a result. In 2014 the annual cost of ladder injuries, including time away from work and medical, legal, and liability expenses, was estimated to reach $24 billion. Workers who are male, Hispanic, older, or self-employed, those who work in smaller establishments, and those doing construction, maintenance, and repair work experience higher ladder-fall injury rates compared with women, non-Hispanic whites, and persons of other races/ethnicities. Ladders allow roofers to access upper-level work surfaces. 
For safe use, ladders must be inspected for damage by a competent person and must be used on stable and level surfaces unless they are secured to prevent displacement. Safety measures Nearly every industrialized country has established specific safety regulations for work on the roof, including the use of conventional fall protection systems such as personal fall arrest systems, guardrail systems, and safety nets. The European Agency for Safety and Health at Work describes scenarios of risk (fall prevention, falling materials, types of roofs), precautions, training needed and European legislation focused on roof work. European directives set minimum standards for health and safety and are transposed into law in all Member States. In the United States, OSHA standards require employers to have several means of fall protection available to ensure the safety of workers. In construction, this applies to workers who are exposed to falls of 6 feet or more above lower levels. In the United States, regulation of the roofing trade is left up to individual states. Some states leave roofing regulation up to city-level, county-level, and municipal-level jurisdictions. Unlicensed contracting of projects worth over a set threshold may result in stiff fines or even time in prison. In some states, roofers are required to meet insurance and roofing license guidelines. Roofers are also required to display their license number on their marketing material. Canada's rules are very similar to those of the U.S.; regulatory authority depends on where the business is located and falls under the local province. In 2009, in response to high rates of falls in construction, the Japanese Occupational Safety and Health Regulations and Guidelines were amended. In 2013 compliance was low, and the need for further research and countermeasures for preventing falls and ensuring fall protection from heights was identified. The United Kingdom has no legislation in place that requires a roofer to have a license to trade, although some do belong to recognized trade organizations. Personal fall arrest system (PFAS) The purpose of a PFAS is to halt a fall and prevent the worker from making bodily contact with a surface below. The PFAS consists of an anchorage, connectors, and a body harness, and may include a lanyard, deceleration device, lifeline, or suitable combination of these. Beyond these mandatory components of the PFAS, there are also specific fall distances associated with the functioning of the arrest system. Specifically, there is a total fall distance that the PFAS must allow for to assist the worker in avoiding contact with the ground or other surface below. The total fall distance consists of the free fall distance, deceleration distance, D-ring shift, back D-ring height, and safety margin. In addition to the fall distance requirements for each component of the PFAS, the anchorage of the PFAS must also be able to support a minimum of 5,000 pounds per worker. OSHA regulations have several requirements. The free fall distance, the distance that the worker drops before the PFAS begins to work and slows the fall, must be 6 feet or less, and the worker must not contact any lower level. The deceleration distance, the length that the lanyard stretches in order to arrest the fall, must be no more than 3.5 feet. 
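The remaining components (D-ring shift, back D-ring height, and safety margin) and their typical values are discussed in the next paragraph. Purely as an illustrative sketch, assuming the typical figures quoted in this section rather than an OSHA calculation procedure, summing the component distances gives a rough estimate of the clearance required below the anchorage:

```python
# Illustrative sketch: summing the typical PFAS clearance components quoted in
# this article. Not an OSHA method; actual values depend on the equipment used
# and the height of the worker.

def total_fall_clearance(free_fall_ft=6.0,       # OSHA maximum free fall distance
                         deceleration_ft=3.5,    # OSHA maximum deceleration distance
                         d_ring_shift_ft=1.0,    # typical assumed harness D-ring shift
                         d_ring_height_ft=5.0,   # typical back D-ring height (worker about 6 ft tall)
                         safety_margin_ft=2.0):  # commonly used minimum safety margin
    """Estimate the clearance needed below the anchorage, in feet."""
    return (free_fall_ft + deceleration_ft + d_ring_shift_ft
            + d_ring_height_ft + safety_margin_ft)

print(total_fall_clearance())  # 17.5 feet with these typical values
```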
The D-ring shift, the distance that the harness stretches and that the D-ring itself moves when it takes the full weight of the worker during a fall, is generally assumed to be 1 foot, depending on the equipment design and the manufacturer of the harness. The back D-ring height is the distance between the D-ring and the sole of the worker's footwear; employers often use 5 feet as a standard value on the assumption that the worker is 6 feet tall, but because variability in D-ring height can affect the safety of the system, the back D-ring height should be calculated from the actual height of the worker. The safety margin, the additional distance needed to ensure sufficient clearance between the worker and the surface beneath the worker after a fall occurs, is generally considered to be a minimum of 2 feet. Fall restraint system A fall restraint system is a type of fall protection system in which the goal is to stop workers from reaching the unprotected sides or edges of a working area where a fall could subsequently occur. This system is useful where a worker may lose their footing near an unprotected edge or begin sliding. In such a case, the fall restraint system will restrain further movement of the worker toward the unprotected side or edge and prevent a serious fall. Although fall restraint systems are not explicitly defined or mentioned in OSHA's fall protection standards for construction, they are allowed by OSHA as specified in an OSHA letter of interpretation last updated in 2004. OSHA does not have any specific requirements for fall restraint systems, but recommends that any fall restraint system be capable of withstanding 3,000 pounds or at least twice the maximum predicted force necessary to keep the worker from falling to the lower surface. There are no OSHA specifications on the distance from the edge that the restraint system must allow, and although it is likely a very dangerous practice, the OSHA letter of interpretation states that as long as the restraint system prevents the employee from falling off an edge, the employee can be restrained to "within inches of the edge." Guardrail system Guardrail systems serve as an alternative to PFAS and fall restraint systems by having permanent or temporary guardrails around the perimeter of the roof and any roof openings. OSHA requires the height of the top of the rail to be 39-45 inches above the working surface. Mid-rails must be installed midway between the top of the top rail and the walking/working surface when there is no parapet wall at least 21 inches high. Guardrail systems must be capable of withstanding 200 pounds of force in any outward or downward direction applied within 2 inches of the top edge of the rail. Safety net system Safety net systems use a tested safety net adjacent to and below the edge of the walking/working surface to catch a worker who may fall off the roof. Safety nets must be installed as close as practicable under the surface where the work is being performed and must extend outward from the outermost projection of the work surface. Safety nets must be drop-tested with a 400-pound bag of sand, or a certification record must be submitted prior to initial use. Warning line system Warning line systems consist of ropes, wires, or chains which are marked every 6 feet with high-visibility material, and must be supported in such a way that the line is between 34 and 39 inches above the walking/working surface. 
Warning lines are passive systems that allow a perimeter to be formed around the working area so that workers are aware of dangerous edges. Warning lines are only permitted on roofs with a low slope (having a slope of less than or equal to 4 inches of vertical rise for every 12 inches of horizontal length (4:12)). In the context of roofing fall protection, warning line systems may only be used in combination with a guardrail system, a safety net system, a personal fall arrest system, or a safety monitoring system. The warning line system must be erected around all sides of the roof work area. Safety monitoring systems Safety monitoring systems rely on designated safety monitors to watch over the safety of other workers on the roof. Safety monitors must be competent to recognize fall hazards, are tasked with ensuring the safety of other workers on the roof, and must be able to orally warn an employee who is in an unsafe situation. Resources Multi-layered approaches to fall prevention and protection that use the hierarchy of controls can help to prevent fall injuries, incidents, and fatalities in the roofing industry. The hierarchy of controls is a way of determining which actions will best control exposures. The hierarchy of controls has five levels of actions to reduce or remove hazards; elimination, substitution, and engineering controls are among the preferred preventive actions based on general effectiveness. Resources are available to assist with the implementation of fall safety measures in the roofing industry, such as fall prevention plans, a ladder safety mobile application, infographics and tipsheets, toolbox talks, videos and webinars, and safety leadership training. Many of these resources are available in Spanish and other languages in addition to English. The recommended safety measures are described above. Emerging trends Job outlook In terms of job outlook, only a 1% increase in job growth is predicted from 2021 to 2032. The expected job openings (15,000) are largely to replace roofers who retire or transition out of the trade. Solar roofs Solar roof installation is one of the fastest-growing trends in the roofing industry because solar roofs are environmentally friendly and can be a worthwhile economic investment. Specifically, solar roofs have been found to allow homeowners to potentially save 40-70% on electric bills, depending on the number of tiles installed. The US federal government has also begun incentivizing homeowners to install solar roofs, with potential eligibility for a 30% federal income tax credit on the cost of a solar system. Metal roofs Across 14 researched markets, roofing contracting companies have reported receiving more frequent calls regarding potential metal roof installations. For instance, one company that used to receive 5-6 calls in total regarding metal installations has recently been receiving 5-6 such calls weekly. See also Domestic roof construction Roof cleaning Flat roof Membrane roofing List of commercially available roofing materials Prevention through design External links Stop Construction Falls training and other resources from the Center for Construction Research and Training Construction Toolbox Talks Resources in Spanish and Additional Languages Construction Fatality Assessment and Control Evaluation (FACE) database, from the National Institute for Occupational Safety and Health and the Center for Construction Research and Training. 
Introduction to working at height safely from the Health and Safety Executive, UK. Video NAPO: Working at height. Health and safety in roof work, from the Health and Safety Executive, UK. Ladder safety resources from the National Institute for Occupational Safety and Health. You can prevent falls! from the Public Health Agency of Canada. Prevent Construction Falls from Roofs, Ladders, and Scaffolds, from the National Institute for Occupational Safety and Health. Roofing guidelines and recommendations, National Roofing Contractors Association. Education and Training Course Catalog, National Roofing Contractors Association. Occupational Employment and Wage Statistics, US Bureau of Labor Statistics. Infographics and Tipsheets. The Center for Construction Research and Training. References Construction trades workers Occupational safety and health Construction safety
Roofer
[ "Engineering" ]
3,412
[ "Construction", "Construction safety" ]
7,082,456
https://en.wikipedia.org/wiki/Astronomical%20Almanac
The Astronomical Almanac is an almanac published by the United Kingdom Hydrographic Office; it also includes data supplied by many scientists from around the world. On page vii, the listed major contributors to its various Sections are: H.M. Nautical Almanac Office, United Kingdom Hydrographic Office; the Nautical Almanac Office, United States Naval Observatory; the Jet Propulsion Laboratory, California Institute of Technology; the IAU Standards Of Fundamental Astronomy (SOFA) initiative; the Institut de Mécanique Céleste et de Calcul des Éphémérides, Paris Observatory; and the Minor Planet Center, Cambridge, Massachusetts. It is considered a worldwide resource for fundamental astronomical data, often being the first publication to incorporate new International Astronomical Union resolutions. The almanac largely contains Solar System ephemerides based on the JPL Solar System integration "DE440" (created June 2020), and catalogs of selected stellar and extragalactic objects. The material appears in sections, each section addressing a specific astronomical category. The book also includes references to the material, explanations, and examples. It used to be available up to one year in advance of its date; however, the current 2024 edition became available only one month in advance, in December 2023. The Astronomical Almanac Online was a companion to the printed volume. It was designed to broaden the scope of the publication, not duplicate the data. In addition to ancillary information, the Astronomical Almanac Online extended the printed version by providing data best presented in machine-readable form. The 2024 printed edition of the Almanac states on page iv: "The web companion to The Astronomical Almanac has been withdrawn as of January 2023." Publication contents Section A PHENOMENA: includes information on the seasons, phases of the Moon, configurations of the planets, eclipses, transits of Mercury or Venus, sunrise/set, moonrise/set times, and times for twilight. Preprints of many of these data appear in Astronomical Phenomena, another joint publication by USNO and HMNAO. Section B TIME-SCALES AND COORDINATE SYSTEMS: contains calendar information, relationships between time scales, universal and sidereal times, Earth rotation angle, definitions of the various celestial coordinate systems, frame bias, precession, nutation, obliquity, intermediate system, the position and velocity of the Earth, and coordinates of Polaris. Preprints of many of these data also appear in Astronomical Phenomena. Section C SUN: covers detailed positional information on the Sun, including the ecliptic and equatorial coordinates, physical ephemerides, geocentric rectangular coordinates, times of transit, and the equation of time. Section D MOON: contains detailed positional information on the Moon including phases, mean elements of the orbit and rotation, lengths of mean months, ecliptic and equatorial coordinates, librations, and physical ephemerides. Section E PLANETS: consists of detailed positional information on each of the major planets including osculating orbital elements, heliocentric ecliptic and geocentric equatorial coordinates, and physical ephemerides. Section F NATURAL SATELLITES: covers positional information on the satellites of Mars, Jupiter, Saturn (including the rings), Uranus, Neptune, and Pluto. Section G DWARF PLANETS AND SMALL SOLAR SYSTEM BODIES: includes positional and physical data on selected dwarf planets, positional information on bright minor planets and periodic comets. 
Section H STARS AND STELLAR SYSTEMS: contains mean places for bright stars, double stars, UBVRI standards, ubvy and H beta standards, spectrophotometric standards, radial velocity standards, variable stars, exoplanet and host stars, bright galaxies, open clusters, globular clusters, ICRF2 radio source positions, radio flux calibrators, x-ray sources, quasars, pulsars, and gamma ray sources. Section J OBSERVATORIES: was a worldwide index of observatory names, locations, MPC codes, and instrumentation in alphabetical order and by country. This section has now been removed, as stated in the printed 2024 edition on page J1: "We are presently reserving Section J for possible new contents in future editions of The Astronomical Almanac." An explanation is given on page iv: "Section J: Observatories: This section has been removed as it is significantly out-of-date and it is not clear that a static listing of Observatories is a useful service any longer." Section K TABLES AND DATA: includes Julian dates, selected astronomical constants, relations between time scales, coordinates of the celestial pole, reduction of terrestrial coordinates, interpolation methods, vectors and matrices. Section L NOTES AND REFERENCES: gives notes on the data and references for source material found in the almanac. Section M GLOSSARY: contains terms and definitions for many of the words and phrases, with emphasis on positional astronomy. Publication history The Astronomical Almanac is the direct descendant of the British and American navigational almanacs. The British Nautical Almanac and Astronomical Ephemeris had been published since 1766, and was renamed The Astronomical Ephemeris in 1960. The American Ephemeris and Nautical Almanac had been published since 1852. In 1981 the British and American publications were combined under the title The Astronomical Almanac. Explanatory Supplement to the Astronomical Almanac The Explanatory Supplement to the Astronomical Almanac, currently in its third edition (2013), provides detailed discussion of usage and data reduction methods used by the Astronomical Almanac. It covers its history, significance, sources, methods of computation, and use of the data. Because the Astronomical Almanac prints primarily positional data, this book goes into great detail on techniques to get astronomical positions. Earlier editions of the supplement were published in 1961 and in 1992. See also American Ephemeris and Nautical Almanac (specific title) Astronomical Ephemeris (generic article) Almanac (generic article) Nautical almanac (generic article) The Nautical Almanac (familiar name for a specific series of (official British) publications which appeared under a variety of different full titles for the period 1767 to 1959, as well as being a specific official title (jointly UK/US-published) for 1960 onwards) Jet Propulsion Laboratory Development Ephemeris (used by the Astronomical Almanac) References External links The Astronomical Almanac (official publication at U.S. Naval Observatory website) The Astronomical Almanac Online (official publication online at Her Majesty's Nautical Almanac Office website) United States Naval Observatory Astronomical almanacs Astronomical catalogues
Astronomical Almanac
[ "Astronomy" ]
1,342
[ "Astronomical almanacs", "Works about astronomy", "Astronomical catalogues", "Celestial navigation", "Astronomical objects" ]
7,082,492
https://en.wikipedia.org/wiki/Agentless%20data%20collection
In the field of information technology, agentless data collection involves collecting data from computers without installing any new agents on them. References Internet Protocol based network software Data collection
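As a minimal illustrative sketch, and not a description of any particular product, an agentless approach can reuse a remote-access service that is already present on the target machines (such as SSH) instead of installing dedicated collection software; the host names and the command below are hypothetical examples.

```python
# Minimal sketch of agentless data collection: query remote machines over SSH
# (an already-present service) rather than installing a collection agent.
# Host names and the command are hypothetical examples.
import subprocess

HOSTS = ["server1.example.com", "server2.example.com"]
COMMAND = "uptime"  # any read-only command whose output we want to collect

def collect(host, command, timeout=10):
    """Run a command on a remote host via ssh and return its standard output."""
    result = subprocess.run(
        ["ssh", host, command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for host in HOSTS:
        print(host, "->", collect(host, COMMAND))
```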
Agentless data collection
[ "Technology" ]
34
[ "Data collection", "Data" ]
7,082,539
https://en.wikipedia.org/wiki/Binson%20Echorec
The Binson Echorec is a delay effects unit produced by Italian company Binson. Unlike most other electromechanical delays, the Echorec uses an analog magnetic drum recorder instead of a tape loop. After using Meazzi Echomatic machines, Hank Marvin of the Shadows began using Binson echoes. He used various Binson units on record and stage for much of the mid-to-late 1960s. Marvin continued to use Binsons until c.1979/1980, when he began using a Roland Space Echo. Echorecs were used by Syd Barrett, David Gilmour, and Richard Wright of Pink Floyd. The Echorec can be heard on Pink Floyd songs including "Interstellar Overdrive", "Astronomy Domine", "Shine On You Crazy Diamond", "Time", "One of These Days", and "Echoes". A Binson Echorec Baby owned by the band was displayed at the Victoria and Albert Museum as part of the 2017 Pink Floyd: Their Mortal Remains exhibition. See also Roland Space Echo Echoplex References Bibliography External links https://web.archive.org/web/20120317084352/http://binsonamoremio.altervista.org/ http://binson-museum.weebly.com/ http://www.radiomuseum.org/r/binson_echorec_2_t7e.html Sound recording technology Effects units Pink Floyd
Binson Echorec
[ "Technology" ]
306
[ "Recording devices", "Sound recording technology" ]
7,082,603
https://en.wikipedia.org/wiki/Aqueous%20Wastes%20from%20Petroleum%20and%20Petrochemical%20Plants
Aqueous Wastes from Petroleum and Petrochemical Plants is a book about the composition and treatment of the various wastewater streams produced in the hydrocarbon processing industries (i.e., oil refineries, petrochemical plants and natural gas processing plants). When it was published in 1967, it was the first book devoted to that subject. The book is notable for being the first technical publication of a method for the rigorous tray-by-tray design of steam distillation towers for removing hydrogen sulfide from oil refinery wastewaters. Such towers are commonly referred to as sour water strippers. The design method was also presented at a World Petroleum Congress meeting shortly after the book was published. The subjects covered in the book include wastewater pollutants and the pertinent governmental regulations, oil refinery and petrochemical plant wastewater effluents, treatment methods, miscellaneous effluents, data on the cost of various wastewater treatment methods, and an extensive reference list. Availability in libraries The book became a classic in its field and is available in major university, public and industrial libraries worldwide. The book has no ISBN because ISBNs were not in use in 1967. The Library of Congress catalog number (LCCN) is 67019834 and the British Library system number is 012759691. It is no longer in print, but photocopies can be obtained from the ProQuest Company's Books On Demand service. Book reviews One review of the book, by Dr. Nelson V. Nemerow, a civil engineering professor at Syracuse University in New York state, was published in 1968 in the American Chemical Society's journal Environmental Science and Technology. References 1967 in the environment Engineering books Oil refining Books about petroleum Environmental non-fiction books Technology books
Aqueous Wastes from Petroleum and Petrochemical Plants
[ "Chemistry" ]
360
[ "Chemical engineering books", "Petroleum stubs", "Petroleum technology", "Books about petroleum", "Petroleum", "Oil refining" ]
7,082,881
https://en.wikipedia.org/wiki/Enactivism
Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198). "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate. The term 'enactivism' is close in meaning to 'enaction', defined as "the manner in which a subject of perception creatively matches its actions to the requirements of its situation". The introduction of the term enaction in this context is attributed to Francisco Varela, Evan Thompson, and Eleanor Rosch in The Embodied Mind (1991), who proposed the name to "emphasize the growing conviction that cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs". This was further developed by Thompson and others, to place emphasis upon the idea that experience of the world is a result of mutual interaction between the sensorimotor capacities of the organism and its environment. However, some writers maintain that there remains a need for some degree of the mediating function of representation in this new approach to the science of the mind. The initial emphasis of enactivism upon sensorimotor skills has been criticized as "cognitively marginal", but it has been extended to apply to higher level cognitive activities, such as social interactions. "In the enactive view,... knowledge is constructed: it is constructed by an agent through its sensorimotor interactions with its environment, co-constructed between and within living species through their meaningful interaction with each other. In its most abstract form, knowledge is co-constructed between human individuals in socio-linguistic interactions...Science is a particular form of social knowledge construction...[that] allows us to perceive and predict events beyond our immediate cognitive grasp...and also to construct further, even more powerful scientific knowledge." Enactivism is closely related to situated cognition and embodied cognition, and is presented as an alternative to cognitivism, computationalism, and Cartesian dualism. Philosophical aspects Enactivism is one of a cluster of related theories sometimes known as the 4Es. As described by Mark Rowlands, mental processes are: Embodied involving more than the brain, including a more general involvement of bodily structures and processes. Embedded functioning only in a related external environment. 
Enacted involving not only neural processes, but also things an organism does. Extended into the organism's environment. Enactivism proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes. The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world. "Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act." In The Tree of Knowledge Maturana & Varela proposed the term enactive "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism or connectionism." They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism. They seek to "confront the problem of understanding how our existence - the praxis of our living - is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close."[Tree of Knowledge, p. 241] Another important notion relating to enactivism is autopoiesis. The word refers to a system that is able to reproduce and maintain itself. Maturana & Varela write that "This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems". Using the term autopoiesis, they argue that any closed system that has autonomy, self-reference and self-construction (or that has autopoietic activities) has cognitive capacities. Therefore, cognition is present in all living systems. This view is also called autopoietic enactivism. Radical enactivism is another form of the enactivist view of cognition. Radical enactivists often adopt a thoroughly non-representational, enactive account of basic cognition. Basic cognitive capacities mentioned by Hutto and Myin include perceiving, imagining and remembering. They argue that those forms of basic cognition can be explained without positing mental representations. With regard to complex forms of cognition such as language, they think mental representations are needed, because explanations of content are needed. In human beings' public practices, they claim that "such intersubjective practices and sensitivity to the relevant norms comes with the mastery of the use of public symbol systems" (2017, p. 120), and so "as it happens, this appears only to have occurred in full form with construction of sociocultural cognitive niches in the human lineage" (2017, p. 134). 
They conclude that basic cognition, as well as cognition in simple organisms such as bacteria, is best characterized as non-representational. Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body. "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction". Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing." Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality. However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it. Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions. Shaun Gallagher also points out that pragmatism is a forerunner of enactive and extended approaches to cognition. According to him, enactive conceptions of cognition can be found in many pragmatists such as Charles Sanders Peirce and John Dewey. For example, Dewey says that "The brain is essentially an organ for effecting the reciprocal adjustment to each other of the stimuli received from the environment and responses directed upon it" (1916, pp. 336–337). This view is fully consistent with enactivist arguments that cognition is not just a matter of brain processes, and that the brain is only one part of a body engaged in dynamical regulation. Robert Brandom, a neo-pragmatist, comments that "A founding idea of pragmatism is that the most fundamental kind of intentionality (in the sense of directedness towards objects) is the practical involvement with objects exhibited by a sentient creature dealing skillfully with its world" (2008, p. 178). How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding. 
It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect, that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another." The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld. Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis, is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough for the organism to be able to survive in it, and to be competitive enough to be able to reproduce at sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko. According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc." One objection to enactive approaches to cognition is the so-called "scale-up objection". According to this objection, enactive theories only have limited value because they cannot "scale up" to explain more complex cognitive capacities like human thoughts. Those phenomena are extremely difficult to explain without positing representation. But recently, some philosophers are trying to respond to such objection. For example, Adrian Downey (2020) provides a non-representational account of Obsessive-compulsive disorder, and then argues that ecological-enactive approaches can respond to the "scaling up" objection. Psychological aspects McGann & others argue that enactivism attempts to mediate between the explanatory role of the coupling between cognitive agent and environment and the traditional emphasis on brain mechanisms found in neuroscience and psychology. In the interactive approach to social cognition developed by De Jaegher & others, the dynamics of interactive processes are seen to play significant roles in coordinating interpersonal understanding, processes that in part include what they call participatory sense-making. Recent developments of enactivism in the area of social neuroscience involve the proposal of The Interactive Brain Hypothesis where social cognition brain mechanisms, even those used in non-interactive situations, are proposed to have interactive origins. Enactive views of perception In the enactive view, perception "is not conceived as the transmission of information but more as an exploration of the world by various means. Cognition is not tied into the workings of an 'inner mind', some cognitive core, but occurs in directed interaction between the body and the world it inhabits." Alva Noë in advocating an enactive view of perception sought to resolve how we perceive three-dimensional objects, on the basis of two-dimensional input. He argues that we perceive this solidity (or 'volumetricity') by appealing to patterns of sensorimotor expectations. These arise from our agent-active 'movements and interaction' with objects, or 'object-active' changes in the object itself. 
The solidity is perceived through our expectations and skills in knowing how the object's appearance would change with changes in how we relate to it. He saw all perception as an active exploration of the world, rather than being a passive process, something which happens to us. Noë's idea of the role of 'expectations' in three-dimensional perception has been opposed by several philosophers, notably by Andy Clark. Clark points to difficulties of the enactive approach. He points to internal processing of visual signals, for example, in the ventral and dorsal pathways, the two-streams hypothesis. This results in an integrated perception of objects (their recognition and location, respectively) yet this processing cannot be described as an action or actions. In a more general criticism, Clark suggests that perception is not a matter of expectations about sensorimotor mechanisms guiding perception. Rather, although the limitations of sensorimotor mechanisms constrain perception, this sensorimotor activity is drastically filtered to fit current needs and purposes of the organism, and it is these imposed 'expectations' that govern perception, filtering for the 'relevant' details of sensorimotor input (called "sensorimotor summarizing"). These sensorimotor-centered and purpose-centered views appear to agree on the general scheme but disagree on the dominance issue – is the dominant component peripheral or central. Another view, the closed-loop perception one, assigns equal a-priori dominance to the peripheral and central components. In closed-loop perception, perception emerges through the process of inclusion of an item in a motor-sensory-motor loop, i.e., a loop (or loops) connecting the peripheral and central components that are relevant to that item. The item can be a body part (in which case the loops are in steady-state) or an external object (in which case the loops are perturbed and gradually converge to a steady state). These enactive loops are always active, switching dominance by the need. Another application of enaction to perception is analysis of the human hand. The many remarkably demanding uses of the hand are not learned by instruction, but through a history of engagements that lead to the acquisition of skills. According to one interpretation, it is suggested that "the hand [is]...an organ of cognition", not a faithful subordinate working under top-down instruction, but a partner in a "bi-directional interplay between manual and brain activity." According to Daniel Hutto: "Enactivists are concerned to defend the view that our most elementary ways of engaging with the world and others - including our basic forms of perception and perceptual experience - are mindful in the sense of being phenomenally charged and intentionally directed, despite being non-representational and content-free." Hutto calls this position 'REC' (Radical Enactive Cognition): "According to REC, there is no way to distinguish neural activity that is imagined to be genuinely content involving (and thus truly mental, truly cognitive) from other non-neural activity that merely plays a supporting or enabling role in making mind and cognition possible." Participatory sense-making Hanne De Jaegher and Ezequiel Di Paolo (2007) have extended the enactive concept of sense-making into the social domain. The idea takes as its departure point the process of interaction between individuals in a social encounter. 
De Jaegher and Di Paolo argue that the interaction process itself can take on a form of autonomy (operationally defined). This allows them to define social cognition as the generation of meaning and its transformation through interacting individuals. The notion of participatory sense-making has led to the proposal that interaction processes can sometimes play constitutive roles in social cognition (De Jaegher, Di Paolo, Gallagher, 2010). It has been applied to research in social neuroscience and autism. In a similar vein, "an inter-enactive approach to agency holds that the behavior of agents in a social situation unfolds not only according to their individual abilities and goals, but also according to the conditions and constraints imposed by the autonomous dynamics of the interaction process itself". According to Torrance, enactivism involves five interlocking themes related to the question "What is it to be a (cognizing, conscious) agent?" It is: 1. to be a biologically autonomous (autopoietic) organism 2. to generate significance or meaning, rather than to act via...updated internal representations of the external world 3. to engage in sense-making via dynamic coupling with the environment 4. to 'enact' or 'bring forth' a world of significances by mutual co-determination of the organism with its enacted world 5. to arrive at an experiential awareness via lived embodiment in the world. Torrance adds that "many kinds of agency, in particular the agency of human beings, cannot be understood separately from understanding the nature of the interaction that occurs between agents." That view introduces the social applications of enactivism. "Social cognition is regarded as the result of a special form of action, namely social interaction...the enactive approach looks at the circular dynamic within a dyad of embodied agents." In cultural psychology, enactivism is seen as a way to uncover cultural influences upon feeling, thinking and acting. Baerveldt and Verheggen argue that "It appears that seemingly natural experience is thoroughly intertwined with sociocultural realities." They suggest that the social patterning of experience is to be understood through enactivism, "the idea that the reality we have in common, and in which we find ourselves, is neither a world that exists independently from us, nor a socially shared way of representing such a pregiven world, but a world itself brought forth by our ways of communicating and our joint action....The world we inhabit is manufactured of 'meaning' rather than 'information'. Luhmann attempted to apply Maturana and Varela's notion of autopoiesis to social systems. "A core concept of social systems theory is derived from biological systems theory: the concept of autopoiesis. Chilean biologist Humberto Maturana come up with the concept to explain how biological systems such as cells are a product of their own production." "Systems exist by way of operational closure and this means that they each construct themselves and their own realities." Educational aspects The first definition of enaction was introduced by psychologist Jerome Bruner, who introduced enaction as 'learning by doing' in his discussion of how children learn, and how they can best be helped to learn. He associated enaction with two other ways of knowledge organization: Iconic and Symbolic. 
"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)" The term 'enactive framework' was elaborated upon by Francisco Varela and Humberto Maturana. Sriramen argues that enactivism provides "a rich and powerful explanatory theory for learning and being." and that it is closely related to both the ideas of cognitive development of Piaget, and also the social constructivism of Vygotsky. Piaget focused on the child's immediate environment, and suggested cognitive structures like spatial perception emerge as a result of the child's interaction with the world. According to Piaget, children construct knowledge, using what they know in new ways and testing it, and the environment provides feedback concerning the adequacy of their construction. In a cultural context, Vygotsky suggested that the kind of cognition that can take place is not dictated by the engagement of the isolated child, but is also a function of social interaction and dialogue that is contingent upon a sociohistorical context. Enactivism in educational theory "looks at each learning situation as a complex system consisting of teacher, learner, and context, all of which frame and co-create the learning situation." Enactivism in education is very closely related to situated cognition, which holds that "knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used." This approach challenges the "separating of what is learned from how it is learned and used." Artificial intelligence aspects The ideas of enactivism regarding how organisms engage with their environment have interested those involved in robotics and man-machine interfaces. The analogy is drawn that a robot can be designed to interact and learn from its environment in a manner similar to the way an organism does, and a human can interact with a computer-aided design tool or data base using an interface that creates an enactive environment for the user, that is, all the user's tactile, auditory, and visual capabilities are enlisted in a mutually explorative engagement, capitalizing upon all the user's abilities, and not at all limited to cerebral engagement. In these areas it is common to refer to affordances as a design concept, the idea that an environment or an interface affords opportunities for enaction, and good design involves optimizing the role of such affordances. The activity in the AI community has influenced enactivism as a whole. Referring extensively to modeling techniques for evolutionary robotics by Beer, the modeling of learning behavior by Kelso, and to modeling of sensorimotor activity by Saltzman, McGann, De Jaegher, and Di Paolo discuss how this work makes the dynamics of coupling between an agent and its environment, the foundation of enactivism, "an operational, empirically observable phenomenon." That is, the AI environment invents examples of enactivism using concrete examples that, although not as complex as living organisms, isolate and illuminate basic principles. 
Mathematical formalisms Enactive cognition has been formalised in order to address subjectivity in artificial general intelligence. A mathematical formalism of AGI is an agent proven to maximise a measure of intelligence. Prior to 2022, the only such formalism was AIXI, which maximised "the ability to satisfy goals in a wide range of environments". In 2015 Jan Leike and Marcus Hutter showed that "Legg-Hutter intelligence is measured with respect to a fixed UTM. AIXI is the most intelligent policy if it uses the same UTM", a result which "undermines all existing optimality properties for AIXI", rendering them subjective. Criticism One of the essential theses of this approach is that biological systems generate meanings, i.e. they are semiotic systems, engaging in transformational and not merely informational interactions. Since this thesis raises the problem of how cognition begins in organisms at the developmental stage of simple reflexes (the binding problem and the problem of primary data entry), enactivists proposed the concept of embodied information that serves to start cognition. However, critics highlight that this idea requires introducing the nature of intentionality before engaging embodied information. In a natural environment, the stimulus-reaction pair (causation) is unpredictable because many irrelevant stimuli may become randomly associated with the embodied information. Since embodied information is only beneficial when intentionality is already in place, enactivists introduced the notion of the generation of meanings by biological systems (engaging in transformational interactions) without introducing a neurophysiological basis of intentionality. See also Action-specific perception Autopoiesis Biosemiotics Cognitive science Cognitive psychology Computational theory of mind Connectivism Cultural psychology Distributed cognition Embodied cognition Embodied embedded cognition Enactive interfaces Extended cognition Extended mind Externalism#Enactivism and embodied cognition Mind–body problem Phenomenology (philosophy) Representationalism Situated cognition Social cognition Notes References Further reading Di Paolo, E. A., Rohde, M. and De Jaegher, H., (2010). Horizons for the Enactive Mind: Values, Social Interaction, and Play. In J. Stewart, O. Gapenne and E. A. Di Paolo (eds), Enaction: Towards a New Paradigm for Cognitive Science, Cambridge, MA: MIT Press, pp. 33–87. Gallagher, Shaun (2017). Enactivist Interventions: Rethinking the Mind. Oxford University Press. Hutto, D. D. (Ed.) (2006). Radical Enactivism: Intentionality, phenomenology, and narrative. In R. D. Ellis & N. Newton (Series Eds.), Consciousness & Emotion, vol. 2. McGann, M. & Torrance, S. (2005). Doing it and meaning it (and the relationship between the two). In R. D. Ellis & N. Newton, Consciousness & Emotion, vol. 1: Agency, conscious choice, and selective perception. Amsterdam: John Benjamins. Merleau-Ponty, Maurice (2005). Phenomenology of Perception. Routledge. (Originally published 1945) Noë, Alva (2010). Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. Hill and Wang. (fr) Domenico Masciotra (2023). Une approche énactive des formations, Théorie et Méthode. En devenir compétent et connaisseur. ASCAR Inc.
External links Slides related to a chapter on haptic perception (recognition through touch): An overview of the rationale and means and methods for the study of representations that the learner constructs in his/her attempt to understand knowledge in a given field. See in particular §1.2.1.4 Toward social representations (p. 24) An extensive but uncritical introduction to the work of Francisco Varela and Humberto Maturana Entire journal issue on enactivism's status and current debates. Action (philosophy) Behavioral neuroscience Cognitive science Consciousness Educational psychology Emergence Epistemology of science Knowledge representation Metaphysics of mind Motor cognition Neuropsychology Philosophy of perception Philosophical theories Philosophy of psychology Psychological concepts Psychological theories Sociology of knowledge
Enactivism
[ "Biology" ]
5,736
[ "Behavioural sciences", "Behavior", "Behavioral neuroscience" ]
7,083,018
https://en.wikipedia.org/wiki/Inca%20technology
Inca technology includes devices, technologies and construction methods used by the Inca people of western South America (between the 1100s and their conquest by Spain in the 1500s), including the methods Inca engineers used to construct the cities and road network of the Inca Empire. Hydraulic engineering The builders of the empire planned and built impressive waterworks in their city centers, including canals, fountains, drainage systems and expansive irrigation. The Incas' infrastructure and water supply system have been hailed as "the pinnacle of the architectural and engineering works of the Inca civilization". Major Inca centers were chosen by experts who decided the site, its apportionment, and the basic layout of the city. Many of these cities contain great hydraulic engineering works. For example, in the city of Tipon, three irrigation canals diverted water from the Rio Pukara, about 1.35 km to the north, to Tipon's terraces. Tipon also had natural springs, over which fountains were built to supply noble residents with water for non-agricultural purposes. Machu Picchu Machu Picchu was constructed around 1450, a date determined by carbon-14 testing. The famous lost Inca city is an architectural remnant of a society whose understanding of civil and hydraulic engineering was advanced. Today, it is famously known for its remarkable preservation as well as the beauty of its architecture. The site is located 120 km northwest of Cuzco in the Urubamba river valley, Peru. Sitting atop a mountain at 2,560 m above sea level, the site obliged the city planners to consider its steep slopes as well as the humid and rainy climate. The Inca people built this site atop a hill which was terraced (most likely for agricultural purposes). In addition to terraces, Machu Picchu is composed of two additional basic architectural elements: elite residential compounds and religious structures. The site is full of staircases and sculpted rock, which were also important to their architecture and engineering practices. Making models out of clay before beginning to build, the city planners remained consistent with Inca architecture and laid out a city that separated the agricultural and urban areas. Before construction began the engineers had to assess the spring and whether it could provide for all of the city's anticipated citizens. After evaluating the water supply, the civil engineers designed a canal running down to what would become the city's center. The canal descends the mountain slope, enters the city walls, passes through the agricultural sector, then crosses the inner wall into the urban sector, where it feeds a series of fountains. The fountains are publicly accessible and partially enclosed by walls that are typically about 1.2 m high, except for the lowest fountain, which is a private fountain for the Temple of the Condor and has higher walls. At the head of each fountain, a cut stone conduit carries the water to a rectangular spout, which is shaped to create a jet of water suitable for filling an aryballos, a typical Inca clay water jug. The water collects in a stone basin in the floor of the fountain, then enters a circular drain that delivers it to the approach channel for the next fountain. The Incas built the canals on steady grades, using cut stones as the water channels. Most citizens worked on the construction and maintenance of the canal and irrigation systems, using bronze and stone tools to complete the water-tight stone canals.
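The spring assessment mentioned above amounts to a simple water-budget check. The sketch below is only a back-of-envelope illustration: the 300 to 1000 population range matches figures quoted later in this section, while the spring yield and per-person demand are assumed values, not measurements from the site.

```python
# Back-of-envelope water-budget check of the kind described above:
# can the spring's daily yield cover the anticipated population?
# The 300-1000 range matches figures cited later in this section;
# the yield and per-person demand are assumptions, not site data.

spring_yield_l_per_day = 80 * 60 * 24   # assume ~80 L/min from the spring
demand_l_per_person = 20                # assumed daily domestic use, litres

for population in (300, 1000):
    needed = population * demand_l_per_person
    verdict = "sufficient" if spring_yield_l_per_day >= needed else "insufficient"
    print(f"population {population}: demand {needed} L/day vs "
          f"yield {spring_yield_l_per_day} L/day -> {verdict}")
```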
The water then traveled through the channels into sixteen fountains known as the "stairway of fountains", with the first water source reserved for the Emperor. This incredible feat supplied the population of Machu Picchu, which varied between 300 and 1000 people when the emperor was present, and also helped irrigate the farming terraces. The fountains and canal system were built so well that they would, after a few minor repairs, still work today. To go along with the Incas' advanced water supply system, an equally impressive drainage system was built. Machu Picchu contains nearly 130 outlets in the center that moved the water out of the city through walls and other structures. The agricultural terraces are a feature of the complicated drainage system; the stepped fields helped avoid erosion and were built on a slope to direct excess water into channels that ran alongside the stairways. These channels carried the runoff into the main drain, avoiding the main water supply. This carefully planned drainage system shows the Incas' concern and appreciation for clean water. Water engineer Ken Wright and his archaeological team found the emperor's bathing room complete with a separate drain that carried off his used bath water so it would never re-enter Machu Picchu's water supply. Terraces Terrace function and structure The Inca faced many problems living in areas of steep terrain. Two large issues were soil erosion and limited area for growing crops. The solution to these problems was the development of terraces, called andenes. These terraces allowed the Inca to farm land they never could have in the past. How a terrace functions and looks, its geometric alignment, and so on all depend on the slope of the land. The layering of different materials is part of what made the terraces so successful. It starts with a base layer of large rocks, followed by a second layer of smaller rocks, then a layer of sand-like material, and finally the topsoil (a toy model of this layering is sketched below). The most impressive part of the terraces was their drainage systems. Drain outlets were placed in the numerous stone retaining walls. The larger rocks at the base of each terrace level allowed the water to flow more easily through the larger spaces between the rocks, eventually coming out at the "Main Drain". The Inca even constructed different types of drainage channels that were used for different purposes throughout the city. How they were built and why they were effective Studies have indicated that when terraces like the ones in the Colca Valley were being constructed, the first step was excavating into the slope, followed by infilling of the slope. A retaining wall was built to hold the fill material. This wall had many uses, including absorbing heat from the sun during the day and radiating it back out at night, often keeping crops from freezing in the chilling nighttime temperatures, and holding back the different layers of sediment. After the wall was built, the larger rocks were placed on the bottom, then smaller rocks, then sand, then soil. Since the soil was now level, the water did not rush down the side of the mountain, which is what causes erosion. Previously, this erosion was so powerful that it had the potential to wipe out major areas of the Inca road, as well as wash away all of the nutrients and fertile soil. Since the soil never washed away, nutrients would always be added from previously grown crops year after year.
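The layering just described can be read as a drainage design, and a toy calculation shows why the ordering matters. In this sketch only the layer order (large rocks, smaller rocks, sand-like material, topsoil) comes from the text; the thicknesses and relative permeability values are assumptions for illustration.

```python
# The terrace layer stack, top to bottom. Only the ordering comes from
# the text above; thicknesses (m) and the dimensionless relative
# permeabilities (higher = freer flow) are assumed.
layers = [
    ("topsoil",          0.3,   1.0),
    ("sand-like layer",  0.2,   5.0),
    ("smaller rocks",    0.3,  20.0),
    ("large rocks",      0.5, 100.0),
]

# Crude Darcy-style proxy: vertical travel time through a layer scales
# with thickness / permeability, so resistance concentrates at the top.
for name, thickness, k in layers:
    print(f"{name:16s} relative resistance {thickness / k:.3f}")

total = sum(t / k for _, t, k in layers)
print(f"total: {total:.3f}")
# The rocky base adds almost no resistance, so infiltrating rain drains
# to the wall outlets instead of ponding in the soil or running off.
```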
The Inca even grew specific crops together to balance out the optimal amount of nutrients for all plants. For example, a planting method known as the "three sisters" incorporated the growth of corn, beans, and squash in the same terrace. The nitrogen fixed by the beans helped the corn grow, while the squash acted as a mulch keeping the soil moist and also served as a weed repellent. Freeze-drying Purpose All food grown or hunted by the Inca could be freeze-dried. Freeze-drying is still very popular today. One of its biggest benefits is that it removes all of the water and moisture but retains the nutritional value. Water gives meats and vegetables much of their weight, which made freeze-drying very popular for transportation and storage: dried meats lasted twice as long as non-freeze-dried foods. Vegetables The Inca diet was largely vegetarian because large wild game was often reserved for special occasions. A very common and well-known freeze-dried item was the potato, known in its frozen, dried form as chuño. Meats Common meats to freeze-dry included llama, alpaca, duck, and guinea pig. Jerky (ch'arki in Quechua) was much easier to transport and lasted longer than undried meat. Process Both meats and vegetables went through a similar freezing process. They would start by laying all the different foods on rocks; during the cold nights at high altitude, in dry air, the foods would freeze. The next morning, a combination of the thin dry air and the heat from the sun would melt the ice and evaporate the moisture. They would also trample over the food in the morning to press out any extra moisture. The process of freeze-drying was important for transportation and storage. The high elevation (low atmospheric pressure) and low temperatures of the Andes mountains are what allowed them to take advantage of this process. Burning mirror The chronicler Inca Garcilaso de la Vega described the use of a burning mirror as part of the annual "Inti Raymi" (sun festival): "The fire for that sacrifice had to be new, given by the hand of the sun, as they said. For which they took a large bracelet, which they call Chipana (similar to others that the Incas commonly wore on the left wrist) which the high priest had; it was large, larger than the common ones, it had for a medallion a concave vessel, the shape of a half orange and brightly polished, they put it against the sun, and at a certain point where the rays that came out of the vessel hit each other, they put a bit of finely unravelled cotton (they did not know how to make tinder), which caught fire naturally in a short space of time. With this fire, thus given by the hand of the sun, the sacrifice was burned and all the meat of that day was roasted." Pathway systems The vast size of the Inca empire made it essential that efficient and effective transportation systems be created to assist in the exchange of goods, services, and people. At one point, "their (the Inca) empire eventually extended across western South America from Quito in the north to Santiago in the south, making it the largest empire ever seen in the Americas and the largest in the world at that time (between c. 1400 and 1533 CE)." It is known to have "extended some 3500-4000 km along the mountainous backbone of South America."
The trails, roads, and bridges were designed not only to link the empire physically but also to help the empire maintain communication. Rope bridges Rope bridges were an integral part of the Inca road system. "Five centuries ago, the Andes were strung with suspension bridges. By some estimates there were as many as 200 of them." These structures were used to connect two land masses, allowing for the flow of ideas, goods, people, and animals across the Incan empire. "The Inca suspension bridges achieved clear spans of at least 150 feet, probably much greater. This was a longer span than any European masonry bridges at the time." Since the Incan people did not use wheeled vehicles, most traveled by foot and/or used animals to help in the transporting of goods. Construction Although these bridges were assembled using twisted mountain grass, other vegetation, and saplings, they were dependable. These structures were able both to support the weight of traveling people and animals and to withstand weather conditions over long periods. Since grass rots away over time, the bridges had to be rebuilt every year. When the Inca people began building a grass suspension bridge, they would first gather natural materials of grass and other vegetation. They would then braid these elements together into rope, a contribution made by the Inca women. Vast amounts of thin rope were produced, and the villagers would deliver their quota of rope to the builders. The rope was then divided into sections, each consisting of thin ropes laid out together in preparation for making a thicker rope cord. Once the sections were laid out, the strands of rope made earlier were twisted together tightly and evenly, producing the larger and thicker rope cords. These larger ropes were then braided together to create cables, some as thick as a human torso. Depending on the dimensions of the cable, each could weigh up to 200 pounds. These cables were then delivered to the bridge site. It was considered bad luck for women to be anywhere near the construction of the bridge, so the Inca men were in charge of the on-site construction. At the bridge site, builders would travel to the opposite landmass that they were working to connect. Once they were positioned on the opposite side, one of the thin, lightweight ropes would be thrown over to them. This rope would then be used to pull the main cables over the gorge. Stone beams were built on either side of the gorge and were used to position and secure the cables. The cables were wrapped around these stone beams and tightened inch by inch to decrease any slack in the bridge. Once this was finished, the riggers carefully made their way across the hanging cables, tying the foot-ropes together and connecting the handrails and the foot-ropes with the remainder of the thin grass ropes. Not all rope bridges were exactly alike in terms of design and build; some riggers also wove pieces of wood into the foot-ropes. Modern-day rope bridge builders in Huinchiri, Peru make offerings to Pacha Mama, otherwise known as "Mother Earth," throughout their building process to ensure that the bridge will be strong and safe. This may have been a practice used by the Inca people, since they too were religious. If all went smoothly and tasks were performed in a timely fashion, a bridge could be constructed in three days.
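The quoted clear spans of at least 150 feet give a sense of the tension the anchor beams had to resist. The sketch below applies the standard parabolic-cable approximation from elementary statics; the span comes from the quote above, while the sag and the distributed load are illustrative assumptions only.

```python
import math

# Peak tension in a simply suspended cable, parabolic approximation:
# horizontal component H = w * L^2 / (8 * d), vertical V = w * L / 2.
# The ~150 ft (46 m) span is from the quote above; sag and load are assumed.

span = 46.0   # m, roughly the 150 ft clear span quoted above
sag = 4.0     # m, assumed
w = 60.0      # N/m, assumed distributed load (cable self-weight plus traffic)

H = w * span**2 / (8 * sag)     # horizontal tension component
V = w * span / 2                # vertical reaction at each anchorage
T_max = math.hypot(H, V)        # peak tension, at the stone anchor beams
print(f"peak cable tension: {T_max / 1000:.1f} kN")
```

Even with these modest assumed numbers the anchorages see several kilonewtons, which is consistent with the massive braided cables and stone beams the construction account describes.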
Modern rope bridges People today continue to honor Incan traditions and expand their knowledge of rope bridge building. "Each June in Huinchiri, Peru, four Quechua communities on two sides of a gorge join together to build a bridge out of grass, creating a form of ancient infrastructure that dates back at least five centuries to the Inca Empire." The previous Q'eswachaka Bridge is cut down and swept away by the Apurímac River current and a new bridge is built in its place. This tradition links the Quechua communities of Huinchiri, Chaupibanda, Choccayhua, and Ccollana Quehue to their past ancestors. "According to our grandfathers, this bridge was built during the time of the Inkas 600 years ago, and on it they walked their llamas and alpacas carrying their produce." - Eleuterio Ccallo Tapia "A small portion of a 60-foot replica built by Quechua weavers is on view in The Great Inka Road: Engineering an Empire at the Smithsonian's National Museum of the American Indian in Washington, DC." This exhibit was on display at the museum through June 27, 2021, and could also be experienced online. Either way, museums like the Smithsonian are working to preserve and display examples and knowledge of Inca-inspired rope bridges today. John Wilford reported in the New York Times in 2007 that students at the Massachusetts Institute of Technology were learning much more than how objects are made; they were being taught to observe and test how archeology entwines with culture. At that time, students in a course called "materials in human experience" were busy making a 60-foot-long fiber bridge in the Peruvian style. Through this project, they were introduced to the Inca people's way of thinking and building. After creating their ropes and cables, they planned to stretch the bridge across a dry basin between two campus buildings. Roads According to author Mark Cartwright, "Inca roads covered over 40,000 km (25,000 miles), principally in two main highways running north to south across the Inca Empire, which eventually spread over ancient Peru, Ecuador, Chile and Bolivia." Several sources challenge Cartwright's claim, stating that the Inca roads covered either more or less area than he describes. This figure is difficult to pin down, since some Inca pathways may remain unaccounted for, having been washed away or covered by natural forces. "Inca engineers were also undaunted by geographical difficulties and built roads across ravines, rivers, deserts, and mountain passes up to 5,000 meters high." The constructed roads are not uniform in design; most of the uncovered roads are about one to four meters wide, though some, such as the highway in Huanuco Pampa province, are much larger. As mentioned in the Pathway systems section, the Inca people mainly traveled on foot, so the roads were most likely built and paved for both humans and animals to walk and/or run along. Several roads were paved with stones or cobbles and some were "edged and protected with the use of small stone walls, stone markers, wooden or cane posts, or piles of stones." Drainage was of particular interest and importance to the Inca people. Drains and culverts were built to ensure that rainwater would run off the road's surface effectively, directing the accumulating water either along or under the road.
Uses As mentioned in the section Pathway systems, there were several uses for the Inca roads. The most obvious way in which the Inca people used the road and trail systems was to transport goods, which they did on foot and sometimes with the help of animals (llamas and alpacas). Not only were goods transported throughout the vast empire, but so were ideas and messages. The Inca needed a system of communication, so they relied on chasquis, or messengers. The chasquis were chosen from among the strongest and fittest young males, and they ran several miles per day solely to deliver messages. The messengers rested in cabins called "tambos", which the Inca built along the roads; in a situation of rebellion or war, these buildings could also house the Inca army. Modern Inca roads Today, many people travel to South America to hike the Inca trail. Walking and climbing the trail not only allows visitors to experience the historic pathways of the Inca people, but also lets tourists and locals see the Inca ruins, mountains, and exotic vegetation and animals. References Bibliography "Inka Hydraulic Engineering", University of Colorado at Denver. 19 September 2006. Brown, Jeff L. "Water Supply and Drainage Systems at Machu Picchu". 19 September 2006. Wright, Kenneth R. "Machu Picchu: Prehistoric Public Works." American Public Works Association APWA Reporter, 17 November 2003. D'Altroy, Terence N. and Christine A. Hastorf. Empire and Domestic Economy. New York: Kluwer Academic/Plenum Publishers, 2001. Wright, Kenneth, Jonathan M. Kelly, Alfredo Valencia Zegarra. "Machu Picchu: Ancient Hydraulic Engineering". Journal of Hydraulic Engineering, October 1997. Bauer, Brian. The Development of the Inca State. University of Texas Press, Austin, 1992. Hyslop, John. Inka Settlement Planning. University of Texas Press, Austin, 1990. Inca Civil engineering History of engineering Technology by period
Inca technology
[ "Engineering" ]
3,981
[ "Construction", "Civil engineering" ]
7,083,038
https://en.wikipedia.org/wiki/Voice%20portal
Voice portals are the voice equivalent of web portals, giving access to information through spoken commands and voice responses. Ideally a voice portal could be an access point for any type of information, services, or transactions found on the Internet. Common uses include movie time listings and stock trading. In telecommunications circles, voice portals may be referred to as interactive voice response (IVR) systems, but this term also includes DTMF services. With the emergence of conversational assistants such as Apple's Siri, Amazon Alexa, Google Assistant, Microsoft Cortana, and Samsung's Bixby, voice portals can now be accessed through mobile devices and far-field voice smart speakers such as the Amazon Echo and Google Home. Advantages Voice portals have no dependency on the access device; even low-end mobile handsets can access the service. Voice portals talk to users in their local language, and less customer learning is required for voice services than for Internet- or SMS-based services. A complex search query that otherwise would take multiple widgets (drop down, check box, text box filling) can easily and effortlessly be formulated by anyone who can speak, without needing to be familiar with any visual interfaces. For instance, one can say, "Find me an eyeliner, not too thick, dark brown, from Estee Lauder MAC, that's below thirty dollars" or "What is the closest liquor store from here and what time do they close?" Limitations Voice is the most natural communication medium, but the information that can be provided is limited compared to visual media. For example, most Internet users try a search term, scan results, then adjust the search term to eliminate irrelevant results. They may take two or three quick iterations to get a list that they are confident will contain what they are looking for. The equivalent approach is not practical when results are spoken, as it would take far too long. In this case, a multimodal interaction would be preferable to a voice-only interface. Trends Live-agent and Internet-based voice portals are converging, and the range of information they can provide is expanding. Live-agent portals are introducing greater automation through speech recognition and text-to-speech technology, in many cases providing fully automated service, while automated Internet-based portals are adding operator fallback in premium services. The live-agent portals, which used to rely entirely on pre-structured databases holding specific types of information, are expanding into more free-form Internet access, while the Internet-based portals are adding pre-structured content to improve automation of the more common types of request. Speech technology is starting to introduce artificial intelligence concepts that make it practical to recognise a much broader range of utterances, learning from experience. This promises to greatly improve recognition rates and expand the range of information that can be provided by a voice portal. Technology providers A number of web-based companies are dedicated to providing voice-based access to Internet information to consumers. Quack.com launched its service in March 2000 and has since obtained the first overall voice portal patent. Quack.com was acquired by AOL in 2000 and relaunched as AOL By Phone later that year. Tellme Networks was acquired by Microsoft in 2007. Nuance, the dominant provider of speech recognition and text-to-speech technology, is starting to deliver voice portal solutions.
Other companies in this space include TelSurf Networks, FonGenie, Apptera and Call Genie. Apart from public voice portal services, a number of technology companies, including Alcatel-Lucent, Avaya, and Cisco, offer commercial enterprise-grade voice portal products to be used by companies to serve their clients. Avaya also has a carrier-grade portfolio. See also Call avoidance Mobile Search Mobile local search References External links Designing the Voice User Interface for Automated Directory Assistance. Amir Mané and Esther Levin 888-TelSurf (beta) Review & Rating | PCMag.com Start-ups dream of a Web that talks VoiceDBC: A semi-automatic tool for writing speech applications. Honours Thesis 2002. Stephen Choularton PhD Speech recognition Natural language processing Telephony
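The Advantages section above contrasts one spoken sentence with the several widgets a visual interface would need. A toy sketch of that mapping from utterance to structured query follows; the patterns, field names, and vocabulary here are hypothetical illustrations, not part of any real voice portal product.

```python
import re

# Toy slot-filling: one spoken sentence carries the same constraints a
# visual interface would gather through separate widgets. The patterns
# and field names are hypothetical illustrations, not a real API.

WORD_NUMBERS = {"twenty": 20, "thirty": 30, "forty": 40}

def parse(utterance):
    query = {"product": None, "color": None, "max_price": None}
    if "eyeliner" in utterance:
        query["product"] = "eyeliner"                 # product widget
    color = re.search(r"(dark brown|black|blue)", utterance)
    if color:
        query["color"] = color.group(1)               # color drop-down
    price = re.search(r"below (\w+) dollars", utterance)
    if price:
        query["max_price"] = WORD_NUMBERS.get(price.group(1))  # price box
    return query

print(parse("Find me an eyeliner, not too thick, dark brown, that's below thirty dollars"))
# -> {'product': 'eyeliner', 'color': 'dark brown', 'max_price': 30}
```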
Voice portal
[ "Technology" ]
842
[ "Natural language processing", "Natural language and computing" ]
7,083,144
https://en.wikipedia.org/wiki/Helenalin
Helenalin, or (-)-4-Hydroxy-4a,8-dimethyl-3,3a,4a,7a,8,9,9a-octahydroazuleno[6,5-b]furan-2,5-dione, is a toxic sesquiterpene lactone which can be found in several plants such as Arnica montana and Arnica chamissonis. Helenalin is responsible for the toxicity of Arnica spp. Although toxic, helenalin possesses some in vitro anti-inflammatory and anti-neoplastic effects. Helenalin can inhibit certain enzymes, such as 5-lipoxygenase and leukotriene C4 synthase. For this reason the compound or its derivatives may have potential medical applications. Structure and reactivity Helenalin belongs to the group of sesquiterpene lactones, which are characterised by a lactone ring. Besides this ring, the structure of helenalin has two reactive groups (an α-methylene-γ-butyrolactone and a cyclopentenone group) that can undergo a Michael addition. The double bond conjugated with the carbonyl group can undergo a Michael addition with a thiol group, also called a sulfhydryl group. Therefore, helenalin can interact with proteins by forming covalent bonds to the thiol groups of cysteine-containing proteins/peptides, such as glutathione. This can disrupt the target molecule's biological function. Addition reactions can occur because thiol groups are strong nucleophiles; a thiol has a lone pair of electrons. Chemical derivatives There are several derivatives of helenalin known within the same sesquiterpene lactone group, the pseudoguaianolides. Most of these derivatives occur naturally, such as the compound dihydrohelenalin, but there are also some semi-synthetic derivatives known, such as 2β-(S-glutathionyl)-2,3-dihydrohelenalin. In general, most derivatives are more toxic than helenalin itself. Among these, derivatives with the shortest ester groups are most likely to have the highest toxicity. Other derivatives include 11α,13-dihydrohelenalin acetate, 2,3-dehydrohelenalin and 6-O-isobutyrylhelenalin. The molecular conformation differs between helenalin and its derivatives, which affects the lipophilicity and the accessibility of the Michael addition sites. Poorer accessibility results in a compound with lower toxicity. Another possibility is that a derivative lacking one of the reactive groups, such as the cyclopentenone group, may have a lower toxicity. Some biochemical effects of helenalin Helenalin can target the p65 subunit (also called RelA) of the transcription factor NF-κB. It can react with Cys38 in RelA by Michael addition. Both reactive groups, the α-methylene-γ-butyrolactone and the cyclopentenone, can react with this cysteine. It was also found that helenalin can inhibit human telomerase, a ribonucleoprotein complex, by Michael addition. In this case also, both reactive groups of helenalin can interact with the thiol group of a cysteine and inhibit the telomerase activity. Helenalin inhibits the formation of leukotrienes in human blood cells by inhibiting LTC4 synthase activity, reacting through its cyclopentenone ring with a thiol group of the synthase. Metabolism Helenalin inhibits cytochrome P450 enzymes by reacting with thiol groups, resulting in inhibition of the mixed-function oxidase system. These effects are important for the cytotoxicity of helenalin. The levels of glutathione, which contains sulfhydryl groups, are reduced in helenalin-treated cells, further increasing the toxicity of helenalin. Depending on the dose of helenalin, thiol-bearing compounds such as glutathione may provide some protection to cells from helenalin toxicity.
Helenalin has also been seen to increase CPK and LDH activities in serum and to inhibit multiple liver enzymes involved in triglyceride synthesis. Helenalin therefore causes acute liver toxicity, accompanied by a decrease in cholesterol levels. Helenalin also suppresses essential immune functions, such as those mediated by activated CD4+ T-cells, by multiple mechanisms. In vitro anti-inflammatory and anti-neoplastic effects Helenalin and some of its derivatives have been shown to have potent anti-inflammatory and anti-neoplastic effects in vitro. Some studies have suggested that the inhibition by helenalin of platelet leukotriene C4 synthase, telomerase activity and transcription factor NF-κB contributes to helenalin's in vitro anti-inflammatory and anti-neoplastic activity. The dose used varied per study. There is currently no in vivo evidence regarding helenalin's anti-inflammatory and anti-tumour effects, if any. The efficacy of helenalin for treatment of pain and swelling, when applied topically, is not supported by the currently available evidence at doses of 10% or lower. For doses higher than 10%, more research is required to determine whether they remain safe and are more effective than currently available medications. Application In former times, plant extracts containing helenalin were used as a herbal medicine for the treatment of sprains, blood clots, muscle strain and rheumatic complaints. Currently helenalin is used topically in homeopathic gels and microemulsions. Helenalin is not FDA-approved for medical application. Toxicity When applied topically on humans, helenalin can cause contact dermatitis in sensitive individuals. However, it is considered generally safe when applied this way. Oral administration of large doses of helenalin can cause gastroenteritis, muscle paralysis, and cardiac and liver damage. The toxicity of helenalin was studied in mammalian species such as mice, rats, rabbits and sheep, where the oral LD50 of helenalin was established at between 85 and 150 mg/kg. It was shown in a mouse model that helenalin caused reduced levels of cholesterol. In a rat model, alcohol-induced hepatic injury was prevented by helenalin administration. Parenteral administration showed a higher toxic effect when compared to oral administration. Pharmacology Helenalin has a variety of observed effects in vitro including anti-inflammatory and antitumour activities. Helenalin has been shown to selectively inhibit the transcription factor NF-κB, which plays a key role in regulating immune response, through a unique mechanism. In vitro, it is also a potent, selective inhibitor of human telomerase (which may partially account for its antitumor effects), has anti-trypanosomal activity, and is toxic to Plasmodium falciparum. Animal and in vitro studies have also suggested that helenalin can reduce the growth of Staphylococcus aureus and reduce the severity of S. aureus infection. References Sesquiterpene lactones Secondary alcohols Enones Plant toxins Azulenofurans Cyclopentenes Vinylidene compounds
Helenalin
[ "Chemistry" ]
1,499
[ "Chemical ecology", "Plant toxins" ]
7,083,690
https://en.wikipedia.org/wiki/Omnitruncation
In geometry, an omnitruncation of a convex polytope is a simple polytope of the same dimension, having a vertex for each flag of the original polytope and a facet for each face of any dimension of the original polytope. Omnitruncation is the dual operation to barycentric subdivision. Because the barycentric subdivision of any polytope can be realized as another polytope, the same is true for the omnitruncation of any polytope. When omnitruncation is applied to a regular polytope (or honeycomb) it can be described geometrically as a Wythoff construction that creates a maximum number of facets. It is represented in a Coxeter–Dynkin diagram with all nodes ringed. It is a shortcut term which has a different meaning in progressively higher-dimensional polytopes: Uniform polytope truncation operators For regular polygons: An ordinary truncation, t0,1{p}. For uniform polyhedra (3-polytopes): A cantitruncation, t0,1,2{p,q}. (Application of both cantellation and truncation operations) For uniform polychora (4-polytopes): A runcicantitruncation, t0,1,2,3{p,q,r}. (Application of runcination, cantellation, and truncation operations) For uniform polytera (5-polytopes): A steriruncicantitruncation, t0,1,2,3,4{p,q,r,s}. (Application of sterication, runcination, cantellation, and truncation operations) For uniform n-polytopes: t0,1,...,n-1{p1,p2,...,pn-1}. See also Expansion (geometry) Omnitruncated polyhedron References Further reading Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 145–154 Chapter 8: Truncation, p 210 Expansion) Norman Johnson Uniform Polytopes, Manuscript (1991) N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 External links Polyhedra Uniform polyhedra
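The opening definition, one vertex per flag, can be sanity-checked on a small case. In the sketch below, the four-flags-per-edge counting argument is standard polyhedral combinatorics rather than something spelled out in the article, and the cube is chosen only as a convenient example.

```python
# A flag of a polyhedron is an incident (vertex, edge, face) triple.
# Every edge joins exactly 2 vertices and lies on exactly 2 faces,
# so each edge accounts for 2 * 2 = 4 flags.
def flag_count(num_edges):
    return 4 * num_edges

print(flag_count(12))  # cube: 12 edges -> 48 flags
# The omnitruncated cube (the truncated cuboctahedron) has exactly
# 48 vertices, one per flag, matching the definition above.
```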
Omnitruncation
[ "Physics" ]
491
[ "Symmetry", "Uniform polytopes", "Truncated tilings", "Tessellation", "Uniform polyhedra" ]
7,084,895
https://en.wikipedia.org/wiki/Hot-carrier%20injection
Hot carrier injection (HCI) is a phenomenon in solid-state electronic devices where an electron or a "hole" gains sufficient kinetic energy to overcome a potential barrier necessary to break an interface state. The term "hot" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of solid-state devices. Physics The term "hot carrier injection" usually refers to the effect in MOSFETs, where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric, which usually is made of silicon dioxide (SiO2). To become "hot" and enter the conduction band of SiO2, an electron must gain a kinetic energy of ~3.2 eV. For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term "hot electron" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi-Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain, all else being equal). The term "hot electron" was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. More broadly, the term describes electron distributions describable by the Fermi function, but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increased leakage current and possible damage to the encasing dielectric material if the hot carrier disrupts the atomic structure of the dielectric. Hot electrons can be created when a high-energy photon of electromagnetic radiation (such as light) strikes a semiconductor. The energy from the photon can be transferred to an electron, exciting the electron out of the valence band and forming an electron-hole pair. If the electron receives enough energy not merely to reach the conduction band but to rise well above its edge, it becomes a hot electron. Such electrons are characterized by high effective temperatures. Because of the high effective temperatures, hot electrons are very mobile, and likely to leave the semiconductor and travel into other surrounding materials. In some semiconductor devices, the energy that hot electrons dissipate as phonons represents an inefficiency, as energy is lost as heat. For instance, some solar cells rely on the photovoltaic properties of semiconductors to convert light to electricity. In such cells, the hot electron effect is the reason that a portion of the light energy is lost to heat rather than converted to electricity. Hot electrons arise generically at low temperatures even in degenerate semiconductors or metals. There are a number of models to describe the hot-electron effect. The simplest predicts an electron-phonon (e-p) interaction based on a clean three-dimensional free-electron model. Hot electron effect models illustrate a correlation between power dissipated, the electron gas temperature and overheating.
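A quick way to see why the ~3.2 eV figure matters is to ask how far an electron must travel without scattering to pick up that much energy from the channel field. The 3.2 eV barrier is from the text above; the field strengths in the sketch below are assumed values, chosen only to bracket the magnitudes discussed for short-channel devices.

```python
# Ballistic distance needed to gain the ~3.2 eV SiO2 barrier energy: for
# a charge of one electron, energy gained (eV) = field (V/m) * distance (m).
# The 3.2 eV figure is from the text; the field values are assumptions.

BARRIER_EV = 3.2

for field_v_per_cm in (1e5, 3e5, 1e6):
    field_v_per_m = field_v_per_cm * 100
    distance_m = BARRIER_EV / field_v_per_m
    print(f"field {field_v_per_cm:.0e} V/cm -> ballistic distance {distance_m * 1e9:.0f} nm")
```

Only carriers that happen to avoid scattering over such distances reach the barrier energy, which is one way to see why hot-carrier injection worsens as channels shrink and fields rise.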
Effects on transistors In MOSFETs, hot electrons have sufficient energy to tunnel through the thin gate oxide to show up as gate current, or as substrate leakage current. In a MOSFET, when the gate is positive and the switch is on, the device is designed with the intent that electrons will flow laterally through the conductive channel, from the source to the drain. Hot electrons may jump from the channel region or from the drain, for instance, and enter the gate or the substrate. These hot electrons do not contribute to the amount of current flowing through the channel as intended and instead are a leakage current. Attempts to correct or compensate for the hot electron effect in a MOSFET may involve locating a diode in reverse bias at the gate terminal or other manipulations of the device (such as lightly doped drains or double-doped drains). When electrons are accelerated in the channel, they gain energy along the mean free path. This energy is lost in two different ways: The carrier hits an atom in the substrate. The collision then creates a cold carrier and an additional electron-hole pair. In the case of nMOS transistors, additional electrons are collected by the channel and additional holes are evacuated by the substrate. The carrier hits a Si-H bond and breaks the bond. An interface state is created and the hydrogen atom is released into the substrate. The probability of hitting either an atom or a Si-H bond is random, and the average energy involved in each process is the same in both cases. This is the reason why the substrate current is monitored during HCI stress. A high substrate current means a large number of created electron-hole pairs and thus an efficient Si-H bond breakage mechanism. When interface states are created, the threshold voltage is modified and the subthreshold slope is degraded. This leads to lower current and degrades the operating frequency of the integrated circuit. Scaling Advances in semiconductor manufacturing techniques and ever increasing demand for faster and more complex integrated circuits (ICs) have driven the associated metal–oxide–semiconductor field-effect transistor (MOSFET) to scale to smaller dimensions. However, it has not been possible to scale the supply voltage used to operate these ICs proportionately, due to factors such as compatibility with previous generation circuits, noise margin, power and delay requirements, and non-scaling of threshold voltage, subthreshold slope, and parasitic capacitance. As a result, internal electric fields increase in aggressively scaled MOSFETs, which comes with the additional benefit of increased carrier velocities (up to velocity saturation), and hence increased switching speed, but also presents a major reliability problem for the long term operation of these devices, as high fields induce hot carrier injection which affects device reliability. Large electric fields in MOSFETs imply the presence of high-energy carriers, referred to as "hot carriers". These hot carriers have sufficiently high energies and momenta to allow them to be injected from the semiconductor into the surrounding dielectric films, such as the gate and sidewall oxides as well as the buried oxide in the case of silicon on insulator (SOI) MOSFETs. Reliability impact The presence of such mobile carriers in the oxides triggers numerous physical damage processes that can drastically change the device characteristics over prolonged periods.
The accumulation of damage can eventually cause the circuit to fail as key parameters such as the threshold voltage shift due to such damage. The degradation in device behavior caused by this accumulated hot carrier damage is called "hot carrier degradation". The useful life-time of circuits and integrated circuits based on such a MOS device is thus affected by the life-time of the MOS device itself. To assure that integrated circuits manufactured with minimal geometry devices will not have their useful life impaired, the HCI degradation of the component MOS devices must be well understood. Failure to accurately characterize HCI life-time effects can ultimately affect business costs such as warranty and support costs and impact marketing and sales promises for a foundry or IC manufacturer. Relationship to radiation effects Hot carrier degradation is fundamentally the same as the ionizing radiation effect known as total dose damage to semiconductors, as experienced in space systems due to solar proton, electron, X-ray and gamma ray exposure. HCI and NOR flash memory cells HCI is the basis of operation for a number of non-volatile memory technologies such as EPROM cells. As soon as the potentially detrimental influence of HC injection on circuit reliability was recognized, several fabrication strategies were devised to reduce it without compromising circuit performance. NOR flash memory exploits the principle of hot carrier injection by deliberately injecting carriers across the gate oxide to charge the floating gate. This charge alters the MOS transistor threshold voltage to represent a logic '0' state. An uncharged floating gate represents a '1' state. Erasing the NOR flash memory cell removes stored charge through the process of Fowler–Nordheim tunneling. Because of the damage to the oxide caused by normal NOR flash operation, HCI damage is one of the factors limiting the number of write-erase cycles. Because the ability to hold charge and the formation of damage traps in the oxide affect the ability to maintain distinct '1' and '0' charge states, HCI damage results in the closing of the non-volatile memory logic margin window over time. The number of write-erase cycles at which '1' and '0' can no longer be distinguished defines the endurance of a non-volatile memory. See also Time-dependent gate oxide breakdown (also time-dependent dielectric breakdown, TDDB) Electromigration (EM) Negative bias temperature instability (NBTI) Stress migration Lattice scattering References External links An article about hot carriers at www.siliconfareast.com IEEE International Reliability Physics Symposium, the primary academic and technical conference for semiconductor reliability involving HCI and other reliability phenomena Integrated circuits Semiconductors Semiconductor device defects Charge carriers Electric and magnetic fields in matter
Hot-carrier injection
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,940
[ "Physical phenomena", "Matter", "Integrated circuits", "Physical quantities", "Charge carriers", "Computer engineering", "Semiconductors", "Technological failures", "Semiconductor device defects", "Electric and magnetic fields in matter", "Materials science", "Materials", "Electrical phenome...
7,085,075
https://en.wikipedia.org/wiki/Oswald%20Veblen%20Prize%20in%20Geometry
The Oswald Veblen Prize in Geometry is an award granted by the American Mathematical Society for notable research in geometry or topology. It was funded in 1961 in memory of Oswald Veblen and first issued in 1964. The Veblen Prize is now worth US$5000, and is awarded every three years. The first seven prize winners were awarded for works in topology. James Harris Simons and William Thurston were the first ones to receive it for works in geometry (for some distinctions, see geometry and topology). As of 2022, there have been thirty-seven prize recipients. List of recipients 1964 Christos Papakyriakopoulos, for: "On Solid Tori", Proceedings of the London Mathematical Society "On Dehn's lemma and the asphericity of knots", Annals of Mathematics 1964 Raoul Bott, for: "The space of loops on a Lie group", Michigan Math. J. "The stable homotopy of the classical groups", Annals of Mathematics 1966 Stephen Smale 1966 Morton Brown and Barry Mazur 1971 Robion Kirby, for: "Stable homeomorphisms and the annulus conjecture", Proc. Amer. Math. Soc 1971 Dennis Sullivan 1976 William Thurston 1976 James Harris Simons 1981 Mikhail Gromov for: "Manifolds of negative curvature." Journal of Differential Geometry 13 (1978), no. 2, 223–230. "Almost flat manifolds." Journal of Differential Geometry 13 (1978), no. 2, 231–241. "Curvature, diameter and Betti numbers." Comment. Math. Helv. 56 (1981), no. 2, 179–195. "Groups of polynomial growth and expanding maps." Inst. Hautes Études Sci. Publ. Math. 53 (1981), 53–73. "Volume and bounded cohomology." Inst. Hautes Études Sci. Publ. Math. 56 (1982), 5–99 1981 Shing-Tung Yau for: "On the regularity of the solution of the n-dimensional Minkowski problem." Comm. Pure Appl. Math. 29 (1976), no. 5, 495–516. (with Shiu-Yuen Cheng) "On the regularity of the Monge-Ampère equation" . Comm. Pure Appl. Math. 30 (1977), no. 1, 41–68. (with Shiu-Yuen Cheng) "Calabi's conjecture and some new results in algebraic geometry." Proc. Natl. Acad. Sci. U.S.A. 74 (1977), no. 5, 1798–1799. "On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I." Comm. Pure Appl. Math. 31 (1978), no. 3, 339–411. "On the proof of the positive mass conjecture in general relativity." Comm. Math. Phys. 65 (1979), no. 1, 45–76. (with Richard Schoen) "Topology of three-dimensional manifolds and the embedding problems in minimal surface theory." Ann. of Math. (2) 112 (1980), no. 3, 441–484. (with William Meeks) 1986 Michael Freedman for: The topology of four-dimensional manifolds. Journal of Differential Geometry 17 (1982), no. 3, 357–453. 1991 Andrew Casson for: his work on the topology of low dimensional manifolds and specifically for the discovery of an integer valued invariant of homology three spheres whose reduction mod(2) is the invariant of Rohlin. 1991 Clifford Taubes for: Self-dual Yang-Mills connections on non-self-dual 4-manifolds. Journal of Differential Geometry 17 (1982), no. 1, 139–170. Gauge theory on asymptotically periodic 4-manifolds. J. Differential Geom. 25 (1987), no. 3, 363–430. Casson's invariant and gauge theory. J. Differential Geom. 31 (1990), no. 2, 547–599. 1996 Richard S. Hamilton for: The formation of singularities in the Ricci flow. Surveys in differential geometry, Vol. II (Cambridge, MA, 1993), 7–136, Int. Press, Cambridge, MA, 1995. Four-manifolds with positive isotropic curvature. Comm. Anal. Geom. 5 (1997), no. 1, 1–92. 1996 Gang Tian for: On Calabi's conjecture for complex surfaces with positive first Chern class. Invent. Math. 
101 (1990), no. 1, 101–172. Compactness theorems for Kähler-Einstein manifolds of dimension 3 and up. J. Differential Geom. 35 (1992), no. 3, 535–558. A mathematical theory of quantum cohomology. J. Differential Geom. 42 (1995), no. 2, 259–367. (with Yongbin Ruan) Kähler-Einstein metrics with positive scalar curvature. Invent. Math. 130 (1997), no. 1, 1–37. 2001 Jeff Cheeger for: Families index for manifolds with boundary, superconnections, and cones. I. Families of manifolds with boundary and Dirac operators. J. Funct. Anal. 89 (1990), no. 2, 313–363. (with Jean-Michel Bismut) Families index for manifolds with boundary, superconnections and cones. II. The Chern character. J. Funct. Anal. 90 (1990), no. 2, 306–354. (with Jean-Michel Bismut) Lower bounds on Ricci curvature and the almost rigidity of warped products. Ann. of Math. (2) 144 (1996), no. 1, 189–237. (with Tobias Colding) On the structure of spaces with Ricci curvature bounded below. I. J. Differential Geom. 46 (1997), no. 3, 406–480. (with Tobias Colding) 2001 Yakov Eliashberg for: Combinatorial methods in symplectic geometry. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), 531–539, Amer. Math. Soc., Providence, RI, 1987. Classification of overtwisted contact structures on 3-manifolds. Invent. Math. 98 (1989), no. 3, 623–637. 2001 Michael J. Hopkins for: Nilpotence and stable homotopy theory. I. Ann. of Math. (2) 128 (1988), no. 2, 207–241. (with Ethan Devinatz and Jeffrey Smith) The rigid analytic period mapping, Lubin-Tate space, and stable homotopy theory. Bull. Amer. Math. Soc. (N.S.) 30 (1994), no. 1, 76–86. (with Benedict Gross) Equivariant vector bundles on the Lubin-Tate moduli space. Topology and representation theory (Evanston, IL, 1992), 23–88, Contemp. Math., 158, Amer. Math. Soc., Providence, RI, 1994. (with Benedict Gross) Elliptic spectra, the Witten genus and the theorem of the cube. Invent. Math. 146 (2001), no. 3, 595–687. (with Matthew Ando and Neil Strickland) Nilpotence and stable homotopy theory. II. Ann. of Math. (2) 148 (1998), no. 1, 1–49. (with Jeffrey Smith) 2004 David Gabai 2007 Peter Kronheimer and Tomasz Mrowka for: The genus of embedded surfaces in the projective plane. Math. Res. Lett. 1 (1994), no. 6, 797–808. Embedded surfaces and the structure of Donaldson's polynomial invariants. J. Differential Geom. 41 (1995), no. 3, 573–734. Witten's conjecture and property P. Geom. Topol. 8 (2004), 295–310. 2007 Peter Ozsváth and Zoltán Szabó for: Holomorphic disks and topological invariants for closed three-manifolds. Ann. of Math. (2) 159 (2004), no. 3, 1027–1158. Holomorphic disks and three-manifold invariants: properties and applications. Ann. of Math. (2) 159 (2004), no. 3, 1159–1245. Holomorphic disks and genus bounds. Geom. Topol. 8 (2004), 311–334. 2010 Tobias Colding and William Minicozzi II for: The space of embedded minimal surfaces of fixed genus in a 3-manifold. I. Estimates off the axis for disks. Ann. of Math. (2) 160 (2004), no. 1, 27–68. The space of embedded minimal surfaces of fixed genus in a 3-manifold. II. Multi-valued graphs in disks. Ann. of Math. (2) 160 (2004), no. 1, 69–92. The space of embedded minimal surfaces of fixed genus in a 3-manifold. III. Planar domains. Ann. of Math. (2) 160 (2004), no. 2, 523–572. The space of embedded minimal surfaces of fixed genus in a 3-manifold. IV. Locally simply connected. Ann. of Math. (2) 160 (2004), no. 2, 573–615. The Calabi-Yau conjectures for embedded surfaces. Ann. of Math. 
(2) 167 (2008), no. 1, 211–243. 2010 Paul Seidel for: A long exact sequence for symplectic Floer cohomology. Topology 42 (2003), no. 5, 1003–1063. The symplectic topology of Ramanujam's surface. Comment. Math. Helv. 80 (2005), no. 4, 859–881. (with Ivan Smith) Fukaya categories and Picard-Lefschetz theory. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2008. viii+326 pp. Exact Lagrangian submanifolds in simply-connected cotangent bundles. Invent. Math. 172 (2008), no. 1, 1–27. (with Kenji Fukaya and Ivan Smith) 2013 Ian Agol for: Lower bounds on volumes of hyperbolic Haken 3-manifolds. With an appendix by Nathan Dunfield. J. Amer. Math. Soc. 20 (2007), no. 4, 1053–1077. (with Peter Storm and William Thurston) Criteria for virtual fibering. J. Topol. 1 (2008), no. 2, 269–284. Residual finiteness, QCERF and fillings of hyperbolic groups. Geom. Topol. 13 (2009), no. 2, 1043–1073. (with Daniel Groves and Jason Fox Manning) 2013 Daniel Wise for: Subgroup separability of graphs of free groups with cyclic edge groups. Q. J. Math. 51 (2000), no. 1, 107–129. The residual finiteness of negatively curved polygons of finite groups. Invent. Math. 149 (2002), no. 3, 579–617. Special cube complexes. Geom. Funct. Anal. 17 (2008), no. 5, 1551–1620. (with Frédéric Haglund) A combination theorem for special cube complexes. Ann. of Math. (2) 176 (2012), no. 3, 1427–1482. (with Frédéric Haglund) 2016 Fernando Codá Marques and André Neves for: Min-max theory and the Willmore conjecture. Ann. of Math. (2) 179 (2014), no. 2, 683–782. Min-max theory and the energy of links. J. Amer. Math. Soc. 29 (2016), no. 2, 561–578. (with Ian Agol) Existence of infinitely many minimal hypersurfaces in positive Ricci curvature. Invent. Math. 209 (2017), no. 2, 577–616. 2019 Xiuxiong Chen, Simon Donaldson and Song Sun for: Kähler-Einstein metrics on Fano manifolds. I: Approximation of metrics with cone singularities. J. Amer. Math. Soc. 28 (2015), no. 1, 183–197. Kähler-Einstein metrics on Fano manifolds. II: Limits with cone angle less than 2π. J. Amer. Math. Soc. 28 (2015), no. 1, 199–234. Kähler-Einstein metrics on Fano manifolds. III: Limits as cone angle approaches 2π and completion of the main proof. J. Amer. Math. Soc. 28 (2015), no. 1, 235–278. 2022 Michael A. Hill, Michael J. Hopkins, and Douglas Ravenel for: On the nonexistence of elements of Kervaire invariant one. Annals of Mathematics SECOND SERIES, Vol. 184, No. 1 (July, 2016), pp. 1-262 2025 Soheyla Feyzbakhsh and Richard Thomas for: Curve counting and S-duality, Épijournal de Géométrie Algébrique - arXiv:2007.03037 Rank r DT theory from rank 0, Duke Mathematical Journal - arXiv:2103.02915 Rank r DT theory from rank 1, Journal of the American Mathematical Society - arXiv:2108.02828 See also List of mathematics awards References External links Veblen prize home page Awards of the American Mathematical Society Awards established in 1964 Triennial events Geometry Topology 1964 establishments in the United States
Oswald Veblen Prize in Geometry
[ "Physics", "Mathematics" ]
2,981
[ "Spacetime", "Topology", "Space", "Geometry" ]
7,085,711
https://en.wikipedia.org/wiki/23%20Marina
23 Marina is an 88-story, residential skyscraper in Dubai, United Arab Emirates. As of 2022, it is the fourth tallest building in Dubai and the sixth tallest residential building in the world. The tower has 57 swimming pools and each duplex in the tower is equipped with its own private elevator. The building was 79 percent sold before construction started. The raft was completed on 30 April 2007. Construction gallery See also Dubai Marina List of tallest buildings in Dubai List of tallest buildings in the United Arab Emirates References External links Hircon-me.com Images of 23 Marina Construction Update Residential skyscrapers in Dubai Futurist architecture Architecture in Dubai High-tech architecture Postmodern architecture Residential buildings completed in 2012 2012 establishments in the United Arab Emirates
23 Marina
[ "Engineering" ]
151
[ "Postmodern architecture", "Architecture" ]
7,085,764
https://en.wikipedia.org/wiki/Chlorproguanil/dapsone
Chlorproguanil/dapsone (sold commercially as Lapdap) was a fixed dose antimalarial combination containing chlorproguanil and dapsone, which act synergistically against malaria. The drug was withdrawn in 2008 following increasing evidence of toxicity in the form of haemolysis occurring in patients with G6PD deficiency. References Antimalarial agents Combination antiviral drugs Withdrawn drugs
Chlorproguanil/dapsone
[ "Chemistry" ]
91
[ "Drug safety", "Withdrawn drugs" ]
7,085,773
https://en.wikipedia.org/wiki/Timeline%20of%20planetariums
This is a timeline of the history of planetariums. Historic influences Development of modern planetariums Digital and Fulldome video References Planetariums Planetariums
Timeline of planetariums
[ "Astronomy" ]
34
[ "Astronomy education", "Astronomy organizations", "Planetaria" ]
7,085,802
https://en.wikipedia.org/wiki/Free-floating%20barrel
A free-floating barrel is a firearm design used in precision rifles, particularly match grade benchrest rifles, to accurize the weapon system. With conventional rifles, the gun barrel rests in contact with the fore-end of the gunstock, sometimes along the whole length. If the stock is wooden, environmental conditions or operational use may warp the wood, which may also cause the barrel to shift its alignment slightly over time, altering the projectile's external ballistics and thus the point of impact. Contact between the barrel and the stock affects the natural frequency of the barrel, which can reduce accuracy especially when the barrel gets hot with repeated firing. The effect of the stock on the barrel can cause the barrel to vibrate inconsistently from shot to shot, depending on the external forces acting upon the stock at the time of the shot. Such vibrations affect the bullet's trajectory, changing the point of impact. A free-floating barrel is one where the barrel and stock do not touch at any point along the barrel's length. The barrel is attached to its receiver, which is attached to the stock, but the barrel does not touch any other gun parts except perhaps the front sight, which is often mounted on the barrel. This minimizes possible variance in mechanical pressure distortions of the barrel alignment, and allows vibration to occur at the natural frequency of the barrel consistently and uniformly. Alternatives include using a stock made from composite materials which do not deform as much under temperature or humidity changes, or a wooden stock with a fiberglass contact area ("glass bedding"). Stocks which contact the barrel are still popular for many utility weapons, though most precision rifle designs have adopted free-floating barrels. References RifleShooter Mag Firearm components
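The effect of support conditions on barrel vibration can be made concrete with a rough beam-theory estimate. The Python sketch below treats a free-floating barrel as a uniform hollow steel cantilever fixed at the receiver and evaluates the standard Euler–Bernoulli formula for its first-mode natural frequency; all dimensions and material constants are illustrative assumptions rather than data for any particular rifle.

import math

# First bending mode of a uniform cantilever (Euler-Bernoulli beam):
#   f1 = (beta1^2 / (2*pi)) * sqrt(E*I / (rho_lin * L^4)),  beta1 ~ 1.875
# Illustrative values for a plain cylindrical steel barrel (assumed, not measured).
E = 200e9          # Young's modulus of steel, Pa
rho = 7850.0       # density of steel, kg/m^3
L = 0.60           # barrel length ahead of the receiver, m
d_outer = 0.025    # outer diameter, m
d_bore = 0.0078    # bore diameter, m

# Cross-section area and second moment of area for a hollow circular section
A = math.pi / 4 * (d_outer**2 - d_bore**2)
I = math.pi / 64 * (d_outer**4 - d_bore**4)
rho_lin = rho * A  # mass per unit length, kg/m

beta1 = 1.875104   # first root of the cantilever frequency equation
f1 = (beta1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho_lin * L**4))
print(f"Estimated first bending mode of the free-floating barrel: {f1:.0f} Hz")

Any contact with the stock effectively changes the boundary conditions and hence shifts this frequency, which is the mechanism the paragraph above describes.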
Free-floating barrel
[ "Technology" ]
352
[ "Firearm components", "Components" ]
7,085,910
https://en.wikipedia.org/wiki/Hyman%20Bass
Hyman Bass (; born October 5, 1932) is an American mathematician, known for work in algebra and in mathematics education. From 1959 to 1998 he was Professor in the Mathematics Department at Columbia University. He is currently the Samuel Eilenberg Distinguished University Professor of Mathematics and Professor of Mathematics Education at the University of Michigan. Life Born to a Jewish family in Houston, Texas, he earned his B.A. in 1955 from Princeton University and his Ph.D. in 1959 from the University of Chicago. His thesis, titled Global dimensions of rings, was written under the supervision of Irving Kaplansky. He has held visiting appointments at the Institute for Advanced Study in Princeton, New Jersey, Institut des Hautes Études Scientifiques and École Normale Supérieure (Paris), Tata Institute of Fundamental Research (Bombay), University of Cambridge, University of California, Berkeley, University of Rome, IMPA (Rio), National Autonomous University of Mexico, Mittag-Leffler Institute (Stockholm), and the University of Utah. He was president of the American Mathematical Society. Bass formerly chaired the Mathematical Sciences Education Board (1992–2000) at the National Academy of Sciences, and the Committee on Education of the American Mathematical Society. He was the President of ICMI from 1999 to 2006. Since 1996 he has been collaborating with Deborah Ball and her research group at the University of Michigan on the mathematical knowledge and resources entailed in the teaching of mathematics at the elementary level. He has worked to build bridges between diverse professional communities and stakeholders involved in mathematics education. Work His research interests have been in algebraic K-theory, commutative algebra and algebraic geometry, algebraic groups, geometric methods in group theory, and ζ functions on finite simple graphs. Awards and recognitions Bass was elected as a member of the National Academy of Sciences in 1982. In 1983, he was elected a Fellow of the American Academy of Arts and Sciences. In 2002 he was elected a fellow of The World Academy of Sciences. He is a 2006 National Medal of Science laureate. In 2009 he was elected a member of the National Academy of Education. In 2012 he became a fellow of the American Mathematical Society. He was awarded the Mary P. Dolciani Award in 2013. See also Bass number Bass–Serre theory Bass–Quillen conjecture References External links Directory page at University of Michigan Author profile in the database zbMATH 1932 births 20th-century American Jews Algebraists Columbia University faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Mathematical Society Living people American mathematics educators Members of the United States National Academy of Sciences National Medal of Science laureates Institute for Advanced Study visiting scholars Nicolas Bourbaki Presidents of the American Mathematical Society Academics from Houston Princeton University alumni University of Chicago alumni University of Michigan faculty Mathematicians from Texas 21st-century American Jews
Hyman Bass
[ "Mathematics" ]
572
[ "Algebra", "Algebraists" ]
7,085,992
https://en.wikipedia.org/wiki/Interplanetary%20magnetic%20field
The interplanetary magnetic field (IMF), also commonly referred to as the heliospheric magnetic field (HMF), is the component of the solar magnetic field that is dragged out from the solar corona by the solar wind flow to fill the Solar System. Coronal and solar wind plasma The coronal and solar wind plasmas are highly electrically conductive, meaning the magnetic field lines and the plasma flows are effectively "frozen" together and the magnetic field cannot diffuse through the plasma on time scales of interest. In the solar corona, the magnetic pressure greatly exceeds the plasma pressure and thus the plasma is primarily structured and confined by the magnetic field. However, with increasing altitude through the corona, the solar wind accelerates as it extracts energy from the magnetic field through the Lorentz force interaction, resulting in the flow momentum exceeding the restraining magnetic tension force and the coronal magnetic field is dragged out by the solar wind to form the IMF. This acceleration often leads the IMF to be locally supersonic up to 160 AU away from the sun. The dynamic pressure of the wind dominates over the magnetic pressure through most of the Solar System (or heliosphere), so that the magnetic field is pulled into an Archimedean spiral pattern (the Parker spiral) by the combination of the outward motion and the Sun's rotation. In near-Earth space, the IMF nominally makes an angle of approximately 45° to the Earth–Sun line, though this angle varies with solar wind speed. The angle of the IMF to the radial direction reduces with helio-latitude, as the speed of the photospheric footpoint is reduced. Depending on the polarity of the photospheric footpoint, the heliospheric magnetic field spirals inward or outward; the magnetic field follows the same shape of spiral in the northern and southern parts of the heliosphere, but with opposite field direction. These two magnetic domains are separated by a current sheet (an electric current that is confined to a curved plane). This heliospheric current sheet has a shape similar to a twirled ballerina skirt, and changes in shape through the solar cycle as the Sun's magnetic field reverses about every 11 years. Magnetic field at Earth orbit The plasma in the interplanetary medium is also responsible for the strength of the Sun's magnetic field at the orbit of the Earth being over 100 times greater than originally anticipated. If space were a vacuum, then the Sun's magnetic dipole field — about 10−4 teslas at the surface of the Sun — would reduce with the inverse cube of the distance to about 10−11 teslas. But satellite observations show that it is about 100 times greater at around 10−9 teslas. Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents, which in turn generates magnetic fields — and, in this respect, it behaves like an MHD dynamo. The interplanetary magnetic field at the Earth's orbit varies with waves and other disturbances in the solar wind, known as "space weather." The field is a vector, with components in the radial and azimuthal directions as well as a component perpendicular to the ecliptic. The field varies in strength near the Earth from 1 to 37 nT, averaging about 6 nT. 
Since 1997, the solar magnetic field has been monitored in real time by the Advanced Composition Explorer (ACE) satellite located in a halo orbit at the Sun–Earth Lagrange Point L1; since July 2016, it has been monitored by the Deep Space Climate Observatory (DSCOVR) satellite, also at the Sun–Earth L1 (with the ACE continuing to serve as a back-up measurement). See also Solar magnetic field Solar wind Magnetosphere References Solar System Outer space Magnetism in astronomy
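The roughly 45° spiral angle and the factor-of-100 discrepancy with a pure dipole discussed above both follow from short calculations. The Python sketch below evaluates the Parker spiral relation tan ψ = Ω r / v_sw at 1 AU and the naive inverse-cube extrapolation of the surface field; the rotation rate, wind speed and surface field are round illustrative values, not measured data.

import math

AU = 1.496e11          # astronomical unit, m
R_SUN = 6.96e8         # solar radius, m
OMEGA_SUN = 2.86e-6    # solar rotation rate, rad/s (~25 d sidereal period, assumed)
V_SW = 430e3           # typical slow solar-wind speed, m/s (assumed)

# Parker spiral: tan(psi) = Omega * r / v_sw in the solar equatorial plane
r = 1 * AU
psi = math.degrees(math.atan(OMEGA_SUN * r / V_SW))
print(f"Parker spiral angle at 1 AU: {psi:.0f} deg from the radial direction")

# Naive dipole extrapolation of the ~1e-4 T surface field falls off as 1/r^3
B_surface = 1e-4
B_dipole_1AU = B_surface * (R_SUN / r) ** 3
print(f"Dipole-only estimate at 1 AU: {B_dipole_1AU:.1e} T (observed is ~1e-9 T)")

With these round numbers the spiral angle comes out close to 45° and the dipole-only estimate close to 10−11 T, consistent with the figures quoted in the article.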
Interplanetary magnetic field
[ "Astronomy" ]
804
[ "Magnetism in astronomy", "Outer space", "Solar System" ]
7,086,534
https://en.wikipedia.org/wiki/Kelvin%27s%20circulation%20theorem
In fluid mechanics, Kelvin's circulation theorem states: In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time. The theorem is named after William Thomson, 1st Baron Kelvin, who published it in 1869. Stated mathematically: $\frac{D\Gamma}{Dt} = 0$, where $\Gamma$ is the circulation around a material moving contour $C(t)$ as a function of time $t$. The differential operator $D/Dt$ is a substantial (material) derivative moving with the fluid particles. Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulation over the two locations of this contour remains constant. This theorem does not hold in cases with viscous stresses, nonconservative body forces (for example the Coriolis force) or non-barotropic pressure-density relations. Mathematical proof The circulation $\Gamma$ around a closed material contour $C(t)$ is defined by: $\Gamma(t) = \oint_C \mathbf{u} \cdot \mathrm{d}\mathbf{s}$, where $\mathbf{u}$ is the velocity vector, and $\mathrm{d}\mathbf{s}$ is an element along the closed contour. The governing equation for an inviscid fluid with a conservative body force is $\frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\nabla p - \nabla\Phi$, where $D/Dt$ is the convective derivative, ρ is the fluid density, p is the pressure and Φ is the potential for the body force. These are the Euler equations with a body force. The condition of barotropicity implies that the density is a function only of the pressure, i.e. $\rho = \rho(p)$. Taking the convective derivative of circulation gives $\frac{D\Gamma}{Dt} = \oint_C \frac{D\mathbf{u}}{Dt}\cdot \mathrm{d}\mathbf{s} + \oint_C \mathbf{u}\cdot \frac{D(\mathrm{d}\mathbf{s})}{Dt}.$ For the first term, we substitute from the governing equation, and then apply Stokes' theorem, thus: $\oint_C \frac{D\mathbf{u}}{Dt}\cdot \mathrm{d}\mathbf{s} = -\int_A \nabla\times\left(\frac{1}{\rho}\nabla p + \nabla\Phi\right)\cdot \mathbf{n}\,\mathrm{d}S = \int_A \frac{1}{\rho^2}\left(\nabla\rho\times\nabla p\right)\cdot \mathbf{n}\,\mathrm{d}S = 0.$ The final equality arises since $\nabla\rho\times\nabla p = 0$ owing to barotropicity. We have also made use of the fact that the curl of any gradient is necessarily 0, or $\nabla\times\nabla f = 0$ for any function $f$. For the second term, we note that evolution of the material line element is given by $\frac{D(\mathrm{d}\mathbf{s})}{Dt} = \left(\mathrm{d}\mathbf{s}\cdot\nabla\right)\mathbf{u}.$ Hence $\oint_C \mathbf{u}\cdot \frac{D(\mathrm{d}\mathbf{s})}{Dt} = \oint_C \mathbf{u}\cdot\left(\mathrm{d}\mathbf{s}\cdot\nabla\right)\mathbf{u} = \frac{1}{2}\oint_C \nabla\left(|\mathbf{u}|^2\right)\cdot \mathrm{d}\mathbf{s} = 0.$ The last equality is obtained by applying the gradient theorem. Since both terms are zero, we obtain the result $\frac{D\Gamma}{Dt} = 0.$ Poincaré–Bjerknes circulation theorem A similar principle which conserves a quantity can be obtained for the rotating frame also, known as the Poincaré–Bjerknes theorem, named after Henri Poincaré and Vilhelm Bjerknes, who derived the invariant in 1893 and 1898. The theorem can be applied to a rotating frame which is rotating at a constant angular velocity given by the vector $\boldsymbol{\Omega}$, for the modified circulation $\Gamma(t) = \oint_C \left(\mathbf{u} + \boldsymbol{\Omega}\times\mathbf{r}\right)\cdot \mathrm{d}\mathbf{s}.$ Here $\mathbf{r}$ is the position of the area of fluid. From Stokes' theorem, this is: $\Gamma(t) = \int_A \nabla\times\left(\mathbf{u} + \boldsymbol{\Omega}\times\mathbf{r}\right)\cdot \mathbf{n}\,\mathrm{d}S = \int_A \left(\boldsymbol{\omega} + 2\boldsymbol{\Omega}\right)\cdot \mathbf{n}\,\mathrm{d}S.$ The vorticity of a velocity field in fluid dynamics is defined by: $\boldsymbol{\omega} = \nabla\times\mathbf{u}.$ Then: $\frac{D}{Dt}\int_A \left(\boldsymbol{\omega} + 2\boldsymbol{\Omega}\right)\cdot \mathbf{n}\,\mathrm{d}S = 0.$ See also Bernoulli's principle Euler equations (fluid dynamics) Helmholtz's theorems Thermomagnetic convection Notes Equations of fluid dynamics Fluid dynamics Equations Circulation theorem
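A quick numerical illustration of the conservation law above: the Python sketch below (a toy setup chosen for this sketch, not part of the article) advects a discretised material contour in the steady flow of an ideal point vortex and monitors its discrete circulation, which should stay equal to the vortex strength even as the contour is sheared.

import numpy as np

GAMMA0 = 1.0  # strength of the point vortex at the origin (assumed)

def velocity(p):
    """Velocity of an ideal point vortex: u = Gamma0/(2*pi*r^2) * (-y, x)."""
    x, y = p[:, 0], p[:, 1]
    coef = GAMMA0 / (2 * np.pi * (x**2 + y**2))
    return np.column_stack((-coef * y, coef * x))

def circulation(p):
    """Discrete circulation of the closed polygon p via the midpoint rule."""
    ds = np.roll(p, -1, axis=0) - p
    mid = 0.5 * (p + np.roll(p, -1, axis=0))
    return float(np.sum(np.einsum('ij,ij->i', velocity(mid), ds)))

# Material contour: a circle of radius 1.5 centred at (1, 0); it encloses the
# vortex, so its circulation should remain GAMMA0 while the loop deforms.
theta = np.linspace(0, 2 * np.pi, 600, endpoint=False)
pts = np.column_stack((1.0 + 1.5 * np.cos(theta), 1.5 * np.sin(theta)))

dt, n_steps = 1e-3, 4000
print(f"t = 0.00  circulation = {circulation(pts):.4f}")
for _ in range(n_steps):          # advect contour points with classical RK4
    k1 = velocity(pts)
    k2 = velocity(pts + 0.5 * dt * k1)
    k3 = velocity(pts + 0.5 * dt * k2)
    k4 = velocity(pts + dt * k3)
    pts = pts + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(f"t = {n_steps * dt:.2f}  circulation = {circulation(pts):.4f}  (expected {GAMMA0})")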
Kelvin's circulation theorem
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
579
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Mathematical objects", "Equations", "Piping", "Fluid dynamics" ]
7,086,661
https://en.wikipedia.org/wiki/Inverse%20image%20functor
In mathematics, specifically in algebraic topology and algebraic geometry, an inverse image functor is a contravariant construction of sheaves; here "contravariant" in the sense that, given a map $f : X \to Y$, the inverse image functor is a functor from the category of sheaves on Y to the category of sheaves on X. The direct image functor is the primary operation on sheaves, with the simplest definition. The inverse image exhibits some relatively subtle features. Definition Suppose we are given a sheaf $\mathcal{G}$ on $Y$ and that we want to transport $\mathcal{G}$ to $X$ using a continuous map $f : X \to Y$. We will call the result the inverse image or pullback sheaf $f^{-1}\mathcal{G}$. If we try to imitate the direct image by setting $f^{-1}\mathcal{G}(U) = \mathcal{G}(f(U))$ for each open set $U$ of $X$, we immediately run into a problem: $f(U)$ is not necessarily open. The best we could do is to approximate it by open sets, and even then we will get a presheaf and not a sheaf. Consequently, we define $f^{-1}\mathcal{G}$ to be the sheaf associated to the presheaf: $U \mapsto \varinjlim_{V \supseteq f(U)} \mathcal{G}(V).$ (Here $U$ is an open subset of $X$ and the colimit runs over all open subsets $V$ of $Y$ containing $f(U)$.) For example, if $f$ is just the inclusion of a point of $Y$, then $f^{-1}\mathcal{G}$ is just the stalk of $\mathcal{G}$ at this point. The restriction maps, as well as the functoriality of the inverse image, follow from the universal property of direct limits. When dealing with morphisms $f : X \to Y$ of locally ringed spaces, for example schemes in algebraic geometry, one often works with sheaves of $\mathcal{O}_Y$-modules, where $\mathcal{O}_Y$ is the structure sheaf of $Y$. Then the functor $f^{-1}$ is inappropriate, because in general it does not even give sheaves of $\mathcal{O}_X$-modules. In order to remedy this, one defines in this situation for a sheaf of $\mathcal{O}_Y$-modules $\mathcal{G}$ its inverse image by $f^*\mathcal{G} := f^{-1}\mathcal{G} \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{O}_X$. Properties While $f^{-1}$ is more complicated to define than $f_*$, the stalks are easier to compute: given a point $x \in X$, one has $(f^{-1}\mathcal{G})_x \cong \mathcal{G}_{f(x)}$. $f^{-1}$ is an exact functor, as can be seen by the above calculation of the stalks. $f^*$ is (in general) only right exact. If $f^*$ is exact, f is called flat. $f^{-1}$ is the left adjoint of the direct image functor $f_*$. This implies that there are natural unit and counit morphisms $\mathcal{G} \to f_*f^{-1}\mathcal{G}$ and $f^{-1}f_*\mathcal{F} \to \mathcal{F}$. These morphisms yield a natural adjunction correspondence: $\mathrm{Hom}(f^{-1}\mathcal{G}, \mathcal{F}) = \mathrm{Hom}(\mathcal{G}, f_*\mathcal{F})$. However, the morphisms $\mathcal{G} \to f_*f^{-1}\mathcal{G}$ and $f^{-1}f_*\mathcal{F} \to \mathcal{F}$ are almost never isomorphisms. For example, if $i : Z \to Y$ denotes the inclusion of a closed subset, the stalk of $i_*i^{-1}\mathcal{G}$ at a point $y$ is canonically isomorphic to $\mathcal{G}_y$ if $y$ is in $Z$ and $0$ otherwise. A similar adjunction holds for the case of sheaves of modules, replacing $f^{-1}$ by $f^*$. References . See section II.4. Algebraic geometry Sheaf theory
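A worked special case of the colimit definition above, written out in LaTeX for concreteness; this is standard sheaf-theory material following the usual conventions, not text from the article itself.

% Inverse image along the inclusion of a point recovers the stalk
% (requires amsmath for \varinjlim).
% Let f : {y} -> Y be the inclusion of a point and G a sheaf on Y.
\[
  (f^{-1}\mathcal{G})(\{y\})
  \;=\; \varinjlim_{V \ni y} \mathcal{G}(V)
  \;=\; \mathcal{G}_y ,
\]
% since the only nonempty open subset of {y} is {y} itself, and the open sets
% of Y containing its image are exactly the open neighbourhoods V of y.
% More generally, for any continuous f : X -> Y and any x in X,
\[
  (f^{-1}\mathcal{G})_x \;\cong\; \mathcal{G}_{f(x)},
\]
% which is why f^{-1} is exact, while the adjunction f^{-1} -| f_* supplies the
% unit and counit morphisms mentioned in the Properties list:
\[
  \mathcal{G} \longrightarrow f_* f^{-1}\mathcal{G},
  \qquad
  f^{-1} f_* \mathcal{F} \longrightarrow \mathcal{F}.
\]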
Inverse image functor
[ "Mathematics" ]
531
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Sheaf theory", "Category theory", "Functors", "Topology", "Algebraic geometry", "Mathematical relations" ]
7,087,318
https://en.wikipedia.org/wiki/Triaxial%20shear%20test
In materials science, a triaxial shear test is a common method to measure the mechanical properties of many deformable solids, especially soil (e.g., sand, clay) and rock, and other granular materials or powders. There are several variations on the test. In a triaxial shear test, stress is applied to a sample of the material being tested in a way which results in stresses along one axis being different from the stresses in perpendicular directions. This is typically achieved by placing the sample between two parallel platens which apply stress in one (usually vertical) direction, and applying fluid pressure to the specimen to apply stress in the perpendicular directions. (Testing apparatus which allows application of different levels of stress in each of three orthogonal directions are discussed below.) The application of different compressive stresses in the test apparatus causes shear stress to develop in the sample; the loads can be increased and deflections monitored until failure of the sample. During the test, the surrounding fluid is pressurized, and the stress on the platens is increased until the material in the cylinder fails and forms sliding regions within itself, known as shear bands. The geometry of the shearing in a triaxial test typically causes the sample to become shorter while bulging out along the sides. The stress on the platen is then reduced and the water pressure pushes the sides back in, causing the sample to grow taller again. This cycle is usually repeated several times while collecting stress and strain data about the sample. During the test the pore pressures of fluids (e.g., water, oil) or gasses in the sample may be measured using Bishop's pore pressure apparatus. From the triaxial test data, it is possible to extract fundamental material parameters about the sample, including its angle of shearing resistance, apparent cohesion, and dilatancy angle. These parameters are then used in computer models to predict how the material will behave in a larger-scale engineering application. An example would be to predict the stability of the soil on a slope, whether the slope will collapse or whether the soil will support the shear stresses of the slope and remain in place. Triaxial tests are used along with other tests to make such engineering predictions. During the shearing, a granular material will typically have a net gain or loss of volume. If it had originally been in a dense state, then it typically gains volume, a characteristic known as Reynolds' dilatancy. If it had originally been in a very loose state, then contraction may occur before the shearing begins or in conjunction with the shearing. Sometimes, testing of cohesive samples is done with no confining pressure, in an unconfined compression test. This requires much simpler and less expensive apparatus and sample preparation, though the applicability is limited to samples that the sides won't crumble when exposed, and the confining stress being lower than the in-situ stress gives results which may be overly conservative. The compression test performed for concrete strength testing is essentially the same test, on apparatus designed for the larger samples and higher loads typical of concrete testing. Test execution For soil samples, the specimen is contained in a cylindrical latex sleeve with a flat, circular metal plate or platen closing off the top and bottom ends. This cylinder is placed into a bath of a hydraulic fluid to provide pressure along the sides of the cylinder. 
The top platen can then be mechanically driven up or down along the axis of the cylinder to squeeze the material. The distance that the upper platen travels is measured as a function of the force required to move it, as the pressure of the surrounding water is carefully controlled. The net change in volume of the material can also be measured by how much water moves in or out of the surrounding bath, but is typically measured - when the sample is saturated with water - by measuring the amount of water that flows into or out of the sample's pores. Rock For testing of high-strength rock, the sleeve may be a thin metal sheeting rather than latex. Triaxial testing on strong rock is fairly seldom done because the high forces and pressures required to break a rock sample require costly and cumbersome testing equipment. Effective stress The effective stress on the sample can be measured by using a porous surface on one platen, and measuring the pressure of the fluid (usually water) during the test, then calculating the effective stress from the total stress and pore pressure. Triaxial test to determine the shear strength of a discontinuity The triaxial test can be used to determine the shear strength of a discontinuity. A homogeneous and isotropic sample fails due to shear stresses in the sample. If a sample with a discontinuity is orientated such that the discontinuity is about parallel to the plane in which maximum shear stress will be developed during the test, the sample will fail due to shear displacement along the discontinuity, and hence, the shear strength of a discontinuity can be calculated. Types of triaxial tests There are several variations of the triaxial test: Consolidated drained (CD) In a 'consolidated drained' test, the sample is consolidated and sheared in compression slowly to allow pore pressures built up by the shearing to dissipate. The rate of axial deformation is kept constant, i.e., strain is controlled. The test allows the sample and the pore pressures to fully consolidate (i.e., adjust) to the surrounding stresses. The test may take a long time to allow the sample to adjust, in particular low permeability samples need a long time to drain and adjust strain to stress levels. Consolidated undrained (CU) In a 'consolidated undrained' test, the sample is not allowed to drain. The shear characteristics are measured under undrained conditions, and the sample is assumed to be fully saturated. Measuring the pore pressures in the sample (sometimes called CUpp) allows for approximating the consolidated-drained strength. Shear speed is often calculated based on the rate of consolidation under a specific confining pressure (while saturated). Confining pressures can vary anywhere from 1 psi to 100 psi or greater, sometimes requiring special load cells capable of handling higher pressures. Unconsolidated undrained In an 'unconsolidated undrained' test, the loads are applied quickly, and the sample is not allowed to consolidate during the test. The sample is compressed at a constant rate (strain-controlled). True triaxial test Triaxial testing systems have been developed to allow independent stress control in three perpendicular directions. This enables the investigation of stress paths not capable of being generated in axisymmetric triaxial test machines, which can be useful in studies of cemented sands and anisotropic soils. The test cell is cubical, and there are six separate plates applying pressure to the specimen, with LVDTs reading the movement of each plate. 
Pressure in the third direction can be applied using hydrostatic pressure in the test chamber, requiring only four stress application assemblies. The apparatus is significantly more complex than for axisymmetric triaxial tests and is, therefore, less commonly used. Free end condition in triaxial testing Triaxial tests of classical construction have been criticized for the nonuniform stress and strain field imposed within the specimen during larger deformation amplitudes. The highly localized discontinuity within a shear zone is caused by the combination of rough end plates and specimen height. To test specimens at larger deformation amplitudes, "new" and "improved" versions of the triaxial apparatus were made. The "new" and the "improved" triaxial follow the same principle: sample height is reduced to one diameter, and friction with the end plates is eliminated. The classical apparatus uses rough end plates - the whole surface of the piston head is made up of a rough, porous filter. In upgraded apparatuses the rough end plates are replaced with smooth, polished glass, with a small filter at the center. This configuration allows a specimen to slide and expand horizontally while sliding along the polished glass. Thus, the contact zone between the sample and the end plates does not build up unnecessary shear friction, and a linear, isotropic stress field within the specimen is sustained. Due to the extremely uniform, near-isotropic stress field, isotropic yielding takes place. During isotropic yielding, volumetric (dilatational) strain is isotropically distributed within the specimen, which improves measurement of the volumetric response during CD tests and of pore water pressure during CU loading. Also, isotropic yielding makes the specimen expand radially in a uniform manner as it is compressed axially. The walls of a cylindrical specimen remain straight and vertical even during large strain amplitudes (a 50% strain amplitude was documented by Vardoulakis (1980), using the "improved" triaxial, on unsaturated sand). This is in contrast with the classical setup, where the specimen forms a bulge in the center while keeping a constant radius at the contact with the end plates. The "new" apparatus has been upgraded to "the Danish triaxial" by L. B. Ibsen. The Danish triaxial can be used for testing all soil types. It provides improved measurements of volumetric response, as during isotropic yielding volumetric strain is distributed isotropically within the specimen. Isotropic volume change is especially important for CU testing, as cavitation of pore water sets the limit of undrained sand strength. Measurement precision is improved by taking measurements near the specimen. The load cell is submerged and in direct contact with the upper pressure head of the specimen. Deformation transducers are attached directly to the piston heads as well. Control of the apparatus is highly automated, so cyclic loading can be applied with great efficiency and precision. The combination of high automation, improved sample durability and large deformation compatibility expands the scope of triaxial testing. The Danish triaxial can yield CD and CU sand specimens into plasticity without forming a shear rupture or bulging. A sample can be tested for yielding multiple times in a single, continuous loading sequence. Samples can even be liquefied to a large strain amplitude, then crushed to CU failure.
CU tests can be allowed to transition into CD state, and cyclic tested in CD mode to observe post liquefaction recovery of stiffness and strength. This allows to control the specimens to a very high degree, and observe sand response patterns which are not accessible using classical triaxial testing methods. Test standards The list is not complete; only the main standards are included. For a more extensive listing, please refer to the websites of ASTM International (USA), British Standards (UK), International Organization for Standardization (ISO), or local organisations for standards. ASTM D7181-11: Standard Test Method for Consolidated Drained Triaxial Compression Test for Soils ASTM D4767-11 (2011): Standard Test Method for Consolidated Undrained Triaxial Compression Test for Cohesive Soils ASTM D2850-03a (2007): Standard Test Method for Unconsolidated-Undrained Triaxial Compression Test on Cohesive Soils BS 1377-8:1990 Part 8: Shear strength tests (effective stress)Triaxial Compression Test ISO/TS 17892-8:2004 Geotechnical investigation and testing—Laboratory testing of soil—Part 8: Unconsolidated undrained triaxial test ISO/TS 17892-9:2004 Geotechnical investigation and testing—Laboratory testing of soil—Part 9: Consolidated triaxial compression tests on water-saturated soils References See also Civil engineering Direct shear test Earthworks (engineering) Effective stress Geotechnical engineering Shear strength (soil) Soil mechanics Mining engineering Soil shear strength tests
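As an illustration of how triaxial results yield the parameters discussed above (the angle of shearing resistance and the apparent cohesion), the Python sketch below fits a Mohr–Coulomb envelope to two consolidated-drained tests; the failure stresses are invented example numbers, not values taken from any of the standards listed.

import math

# Two consolidated-drained triaxial tests on the same soil (illustrative, kPa):
# (cell pressure sigma3, axial stress at failure sigma1)
tests = [(100.0, 310.0), (200.0, 560.0)]
(s3a, s1a), (s3b, s1b) = tests

# Mohr-Coulomb in principal stresses: sigma1 = Kp*sigma3 + 2*c*sqrt(Kp),
# with Kp = (1 + sin(phi)) / (1 - sin(phi)).
Kp = (s1b - s1a) / (s3b - s3a)
sin_phi = (Kp - 1.0) / (Kp + 1.0)
phi = math.degrees(math.asin(sin_phi))        # angle of shearing resistance
c = (s1a - Kp * s3a) / (2.0 * math.sqrt(Kp))  # apparent cohesion

print(f"Kp = {Kp:.2f}, phi = {phi:.1f} deg, c = {c:.1f} kPa")

With these example numbers the fit gives roughly phi ≈ 25° and c ≈ 19 kPa; in practice at least three confining pressures are used and the envelope is checked graphically on the Mohr circles.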
Triaxial shear test
[ "Engineering" ]
2,421
[ "Mining engineering" ]
7,087,423
https://en.wikipedia.org/wiki/Boolean%20circuit
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length. Boolean circuits are defined in terms of the logic gates they contain. For example, a circuit might contain binary AND and OR gates and unary NOT gates, or be entirely described by binary NAND gates. Each gate corresponds to some Boolean function that takes a fixed number of bits as input and outputs a single bit. Boolean circuits provide a model for many digital components used in computer engineering, including multiplexers, adders, and arithmetic logic units, but they exclude sequential logic. They are an abstraction that omits many aspects relevant to designing real digital logic circuits, such as metastability, fanout, glitches, power consumption, and propagation delay variability. Formal definition In giving a formal definition of Boolean circuits, Vollmer starts by defining a basis as a set B of Boolean functions, corresponding to the gates allowable in the circuit model. A Boolean circuit over a basis B, with n inputs and m outputs, is then defined as a finite directed acyclic graph. Each vertex corresponds to either a basis function or one of the inputs, and there is a set of exactly m nodes which are labeled as the outputs. The edges must also have some ordering, to distinguish between different arguments to the same Boolean function. As a special case, a propositional formula or Boolean expression is a Boolean circuit with a single output node in which every other node has fan-out of 1. Thus, a Boolean circuit can be regarded as a generalization that allows shared subformulas and multiple outputs. A common basis for Boolean circuits is the set {AND, OR, NOT}, which is functionally complete, i.e. from which all other Boolean functions can be constructed. Computational complexity Background A particular circuit acts only on inputs of fixed size. However, formal languages (the string-based representations of decision problems) contain strings of different lengths, so languages cannot be fully captured by a single circuit (in contrast to the Turing machine model, in which a language is fully described by a single Turing machine). A language is instead represented by a circuit family. A circuit family is an infinite list of circuits (C1, C2, C3, ...), where Cn has n input variables. A circuit family is said to decide a language L if, for every string w, w is in the language L if and only if Cn(w) = 1, where n is the length of w. In other words, a language is the set of strings which, when applied to the circuits corresponding to their lengths, evaluate to 1. Complexity measures Several important complexity measures can be defined on Boolean circuits, including circuit depth, circuit size, and the number of alternations between AND gates and OR gates. For example, the size complexity of a Boolean circuit is the number of gates in the circuit. There is a natural connection between circuit size complexity and time complexity. Intuitively, a language with small time complexity (that is, one requiring relatively few sequential operations on a Turing machine) also has a small circuit complexity (that is, it requires relatively few Boolean operations). Formally, it can be shown that if a language is in TIME(t(n)), where t is a function t: ℕ → ℕ, then it has circuit size complexity O(t²(n)). Complexity classes Several important complexity classes are defined in terms of Boolean circuits.
The most general of these is P/poly, the set of languages that are decidable by polynomial-size circuit families. It follows directly from the fact that languages in P have polynomial circuit complexity that P ⊆ P/poly. In other words, any problem that can be computed in polynomial time by a deterministic Turing machine can also be computed by a polynomial-size circuit family. It is further the case that the inclusion is proper (i.e. P ⊊ P/poly) because there are undecidable problems that are in P/poly. P/poly turns out to have a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to P versus NP. For example, if there is any language in NP that is not in P/poly, then P ≠ NP. P/poly also helps to investigate properties of the polynomial hierarchy. For example, if NP ⊆ P/poly, then PH collapses to its second level, Σ₂ᵖ. A full description of the relations between P/poly and other complexity classes is available at "Importance of P/poly". P/poly also has the interesting feature that it can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-bounded advice function. Two subclasses of P/poly that have interesting properties in their own right are NC and AC. These classes are defined not only in terms of their circuit size but also in terms of their depth. The depth of a circuit is the length of the longest directed path from an input node to the output node. The class NC is the set of languages that can be solved by circuit families that are restricted not only to having polynomial size but also to having polylogarithmic depth. The class AC is defined similarly to NC; however, gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits). NC is an important class because it turns out that it represents the class of languages that have efficient parallel algorithms. Circuit evaluation The Circuit Value Problem — the problem of computing the output of a given Boolean circuit on a given input string — is a P-complete decision problem. Therefore, this problem is considered to be "inherently sequential" in the sense that there is likely no efficient, highly parallel algorithm that solves the problem. Completeness Logic circuits are physical representations of simple logic operations, AND, OR and NOT (and their combinations, such as non-sequential flip-flops or circuit networks), that form a mathematical structure known as Boolean algebra. They are complete in the sense that they can perform any deterministic algorithm. However, this is not all there is. In the physical world we also encounter randomness, notably in small systems governed by quantization effects, which are described by the theory of quantum mechanics. Logic circuits cannot produce any randomness, and in that sense they form an incomplete logic set. A remedy is found in adding an ad hoc random bit generator to logic networks or computers, as in the probabilistic Turing machine. Recent work has introduced the theoretical concept of an inherently random logic circuit named the random flip-flop, which completes the set. It conveniently packs randomness and is inter-operable with deterministic Boolean logic circuits. However, an algebraic structure equivalent to Boolean algebra, and associated methods of circuit construction and reduction for the extended set, are as yet unknown.
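For concreteness, the Circuit Value Problem mentioned above amounts to evaluating the gates of a directed acyclic graph in topological order on a fixed input. The Python sketch below is a minimal, illustrative evaluator over the basis {AND, OR, NOT}; the encoding of a circuit as a dictionary of named gates listed in topological order is an assumption of this sketch, not a standard format.

from typing import Dict, List, Tuple

# A circuit is a dict: gate name -> (op, [argument names]),
# where op is one of "IN", "NOT", "AND", "OR", and gates are listed in
# topological order (every argument appears before the gate that uses it).
Circuit = Dict[str, Tuple[str, List[str]]]

def evaluate(circuit: Circuit, inputs: Dict[str, bool], output: str) -> bool:
    """Evaluate a Boolean circuit (a labelled DAG) on one input assignment."""
    value: Dict[str, bool] = {}
    for name, (op, args) in circuit.items():
        if op == "IN":
            value[name] = inputs[name]
        elif op == "NOT":
            value[name] = not value[args[0]]
        elif op == "AND":
            value[name] = all(value[a] for a in args)
        elif op == "OR":
            value[name] = any(value[a] for a in args)
        else:
            raise ValueError(f"unknown gate type {op}")
    return value[output]

# Example: XOR(x, y) = (x OR y) AND NOT (x AND y); note the shared input fan-out.
xor_circuit: Circuit = {
    "x":   ("IN",  []),
    "y":   ("IN",  []),
    "g1":  ("OR",  ["x", "y"]),
    "g2":  ("AND", ["x", "y"]),
    "g3":  ("NOT", ["g2"]),
    "out": ("AND", ["g1", "g3"]),
}

for x in (False, True):
    for y in (False, True):
        print(x, y, evaluate(xor_circuit, {"x": x, "y": y}, "out"))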
See also Circuit satisfiability Logic gate Boolean logic Switching lemma Footnotes Computational complexity theory Digital circuits Logic in computer science
Boolean circuit
[ "Mathematics" ]
1,436
[ "Mathematical logic", "Logic in computer science" ]
7,087,967
https://en.wikipedia.org/wiki/Website%20content%20writer
A Website content writer or web content writer is a person who specializes in providing content for websites. Every website has a specific target audience and requires the most relevant content to attract business. Content should contain keywords (specific business-related terms, which internet users might use in order to search for services or products) aimed towards improving a website's SEO. A website content writer who also has knowledge of the SEO process is referred to as an SEO Content Writer. Most story pieces are centered on marketing products or services, though this is not always the case. Some websites are informational only and do not sell a product or service. These websites are often news sites or blogs. Informational sites educate the reader with complex information that is easy to understand and retain. Functions There is a growing demand for skilled web content writing on the Internet. Quality content often translates into higher revenues for online businesses. Website owners and managers depend on content writers to perform several major tasks: Understand the business concept and develop audience-centric content that aligns with the brand's messaging and objectives. Check for keywords or generate a keyword, and research limitations for the keywords. Create or copy edit to inform the reader, and to promote or sell the company, product, or service described on the website. Produce content to entice and engage visitors so they continue browsing the current website. The longer a visitor stays on a particular site, the greater the likelihood they will eventually become clients or customers. Produce content that is smart in its use of keywords, or is focused on search engine optimization (SEO). This means the text must contain relevant keywords and phrases that are most likely to be entered by users in web searches associated with the actual site for better search engine indexing and ranking. Create content that allows the site visitors to get the information they want quickly and efficiently. Efficient and focused web content gives readers access to information in a user-friendly manner. Create unique, useful, and compelling content on a topic primarily for the readers and not merely for the search engines. Website content writing aims for relevance and search-ability. Relevance means that the website text should be useful and beneficial to readers. Search-ability indicates the usage of keywords to help search engines direct users to websites that meet their search criteria. There are various ways through which websites come up with article writing, and one of them is outsourcing content writing. However, it is riskier than other options, as not all writers can write content specific to the web. Content can be written for various purposes in various forms. The most popular forms of content writing are: Blogging Writing white papers e-books Newsletters Promotional mails (content for email marketing purpose) Social media management and promotion Brochures Flyers or any other offline or online marketing purposes The content in website differs based on the product or service it is used for. Online writers vs. print writers Writing online is different from composing and constructing content for printed materials. Web users tend to scan text instead of reading it closely, skipping what they perceive to be unnecessary information and hunting for what they regard as most relevant. It is estimated that seventy-nine percent of users scan web content. 
It is also reported that it takes twenty-five percent more time to scan content online compared to print content. Web content writers must have the skills to insert paragraphs and headlines containing keywords for search engine optimization, as well as to make sure their composition is clear, to reach their target market. They need to be skilled writers and good at engaging an audience as well as understanding the needs of web users. Content writing providers Website content writing is frequently outsourced to external providers, such as individual web copywriters or, for larger or more complex projects, a specialized digital marketing agency. Most content writers also spend time learning about digital marketing, with particular focus on search engine optimization, pay-per-click advertising, and social media optimization, so that they can develop content that supports a client's marketing. Digital marketing agencies combine copywriting services with a range of editorial and associated services, which may include brand positioning, message consulting, social media, SEO consulting, developmental and copy editing, proofreading, fact checking, layout, content syndication, and design. Outsourcing allows businesses to focus on core competencies and to benefit from the specialized knowledge of professional copywriters and editors. See also Copywriting Web content development Blogging Content writing services Artificial Intelligence in Content Creation References Web design Writing occupations Computer occupations
Website content writer
[ "Technology", "Engineering" ]
919
[ "Computer occupations", "Web design", "Design" ]
7,087,984
https://en.wikipedia.org/wiki/Marudai
A is the most common of the traditional frames used for making , a type of Japanese braid. Etymology The marudai is generally made of a close-grained wood and consists of a round disk ( or "mirror") with a hole in the center, supported by four legs set in a base. The Japanese style is often about high and is used while kneeling or when placed on a table. The Western style allows the braider to sit in a chair to braid. The warp threads that form the braid are wound around weighted bobbins called . were once made of clay, but now are most commonly wood filled with lead. The weight of the maintains even tension on the warp threads, and is balanced by a bag of counterweights called that is attached to the base of the braid. Modern braiders often replace the with a foam disk with numbered slots that tightly grip the warp threads to maintain warp tension, so that weighted bobbins are not needed; instead, flexible plastic bobbins are used to prevent tangling of the threads. Unlike disks, have no indication of where the thread should be placed; it is done freehand. Related terms – "Mirror", the polished wooden top disk of the . – a class of patterns for round cord all involving eight threads folded in half for a total of sixteen strands. In clockwise order, each bobbins is moved to the opposite side. When different combinations of thread color are used, many interesting patterns emerge, including diagonal stripes, diamonds on a background, triangles resembling hearts, and tiny six-petalled flowers. is named for the venerable Kongō Gumi company of Japan, the oldest known company in the world. or – Japanese for "gathered threads". – the broad cloth sash worn with kimono; braids are often used as , worn on top of the . – the cord used to fasten the securely in some styles. Usually one string of is tied around the securely, and an accessory called the is often added in front for decoration. – Counterweights used in braiding. – a rectangular or square frame for . – little spools. The thread is kept from unwinding by passing the thread under itself, forming a loop around the . True silk – a hollow fiber with a rough surface that resists slipping past the loop unless gently pulled. For synthetic fibers, a flexible plastic "clamshell" bobbin may be preferable. Further reading Yamaoka, Kazuharu Issei. (1975) Domyo no kimihimo : marudai, yotsu-uchidai [Domyo style kumihimo]. Tokyo : Shufunotomo Publication (Handicraft series). , Kyōto Kimono Gakuin. (1979) A Step to kimono and kumihimo. Pasadena, Calif: International College of California. . Carey, Jacqui. (1994) Creative Kumihimo. Torquay: Devonshire Press. , . Tada, Makiko. (1996) Andesu No Kumihimo: Kādo to Marudai. [Andean sling braids]. (Kumihimo sōran series, 2) Hino : Tekusuto. 2nd ed., with some English. , . Carey, Jacqui. (1997) The Craft of Kumihimo. New York: Midpoint Trade Books, In. , . Carey, Jacqui. (1997) Beginner's Guide to Braiding, the Craft of Kumihimo. Tunbridge Wells : Search Press. , Tada, Makiko (2008). Tada Makiko Kumihimo-ten : dento no bi to sentan gijutsu [Makiko Tada kumihimo show : traditional beauty and the latest technics]. Naruse Memorial Hall (ed). Tokyo : Japan Women's College (Tsukuru series 5). . Sakai, Aiko; Tada, Makiko. E o mite wakaru kumihimo : Tanoshiku dekiru marudai kakudai ayadake-dai [Visual guide to kumihimo : practice on marudai, kakudai, and ayadake-dai with fun], Japan Vogue, . Owen, Rodrick. (1995) Braids: 250 Patterns from Japan, Peru & Beyond. Loveland, Colo. : Interweave Press, , . 
Softcover ed., Berkeley, CA : Lacis, 2004. Tada, Makiko. (April 2014) Marudai braids 120.(Comprehensive treatise of braids 1), 3rd ed., Hino : Tekusuto. with some English. Chottikampon K.; Mathurosemontri S.; Marui H.; Sirisuwan P.; Inoda M., et al. (2015) Comparison of braiding skills between expert and non-experts by eye’s movement measurement. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9184 (2015): 14-23 , Tada, Makiko. (2017). Utsukushii kumihimo to komono no reshipi: marudai de tsukuru honkakuteki na kumihimo o mijika na dōgu de yasashiku kawaiku. [Beautiful kumihimo recipes for marudai you braid with everyday tools, fun and pretty] Tōkyō : Nihon Bungeisha. , . Footnotes References Braids Handicrafts Ropework Manufacturing Hobbies Wood Japanese words and phrases
Marudai
[ "Engineering" ]
1,174
[ "Manufacturing", "Mechanical engineering" ]
7,088,018
https://en.wikipedia.org/wiki/Flat%20rate
A flat fee, also referred to as a flat rate or a linear rate refers to a pricing structure that charges a single fixed fee for a service, regardless of usage. Less commonly, the term may refer to a rate that does not vary with usage or time of use. Advantages A business can develop a dependable stance in a market, as consumers have a well-rounded price before the service is undertaken. For instance, a technician may charge $150 for his labor. Potential costs can be covered. The service may result in inevitable expenses like the parts needed to fix the issue or the items required to complete the order. No restricted structure is needed, as the pricing system can be adjusted to suit the business using it. Management can thus work out the pricing that best matches the company's objectives, efforts, costs, etc. Disadvantages The fixed pricing restricts the company's capability to meet the needs of individual consumers, and people search for cheaper alternatives. Pricing competition thickens, with other companies in the same industry compete for the lowest pricing, and tough competition occurs. Inflation can cause unprecedented losses, and companies must raise the charge to keep up with costs. Examples Postage There are flat rates in the postal service, regarding the delivery of items. Postage companies use different forms of post, boxes or envelopes, to avoid having to weigh items. The on-hand cost lets consumers identify the cost and removes the hassle of estimate the cost for items. The United States Postal Service offers flat-rate pricing for packages selling different postage options varying in size and shape. That provides consumers with an array of options upfront, creating a sense of ease. When shipped in higher volumes, it saves money but there are issues if both the flat rate and regular delivery systems are used simultaneously. Advertising Flat rate also passes into advertising. Purchasing advertisements on websites such as Facebook, Twitter and YouTube is sold a flat rates on the size (with a surcharge for images and posts) and length of the advertisement (video costs extra). Advertising on YouTube pitches at a flat rate of $0.30 per view. When a person runs by the YouTube home page, search page, or wherever the ad is running, the charge is $0.30. Tradesmen Tradesmen such as electricians, plumbers, and mechanics, also often charge flat rates to cover their labour for their services. In a survey in 2014 in Australia, the average labour rate for a painter was $39.92 per hour. Telephony American telecommunications companies commonly offer a flat rate to residential customers for local telephone calls. However, a regular rate or Message Rate is advantageous for those who make only a few short calls per month. Flat rates were rare outside the US and Canada until about 2005, but they have since become widespread in Europe for both local and long-distance calls and are now also available for mobile phone services, both for traditional GSM/UMTS voice calls and for Mobile VoIP. Most VoIP services are effectively flat-rate telephony services since only the broadband internet fees must be paid for PC-to-PC calls, and the calls themselves are free. Some PC-to-telephone services, such as SkypeOut offer flat rates for national calls to landlines. Television Premium television or Pay TV usually charges a flat monthly fee for a channel or a bundle or "tier" of channels, but some cable television companies also offer Pay per view pricing. 
Internet For Internet service providers, flat rate is access to the Internet at all hours and days of the year (linear rate) and for all customers of the telco operator (universal) at a fixed and cheap tariff. Flat rate is common in broadband access to the Internet in the US and many other countries. A charge tariff is a class of linear rate, different from the flat rate, where the user is charged by the uploads and downloads (data transfers). Some GPRS / data UMTS access to the Internet in some countries of Europe has no flat rate pricing, following the traditional "metered mentality". Because of this, users prefer using fixed lines (with narrow or broadband access) to connect to the Internet. A wavy rate is not a linear rate, because the Internet surfer pays the monthly fixed price to use the connection only during a certain range of hours of the day (i.e. only in the morning or, more typically, only at night). Street lighting Cities and towns normally arrange a flat fee for the power used by street lights. This is because the lights come on and turn off at predictable times, generally off-peak, and the total draw for the entire town can be accurately calculated in advance. This allows the lamps to be placed on existing poles and wired directly into the electrical wiring without a separate meter. Electricity A "flat rate" (more accurately known as fixed rate) for electricity is a fixed price per unit (kWh), not a fixed price per month, and thus different from that for other services. An electric utility that charges a flat rate for electricity does not charge different rates based upon the demand that the customer places on the system. A customer pays the same amount whether they use the electricity in bursts during mid-day, when demand and the utility's costs are highest, or if they spread it out over the entire day. However, if the customer uses a different amount of electricity, they are charged a higher or lower amount. Residential customers and small businesses are usually charged a flat rate, though not the same rate per kilowatt-hour. A special type of electricity meter, a time of use meter, is required to charge a non-flat rate. Time of use meters can lower a customer's electricity bill, if they use electricity mostly during off-peak hours. Some utilities will allow a customer to change to a time of use meter, but they charge for the cost of the meter and installation. Real estate In real estate, "flat rate" is an alternative, nontraditional full service listing where compensation to the listing agent is not based on a percentage of the selling price but instead is a fixed dollar amount that is typically paid at closing. The rate is generally less than a gross 6% commission, resulting in a lowered cost of selling real estate. "Flat rate" is different from "flat fee" in several ways: i) it is generally substantially more than a "flat fee" rate; ii) it generally represents a full service listing as opposed to a "flat fee" limited service listing; and iii) it is usually paid at closing, as opposed to a "flat fee", which is usually paid when the listing agreement is executed. Transport In most parts of the world regular users of public transport, especially commuters, make use of weekly, monthly or yearly season tickets that allow unlimited travel for a fixed fee. In some countries year passes are available for the entire national railway network (e.g. the Bahncard100 in Germany for about €3000 and the Österreichcard offered by the Austrian Federal Railways). 
Some, such as the Eurail Pass, are intended for foreigners, in order to encourage tourism. Road users are normally charged a combination of fixed and variable fees, in the form of vehicle duty and fuel duty. Motorway tolls in some countries (Switzerland, Austria, Czech Republic, Slovenia) are paid by purchasing weekly, monthly or annual stickers attached to the windscreen. At some stage, the concept of the flat rate was even introduced into passenger air traffic in the form of American Airlines' AAirpass. Parcel/document delivery In dealing with the shipping of parcels and documents, a "flat rate for international deliveries of packet size #1" would mean that the same shipping charge (for example US$15.00) would be applicable to all packets of this size, regardless of their designated destination (country of recipient), and regardless of the quantity of their contents, i.e. whether they contained one sheet of paper or were filled to the maximum. Labor Flat rate is a pricing scheme whereby the customer pays a fixed price for a service regardless of how long the worker takes to carry out the service. Flat rate manuals are based on timed studies of the typical time taken for each type of service. Flat rate helps provide a uniform pricing menu for service work and helps establish the worth of performing a particular job. In recent times some automotive companies have begun using computer algorithms to calculate labor times with a high degree of accuracy. The benefit to the customer is that if a worker takes longer than this, the cost does not rise. The downfall to the customer is that this can lead to overpaying in some cases. The benefit to the worker is that it promotes incentive to learn how to do the work more efficiently. This system can also cost the worker if they do not perform a job within the allotted time such as in the case of an inexperienced worker or on a job where there is something preventing the service from being performed that the labor manual can't take into account. In automotive shops this is common due to rusted, seized, or stripped bolts or aftermarket installations. In some circumstances automotive technicians can get paid 0 hours for working a 12-hour day. It can be difficult to compare prices between hourly-paid and flat-rate services, and this sometimes causes rejection of flat rate shops over hourly ones. Medical One of the newest areas where flat rate pricing is just beginning to make inroads is the medical industry. The concept has held a particular interest because of the high and rising costs of health care delivery despite legislative attempts to address them, such the Affordable Care Act (Obamacare). While there have been pilot programs launched by major insurance carriers such as UnitedHealthcare to control costs in the most costly medical conditions like cancer, for now the primary application of flat-rate pricing has been in medical imaging, such as x-rays, MRIs, mammograms, and ultrasounds. Regional companies such as Med Health Services Inc. in the Pittsburgh area and Northwest Radiology Network of Indianapolis have been among the first in the nation to implement the practice on a trial basis. See also Flat fee MLS Flat rate (finance) Flat tax Rural Internet Too cheap to meter References Pricing Internet access Mobile web
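The flat-versus-metered trade-off described in the electricity and telephony sections above is simple arithmetic over a usage profile. The Python sketch below compares a flat per-kWh tariff with a time-of-use tariff for a made-up daily household profile; every rate, hour band and consumption figure is an illustrative assumption, not a real utility rate.

# Compare a flat tariff with a time-of-use (TOU) tariff over one day of usage.
FLAT_RATE = 0.20                   # currency units per kWh, all hours (assumed)
TOU_PEAK, TOU_OFFPEAK = 0.30, 0.12  # assumed peak / off-peak rates per kWh
PEAK_HOURS = set(range(16, 21))    # 16:00-20:59 counted as peak (assumed)

# Hourly consumption in kWh for hours 0..23 (made-up household profile)
usage = [0.3] * 7 + [0.8, 0.6, 0.4, 0.4, 0.5, 0.6, 0.5, 0.5, 0.6,
                     1.2, 1.5, 1.4, 1.1, 0.9, 0.7, 0.5, 0.4]

flat_bill = FLAT_RATE * sum(usage)
tou_bill = sum((TOU_PEAK if hour in PEAK_HOURS else TOU_OFFPEAK) * kwh
               for hour, kwh in enumerate(usage))

print(f"daily total: {sum(usage):.1f} kWh")
print(f"flat tariff bill:        {flat_bill:.2f}")
print(f"time-of-use tariff bill: {tou_bill:.2f}")

For this particular profile the two bills come out close, which is the point made above: whether a customer gains from time-of-use metering depends on how much of their consumption falls in off-peak hours.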
Flat rate
[ "Technology" ]
2,087
[ "Mobile web", "Internet access", "Wireless networking", "IT infrastructure" ]
7,088,035
https://en.wikipedia.org/wiki/Palmitoylation
In molecular biology, palmitoylation is the covalent attachment of fatty acids, such as palmitic acid, to cysteine (S-palmitoylation) and less frequently to serine and threonine (O-palmitoylation) residues of proteins, which are typically membrane proteins. The precise function of palmitoylation depends on the particular protein being considered. Palmitoylation enhances the hydrophobicity of proteins and contributes to their membrane association. Palmitoylation also appears to play a significant role in subcellular trafficking of proteins between membrane compartments, as well as in modulating protein–protein interactions. In contrast to prenylation and myristoylation, palmitoylation is usually reversible (because the bond between palmitic acid and protein is often a thioester bond). The reverse reaction in mammalian cells is catalyzed by acyl-protein thioesterases (APTs) in the cytosol and palmitoyl protein thioesterases in lysosomes. Because palmitoylation is a dynamic, post-translational process, it is believed to be employed by the cell to alter the subcellular localization, protein–protein interactions, or binding capacities of a protein. An example of a protein that undergoes palmitoylation is hemagglutinin, a membrane glycoprotein used by influenza to attach to host cell receptors. The palmitoylation cycles of a wide array of enzymes have been characterized in the past few years, including H-Ras, Gsα, the β2-adrenergic receptor, and endothelial nitric oxide synthase (eNOS). In signal transduction via G protein, palmitoylation of the α subunit, prenylation of the γ subunit, and myristoylation is involved in tethering the G protein to the inner surface of the plasma membrane so that the G protein can interact with its receptor. Mechanism S-palmitoylation is generally done by proteins with the DHHC domain. Exceptions exist in non-enzymatic reactions. Acyl-protein thioesterase (APT) catalyses the reverse reaction. Other acyl groups such as stearate (C18:0) or oleate (C18:1) are also frequently accepted, more so in plant and viral proteins, making S-acylation a more useful name. Several structures of the DHHC domain have been determined using X-ray crystallography. It contains a linearly-arranged catalytic triad of Asp153, His154, and Cys156. It runs on a ping-pong mechanism, where the cysteine attacks the acyl-CoA to form an S-acylated DHHC, and then the acyl group is transferred to the substrate. DHHR enzymes exist, and it (as well as some DHHC enzymes) may use a ternary complex mechanism instead. An inhibitor of S-palmitoylation by DHHC is 2-Bromopalmitate (2-BP). 2-BP is a nonspecific inhibitor that also halts many other lipid-processing enzymes. The palmitoylome A meta-analysis of 15 studies produced a compendium of approximately 2,000 mammalian proteins that are palmitoylated. The highest associations of the palmitoylome are with cancers and disorders of the nervous system. Approximately 40% of synaptic proteins were found in the palmitoylome. Biological function Substrate presentation Palmitoylation mediates the affinity of a protein for lipid rafts and facilitates the clustering of proteins. The clustering can increase the proximity of two molecules. Alternatively, clustering can sequester a protein away from a substrate. For example, palmitoylation of phospholipase D (PLD) sequesters the enzyme away from its substrate phosphatidylcholine. 
When cholesterol levels decrease or PIP2 levels increase, this palmitate-mediated localization is disrupted and the enzyme traffics to PIP2, where it encounters its substrate and is activated by substrate presentation. General Anesthesia Palmitoylation is necessary for the inactivation of anesthesia-inducing potassium channels and for the localization of GABAAR in synapses. Anesthetics compete with palmitate in ordered lipids, and this release gives rise to a component of membrane-mediated anesthesia. For example, the anesthesia-inducing channel TREK-1 is activated by anesthetic displacement from GM1 lipids. The palmitoylation site is specific for palmitate over prenylation. However, the anesthetics appear to compete non-specifically. This non-selective competition of anesthetic with palmitate likely gives rise to the Meyer–Overton correlation. Synapse formation Scientists have appreciated the significance of attaching long hydrophobic chains to specific proteins in cell signaling pathways. A good example of its significance is in the clustering of proteins in the synapse. A major mediator of protein clustering in the synapse is the 95 kDa postsynaptic density protein PSD-95. When this protein is palmitoylated it is restricted to the membrane. This restriction to the membrane allows it to bind to and cluster ion channels in the postsynaptic membrane. Also, in the presynaptic neuron, palmitoylation of SNAP-25 directs it to partition into the cell membrane and allows the SNARE complex to dissociate during vesicle fusion. This provides a role for palmitoylation in regulating neurotransmitter release. Palmitoylation of delta catenin seems to coordinate activity-dependent changes in synaptic adhesion molecules, synapse structure, and receptor localizations that are involved in memory formation. Palmitoylation of gephyrin has been reported to influence GABAergic synapses. See also DHHC domain Myristoylation Myelin proteolipid protein Palmitoleoylation Prenylation Membrane-mediated anesthesia References Further reading Resh, M. (2006) "Palmitoylation of Ligands, Receptors, and Intracellular Signaling Molecules". Sci. STKE, 359, October 31. External links CSS-Palm - Palmitoylation Site Prediction with a Clustering and Scoring Strategy CKSAAP-Palm Swisspalm - S-Palmitoylation database Peripheral membrane proteins Post-translational modification
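The External links above include sequence-based site predictors such as CSS-Palm. As a deliberately naive illustration of that idea only (real predictors score the trained sequence context around each residue, not bare residue identity), a minimal cysteine scan in Python might look like this; the peptide string is hypothetical:

```python
def candidate_spalm_sites(protein_seq: str) -> list[int]:
    """Return 1-based positions of cysteines, the residues that can
    undergo S-palmitoylation. A real predictor would score the local
    sequence context instead of flagging every cysteine."""
    return [i + 1 for i, aa in enumerate(protein_seq.upper()) if aa == "C"]

print(candidate_spalm_sites("MKTCALWQCS"))  # [4, 9] for this made-up peptide
```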
Palmitoylation
[ "Chemistry" ]
1,340
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
7,088,458
https://en.wikipedia.org/wiki/Child%20Exploitation%20Tracking%20System
Child Exploitation Tracking System (CETS) is a Microsoft software-based solution that assists in managing and linking worldwide cases related to child protection. CETS was developed in collaboration with law enforcement in Canada. Administered by a loose partnership of Microsoft and law enforcement agencies, CETS offers tools to gather and share evidence and information so that agencies can identify, prevent and punish those who commit crimes against children. About the CETS partnership In 2003, Detective Sergeant Paul Gillespie, Officer in Charge of the Child Exploitation Section of the Toronto Police Service's Sex Crimes Unit, made a request directly to Bill Gates, then chairman and chief software architect of Microsoft, for assistance with these types of crimes. Agencies experienced in tracking and apprehending those who perpetrate such crimes were involved in the design, implementation, and policy. The solution needed to assist law enforcement agencies from the initial point of detection, through the investigative phase, to arrest, prosecution, and conviction of the criminal. In addition, it was imperative that the solution adhered to existing rights and civil liberties of the citizens of the various countries. This included remaining independent of Internet traffic and any individual user’s computer. Finally, such a solution needed to be global in nature and enable collaboration among nations and agencies. In order to increase the effectiveness of investigators worldwide, such a system would allow law enforcement entities to: Collect evidence of online child exploitation gathered by multiple law enforcement agencies. Organize and store the information safely and securely. Search the database of information. Securely share the information with other agencies, across jurisdictions. Analyze the information and provide pertinent matches. Adhere to global software industry standards. Law enforcement partnerships worldwide A number of law enforcement agencies use or are deploying the CETS tool; these include: Australia: High Tech Crime Centre Brazil: Federal Police Canada: Royal Canadian Mounted Police, Toronto Police Services Sex Crime Unit, & twenty-six other Canadian police services Chile: National Investigative Police Indonesia: National Police Italy: Ministry of Interior and Postal police Romania: National Police Spain: Interior Ministry United Kingdom: Child Exploitation and Online Protection Command of the National Crime Agency. United States: Department of Homeland Security and Federal Bureau of Investigation In planning: Poland, Argentina, and the United Arab Emirates Child exploitation crimes have been an increasing problem as technology advances. The tracking system has a proven success rate, bringing many of those who violate the law to justice. Microsoft contributed technology to the National Center for Missing & Exploited Children (NCMEC), which furthered the development of a system that captures criminals in addition to removing offensive images. "Microsoft has implemented PhotoDNA on its own online properties including Bing, OneDrive (previously known as SkyDrive) and Hotmail, which has already resulted in the identification, reporting and removal of thousands of images of child pornography" (Microsoft, 2013). Microsoft. "Child Exploitation Crimes." Microsoft. N.p., 2013. Web. Microsoft has been a large contributor toward efforts of online surveillance, which have broken down the walls of online anonymity.
See also Microsoft litigation National Cyber Security Awareness Month Notes and references External links The International Centre for Missing & Exploited Children Kids' Internet Safety Alliance (KINSA) Child Exploitation Tracking System - Australian Criminal Intelligence Commission Child abuse Computer security software Child sexual abuse
Child Exploitation Tracking System
[ "Engineering" ]
649
[ "Cybersecurity engineering", "Computer security software" ]
7,088,631
https://en.wikipedia.org/wiki/Finite-difference%20frequency-domain%20method
The finite-difference frequency-domain (FDFD) method is a numerical solution method for problems usually in electromagnetism and sometimes in acoustics, based on finite-difference approximations of the derivative operators in the differential equation being solved. While "FDFD" is a generic term describing all frequency-domain finite-difference methods, in practice the term mostly describes the method as applied to scattering problems. The method shares many similarities with the finite-difference time-domain (FDTD) method, so much so that the literature on FDTD can be directly applied. The method works by transforming Maxwell's equations (or another partial differential equation) for sources and fields at a constant frequency into the matrix form Ax = b. The matrix A is derived from the wave equation operator, the column vector x contains the field components, and the column vector b describes the source. The method is capable of incorporating anisotropic materials, but off-diagonal components of the tensor require special treatment. Strictly speaking, there are at least two categories of "frequency-domain" problems in electromagnetism. One is to find the response to a current density J with a constant frequency ω, i.e. of the form J(x)e^(jωt), or a similar time-harmonic source. This frequency-domain response problem leads to a system of linear equations as described above. An early description of a frequency-domain response FDFD method to solve scattering problems was published by Christ and Hartnagel (1987). Another is to find the normal modes of a structure (e.g. a waveguide) in the absence of sources: in this case the frequency ω is itself a variable, and one obtains an eigenproblem (usually, the eigenvalue λ is ω²). An early description of an FDFD method to solve electromagnetic eigenproblems was published by Albani and Bernardi (1974). Implementing the method Use a Yee grid because it offers the following benefits: (1) it implicitly satisfies the zero divergence conditions to avoid spurious solutions, (2) it naturally handles physical boundary conditions, and (3) it provides a very elegant and compact way of approximating the curl equations with finite differences. Much of the literature on finite-difference time-domain (FDTD) methods applies to FDFD, particularly topics on how to represent materials and devices on a Yee grid. Comparison with FDTD and FEM The FDFD method is very similar to the finite element method (FEM), though there are some major differences. Unlike the FDTD method, there are no time steps that must be computed sequentially, thus making FDFD easier to implement. This might also lead one to imagine that FDFD is less computationally expensive; however, this is not necessarily the case. The FDFD method requires solving a sparse linear system, which even for simple problems can be 20,000 by 20,000 elements or larger, with over a million unknowns. In this respect, the FDFD method is similar to the FEM, which is also usually implemented in the frequency domain. There are efficient numerical solvers available so that explicit matrix inversion, an extremely computationally expensive process, can be avoided. Additionally, model order reduction techniques can be employed to reduce problem size. FDFD, and FDTD for that matter, does not lend itself well to complex geometries or multiscale structures, as the Yee grid is restricted mostly to rectangular structures.
This can be circumvented by either using a very fine grid mesh (which increases computational cost), or by approximating the effects with surface boundary conditions. Non-uniform gridding can lead to spurious charges at the interface boundary, as the zero divergence conditions are not maintained when the grid is not uniform along an interface boundary. E and H field continuity can be maintained to circumvent this problem by enforcing weak continuity across the interface using basis functions, as is done in FEM. Perfectly matched layer (PML) boundary conditions can also be used to truncate the grid and avoid meshing empty space. Susceptance element equivalent circuit The FDFD equations can be rearranged in such a way as to describe a second-order equivalent circuit, where nodal voltages represent the E field components and branch currents represent the H field components. This equivalent circuit representation can be extremely useful, as techniques from circuit theory can be used to analyze or simplify the problem, and it can serve as a SPICE-like tool for three-dimensional electromagnetic simulation. This susceptance element equivalent circuit (SEEC) model has the advantages of a reduced number of unknowns (only the E field components must be solved for) and the applicability of second-order model order reduction techniques. Applications The FDFD method has been used to provide full-wave simulation for modeling interconnects for various applications in electronic packaging. FDFD has also been used for various scattering problems at optical frequencies. See also Finite-difference time-domain method Finite element method References Computational electromagnetics Numerical differential equations Frequency-domain analysis Finite differences
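To make the Ax = b formulation above concrete, the following is a minimal, illustrative 1D scalar FDFD sketch in Python with SciPy. It is not a full Yee-grid implementation: it uses simple Dirichlet (perfectly conducting) walls instead of a PML, a point source, and an arbitrary slab permittivity profile, and all numerical values are hypothetical:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Grid and source parameters (illustrative values only)
N = 400                       # number of grid points
dx = 20e-9                    # grid spacing (m)
lam0 = 1.55e-6                # free-space wavelength (m)
k0 = 2 * np.pi / lam0         # free-space wavenumber

eps_r = np.ones(N)            # relative permittivity profile
eps_r[150:250] = 12.0         # a high-index slab acting as the scatterer

# Central-difference approximation of d²/dx² (Dirichlet walls at both ends)
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
d2 = sp.diags([off, main, off], [-1, 0, 1]) / dx**2

# Assemble A x = b for the scalar Helmholtz equation d²E/dx² + k0²·eps_r·E = b
A = (d2 + sp.diags(k0**2 * eps_r)).astype(complex)
b = np.zeros(N, dtype=complex)
b[10] = 1.0                   # point source near the left wall

E = spla.spsolve(A.tocsc(), b)   # frequency-domain field response
print("peak |E| =", np.abs(E).max())
```

Because the whole frequency response comes from one sparse solve, there is no time stepping; changing the source vector b reuses the same operator A, which is the practical appeal of the method noted above.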
Finite-difference frequency-domain method
[ "Physics", "Mathematics" ]
1,061
[ "Mathematical analysis", "Computational electromagnetics", "Spectrum (physical sciences)", "Frequency-domain analysis", "Finite differences", "Computational physics" ]
7,088,692
https://en.wikipedia.org/wiki/Martin%20Tower
Martin Tower was a 21-story building at 1170 8th Avenue in Bethlehem, Pennsylvania. It was the tallest building in both Bethlehem and the greater Lehigh Valley, taller than the PPL Building in Allentown. Martin Tower was placed on the National Register of Historic Places on June 28, 2010. Originally built as the headquarters of the now-defunct Bethlehem Steel, the building, which once dominated the city's skyline, was completed in 1972. It stood vacant from early 2007 until its eventual demolition on May 19, 2019 at 7:03 AM EDT. History 20th century Martin Tower was constructed as the corporate headquarters for Bethlehem Steel, then one of the world's largest steel manufacturers. Construction of the tower began in 1969. The building was completed and opened in 1972 and was named after then-Bethlehem Steel chairman Edmund F. Martin. Bethlehem Steel spared little expense in their new skyscraper headquarters. The building was built in a cruciform shape rather than a more conventional square, in order to create more corner and window offices. The architect for Martin Tower was Haines Lundberg Waehler. It was built by George A. Fuller Construction Co. of New York City, which also built the Flatiron Building in New York in 1903, the CBS Building in New York in 1963 and 1251 Avenue of the Americas at Rockefeller Center in 1971. Under the initial plan, Bethlehem Steel was to build a second tower, which is why some people refer to it as "Martin Towers." An annex was built, intended to connect the two towers, but the second tower was never built. The original offices were designed by decorators from New York and included wooden furniture, doorknobs with the company logo, and handwoven carpets. The building was a testament to the economic heights the Lehigh Valley reached in the 1970s before the large economic downturn caused by the decline of the steel industry. The building was a symbol of Bethlehem Steel's power, money and dominance in the steel industry. The building had 21 floors, and each floor housed a different department of the company. When Martin Tower opened, Bethlehem Steel was the second-largest steel producer in the world and the 14th-largest industrial corporation in the nation. In 1973, the first full year the Tower was occupied, Bethlehem Steel set a company record, producing 22.3 million tons of raw steel and shipping 16.3 million tons of finished steel. It made a $207 million profit that year, and exceeded that the following year. By 1987, a shrinking white-collar work force had the Tower sitting almost completely vacant; it was then put up for sale and other companies occupied the Tower and its annex. 21st century In 2001, Bethlehem Steel filed for bankruptcy and officially left Martin Tower in 2003. Several companies remained until the last tenant, Receivable Management Services, departed in 2007, leaving the building completely vacant, although surface parking around the building continued in use as park-and-ride lots for local festivals. Proposals to convert the building to condominiums or apartments, along with recreational and retail space on the property, proved unfeasible due to the presence of asbestos and the cost of its removal along with the housing market crash. The City of Bethlehem subsequently applied for City Revitalization and Improvement Zone (CRIZ) designation, winning one of the two CRIZ designations on December 30, 2013.
Restoration of the building, including the removal of asbestos and addition of a sprinkler system, was envisioned by the third year of the CRIZ, with renovations beginning in 2016. In July 2015, Bethlehem Mayor Robert Donchez announced plans to rezone the Martin Tower property. The zoning at that time had allowed mostly residential in and around the building, while protecting the building from being razed. After many public hearings and votes, the Martin Tower property was approved on December 15, 2015, for mixed-use rezoning to allow more retail space on the property. The decision also permitted demolition of Martin Tower at the owner/developer's discretion. The public had many concerns about the new rezoning. Some feared it would make it easier to remove the building. Others feared it would create a third downtown in the city and create competition to business owners. City Council passed the zoning despite the concerns of a few members of the public. On January 13, 2017, almost 10 years since the building was vacated, owners Ronca and Herrick announced removal of asbestos from the building and annex would begin, regardless of whether the Tower was ultimately renovated for adaptive reuse or demolished. In January 2019, the owners announced their redevelopment master plan would include demolition of the Tower. Martin Tower was imploded by Controlled Demolition, Inc., on May 19, 2019, at a reported cost of $575,000. Demolition officials said it was a "textbook implosion". The entire building, consisting of 6,500 cubic feet of concrete and 16,000 tons of steel, came down in only 16 seconds. Nearby roads and highways were open soon after it came down. See also National Register of Historic Places listings in Lehigh County, Pennsylvania References 1972 establishments in Pennsylvania 2007 disestablishments in Pennsylvania Former skyscrapers Buildings and structures demolished in 2019 Buildings and structures in Lehigh County, Pennsylvania Commercial buildings completed in 1972 Commercial buildings on the National Register of Historic Places in Pennsylvania Demolished buildings and structures in Pennsylvania Towers in Pennsylvania Buildings and structures demolished by controlled implosion National Register of Historic Places in Lehigh County, Pennsylvania Skyscraper office buildings in Pennsylvania Skyscrapers in Pennsylvania
Martin Tower
[ "Engineering" ]
1,130
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
7,088,707
https://en.wikipedia.org/wiki/Halocarban
Halocarban (INN), also known as cloflucarban (USAN) and trifluoromethyldichlorocarbanilide, is a chemical with antibacterial properties sometimes used in deodorant and soap. References Disinfectants Ureas Trifluoromethyl compounds Chloroarenes 4-Chlorophenyl compounds
Halocarban
[ "Chemistry" ]
85
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Ureas" ]
2,950,634
https://en.wikipedia.org/wiki/Stamping%20press
A stamping press is a metalworking machine tool used to shape or cut metal by deforming it with a die. A stamping press uses precision-made male and female dies to shape the final product. It is a modern-day counterpart to the hammer and anvil. Components A press has a bolster plate and a ram. Presses come in various frame configurations: C-frame, where the front, left, and right sides are open; straight-side; or H-frame for stronger, higher-tonnage applications. It is very important to size the press and tonnage based on the type of application: blanking, forming, progressive, or transfer. Strong consideration should be given to avoiding off-center load conditions to prevent premature wear to the press. Bolster Plate The bolster plate is mounted on top of the press bed and is a large block of metal upon which the bottom portion of a die is clamped; the bolster plate is stationary. Large presses (like the ones used in the automotive industry) may be equipped with die cushions integrated in the bolster plate to apply blank holder or counter-draw forces. This is necessary when a single-acting press is used for deep drawing. The ram / slide is the moving or reciprocating member to which the upper die is mounted. Ram or slide guidance is a critical element in assuring long die life between die maintenance. Different types of slide guides are available: four-point V-gibs or six-point square gibs on smaller presses, and eight-point full-length slide guides on larger straight-side frame presses. The dies and material are fed into the press between the bolster and slide. Good press designs must account for plastic deformation, otherwise known as deflection, when frame design and loads are considered. Ram / Slide The vertical motion of the slide acts like a hammer on an anvil. The most common mechanical presses use an eccentric drive to move the press's ram slide; the length of stroke, or slide travel, depends on the crankshaft or eccentric (a simple kinematic sketch of this drive is given at the end of this article). Hydraulic cylinders are used instead in hydraulic presses. The nature of the drive system determines the force progression during the ram's stroke. Mechanical presses have a full tonnage rating point above BDC (bottom dead center); normal full tonnage rating points are .187", .25" and .5" above BDC. Hence a mechanical press has a tonnage curve and should be operated within the press capacity limits. Link-motion mechanical drive is yet another option; this provides a slide slow-down near BDC for soft-touch tooling. This link feature can improve die life and reduce reverse (snap-through) tonnage for blanking operations. In contrast, hydraulic presses do not have a tonnage curve and can produce full tonnage at any point in the stroke. The trade-off is speed: a mechanical press is much faster than a comparable hydraulic press. On the other hand, hydraulic presses are much more practical for deep forming or drawing of parts, or when dwell time at the bottom of the stroke is desired. Another classification is single-acting presses versus double- (seldom triple-) acting presses. Single-acting presses have one single ram. Double-acting presses have a subdivided ram, to manage, for example, blank holding (to avoid wrinkles) with one ram segment and the forming operation with the second ram segment. Other Components & Controls Typically, presses are electronically linked (with a programmable logic controller) to an automatic feeder which feeds metal raw material through the die. The raw material is fed into the automatic feeder after it has been unrolled from a coil and put through a straightener.
A tonnage monitor may be provided to observe the amount of force used for each stroke. References External links See a Stamping Press in Action I-PRESS AB PLUS Control Video Machine tools Metalworking tools
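As a rough illustration of the eccentric-drive kinematics mentioned above, here is a sketch in Python assuming an ideal crank-slider model; the crank radius r and connecting-rod length l are hypothetical values, and real press drives add link motion and other refinements:

```python
import numpy as np

def slide_position(theta_deg, r=0.05, l=0.30):
    """Distance of the slide below top dead center (TDC) for an ideal
    crank-slider drive with crank radius r and connecting-rod length l
    (meters). Stroke length is 2*r; the slide moves slowly near TDC/BDC
    and fastest at mid-stroke."""
    th = np.radians(theta_deg)
    # Slide displacement from TDC at crank angle th (th = 0 at TDC)
    return (r + l) - (r * np.cos(th) + np.sqrt(l**2 - (r * np.sin(th))**2))

# Sample a few crank angles over half a revolution (TDC to BDC)
for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg: {slide_position(angle)*1000:6.1f} mm below TDC")
```

With these example values the stroke is 100 mm (2r), reached at 180 degrees; this is the geometric basis of the tonnage curve discussed above, since the mechanical advantage of the crank varies along the stroke.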
Stamping press
[ "Engineering" ]
778
[ "Machine tools", "Industrial machinery" ]
2,950,875
https://en.wikipedia.org/wiki/Arndt%E2%80%93Eistert%20reaction
In organic chemistry, the Arndt–Eistert reaction is the conversion of a carboxylic acid to its homologue. It is named for the German chemists Fritz Arndt (1885–1969) and Bernd Eistert (1902–1978). The method entails treating an acid chloride with diazomethane. It is a popular method of producing β-amino acids from α-amino acids. Conditions Aside from the acid chloride substrate, three reagents are required: diazomethane, water, and a metal catalyst. Each has been well investigated. The diazomethane is required in excess so as to react with the HCl formed during the reaction. If diazomethane is not used in excess, the HCl reacts with the diazoketone to form a chloromethyl ketone and N2. Mild conditions allow this reaction to take place without affecting complex or reducible groups in the reactant acid. The reaction requires the presence of a nucleophile (water). A metal catalyst is required; usually Ag2O is chosen, but other metals and even light can effect the reaction. Variants The preparation of the β-amino acid from phenylalanine illustrates the Arndt–Eistert synthesis carried out with the Newman–Beal modification, which involves the inclusion of triethylamine in the diazomethane solution. Either triethylamine or a second equivalent of diazomethane will scavenge HCl, avoiding the formation of α-chloromethylketone side-products. Diazomethane is the traditional reagent, but analogues can also be applied. Diazomethane is toxic and potentially violently explosive, which has led to safer alternative procedures. For example, trimethylsilyldiazomethane has been demonstrated. Acid anhydrides can be used in place of the acid chloride; the reaction then yields a 1:1 mixture of the homologated acid and the corresponding methyl ester. This method can also be used with primary diazoalkanes to produce secondary α-diazo ketones. However, there are many limitations. Primary diazoalkanes undergo azo coupling to form azines; thus the reaction conditions must be altered such that the acid chloride is added to a solution of diazoalkane and triethylamine at low temperature. In addition, primary diazoalkanes are very reactive, incompatible with acidic functionalities, and will react with activated alkenes, including α,β-unsaturated carbonyl compounds, to give 1,3-dipolar cycloaddition products. An alternative to the Arndt–Eistert reaction is the Kowalski ester homologation, which also involves the generation of a carbene equivalent but avoids diazomethane. Reaction mechanism The acid chloride is attacked by diazomethane with loss of HCl. The resulting α-diazoketone (RC(O)CHN2) undergoes the metal-catalyzed Wolff rearrangement to form a ketene, which hydrates to the acid. The rearrangement leaves untouched the stereochemistry at the carbon alpha to the acid chloride. See also Curtius rearrangement Kowalski ester homologation Lossen rearrangement Nierenstein reaction Wolff rearrangement References Rearrangement reactions Carbon-carbon bond forming reactions Homologation reactions Name reactions
Arndt–Eistert reaction
[ "Chemistry" ]
700
[ "Name reactions", "Carbon-carbon bond forming reactions", "Rearrangement reactions", "Organic reactions" ]
2,950,924
https://en.wikipedia.org/wiki/Accession%20number%20%28bioinformatics%29
An accession number, in bioinformatics, is a unique identifier given to a DNA or protein sequence record to allow for tracking of different versions of that sequence record and the associated sequence over time in a single data repository. Because of their relative stability, accession numbers can be utilized as foreign keys for referring to a sequence object, but not necessarily to a unique sequence. All sequence information repositories implement the concept of an "accession number" but might do so with subtle variations. LRG Locus Reference Genomic (LRG) records have unique accession numbers starting with LRG_ followed by a number. They are recommended in the Human Genome Variation Society Nomenclature guidelines as stable genomic reference sequences to report sequence variants in LSDBs and the literature. Notes and references External links sample GenBank record Bioinformatics
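A small Python sketch of the LRG format described above ("LRG_" followed by a number); the regular expression is an assumption based on that description, not an official specification, and other repositories use quite different patterns:

```python
import re

# Pattern inferred from the description above: "LRG_" plus digits, e.g. "LRG_292"
LRG_PATTERN = re.compile(r"^LRG_\d+$")

def is_lrg_accession(identifier: str) -> bool:
    """Return True if the identifier looks like an LRG accession number."""
    return bool(LRG_PATTERN.match(identifier))

assert is_lrg_accession("LRG_292")
assert not is_lrg_accession("NM_000546.6")  # a RefSeq-style ID, not an LRG
```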
Accession number (bioinformatics)
[ "Engineering", "Biology" ]
168
[ "Bioinformatics", "Biological engineering" ]
2,950,926
https://en.wikipedia.org/wiki/Dimethylglyoxime
Dimethylglyoxime is a chemical compound described by the formula CH3C(NOH)C(NOH)CH3. It is abbreviated dmgH2 in its neutral form and dmgH− in its anionic form, where H stands for hydrogen. This colourless solid is the dioxime derivative of the diketone butane-2,3-dione (also known as diacetyl). DmgH2 is used in the analysis of palladium or nickel. Its coordination complexes are of theoretical interest as models for enzymes and as catalysts. Many related ligands can be prepared from other diketones, e.g. benzil. Preparation and reactions Dimethylglyoxime can be prepared from butanone, first by reaction with ethyl nitrite to give biacetyl monoxime. The second oxime is installed using sodium hydroxylamine monosulfonate. 2,3-Butanediamine is produced by reduction of dimethylglyoxime with lithium aluminium hydride. Complexes Dimethylglyoxime forms complexes with metals including nickel, palladium and cobalt. These complexes are used to separate those cations from solutions of metal salts and in gravimetric analysis. It is also used in precious metals refining to precipitate palladium from solutions of palladium chloride. References Ketoximes Chelating agents
Dimethylglyoxime
[ "Chemistry" ]
296
[ "Chelating agents", "Process chemicals" ]
2,950,941
https://en.wikipedia.org/wiki/Lobe%20Attachment%20Module
In Token Ring networks, a Lobe Attachment Module (LAM) is a box with multiple interfaces to which new network nodes (known as lobes) can be attached. A LAM may have interfaces for up to 20 lobes. Functionally, a LAM is like a multi-station access unit (MAU), but with a larger capacity: 20 nodes as opposed to 8 nodes for a MAU. The LAM interface may use either IBM connectors or 8P8C (RJ-45) modular plugs. LAMs can be daisy-chained and connected to a hub, known as a Controlled Access Unit (CAU) in Token Ring terminology. Each CAU can handle up to 4 LAMs for a total of 80 lobes. Networking hardware
Lobe Attachment Module
[ "Engineering" ]
144
[ "Computer networks engineering", "Networking hardware" ]
2,950,949
https://en.wikipedia.org/wiki/Accession%20number%20%28cultural%20property%29
In galleries, libraries, archives, and museums, an accession number is a unique identifier assigned to each acquisition as a means of achieving initial control of it. Assignment of accession numbers typically occurs at the point of accessioning or cataloging. If an item is removed from the collection, its number is usually not reused for new items. In libraries In libraries, this numbering system is usually in addition to the library classification number (or alphanumeric code) and to the ISBN or International Standard Book Number assigned by publishers. In botany Accession numbers are also used in botany, by institutions with living collections like arboreta, botanic gardens, etc., to identify plants or groups of plants that are of the same taxon, are of the same propagule type (or treatment), were received from the same source, and were received at the same time. Herbaria and other botanic institutions collecting non-living material also use accession numbers. In museums An accession number may include the year acquired, sometimes the full date (as at the British Museum), and a sequential number separated by a period. In addition, departments or art classifications within the collection or museum may reserve sections of numbers. For example, objects identified by the numbers 11.000 through 11.999 may indicate objects obtained by the museum in 1911; the first 300 numbers might be used to indicate American art, while the next fifty (11.301–350) might be used for African art. In some cases, they also include letters and other punctuation, such as commas, hyphens or slashes. Uses with other parallel systems In older institutions, simpler numbering systems are sometimes maintained alongside, or incorporated within, newer systems. Where the objects are unique, institutions normally need to retain the original number in some form as it will have been used in old references that are still of use in scholarship. In particular, collections of manuscripts use the prefix "MS", and many well known manuscripts are known by their old MS numbers, often incorporating a prefix for a particular collection within a library. These collections may be divided by former owners, as with several British Library "closed" collections, or by language, as with the Froissart of Louis of Gruuthuse (BnF MS Fr. 2643-6), indicating a four-volume manuscript in French at the Bibliothèque nationale de France. See also Accession number (bioinformatics) Universally unique identifier Library of Congress Control Number References Library cataloging and classification Museology Archival science Identifiers Index (publishing) Metadata
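As an illustration of the year-plus-sequence scheme in the museum example above, here is a small, purely hypothetical parser sketch in Python; real accession formats vary widely (full years, letters, slashes), so none of this reflects any particular institution's rules:

```python
def parse_accession(number: str) -> dict:
    """Parse a '<two-digit year>.<sequence>' accession number, e.g. '11.042'."""
    year_part, seq_part = number.split(".")
    year = 1900 + int(year_part)      # assumes a 20th-century acquisition
    seq = int(seq_part)
    # Department ranges mirror the hypothetical 1911 example in the text
    if seq <= 300:
        dept = "American art"
    elif 301 <= seq <= 350:
        dept = "African art"
    else:
        dept = "unassigned"
    return {"year": year, "sequence": seq, "department": dept}

print(parse_accession("11.302"))
# {'year': 1911, 'sequence': 302, 'department': 'African art'}
```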
Accession number (cultural property)
[ "Technology" ]
510
[ "Metadata", "Data" ]
2,951,017
https://en.wikipedia.org/wiki/RealMagic
RealMagic (or ReelMagic), from Sigma Designs, was one of the first fully compliant MPEG playback boards on the market in the mid-1990s. RealMagic is a hardware-accelerated MPEG decoder that mixes its video stream into a computer video card's output through the video card's feature connector. It is also a SoundBlaster-compatible sound card. Successors Sigma Designs' RealMagic was superseded by:
RealMagic Hollywood+
RealMagic XCard
RealMagic NetStream2000 - 4000
Several software companies in 1993 promised to support the card, including Access, Interplay, and Sierra. Software written for RealMagic includes:
Under a Killing Moon - Access Software
Gabriel Knight
Escape from Cybercity
King's Quest VI - Sierra On-Line
Dragon's Lair
Police Quest IV - Sierra On-Line
Return to Zork - Infocom
Lord of the Rings - Interplay Entertainment
Note: the above titles were on a ReelMagic demo CD that came with the hardware. The CD also contained corporate promotion videos, training videos, and news footage of John F. Kennedy and the Apollo Moon mission. Also included in the bundle was a complete version of The Horde, published by Crystal Dynamics (1994). Other software includes: The Psychotron (an interactive mystery movie) - Merit Software. References Graphics cards
RealMagic
[ "Technology" ]
267
[ "Computing stubs", "Computer hardware stubs" ]
2,951,035
https://en.wikipedia.org/wiki/Integrated%20circuit%20design
Integrated circuit design, semiconductor design, chip design or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography. IC design can be divided into the broad categories of digital and analog IC design. Digital IC design is used to produce components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger-area active devices than digital designs and are usually less dense in circuitry. Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. The design of some processors has become complicated enough to be difficult to fully test, and this has caused problems at large cloud providers. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out. Artificial intelligence has been demonstrated in chip design for creating chip layouts, that is, determining the locations of standard cells and macro blocks on a chip. Fundamentals Integrated circuit design involves the creation of electronic components, such as transistors, resistors, capacitors and the interconnection of these components onto a piece of semiconductor, typically silicon. A method to isolate the individual components formed in the substrate is necessary since the substrate silicon is conductive and often forms an active region of the individual components. The two common methods are p-n junction isolation and dielectric isolation. Attention must be given to power dissipation of transistors and interconnect resistances and current density of the interconnect, contacts and vias, since ICs contain very tiny devices compared to discrete components, where such concerns are less of an issue. Electromigration in metallic interconnect and ESD damage to the tiny components are also of concern. Finally, the physical layout of certain circuit subblocks is typically critical, in order to achieve the desired speed of operation, to segregate noisy portions of an IC from quiet portions, to balance the effects of heat generation across the IC, or to facilitate the placement of connections to circuitry outside the IC.
Design flow A typical IC design cycle involves several steps:
System specification
 Feasibility study and die size estimate
 Function analysis
Architectural or system-level design
Logic design
 Analogue design, simulation, and layout
 Digital design and simulation
 System simulation, emulation, and verification
Circuit design
 Digital design synthesis
 Design for testing and automatic test pattern generation
 Design for manufacturability
Physical design
 Floorplanning
 Place and route
 Parasitic extraction
Physical verification and signoff
 Static timing
 Co-simulation and timing
Mask data preparation (layout post-processing)
 Chip finishing with tape out
 Reticle layout
 Layout-to-mask preparation
Reticle fabrication
Photomask fabrication
Wafer fabrication
Packaging
Die test
 Post-silicon validation and integration
 Device characterization
 Tweak (if necessary)
Chip deployment
 Datasheet generation (usually a PDF file)
 Ramp up
 Production
 Yield analysis / warranty analysis / reliability
 Failure analysis on any returns
 Plan for next generation chip using production information if possible
Focused ion beams may be used during chip development to establish new connections in a chip. Summary Roughly speaking, digital IC design can be divided into three parts. Electronic system-level design: This step creates the user functional specification. The user may use a variety of languages and tools to create this description. Examples include a C/C++ model, VHDL, SystemC, SystemVerilog Transaction Level Models, Simulink, and MATLAB. RTL design: This step converts the user specification (what the user wants the chip to do) into a register transfer level (RTL) description. The RTL describes the exact behavior of the digital circuits on the chip, as well as the interconnections to inputs and outputs. Physical circuit design: This step takes the RTL, and a library of available logic gates (standard cell library), and creates a chip design. This step involves the use of an IC layout editor, layout and floor planning, figuring out which gates to use, defining places for them, and wiring (clock timing synthesis, routing) them together. Note that the second step, RTL design, is responsible for the chip doing the right thing. The third step, physical design, does not affect the functionality at all (if done correctly) but determines how fast the chip operates and how much it costs. A standard cell normally represents a single logic gate, a diode or simple logic components such as flip-flops, or logic gates with multiple inputs. The use of standard cells allows the chip's design to be split into logical and physical levels. A fabless company would normally only work on the logical design of a chip, determining how cells are connected and the functionality of the chip, while following design rules from the foundry the chip will be made in; the physical design of the chip, the cells themselves, is normally done by the foundry, and it comprises the physics of the transistor devices and how they are connected to form a logic gate. Standard cells allow chips to be designed and modified more quickly to respond to market demands, but this comes at the cost of lower transistor density in the chip and thus larger die sizes. Foundries supply libraries of standard cells to fabless companies, for design purposes and to allow manufacturing of their designs using the foundry's facilities.
A process design kit (PDK) may be provided by the foundry; it may include the standard cell library as well as the specifications of the cells, and tools to verify the fabless company's design against the design rules specified by the foundry as well as to simulate it using the foundry's cells. PDKs may be provided under non-disclosure agreements. Macros/Macrocells/Macro blocks, Macrocell arrays and IP blocks have greater functionality than standard cells, and are used similarly. There are soft macros and hard macros. Standard cells are usually placed following standard cell rows. Design lifecycle The integrated circuit (IC) development process starts with defining product requirements, progresses through architectural definition, implementation, bringup and finally production. The various phases of the integrated circuit development process are described below. Although the phases are presented here in a straightforward fashion, in reality there is iteration and these steps may occur multiple times. Requirements Before an architecture can be defined some high-level product goals must be defined. The requirements are usually generated by a cross-functional team that addresses market opportunity, customer needs, feasibility, and much more. This phase should result in a product requirements document. Architecture The architecture defines the fundamental structure, goals and principles of the product. It defines high-level concepts and the intrinsic value proposition of the product. Architecture teams take into account many variables and interface with many groups. People creating the architecture generally have a significant amount of experience dealing with systems in the area for which the architecture is being created. The work product of the architecture phase is an architectural specification. Micro-architecture The micro-architecture is a step closer to the hardware. It implements the architecture and defines specific mechanisms and structures for achieving that implementation. The result of the micro-architecture phase is a micro-architecture specification which describes the methods used to implement the architecture. Implementation In the implementation phase the design itself is created using the micro-architectural specification as the starting point. This involves low-level definition and partitioning, writing code, entering schematics and verification. This phase ends with a design reaching tapeout. Bringup After a design is created, taped out and manufactured, actual hardware, 'first silicon', is received and taken into the lab, where it goes through bringup. Bringup is the process of powering, testing and characterizing the design in the lab. Numerous tests are performed, starting with very simple tests, such as ensuring that the device will power on, and moving to much more complicated tests that try to stress the part in various ways. The result of the bringup phase is documentation of characterization data (how well the part performs to spec) and errata (unexpected behavior). Productization Productization is the task of taking a design from engineering into mass-production manufacturing. Although a design may have successfully met the specifications of the product in the lab during the bringup phase, there are many challenges that product engineers face when trying to mass-produce those designs. The IC must be ramped up to production volumes with an acceptable yield. The goal of the productization phase is to reach mass production volumes at an acceptable cost.
Sustaining Once a design is mature and has reached mass production, it must be sustained. The process must be continually monitored and problems dealt with quickly to avoid a significant impact on production volumes. The goal of sustaining is to maintain production volumes and continually reduce costs until the product reaches end of life. Design process Microarchitecture and system-level design The initial chip design process begins with system-level design and microarchitecture planning. Within IC design companies, management and often analytics will draft a proposal for a design team to start the design of a new chip to fit into an industry segment. Upper-level designers will meet at this stage to decide how the chip will operate functionally. This step is where an IC's functionality and design are decided. IC designers will map out the functional requirements, verification testbenches, and testing methodologies for the whole project, and will then turn the preliminary design into a system-level specification that can be simulated with simple models using languages like C++ and MATLAB and emulation tools. For pure and new designs, the system design stage is where an instruction set and operation is planned out, and in most chips existing instruction sets are modified for newer functionality. Design at this stage is often captured in statements such as "encodes in the MP3 format" or "implements IEEE floating-point arithmetic". At later stages in the design process, each of these innocent-looking statements expands to hundreds of pages of textual documentation. RTL design Upon agreement of a system design, RTL designers then implement the functional models in a hardware description language like Verilog, SystemVerilog, or VHDL. Using digital design components like adders, shifters, and state machines as well as computer architecture concepts like pipelining, superscalar execution, and branch prediction, RTL designers will break a functional description into hardware models of components on the chip working together. Each of the simple statements described in the system design can easily turn into thousands of lines of RTL code, which is why it is extremely difficult to verify that the RTL will do the right thing in all the possible cases that the user may throw at it. To reduce the number of functionality bugs, a separate hardware verification group will take the RTL and design testbenches and systems to check that the RTL actually is performing the same steps under many different conditions, classified as the domain of functional verification (a toy illustration of this reference-model checking appears at the end of this article). Many techniques are used, none of them perfect but all of them useful – extensive logic simulation, formal methods, hardware emulation, lint-like code checking, code coverage, and so on. Verification such as that done by emulators can be carried out on FPGAs or special processors, and emulation has replaced simulation in such flows. Simulation was initially done by simulating the logic gates in chips, but later the RTL was simulated instead. Simulation is still used when creating analog chip designs. Prototyping platforms, built using FPGAs, are used to run software on prototypes of the chip design while it is under development, but they are slower to iterate on or modify and cannot be used to visualize hardware signals as they would appear in the finished design. A tiny error here can make the whole chip useless, or worse. The famous Pentium FDIV bug caused the results of a division to be wrong by at most 61 parts per million, in cases that occurred very infrequently.
No one even noticed it until the chip had been in production for months. Yet Intel was forced to offer to replace, for free, every chip sold until they could fix the bug, at a cost of $475 million (US). Physical design The RTL is only a behavioral model of the functionality the chip is supposed to provide. It has no link to the physical aspects of how the chip would operate in real life on the materials, physics, and electrical engineering side. For this reason, the next step in the IC design process, the physical design stage, is to map the RTL into actual geometric representations of all the electronic devices, such as capacitors, resistors, logic gates, and transistors, that will go on the chip. The main steps of physical design are listed below. In practice there is not a straightforward progression: considerable iteration is required to ensure all objectives are met simultaneously. This is a difficult problem in its own right, called design closure. Logic synthesis: The RTL is mapped into a gate-level netlist in the target technology of the chip. Floorplanning: The RTL of the chip is assigned to gross regions of the chip, input/output (I/O) pins are assigned and large objects (arrays, cores, etc.) are placed. Placement: The gates in the netlist are assigned to nonoverlapping locations on the die area. Logic/placement refinement: Iterative logical and placement transformations to close performance and power constraints. Clock insertion: Clock signal wiring (commonly, clock trees) is introduced into the design. Routing: The wires that connect the gates in the netlist are added. Postwiring optimization: Performance (timing closure), noise (signal integrity), and yield (design for manufacturability) violations are removed. Design for manufacturability: The design is modified, where possible, to make it as easy and efficient as possible to produce. This is achieved by adding extra vias or adding dummy metal/diffusion/poly layers wherever possible while complying with the design rules set by the foundry. Final checking: Since errors are expensive, time-consuming and hard to spot, extensive error checking is the rule, making sure the mapping to logic was done correctly, and checking that the manufacturing rules were followed faithfully. Chip finishing with tapeout and mask generation: the design data is turned into photomasks in mask data preparation. Analog design Before the advent of the microprocessor and software-based design tools, analog ICs were designed using hand calculations and process kit parts. These ICs were low-complexity circuits, for example, op-amps, usually involving no more than ten transistors and few connections. An iterative trial-and-error process and "overengineering" of device size was often necessary to achieve a manufacturable IC. Reuse of proven designs allowed progressively more complicated ICs to be built upon prior knowledge. When inexpensive computer processing became available in the 1970s, computer programs were written to simulate circuit designs with greater accuracy than was practical by hand calculation. The first circuit simulator for analog ICs was called SPICE (Simulation Program with Integrated Circuits Emphasis). Computerized circuit simulation tools enable greater IC design complexity than hand calculations can achieve, making the design of analog ASICs practical.
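To give a flavor of what such simulators automate, here is a toy nodal-analysis calculation in Python for a two-resistor voltage divider. It is only a sketch of the underlying idea (assemble a conductance matrix and solve G·V = I), with arbitrary example values, not a description of how SPICE itself is implemented:

```python
import numpy as np

# Circuit: Vsrc -- R1 -- node1 -- R2 -- ground (example values)
Vsrc, R1, R2 = 5.0, 1e3, 2e3

# Kirchhoff's current law at node 1: (V1 - Vsrc)/R1 + V1/R2 = 0.
# Rearranged into G * V = I form, with one unknown node voltage:
G = np.array([[1 / R1 + 1 / R2]])   # conductance matrix
I = np.array([Vsrc / R1])           # source current injected into node 1

V1 = np.linalg.solve(G, I)[0]
print(f"Node voltage: {V1:.3f} V")  # 3.333 V, matching R2/(R1+R2) * Vsrc
```

A full simulator does the same thing for thousands of nodes, with nonlinear device models and iteration, which is exactly the complexity that made hand calculation impractical.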
As many functional constraints must be considered in analog design, manual design is still widespread today, in contrast to digital design which is highly automated, including automated routing and synthesis. As a result, modern design flows for analog circuits are characterized by two different design styles – top-down and bottom-up. The top-down design style makes use of optimization-based tools similar to conventional digital flows. Bottom-up procedures re-use “expert knowledge” with the result of solutions previously conceived and captured in a procedural description, imitating an expert's decisions. Examples include cell generators, such as PCells. Coping with variability A challenge most critical to analog IC design involves the variability of the individual devices built on the semiconductor chip. Unlike board-level circuit design, which permits the designer to select devices that have each been tested and binned according to value, the device values on an IC can vary widely, and this variation is uncontrollable by the designer. For example, some IC resistors can vary ±20% and the β of an integrated BJT can vary from 20 to 100. In the latest CMOS processes, the β of vertical PNP transistors can even go below 1. To add to the design challenge, device properties often vary from wafer to wafer. Device properties can even vary significantly across each individual IC due to doping gradients. The underlying cause of this variability is that many semiconductor devices are highly sensitive to uncontrollable random variances in the process. Slight changes to the amount of diffusion time, uneven doping levels, etc. can have large effects on device properties. Some design techniques used to reduce the effects of the device variation are: Using the ratios of resistors, which do match closely, rather than absolute resistor values. Using devices with matched geometrical shapes so they have matched variations. Making devices large so that statistical variations become an insignificant fraction of the overall device property. Segmenting large devices, such as resistors, into parts and interweaving them to cancel variations. Using common centroid device layout to cancel variations in devices which must match closely (such as the transistor differential pair of an op amp). Vendors The three largest companies selling electronic design automation tools are Synopsys, Cadence, and Mentor Graphics. See also Integrated circuit layout design protection Electronic circuit design Electronic design automation Power network design (IC) Processor design IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Multi-project wafer service Standard cell References Further reading Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, A survey of the field of electronic design automation, one of the main enablers of modern IC design. Integrated circuits Electronic design Electronic engineering
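As promised in the RTL design section above, here is a toy sketch of functional verification by comparison against a reference ("golden") model, written in Python for illustration; real verification uses HDL testbenches, constrained-random stimulus, coverage, and formal tools, and the ripple-carry "DUT" below is a hypothetical stand-in for an RTL block:

```python
import random

def reference_add(a: int, b: int) -> int:
    """Golden model: 8-bit wrap-around addition."""
    return (a + b) & 0xFF

def dut_add(a: int, b: int) -> int:
    """Stand-in for the implementation under test: a bit-by-bit
    ripple-carry adder, the kind of structure RTL would describe."""
    result = 0
    carry = 0
    for i in range(8):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry
        carry = (abit & bbit) | (carry & (abit ^ bbit))
        result |= s << i
    return result

random.seed(0)
for _ in range(10_000):                    # constrained-random stimulus
    a, b = random.randrange(256), random.randrange(256)
    assert dut_add(a, b) == reference_add(a, b), (a, b)
print("10,000 random vectors passed")
```

The essential pattern (independent reference model, randomized inputs, automatic checking) is the same one that catches bugs like the FDIV error described earlier, provided the stimulus actually reaches the rare corner cases.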
Integrated circuit design
[ "Technology", "Engineering" ]
3,865
[ "Computer engineering", "Electronic design", "Electronic engineering", "Electrical engineering", "Design", "Integrated circuits" ]
2,951,168
https://en.wikipedia.org/wiki/Moving%20parts
Machines include both fixed and moving parts. The moving parts have controlled and constrained motions. Moving parts are machine components excluding any moving fluids, such as fuel, coolant or hydraulic fluid. Moving parts also do not include any mechanical locks, switches, nuts and bolts, screw caps for bottles etc. A system with no moving parts is described as "solid state". Mechanical efficiency and wear The number of moving parts in a machine is a factor in its mechanical efficiency. The greater the number of moving parts, the greater the amount of energy lost to heat by friction between those parts. For example, in a modern automobile engine, roughly 7% of the total power obtained from burning the engine's fuel is lost to friction between the engine's moving parts. Conversely, the fewer the number of moving parts, the greater the efficiency. Machines with no moving parts at all can be very efficient. An electrical transformer, for example, has no moving parts, and its mechanical efficiency is generally above the 90% mark. (The remaining power losses in a transformer are from other causes, including loss to electrical resistance in the copper windings and hysteresis loss and eddy current loss in the iron core.) Two means are used for overcoming the efficiency losses caused by friction between moving parts. First, moving parts are lubricated. Second, the moving parts of a machine are designed so that they have a small amount of contact with one another. The latter, in turn, comprises two approaches. A machine can be reduced in size, thereby quite simply reducing the areas of the moving parts that rub against one another; and the designs of the individual components can be modified, changing their shapes and structures to reduce or avoid contact with one another. Lubrication also reduces wear, as does the use of suitable materials. As moving parts wear out, this can affect the precision of the machine. Designers thus have to design moving parts with this factor in mind, ensuring that if precision over the lifetime of the machine is paramount, wear is accounted for and, if possible, minimized. (A simple example of this is the design of a single-wheel wheelbarrow. A design where the axle is fixed to the barrow arms and the wheel rotates around it is prone to wear which quickly causes wobble, whereas a rotating axle that is attached to the wheel and that rotates upon bearings in the arms does not start to wobble as the axle wears through the arms.) The scientific and engineering discipline that deals with the lubrication, friction, and wear of moving parts is tribology, an interdisciplinary field that encompasses materials science, mechanical engineering, chemistry, and mechanics. Failure As mentioned, wear is a concern for moving parts in a machine. Other concerns that lead to failure include corrosion, erosion, thermal stress and heat generation, vibration, fatigue loading, and cavitation. Fatigue is related to large inertial forces, and is affected by the type of motion that a moving part has. A moving part that has a uniform rotation motion is subject to less fatigue than a moving part that oscillates back and forth. Vibration leads to failure when the forcing frequency of the machine's operation matches a resonant frequency of one or more moving parts, such as rotating shafts. Designers avoid these problems by calculating the natural frequencies of the parts at design time, and altering the parts to limit or eliminate such resonance.
Yet further factors that can lead to failure of moving parts include failures in the cooling and lubrication systems of a machine. One final factor, particular to moving parts, is kinetic energy. The sudden release of the kinetic energy of the moving parts of a machine causes overstress failures if a moving part is impeded in its motion by a foreign object. For example, consider a stone caught on the blades of a fan or propeller, or even the proverbial "spanner/monkey wrench in the works". (See foreign object damage for further discussion of this.) Kinetic energy of the moving parts of a machine The kinetic energy of a machine is the sum of the kinetic energies of its individual moving parts. A machine with moving parts can, mathematically, be treated as a connected system of bodies, whose kinetic energies are simply summed. The individual kinetic energies are determined from the kinetic energies of the moving parts' translations and rotations about their axes. The kinetic energy of rotation of the moving parts can be determined by noting that every such system of moving parts can be reduced to a collection of connected bodies rotating about an instantaneous axis, which form either a ring or a portion of an ideal ring, of radius k rotating at n revolutions per second. This ideal ring is known as the equivalent flywheel, whose radius k is the radius of gyration. The integral of the squares of the radii r of all the portions of the ring with respect to their mass m, written ∫r² dm (also expressible, if the ring is modelled as a collection of discrete particles of mass m_i at radii r_i, as the sum Σ m_i r_i²), is the ring's moment of inertia, denoted I. The rotational kinetic energy of the whole system of moving parts is (1/2)Iω², where ω is the angular velocity of the moving parts about the same axis as the moment of inertia. The kinetic energy of translation of the moving parts is (1/2)mv², where m is the total mass and v is the magnitude of the velocity. This gives the formula for the total kinetic energy of the moving parts of a machine as E = (1/2)mv² + (1/2)Iω². Representing moving parts in engineering diagrams In technical drawing, moving parts are, conventionally, designated by drawing the solid outline of the part in its main or initial position, with an added outline of the part in a secondary, moved, position drawn as a phantom line (a line comprising "dot-dot-dash" sequences of two short and one long line segments). These conventions are enshrined in several standards from the American National Standards Institute and the American Society of Mechanical Engineers, including ASME Y14.2M published in 1979. In recent decades, the use of animation has become more practical and widespread in technical and engineering diagrams for the illustration of the motions of moving parts. Animation represents moving parts more clearly and enables them and their motions to be more readily visualized. Furthermore, computer-aided design tools allow the motions of moving parts to be simulated, allowing machine designers to determine, for example, whether the moving parts in a given design would obstruct one another's motion or collide by simple visual inspection of the (animated) computer model rather than by the designer performing a numerical analysis directly. See also Kinetic art — sculpture that contains moving parts Movement (clockwork) — the specific name for the moving parts of a clock or watch References Further reading Machinery
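As a quick numerical illustration of the total kinetic energy formula above, the following sketch sums (1/2)mv² + (1/2)Iω² over a small set of moving parts. The two-part machine, its masses, speeds, and moments of inertia are invented values for the example.

def total_kinetic_energy(parts):
    # Sum (1/2) m v^2 + (1/2) I w^2 over a list of moving parts.
    # Each part is a dict with mass (kg), speed (m/s), moment of inertia (kg*m^2)
    # and angular velocity (rad/s); unused terms are simply zero.
    energy = 0.0
    for p in parts:
        energy += 0.5 * p["mass_kg"] * p["speed_m_s"] ** 2            # translation
        energy += 0.5 * p["inertia_kg_m2"] * p["omega_rad_s"] ** 2    # rotation
    return energy

# Hypothetical two-part machine: a sliding carriage and a flywheel.
parts = [
    {"mass_kg": 8.0,  "speed_m_s": 1.5, "inertia_kg_m2": 0.0,  "omega_rad_s": 0.0},
    {"mass_kg": 20.0, "speed_m_s": 0.0, "inertia_kg_m2": 0.45, "omega_rad_s": 150.0},
]
print(f"total kinetic energy: {total_kinetic_energy(parts):.1f} J")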
Moving parts
[ "Physics", "Technology", "Engineering" ]
1,376
[ "Physical systems", "Machines", "Machinery", "Mechanical engineering" ]
2,951,275
https://en.wikipedia.org/wiki/Session-based%20testing
Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. The method can also be used in conjunction with scenario testing. Session-based testing was developed in 2000 by Jonathan and James Marcus Bach. Session-based testing can be used to introduce measurement and control to an immature test process and can form a foundation for significant improvements in productivity and error detection. Session-based testing can offer benefits when formal requirements are absent, incomplete, or changing rapidly. Elements of session-based testing Mission The mission in Session-Based Test Management identifies the purpose of the session, helping to focus the session while still allowing for exploration of the system under test. According to Jon Bach, one of the co-founders of the methodology, the mission explains "what we are testing or what problems we are looking for." Charter A charter is a goal or agenda for a test session. Charters are created by the test team prior to the start of testing, but they may be added or changed at any time. Often charters are created from a specification, test plan, or by examining results from previous sessions. Session An uninterrupted period of time spent testing, ideally lasting one to two hours. Each session is focused on a charter, but testers can also explore new opportunities or issues during this time. The tester creates and executes tests based on ideas, heuristics, or whatever framework guides them, and records their progress. This might be done through written notes, video capture tools, or whatever other method the tester deems appropriate. Session report The session report records the test session. Usually this includes: Charter. Area tested. Detailed notes on how testing was conducted. A list of any bugs found. A list of issues (open questions, product or project concerns). Any files the tester used or created to support their testing. Percentage of the session spent on the charter vs. investigating new opportunities. Percentage of the session spent on: Testing - creating and executing tests. Bug investigation / reporting. Session setup or other non-testing activities. Session start time and duration. Debrief A debrief is a short discussion between the manager and tester (or testers) about the session report. Jonathan Bach uses the acronym PROOF to help structure his debriefing. PROOF stands for: Past. What happened during the session? Results. What was achieved during the session? Obstacles. What got in the way of good testing? Outlook. What still needs to be done? Feelings. How does the tester feel about all this? Parsing results With a standardized session report, software tools can be used to parse and store the results as aggregate data for reporting and metrics. This allows reporting on the number of sessions per area or a breakdown of time spent on testing, bug investigation, and setup / other activities. Planning Testers using session-based testing can adjust their testing daily to fit the needs of the project. Charters can be added or dropped over time as tests are executed and/or requirements change. See also Software testing Test case Test script Exploratory testing Scenario testing References External links Software testing
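As noted in the "Parsing results" section above, standardized session reports lend themselves to simple tooling. The sketch below is a minimal illustration of computing such metrics; the report structure, field names, and the three-way split of session time are hypothetical stand-ins for whatever format a team actually uses.

from collections import defaultdict

# Hypothetical, already-parsed session reports; the field names are illustrative.
sessions = [
    {"area": "checkout", "duration_min": 90,  "testing_pct": 60,
     "bug_investigation_pct": 25, "setup_pct": 15, "bugs": 3},
    {"area": "checkout", "duration_min": 60,  "testing_pct": 70,
     "bug_investigation_pct": 20, "setup_pct": 10, "bugs": 1},
    {"area": "search",   "duration_min": 120, "testing_pct": 50,
     "bug_investigation_pct": 35, "setup_pct": 15, "bugs": 5},
]

per_area = defaultdict(lambda: {"sessions": 0, "minutes": 0, "bugs": 0})
time_breakdown = {"testing": 0.0, "bug_investigation": 0.0, "setup": 0.0}

for s in sessions:
    stats = per_area[s["area"]]
    stats["sessions"] += 1
    stats["minutes"] += s["duration_min"]
    stats["bugs"] += s["bugs"]
    time_breakdown["testing"] += s["duration_min"] * s["testing_pct"] / 100
    time_breakdown["bug_investigation"] += s["duration_min"] * s["bug_investigation_pct"] / 100
    time_breakdown["setup"] += s["duration_min"] * s["setup_pct"] / 100

for area, stats in sorted(per_area.items()):
    print(area, stats)
print("time breakdown (minutes):", {k: round(v) for k, v in time_breakdown.items()})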
Session-based testing
[ "Engineering" ]
662
[ "Software engineering", "Software testing" ]
2,951,323
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold%20theorem
In mathematics, the Cramér–Wold theorem or the Cramér–Wold device is a theorem in measure theory which states that a Borel probability measure on R^k is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold, who published the result in 1936. Let X_n = (X_n1, ..., X_nk) and X = (X_1, ..., X_k) be random vectors of dimension k. Then X_n converges in distribution to X if and only if t_1 X_n1 + ... + t_k X_nk converges in distribution to t_1 X_1 + ... + t_k X_k for each fixed vector (t_1, ..., t_k) in R^k; that is, if every fixed linear combination of the coordinates of X_n converges in distribution to the corresponding linear combination of the coordinates of X. If X_n takes values in the nonnegative orthant R_+^k, then the statement is also true with the vectors t restricted to R_+^k. References Theorems in measure theory Probability theorems Convergence (mathematics)
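The following sketch illustrates how the Cramér–Wold device is typically applied in practice: joint (multivariate) convergence of standardized sample means is probed by looking at one-dimensional projections t·X_n. It is a simulation-based illustration using NumPy, not part of the theorem itself; the underlying distribution and the projection directions are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)

def standardized_means(n, reps=5000):
    # reps realizations of sqrt(n) * (sample mean - true mean) for an
    # illustrative dependent 2-d distribution built from exponentials.
    u = rng.exponential(1.0, size=(reps, n))
    v = 0.5 * u + rng.exponential(1.0, size=(reps, n))
    x = np.stack([u, v], axis=-1)              # shape (reps, n, 2)
    true_mean = np.array([1.0, 1.5])
    return np.sqrt(n) * (x.mean(axis=1) - true_mean)

# True covariance of one observation: Var(u)=1, Var(v)=1.25, Cov(u,v)=0.5.
sigma = np.array([[1.0, 0.5], [0.5, 1.25]])

samples = standardized_means(n=400)
for t in (np.array([1.0, 0.0]), np.array([1.0, -2.0]), np.array([0.3, 0.7])):
    proj = samples @ t                         # one-dimensional projections t . X_n
    # By the multivariate CLT, proj should be approximately N(0, t' Sigma t).
    print(f"t={t}: sample variance {proj.var():.3f} vs t' Sigma t {t @ sigma @ t:.3f}")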
Cramér–Wold theorem
[ "Mathematics" ]
155
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Functions and mappings", "Convergence (mathematics)", "Mathematical analysis stubs", "Mathematical structures", "Theorems in measure theory", "Mathematical objects", "Theorems in proba...
2,951,380
https://en.wikipedia.org/wiki/Exploratory%20testing
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project." While the software is being tested, the tester learns things that together with experience and creativity generates new good tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time. History Exploratory testing has always been performed by skilled testers. In the early 1990s, ad hoc was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory" seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software and expanded upon in Lessons Learned in Software Testing. Exploratory testing can be as disciplined as any other intellectual activity. Description Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill of inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be. To further explain, comparison can be made of freestyle exploratory testing to its antithesis scripted testing. In the latter activity test cases are designed in advance. This includes both the individual steps and the expected results. These tests are later performed by a tester who compares the actual result with the expected. When performing exploratory testing, expectations are open. Some results may be predicted and expected; others may not. The tester configures, operates, observes, and evaluates the product and its behaviour, critically investigating the result, and reporting information that seems likely to be a bug (which threatens the value of the product to some person) or an issue (which threatens the quality of the testing effort). In reality, testing almost always is a combination of exploratory and scripted testing, but with a tendency towards either one, depending on context. According to Kaner and James Marcus Bach, exploratory testing is more a mindset or "...a way of thinking about testing" than a methodology. They also say that it crosses a continuum from slightly exploratory (slightly ambiguous or vaguely scripted testing) to highly exploratory (freestyle exploratory testing). The documentation of exploratory testing ranges from documenting all tests performed to just documenting the bugs. 
During pair testing, two persons create test cases together; one performs them, and the other documents. Session-based testing is a method specifically designed to make exploratory testing auditable and measurable on a wider scale. Exploratory testers often use tools, including screen capture or video tools as a record of the exploratory session, or tools to quickly help generate situations of interest, e.g. James Bach's Perlclip. Benefits and drawbacks The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at execution time, the approach tends to be more intellectually stimulating than execution of scripted tests. Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing in on, or moving on to, exploring a more target-rich environment. This also accelerates bug detection when used intelligently. Another benefit is that, after initial testing, most bugs are discovered by some sort of exploratory testing. This can be demonstrated logically by stating, "Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored." Disadvantages are that tests invented and performed on the fly can't be reviewed in advance (and thereby prevent errors in code and test cases), and that it can be difficult to show exactly which tests have been run. Freestyle exploratory test ideas, when revisited, are unlikely to be performed in exactly the same manner, which can be an advantage if it is important to find new errors, or a disadvantage if it is more important to repeat specific details of the earlier tests. This can be controlled with specific instruction to the tester, or by preparing automated tests where feasible, appropriate, and necessary, and ideally as close to the unit level as possible. Scientific studies A replicated experiment has shown that while scripted and exploratory testing result in similar defect detection effectiveness (the total number of defects found), exploratory testing results in higher efficiency (the number of defects per time unit), as no effort is spent on pre-designing the test cases. An observational study of exploratory testers proposed that the use of knowledge about the domain, the system under test, and customers is an important factor explaining the effectiveness of exploratory testing. A case study of three companies found that the ability to provide rapid feedback was a benefit of exploratory testing, while managing test coverage was identified as a shortcoming. A survey found that exploratory testing is also used in critical domains and that the exploratory testing approach places high demands on the person performing the testing. See also Ad hoc testing Spike testing References External links James Bach, Exploratory Testing Explained Cem Kaner, James Bach, The Nature of Exploratory Testing, 2004 Cem Kaner, James Bach, The Seven Basic Principles of the Context-Driven School Jonathan Kohl, Exploratory Testing: Finding the Music of Software Investigation, Kohl Concepts Inc., 2007 Software testing
Exploratory testing
[ "Engineering" ]
1,349
[ "Software engineering", "Software testing" ]
2,951,410
https://en.wikipedia.org/wiki/Notarikon
Notarikon () is a Talmudic method of interpreting Biblical words as acronyms. The same term may also be used for a Kabbalistic method of using the acronym of a Biblical verse as a name for God. Another variation uses the first and last letters, or the two middle letters of a word, to form another word. The word "notarikon" is borrowed from the Greek language (νοταρικόν), and was derived from the Latin word "notarius" meaning "shorthand writer." Notarikon is one of the three methods used by the Kabbalists (the other two are gematria and temurah) to rearrange words and sentences. These methods were used to derive the esoteric substratum and deeper spiritual meaning of the words in the Bible. Notarikon was also used in alchemy. Usage in the Talmud Until the end of the Talmudic period, notarikon is understood in Judaism as a method of Scripture interpretation by which the letters of individual words in the Bible text indicate the first letters of independent words. Usage in Kabbalah A common usage of notarikon in the practice of Kabbalah, is to form sacred names of God derived from religious or biblical verses. AGLA, an acronym for Atah Gibor Le-olam Adonai, translated, "You, O Lord, are mighty forever," is one of the most famous examples of notarikon. Dozens of examples are found in the Berit Menuchah, as is referenced in the following passage: The Sefer Gematriot of Judah ben Samuel of Regensburg is another book where many examples of notarikon for use on talismans are given from Biblical verses. See also AGLA, notarikon for Atah Gibor Le-olam Adonai Bible code, a purported set of secret messages encoded within the Torah. Biblical and Talmudic units of measurement Chol HaMoed, the intermediate days during Passover and Sukkot. Chronology of the Bible Counting of the Omer Gematria, Jewish system of assigning numerical value to a word or phrase. Hebrew acronyms Hebrew calendar Hebrew numerals Jewish and Israeli holidays 2000–2050 Lag BaOmer, 33rd day of counting the Omer. Sephirot, the 10 attributes/emanations found in Kabbalah. Significance of numbers in Judaism Weekly Torah portion, division of the Torah into 54 portions. References Alchemical processes Hebrew words and phrases History of cryptography Kabbalistic words and phrases Greek words and phrases Language and mysticism
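In its simplest first-letter form, the method described above amounts to taking the initial letter of each word in a phrase, as in AGLA from "Atah Gibor Le-olam Adonai". The following trivial sketch operates on the transliterated phrase purely for illustration; actual notarikon is performed on the Hebrew letters, and the more elaborate forms (first and last letters, middle letters) are not covered here.

def notarikon_initials(phrase):
    # Return the initial letter of each word, the simplest notarikon form.
    return "".join(word[0].upper() for word in phrase.split() if word)

print(notarikon_initials("Atah Gibor Le-olam Adonai"))  # prints AGLA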
Notarikon
[ "Chemistry" ]
536
[ "Alchemical processes" ]
2,951,506
https://en.wikipedia.org/wiki/Absorption%20%28acoustics%29
In acoustics, absorption refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. Part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. The energy transformed into heat is said to have been 'lost'. When sound from a loudspeaker collides with the walls of a room, part of the sound's energy is reflected back into the room, part is transmitted through the walls, and part is absorbed into the walls. Just as the acoustic energy was transmitted through the air as pressure differentials (or deformations), the acoustic energy travels through the material which makes up the wall in the same manner. Deformation causes mechanical losses via conversion of part of the sound energy into heat, resulting in acoustic attenuation, mostly due to the wall's viscosity. Similar attenuation mechanisms apply for the air and any other medium through which sound travels. The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction. Acoustic absorption is of particular interest in soundproofing. Soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible converting it into heat or transmitting it away from a certain location. In general, soft, pliable, or porous materials (like cloths) serve as good acoustic insulators - absorbing most sound, whereas dense, hard, impenetrable materials (such as metals) reflect most. How well a room absorbs sound is quantified by the effective absorption area of the walls, also named total absorption area. This is calculated using its dimensions and the absorption coefficients of the walls. The total absorption is expressed in Sabins and is useful in, for instance, determining the reverberation time of auditoria. Absorption coefficients can be measured using a reverberation room, which is the opposite of an anechoic chamber (see below). Absorption coefficients of common materials Applications Acoustic absorption is critical in areas such as: Soundproofing Sound recording and reproduction Loudspeaker design Acoustic transmission lines Room acoustics Architectural acoustics Sonar Noise Barrier Walls Anechoic chamber An acoustic anechoic chamber is a room designed to absorb as much sound as possible. The walls consist of a number of baffles with highly absorptive material arranged in such a way that the fraction of sound they do reflect is directed towards another baffle instead of back into the room. This makes the chamber almost devoid of echos which is useful for measuring the sound pressure level of a source and for various other experiments and measurements. Anechoic chambers are expensive for several reasons and are therefore not common. They must be isolated from outside influences (e.g., planes, trains, automobiles, snowmobiles, elevators, pumps, ...; indeed any source of sound which may interfere with measurements inside the chamber) and they must be physically large. The first, environmental isolation, requires in most cases specially constructed, nearly always massive, and likewise thick, walls, floors, and ceilings. Such chambers are often built as spring supported isolated rooms within a larger building. 
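The total-absorption and reverberation-time relationship mentioned above (before the discussion of anechoic chambers) is commonly estimated with Sabine's approximation, RT60 = 0.161 V / A, with the volume V in cubic metres and the total absorption A in square-metre sabins. The sketch below uses an invented room and placeholder absorption coefficients, not measured data.

def total_absorption(surfaces):
    # Sum of surface area times absorption coefficient, in square-metre sabins.
    return sum(area * alpha for area, alpha in surfaces)

def sabine_rt60(volume_m3, absorption_sabins):
    # Sabine's approximation for the reverberation time, in seconds.
    return 0.161 * volume_m3 / absorption_sabins

# Illustrative 6 m x 5 m x 3 m room; (area in m^2, absorption coefficient) pairs.
length, width, height = 6.0, 5.0, 3.0
surfaces = [
    (length * width, 0.30),                   # floor, e.g. thin carpet
    (length * width, 0.10),                   # plasterboard ceiling
    (2 * (length + width) * height, 0.05),    # painted walls
]
absorption = total_absorption(surfaces)
rt60 = sabine_rt60(length * width * height, absorption)
print(f"total absorption: {absorption:.1f} sabins, RT60 about {rt60:.2f} s")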
The National Research Council in Canada has a modern anechoic chamber, and has posted a video on the Web, noting these as well as other constructional details. Doors must be specially made, sealing for them must be acoustically complete (no leaks around the edges), ventilation (if any) carefully managed, and lighting chosen to be silent. The second requirement follows in part from the first and from the necessity of preventing reverberation inside the room from, say, a sound source being tested. Preventing echoes is almost always done with absorptive foam wedges on walls, floors and ceilings, and if they are to be effective at low frequencies, these must be physically large; the lower the frequencies to be absorbed, the larger they must be. An anechoic chamber must therefore be large to accommodate those absorbers and isolation schemes, but still allow for space for experimental apparatus and units under test. Electrical and mechanical analogy The energy dissipated within a medium as sound travels through it is analogous to the energy dissipated in electrical resistors or that dissipated in mechanical dampers for mechanical motion transmission systems. All three are equivalent to the resistive part of a system of resistive and reactive elements. The resistive elements dissipate energy (irreversibly into heat) and the reactive elements store and release energy (reversibly, neglecting small losses). The reactive parts of an acoustic medium are determined by its bulk modulus and its density, analogous, respectively, to an electrical capacitor and an electrical inductor, or to a mechanical spring and the mass attached to it. Note that since dissipation relies solely on the resistive element, it is independent of frequency. In practice, however, the resistive element varies with frequency. For instance, vibrations of most materials change their physical structure and so their physical properties; the result is a change in the 'resistance' equivalence. Additionally, the cycle of compression and rarefaction of pressure waves in most materials exhibits hysteresis, which is a function of frequency, so the total amount of energy dissipated over each compression and subsequent rarefaction changes with frequency. Furthermore, some materials behave in a non-Newtonian way, which causes their viscosity to change with the rate of shear strain experienced during compression and rarefaction; again, this varies with frequency. Gases and liquids generally exhibit less hysteresis than solid materials (e.g., sound waves cause adiabatic compression and rarefaction) and behave in a mostly Newtonian way. Combined, the resistive and reactive properties of an acoustic medium form the acoustic impedance. The behaviour of sound waves encountering a different medium is dictated by the differing acoustic impedances. As with electrical impedances, there are matches and mismatches, and energy will be transferred for certain frequencies (up to nearly 100%) whereas for others it could be mostly reflected (again, up to very large percentages). In amplifier and loudspeaker design, electrical impedances, mechanical impedances, and acoustic impedances of the system have to be balanced such that the frequency and phase response least alter the reproduced sound across a very broad spectrum whilst still producing adequate sound levels for the listener.
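The role of impedance mismatch described above can be quantified, for a plane wave at normal incidence on the boundary between two media, by the standard intensity reflection coefficient R = ((Z2 - Z1) / (Z2 + Z1))^2, with the transmitted fraction being 1 - R. The sketch below uses rounded characteristic impedances for air and water, so the numbers are indicative only.

def intensity_reflection(z1, z2):
    # Fraction of incident intensity reflected at a boundary, normal incidence.
    return ((z2 - z1) / (z2 + z1)) ** 2

# Rounded characteristic acoustic impedances in Pa*s/m (rayls); indicative values.
z_air = 415.0
z_water = 1.48e6

r = intensity_reflection(z_air, z_water)
print(f"air to water: {100 * r:.2f}% of the intensity reflected, "
      f"{100 * (1 - r):.2f}% transmitted")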
Modelling acoustical systems using the same (or similar) techniques long used in electrical circuits gave acoustical designers a new and powerful design tool. See also Soundproofing Acoustic attenuation Attenuation coefficient Anechoic chamber Acoustic wave Acoustic impedance References Acoustics
Absorption (acoustics)
[ "Physics" ]
1,411
[ "Classical mechanics", "Acoustics" ]
2,951,507
https://en.wikipedia.org/wiki/Iproniazid
Iproniazid (Marsilid, Rivivol, Euphozid, Iprazid, Ipronid, Ipronin) is a non-selective, irreversible monoamine oxidase inhibitor (MAOI) of the hydrazine class. It is a xenobiotic that was originally designed to treat tuberculosis, but was later most prominently used as an antidepressant drug. However, it was withdrawn from the market because of its hepatotoxicity. The medical use of iproniazid was discontinued in most of the world in the 1960s, but remained in use in France until its discontinuation in 2015. History Iproniazid was originally developed for the treatment of tuberculosis, but in 1952, its antidepressant properties were discovered when researchers noted that patients became inappropriately happy when given isoniazid, a structural analog of iproniazid. Subsequently N-isopropyl addition led to development as an antidepressant and was approved for use in 1958. It was withdrawn in most of the world a few years later in 1961 due to a high incidence of hepatitis, and was replaced by less hepatotoxic drugs such as phenelzine and isocarboxazid. Canada surprisingly withdrew iproniazid in July 1964 due to interactions with food products containing tyramine. Nevertheless, iproniazid has historic value as it helped establish the relationship between psychiatric disorders and the metabolism of neurotransmitters. Although iproniazid was one of the first antidepressants ever marketed, amphetamine (marketed as Benzedrine from 1935, for "mild depression", amid other indications) predates it; and frankincense has been marketed traditionally for millennia for, among other things, altering mood, although it was not until 2012 that one of the components of its smoke was found to have antidepressant effects in mice. Structure and reactivity The structure of iproniazid is chemically, in both structure and reactivity, similar to isoniazid. Iproniazid is a substituted hydrazine of which the isopropyl hydrazine moiety is essential for the inhibition of monoamine oxidase activity. Synthesis There are multiple routes to synthesize iproniazid. The most common precursor is methyl isonicotinate which formes isonicotinohydrazide when it reacts with hydrazine. Isonicotinohydrazide can be converted into iproniazid via different pathways. One synthesis pathway involves AcMe which results in the formation of N'-(propan-2-ylidene)isonicotinohydrazide. Subsequently, the C=N linkage is selectively hydrogenated in the presence of a platinum catalyst and with water, alcohol or acetic acid as solvent. In another pathway isonicotinohydrazide reacts with either 2-bromopropane or 2-chloropropane in an N-isopropyl addition reaction to the hydrazine moiety. This directly results in the formation of iproniazid. Reactions and mechanism of action Iproniazid inhibits the activity of monoamine oxidases (MAOs) both directly and by formation of an active metabolite, isopropylhydrazine. The formation of isopropylhydrazine from iproniazid has been observed without MAOs present. Both iproniazid and isopropylhydrazine react near the active site of MAOs. The reaction is a progressive first-order reaction with a high activation energy. In the presence of oxygen it is an irreversible reaction, as dehydrogenation of iproniazid at the active site of the enzyme takes place. This dehydrogenation resembles the first step of amine oxidation. After dehydrogenation iproniazid further reacts with the enzyme. 
Inhibition of MAOs by iproniazid is competitive and sensitive to changes in pH and temperature, similar to oxidation of the monoamine substrate. Inhibition cannot be reversed by addition of the substrate. Iproniazid is able to displace non-hydrazine inhibitors, but not other hydrazine inhibitors from the active site of the enzyme. To increase the inhibition of monoamine oxidase, cyanide can be used. The reaction however remains oxygen-dependent. MAO inhibition can be decreased by addition of glutathione, suggesting non enzymatic conjugation of either iproniazid or isopropylhydrazine with glutathione. Metabolism and toxicity Iproniazid is metabolized in the body. Iproniazid is converted to isopropyl hydrazine and isonicotinic acid in an initial hydrolysis reaction. Isopropyl hydrazine can either be released in the blood or it can be metabolically activated by microsomal CYP450 enzymes. This oxidation of isopropyl hydrazine is a toxification reaction that eventually can lead to the formation of an alkylating agent: the isopropyl radical. Hepatic necrosis was found in rats with doses as low as 10 mg/kg. Isopropyl radical The presence of the isopropyl radical was indicated by another observed product of the metabolism of iproniazid: the gas propane. Alkylating agents have the capability to bind to chemical groups such as amino, phosphate hydroxyl, imidazole and sulfhydryl groups. The formed isopropyl radical is able to form S-isopropyl conjugates in vitro. This diminishes covalent binding to other proteins, however it was only observed in vitro. In vivo, hepatotoxic doses of isopropyl hydrazine, the precursor of the isopropyl radical, did not deplete sulfhydryl-group containing compounds. Liver necrosis The isopropyl radical formed as a result of the metabolism of iproniazid, is able to covalently bind to proteins and other macromolecules in the liver. These interactions are the reason for the hepatotoxicity of iproniazid. Covalent binding results in liver necrosis by presumably changing protein function leading to organelle stress and acute toxicity. However, the exact mechanism of how the binding of iproniazid derivatives to liver proteins would induce liver necrosis remains unclear. Cytochrome P450 enzymes are present at the highest concentrations in the liver, causing most alkylating agents to be produced in the liver. This explains why the liver is mostly damaged by covalent binding of alkylating agents such as the isopropyl radical. Rat models and other animal models have shown that cytochrome P450 enzymes convert isopropyl hydrazine to alkylating compounds that induce liver necrosis. An inducer of a class of hepatic microsomal cytochrome P450 enzymes, phenobarbital, highly increased the chance of necrosis. In contrast, the compounds cobalt chloride, piperonyl butoxide and alpha-naphthylisothiocyanate inhibit microsomal enzymes which resulted in a decreased chance of necrosis due to isopropyl hydrazine. Metabolism to other forms Iproniazid can also be metabolised by O-dealkylation from iproniazid to acetone and isoniazid. Isoniazid can undergo further metabolism via multiple metabolic pathways, of which one eventually results in alkylating agents as well. This toxifying metabolic pathway includes N-acetylation. Reactions involving acetylation are influenced by genetic variance: the acetylator phenotype. The toxicological response to isoniazid (and thus iproniazid) can therefore be subjected to interindividual differences. 
Acetone can also be produced in an alternative pathway as a metabolite of isopropyl hydrazine. It is eventually converted to CO2 and exhaled. Isonicotinic acid Isonicotinic acid, formed during the hydrolysis of iproniazid, is described as a moderately toxic compound and allergen with cumulative effects. Isonicotinic acid is further metabolized by glycine-conjugation or glucuronic acid-conjugation. Other toxic effects Iproniazid can also interact with tyramine-containing food products, which may have toxic effects. Excretion Excretion can occur via different routes: via the lungs, the urine, bile and sometimes via the skin or breast milk. Iproniazid has a molecular weight of 179.219 g/mol, which is far below 500 g/mol, and it is hydrophilic (because of e.g. the N-H groups in the molecule). These two properties together indicate that iproniazid is likely to be excreted in the urine via the kidneys. Iproniazid can also be metabolized and excreted afterwards in the form of one of its metabolites described above. Isoniazid is hydrophilic and has a molecular weight of 137.139 g/mol. Isoniazid is therefore expected to be excreted via the urine, if it is not further metabolized in the body. The same holds for isonicotinic acid and isonicotinoyl glycine. Carbon dioxide and propane are gaseous and are presumably transported out of the body by exhalation via the lungs. Indication Iproniazid was originally produced as an anti-tuberculosis medicine, but was found to be more effective as an antidepressant. When it was discovered that iproniazid is hepatotoxic, it was replaced by medicinal xenobiotics that are less harmful to the liver. Examples of antidepressant drugs that are nowadays used instead of iproniazid are isocarboxazid, phenelzine, and tranylcypromine. Drugs more effective for the treatment of tuberculosis are isoniazid, pyrazinamide, ethambutol and rifampicin. Efficacy and side effects Efficacy Iproniazid was designed to treat tuberculosis, but its most significant positive effect is that it has a mood-stimulating property. Therefore, it was used as an antidepressant drug. Adverse effects The most significant adverse effect of using iproniazid is the hepatotoxicity caused by its metabolites. Moreover, usage of iproniazid results in several adverse effects such as dizziness (when lying down), drowsiness, headaches, ataxia, numbness of the feet and hands, and muscular twitching. However, these adverse effects disappear after approximately 10 weeks. Effects on animals Rat animal models have been used to investigate the hepatotoxic (bio)chemical mechanism of iproniazid. A metabolite of iproniazid, isopropyl hydrazine, was found to be a potent hepatotoxin in rats. Hepatic necrosis was found in rats with doses as low as 10 mg/kg. It was predicted with admetSAR that iproniazid had an LD50 of 2.6600 mol/kg in rats. Lethality See the table for experimentally determined LD50, TDLo and LDLo values of various organisms. See also Hydrazine (antidepressant) Isoniazid References Monoamine oxidase inhibitors Hepatotoxins Hydrazides 4-Pyridyl compounds Withdrawn drugs Isopropylamino compounds
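The excretion reasoning above (a molecular weight well under 500 g/mol combined with hydrophilic N-H groups pointing towards renal excretion) can be reproduced computationally. The sketch below assumes the RDKit cheminformatics toolkit is installed and uses a SMILES string for iproniazid (isonicotinic acid 2-isopropylhydrazide) written out by hand for this example; treat the computed descriptor values as illustrative rather than authoritative.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Hand-written SMILES for iproniazid: isopropylhydrazide of pyridine-4-carboxylic acid.
mol = Chem.MolFromSmiles("CC(C)NNC(=O)c1ccncc1")

print(f"molecular weight: {Descriptors.MolWt(mol):.2f} g/mol")  # about 179, well under 500
print(f"hydrogen-bond donors: {Lipinski.NumHDonors(mol)}")      # the N-H groups
print(f"hydrogen-bond acceptors: {Lipinski.NumHAcceptors(mol)}")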
Iproniazid
[ "Chemistry" ]
2,427
[ "Drug safety", "Withdrawn drugs" ]
2,951,632
https://en.wikipedia.org/wiki/Symbiote%20%28comics%29
The Klyntar (), colloquially and more commonly referred to as symbiotes, are a fictional species of extraterrestrial parasitic life forms appearing in American comic books published by Marvel Comics, most commonly in association with Spider-Man. The symbiotes, as their alternative name suggest, form a symbiotic bond with their hosts, through which a single entity is created. They are able to alter their hosts' personalities and/or memories, often influencing their darkest desires, along with amplifying their physical and emotional traits and personality and thereby granting them super-human abilities. The symbiotes are also weakened when in range of extreme sounds or sonic frequencies. There are more than 40 known symbiotes in the Marvel Universe. The first and most well-known symbiote is Venom, who originally attached itself to Spider-Man during the 1984 Secret Wars miniseries. After Spider-Man rejected it upon discovering its alien, parasitic nature, the symbiote bonded with his rival, Eddie Brock, with whom it first became Venom, but still possessed the powers of Spider-Man. The character has since endured as one of Spider-Man's archenemies, though he has also been occasionally depicted as an antihero. Other characters have later merged with the Venom symbiote, including the villain Mac Gargan, and Flash Thompson, who became the superhero Agent Venom. Other well-known symbiotes are Carnage, an offspring of Venom who, when merged with its most infamous host, Cletus Kasady, has served as an enemy of both Spider-Man and Venom; and Anti-Venom, which originated when the Venom symbiote re-merged with Brock after being separated from him for a long time, gaining a new white appearance and additional powers as a result of Martin Li using his powers on Brock to cure his cancer. Since their conception, the symbiotes have appeared in various media adaptations, including films, television series, and video games. Venom has been the most featured one, appearing in the 2007 film Spider-Man 3, and as the titular protagonist of the 2018 film Venom. Carnage also made its cinematic debut in the film Venom: Let There Be Carnage (2021). Publication history The first appearance of a symbiote occurs in The Amazing Spider-Man #252, The Spectacular Spider-Man #90, and Marvel Team-Up #141 (released concurrently in May 1984), in which Spider-Man brings one home to Earth after the Secret Wars (Secret Wars #8, which was months later, details his first encounter with it). The concept was created by a Marvel Comics reader; the publisher purchased the idea for $220. The original design was then modified by Mike Zeck, at which point it became the Venom symbiote. The concept would be explored and used throughout multiple storylines, spin-off comics, and derivative projects. Depiction Fictional history Symbiotes were originally created by an ancient malevolent primordial deity named Knull. When the Celestials began their vast plan to evolve the universe, Knull, seeing that his "Kingdom" was being touched, retaliated by constructing All-Black, the first symbiote, and subsequently cut off a Celestial's head. The other Celestials then banished Knull, along with the severed Celestial head, deeper into space. After that, he started using the head's cosmic energies as a forge for the symbiotes, which is how they developed the weaknesses to sound and fire. The head would later become interdimensional crossroads and laboratory Knowhere. Knull then embarked on a campaign of genocide against the other gods. 
During a battling with the gods, he crashed on a desolate planet where All-Black left him and went to Gorr, drawn to his murderous hate, who tried to kill Knull. Knull later reawakened and created an army of symbiotes, which he used to conquer planets and destroy entire civilizations, establishing the Symbiote Imperium in the process. However, when a dragon-like creature journeyed to the medieval Earth, Thor defeated it and destroyed the connection between Knull and the symbiotes. Subsequently, the symbiote hive-mind began to explore notions of honor and nobility as they bonded to benevolent hosts. The symbiotes subsequently rebelled against their god, imprisoning him at the heart of an artificial planet in the Andromeda Galaxy they called Klyntar, from which they derived the name of their species. Ashamed of their dark past, the symbiotes desired to spread and maintain peace throughout the Cosmos by seeking out worthy hosts from various species to create an organization of noble warriors. However, these altruistic goals were imperfect, as the Klyntar symbiotes could be corrupted by hosts with harmful chemical imbalances or problematic personality traits, turning them into destructive parasites that would spread lies and disinformation about their own kind to make other peoples fear and hate the Klyntar species as a whole. The corrupted Klyntar became more widespread than their benevolent counterparts, establishing a spacefaring culture dedicated to infecting and overtaking whole planets and reestablishing the Imperium. These symbiotes forced their hosts to perform death-defying feats to feed off of the resulting surges of hormones, like adrenaline and phenethylamine. These hosts would die quickly, either because of the wear from constant stress and exertion or as a result of the inherent danger of the stunts performed. At some point it was believed that a symbiote-run planet was devoured by Galactus. Due to their hive-mind's memory, all symbiotes now loathe both Galactus and his former herald, the Silver Surfer, but it was later revealed that their hatred for the Silver Surfer was because he had time traveled to a time where the Klyntar were rebelling against Knull and the Silver Surfer had made the God of the symbiotes bleed. ZZZXX, a symbiote with a predilection for eating brains, was also captured by the Shi'ar, and imprisoned and studied for years until it was released and employed as a Praetorian Guard by Gabriel Summers. The corrupted symbiotes had invaded the Microverse and tried to absorb the Enigma Force, but they were defeated by the avatar of the force, after they had caused destructive effects on this world and its people. The symbiote would arrive on the Savage Land, where it remained trapped for years to the point of madness and bonded to Conan during a confrontation between the Savage Avengers and Kulan Gath. During the Kree-Skrull War, the Kree wanted to replicate the Skrull's shapeshifting abilities; they acquired a newborn symbiote which had been outcast from its species on the planet where Knull had created them. They recruited Tel-Kar to be bonded to the young symbiote, and modified both Tel-Kar and the symbiote so that he could have full control over it. He infiltrated the Skrulls using the symbiote's shapeshifting ability, but was discovered. He deleted the symbiote's memories and separated himself from it. The symbiote then reunited with the parasitic symbiotes, retaining little memory of its first host. 
When the corrupted symbiotes found out that this symbiote wanted to commit to its host rather than exploit it as they tended to do, they decided that it was insane and trapped it in a canister to be condemned to die on a planet that would later become part of Beyonder's Battleworld. There, it would be encountered by Spider-Man in the 1984 miniseries Secret Wars. In that story, which saw the heroes of Earth transported to this planet to battle their archenemies, Spider-Man sought to repair or replace his tattered costume, which had been damaged in battle, and was directed by Thor and Hulk to a device inside the alien compound that they had come to use as headquarters. Mistaking the device in which the symbiote was imprisoned for the device Thor and Hulk mentioned, Spider-Man activated it, freeing the symbiote, which appeared before him as a black sphere that enveloped his body and took on the form of a black version of his costume that could respond to his mental commands. Spider-Man assumed that the device produced clothing designed to do this. He did not know that Deadpool had already briefly bonded with the symbiote and had corrupted it with his unstable personality. Spider-Man returned to Earth with the symbiote, where, after discovering that it was an alien lifeform that wanted to bond with him, he managed to separate himself from it by using sound waves to hurt the creature, which took refuge in a church's bell tower. It later bonded with Eddie Brock, who went to the church, despondent and vengeful after his journalism career was destroyed because he incorrectly identified the serial killer known as Sin-Eater as a man who later turned out to be a compulsive confessor; he blamed this turn of events on Spider-Man. Having bonded with the symbiote, the two became the being known as Venom. During this time, it spawned seven offspring and a clone; its first child later had three of its own, producing the symbiotes known as Carnage, Scream, Lasher, Phage, Agony, and Riot. The Venom symbiote eventually becomes too much for Eddie to handle, and he separates himself from it. This separation causes a telepathic scream that is heard by the other corrupted symbiotes, who then invade Earth. Eddie, Spider-Man and Scarlet Spider team up against the invasion. The battle comes to an end when Eddie rebinds with Venom, causing another scream which results in the symbiotes committing suicide. While bonded to Flash Thompson as part of Project Rebirth, who originally struggled to control it, the symbiote developed a slight affection for him. It is later established that the host's mental state affects the symbiote just as much as the other way around: Venom's first child, the Carnage symbiote, is as psychotic as its host, Cletus Kasady, and the Venom suit's explosiveness worsened after bonding with Angelo Fortunato and Mac Gargan, both of whom are career criminals. Likewise, the various symbiotes bonded to heroes are not shown to be as twisted, though they do occasionally struggle with aggression. A swarm of Brood that had been overtaken by symbiotes later invade the S.W.O.R.D. satellite and possess all of its inhabitants, including Deathbird and her unborn child, to expand the symbiote Imperium. However, Spider-Man, bonded to a second symbiote, defeats the symbiotes with help from his class at Jean Grey's School. The Klyntar were later raided by the Poisons with help from Haze Mancer, a symbiote poacher, resulting in the apparent death of the Agents of the Cosmos and the abduction of all the symbiotes. 
The abducted symbiotes were later modified by the Poisons so they could be used on the superheroes on Earth, in order for the Poisons to consume. After the defeat of the Poisons, the surviving symbiotes were returned to Klyntar. When the body of Grendel, the dragon-like composite symbiote defeated by Thor, is discovered on Earth, this reawakens Knull enough to allow him to control the creature. It is subsequently stopped by the combined efforts of Venom and Miles Morales, and is later incinerated by Eddie, denying Knull the chance to escape Klyntar. After some months, a cult had gotten hold of Cletus's damaged body inside a chamber and had planned to revive him by using the Grendel's remnants, which they stole from Maker. This cult, who worships Knull and Carnage as Knull's prophet, was led by Scorn. They implanted the remnants inside Cletus, reviving him, and at first he resembled Ancient Venom (Venom possessed by Knull), until the Carnage pieces were absorbed by the ancient symbiote and acquired Scorn's remnants by killing her. When Cletus came in contact with Knull, he got a new purpose: to free Knull. The only way left to do this was to acquire every single Codex – the symbiote remnants containing the genetic information of the host – left inside the bodies of every single host, dead or alive, who came into physical contact with the symbiotes on Earth, to overload the symbiote hive mind and scatter the Klyntar. Knull slowly began reawakening as a result of Carnage's efforts on Earth and the symbiotes of Klyntar began succumbing to his control once more. When Sleeper was drawn to Klyntar, the symbiotes attacked and tried to assimilate it into the hive-mind. After escaping, Sleeper realized that Eddie was in danger and returned to Earth as quickly as it could. When Knull fully awakened, he destroyed Klyntar and seized control of its constituent symbiotes, coalescing them into a horde of symbiote-dragons. Culture The symbiotes, when they were originally created, were used as tools by Knull to conquer the universe. At the time, they had a symbiote dialect. When they were freed from Knull's control and began learning about compassion, they established the lie about their nature to redeem themselves. They formed the Agents of Cosmos, symbiotes bonded to benevolent hosts, forming noble warriors who try to maintain peace across the universe. However, some symbiotes were corrupted by malevolent hosts, turning them back into monsters and reestablishing the symbiote Imperium first formed by Knull; these symbiotes were cut from the Klyntar hive mind. The symbiote Imperium would conquer planets and infect their inhabitants to drain and consume them. The symbiotes in general don't have an actual culture. As seen with Venom and Carnage, the symbiote's personality and psychology depends largely on the host's nature, as the link between the host and the symbiote is what gives the symbiote a purpose and meaning to their life. As for the Nameless, a group of Kree explorers infected by the Exolon parasites, after being infected by the parasites which consumed their souls, they lost all sense of time and sentience and started engaging in gruesome self-inflicted pain rituals to remember their past lives. Biology The symbiotes are an alien species of inorganic, amorphous and multicellular symbiotic parasites formed from Knull's "Living Abyss". The symbiotes function as living extradimensional tesseracts, requiring living hosts to anchor them to the fabric of space and time. 
They record the genetic material of each of their hosts in a genetic codex. They also empower a host's natural abilities to the point that they far exceed that of normal members of the host's species. These abilities include the following: superhuman strength (strong enough to lift 50 tons or more), speed, endurance, agility, healing powers, and intelligence genetic memory, allowing them to recall information from previous hosts. They also leave traces of themselves, called codex, attached to the host's DNA, to send information to the hive mind the ability to negate damage caused by terminal illnesses and permanent injuries. While symbiotes can somewhat heal their hosts, they generally seek to force their hosts to depend on them in order ensure their own survival. For example, Eddie Brock was able to survive indefinitely with terminal cancer, and Scott Washington was able to walk despite being paraplegic. Similarly, Flash Thompson and Cletus Kasady had received "legs" when bonded with the Venom symbiote and Carnage symbiote even though they had lost their legs. Wraith was able to use his Exolon powers to cure the Kree who were infected by the Phalanx. they can reproduce asexually with a limited number of seeds inside their mass. For example, Venom gave birth to seven "children", and its first child Carnage had three. senses that extend over its entire surface, enabling hosts to "see" what is behind them or otherwise not in their line of sight (like a Spider-Sense). the ability to change shape and size at will. This ability functions regardless of the host's actual stature and bodily dimensions, as the symbiotes are living tesseracts. This includes expanding to any size as long as they have something to grow on, such as a host or an object. Symbiotes can form multi-layered shields against powerful attacks and fit inside of small areas, such as electric wires and the insides of cars, to completely disable them. This shapeshifting allows the symbiote to change its color and texture, which allows it to blend into the environment as a form of camouflage, or to change the host's outward appearance (including mimicking the appearances of other beings). the ability to sense the thoughts and will of the host. When Spider-Man was originally selected, he had been thinking about Spider-Woman's costume in the Secret Wars. The symbiote acted on this and formed a similar costume to hers and Knull's emblem, which is the one seen on Spider-Man and Venom. the ability to excrete matter that enters in its body, like bullets, turning them into the green saliva immortality, as evidenced by Venom 2099, which was still alive in the year 2099, and All-Black, which was created in the beginning of the Universe and was still alive in King Thor's timeline. the ability to merge with other symbiotes or otherwise absorb one another. This is similar to how Hybrid was formed, or when Carnage absorbed another symbiote from the Negative Zone, regenerating itself. The symbiote can also absorb the codices of other symbiotes, obtaining their genetic memory - for example, when Spider-Man bonded to two other symbiotes, they absorbed the Venom's codex, allowing then to appear exactly like Venom. 
the ability to force their hosts into a comatose state, as shown with Zak-Del and Eddie Brock the ability to prolong their host's life by replacing their failing organs with simulacrums manifested from their living abyss - however, they cannot do this indefinitely Because they record the genetic material of each of its hosts, there are also additional powers that have been demonstrated, but are not necessarily universal to all symbiotes: the ability to block parts of the host's mind - Venom and all its descendants possess the ability to bypass Spider-Man's Spider-Sense; because the original symbiote was attached to Peter Parker (Spider-Man) first, it took his genetic information and spider-powers by using its Parasitic Inheritance. This means that battles between Peter and Venom or any of its descendants would essentially be a fight between Peter and his black-suited self, which wouldn't set off his Spider-Sense (during the Clone Saga, this became complicated, as Venom did set off Ben Reilly's Spider-Sense; however, this has been attributed to Ben being cloned from Peter prior to his first encounter with the Venom symbiote). the ability to form fangs or simple bladed weapons out of their limbs. The first appearance of this was the Carnage symbiote. the ability to form tendrils and tentacles of various lengths from their body the ability to form wings, as shown when Venom came into contact with Knull and grew a pair of web-like wings; in some cases the symbiote has also been shown to form gliding wings (see Venom-Punisher and Hybrid) in the case of the purified Klyntar, Cosmic Awareness, which allows the Agents of Cosmos to sense people in need the ability to project the surface of the symbiote to attack at a distance the ability to sustain its humanoid body even without a host, but only for a certain period of time the ability to stick to walls (adapted from Spider-Man) the ability to produce acid, toxins, and venoms, like the venomous bite Venom delivered to Sandman (see Venom, Agony, and Venom 2099) the ability to produce webbing from its own mass (adapted from Spider-Man) the ability to sense the presence of other beings within a certain distance the ability to protect hosts from Ghost Rider's Penance Stare and the Inheritors's Life Absorption Touch the ability to generate and manipulate an ice-like substance (adapted from Iceman), use telepathy and telekinesis (adapted from Marvel Girl), create powerful kinetic blasts (adapted from Cyclops), increase strength and intelligence (adapted from Beast) and grant the host with the ability to fly (adapted from Angel) the ability to create storage portals inside of themselves (this allowed Peter Parker to stow and access his camera) the ability to filter breathable air for its host, allowing them to breathe underwater (seen in Vengeance of Venom), inhale poisonous fumes, and even survive in the vacuum of space the ability to transfer symbiote traits to its host - for example, when Carnage ate Karl Malus and he became a symbiote-human hybrid in the case of the Venom symbiote, the possession of empathic abilities, and the ability to project desires and needs into the thoughts of its host or potential hosts; this ability can also aid Venom in detecting the truth from those he interrogates. in some realities, the symbiote feeds on the baser emotions of its host, creating an increasingly hostile personality. The longer the host is exposed to the symbiote, the more overpowering this state of mind becomes. 
each symbiote has its own unique abilities: Venom has a venomous bite; Toxin can change its shape and form into a Spider-Man-like build (slim, but strong) and Venom-like build (big and muscular) depending on its mood; Scream can use its web-like hair as a weapon; Agony can spit acid and manipulate matter; Phage can create bladed weapons; Lasher can create tendrils on its back; Riot can use bludgeoning weapons and agility; Payback can produce electricity; Scorn can fuse itself with technology; All-Black can grant its host immortality; and Sleeper possesses chemokinesis, the ability to manipulate chemicals, providing limited telepathy and excellent cloaking abilities through pheromones. some symbiotes are immune to sonic attacks and fire through modification, like Anti-Venom, Red Goblin, Mayhem, Payback and Grendel. the ability to change the mood of its host by manipulating their brain chemicals the ability to replicate itself, as seen with Carnage and All-Black in the mainstream universe and Venom in Spider-Man Reign However, the symbiotes also possess weaknesses that can be fatal. Some of these weaknesses include: a natural weakness to sonic attacks and heat-based attacks, which Knull unintentionally gave them while they were being forged. However, symbiotes have a growing resistance to sound and fire due to their evolution. Still, there has not been an invulnerable symbiote in mainstream continuity, because the newest breeds can be harmed by incredible amounts of sonic waves and heat. Symbiotes, like Krobaa, are also seemingly vulnerable to light. The symbiotes in Ultimate Marvel are only vulnerable to the heat produced by high voltage electricity. vulnerability to chemical and biological attacks - for example, Iron Man created a cure to a virus-like bio-weapon based on the Venom symbiote that was created by Doctor Doom. Venom and Carnage have shown susceptibility to chemical inhibitors. Whether a symbiote can mutate and reduce the effect of these weaknesses is unknown. potential hosts with advanced healing factors, such as Wolverine, have shown resistance to symbiosis. in some incarnations, the symbiote is depicted as requiring a certain chemical (most likely phenethylamine) to stay sane and healthy, which has been said to be found abundantly in two sources: chocolate and human brain tissue. Thus, the host is forced to either consume large amounts of chocolate or become a cannibal who devours the brains of those they kill. This peculiar trait has only been witnessed in the Venom symbiote. However, both Carnage and Toxin have threatened their enemies with aspirations to "eat their brains", as well as various other body parts. When Toxin teamed up with Spider-Man and Black Cat, he struggled to keep himself together, but told Spider-Man that he was only "joking" about eating the robbers' brains. 
Similarly, the Exolons feed on the immortal soul of their hosts, making the hosts immortal; however, this causes the host to descend into madness, as well as making them forget all of their old memories unless they inflict pain on themselves in an attempt to retain their memories for longer (see Zak-Del and the Nameless)
on at least one occasion, Spider-Man was able to exhaust the Venom symbiote by taking advantage of the fact that it made its webbing out of itself; after the symbiote had already used a great deal of webbing to bind him to a bell, Spider-Man forced Venom to use further webbing so that it would exhaust itself, like blood dripping from a wound (although the sheer amount of webbing that a symbiote would need to use for this weakness to be exploited makes its use in a fight limited)
inability to bond to more than one host, as shown when Venom tried to bond to both Eddie and Peter at the same time, and again with Flash and Eddie (although the Carnage symbiote did not display this weakness when bonding itself to people in Doverton, Colorado)
susceptibility to overwhelming emotions - in the storyline Planet of the Symbiotes, Eddie Brock releases a cry of pain and agony so great that the entire symbiote race commits mass suicide, though how they kill themselves is not made clear
a race of extraterrestrial shapeshifters that preys on symbiotes can spew an unknown incendiary chemical that paralyzes symbiotes and enhances their taste
numerous occasions have shown that when a corrupted symbiote remains bonded to a host for too long, the symbiote will eventually consume the body of the host, leaving the host a dead husk (see the soldiers who were bonded to the Grendel symbiotes, and Peter Parker in two What If? stories)
when Eddie Brock was diagnosed with cancer, Martin Li used his Lightforce healing ability to cure him, accidentally producing white blood cells in Eddie's blood which combatted the remnants of the Venom symbiote; this created a new, non-sentient symbiote called Anti-Venom. This symbiote had the ability to cure nearly any sickness (it could even "cure" Spider-Man of his powers) and was corrosive to other symbiotes, as shown when Eddie and Flash nearly killed Venom, Mania, the Poisons and Red Goblin. No symbiote has been shown to be immune to Anti-Venom.
vulnerability to the abilities of telepaths
a new and still mysterious extraterrestrial race known as the Poisons, apparently nature's answer to the symbiotes, preys on them through direct contact infection, which forms an unstoppable one-sided union that the symbiote wants no part of
List of symbiotes
Major symbiote characters
The following symbiotes have appeared throughout several years of Spider-Man's history, have appeared in multiple media such as film and video games, and were main characters or villains in story arcs.
Other symbiote characters
The following symbiotes have made only a few other appearances in comic books and are usually excluded from adaptations in other media.
Other versions
Ultimate Marvel
In the Ultimate Marvel universe, the Venom suit is a man-made creation born of an experiment by Richard Parker and Edward Brock Sr., who were hoping to develop a protoplasmic cure for severe illnesses. Bolivar Trask, who was funding the research, intended to weaponize it. It used Richard's DNA as the starting base; thus, he and Peter are "related" to it.
When bonding to a host, the organic matter that comprises the suit envelops the host, regardless of resistance, and temporarily blinds it, before encasing itself in a hard casing, similar to a pupa. When the host emerges, the suit then shifts its appearance and function to assist its host, such as creating eyes for it to see through; if bonded with an incompatible host, it tries to take it over, inducing a homicidal rage in the suit's attempt to feed itself. When bonded with a host and forcibly removed, the suit leaves trace amounts of itself in their bloodstream, which attracts other samples of Venom and allows it to overload Peter's spider-sense. In the video game Ultimate Spider-Man, absorbing the trace amounts in Peter's blood allowed Eddie to take complete control of the suit, giving him a greater ability to talk and adorning him with a spider symbol on his chest. Venom's only known weakness is electricity. Larger amounts of the suit will need more electricity to kill, as varying amounts of the suit will be stunned or vaporized by electric shocks. This was first seen in Ultimate Spider-Man #38, when an electric wire got tangled around Venom's foot. An electrocution from live power-lines vaporized the smaller amount on Peter, while a similar amount disabled Eddie. Note that in the video game Ultimate Spider-Man, when Electro electrocutes Venom during a cutscene, the suit is not affected by the shock like it was by the live power-line in the "Venom" arc. The suit can take the Shocker's vibro-shocks, and can protect its host from a bullet, which feels like nothing more than a relaxing vibration. When worn by a host other than Richard's son Peter, the host is compelled to take the life energy of other human beings or else have their own be consumed by the suit. The original Spider-Man (Peter Parker) was able to control the suit to a greater extent than anyone else because of his powers and because the suit was designed for his father. The Carnage symbiote also appears in the Ultimate universe as a parasite genetically engineered by Curt Conners and Ben Reilly from Peter's DNA based on Richard's research. Traces of the Venom suit remaining in Peter's blood give Carnage similar properties to those of the Venom suit. It also devours people, but does not require a host. When first introduced, the organism was a blob of instinct, with no intelligence or self-awareness, its only aim to feed on the DNA of others, including that of Gwen Stacy, to stabilize itself. After feeding on multiple people, Carnage turns into a damaged form of Richard and Peter, with the memories of itself as Spider-Man. Carnage tries to absorb Peter so it can become whole, but Peter throws Carnage into a smokestack, burning the beast, although it is revealed that the organism had survived and turned into a replica of Gwen's form with Gwen's memories. During an encounter with Eddie Brock, the Venom suit absorbs the Carnage suit into itself, making itself complete and leaving Gwen a normal human being. Spider-Gwen In Spider-Gwen's universe, Dr. Elsa Brock creates a cure to Harry Osborn's Lizard DNA by using Spider-Gwen's radioactive isotopes, given to her by S.I.L.K. Leader Cindy Moon. When Gwen injects the isotopes into Harry, the Lizard serum combines with the Spider isotopes and transforms into Venom. Venom then bonds to Spider-Gwen, which gives her her powers back and she becomes Gwenom. 
This symbiote, in its natural form, is made up of some spiders working together and is weak to sonic attacks only when bonded to a host; without a host, it is not susceptible to this weakness. Amalgam Comics In the Amalgam Comics universe, the Project Cadmus facility which created Spider-Boy started experimenting on a substance that they obtained from an alien spaceship. They inadvertently created a crystalline symbiote named Bizarnage (amalgamation of Carnage and Bizarro). It had the powers of Spider-Boy and started attacking everyone, until Spider-Boy defeated it. MC2 In the alternate universe of the Marvel Comics 2, or MC2 imprint, Norman Osborn obtained Eddie's blood (he was still bonded to Venom at the time) and extracted the symbiote codex. Norman then combined the codices with May's DNA and created a symbiote/human hybrid clone of Mayday Parker. The clone stayed in stasis inside a chamber until Peter, with Norman's mind, became Goblin God and awoke the hybrid. When Peter returned to normal, the hybrid, under the alias Mayhem/Spider-Girl, went to live with the Parker family, naming herself April Parker. In a later timeline, Mayhem accidentally killed the real Spider-Girl and became a murderous vigilante after killing American Dream. The government, in an attempt to stop her, used pieces of the dead Carnage symbiote (after it had been killed by Mayday) to create living weapons dubbed Bio-Predators. The Bio-preds ran wild, decimating the world and its defenders. Mayhem, seeing the error of her ways, went back in time and sacrificed herself to stop her past self from killing Spider-Girl, ensuring the events that led to the Biopreds' creation never occurred, even though she may have survived. "Spider-Verse" During the 2014 "Spider-Verse" storyline, in Spider-Punk's universe, V.E.N.O.M, also known as Variable Engagement Neuro-sensitive Organic Mesh, was created by Oscorp and was worn by the Thunderbolt Department, the police and fire department of President Osborn, so that he could have full control over the city. However, they are all subsequently defeated by Spider-Punk using his guitar. "Spider-Geddon" During the 2018 "Spider-Geddon" storyline, in the universe of Peni Parker, aka SP//dr, VEN#m is a giant mech-suit, powered by a Sym Engine and created to serve as back-up in case the SP//dr failed. It was piloted by Addy Brock until, in a battle against a technological monster named M.O.R.B.I.U.S., the suit gained a conscience and went rogue. Though SP//dr is able to defeat VEN#m, she is too late to stop it from consuming Addy, as well as her version of Aunt May, who flew in to fix the problem manually. What If... ...Spider-Man had rejected the Spider? "What if?: The Other", set during "The Other" storyline, features an alternative version of Peter who abandons the Spider when given the choice. Some time afterward, the Venom symbiote leaves its current host Mac Gargan and merges with Peter, who was inside a cocoon to become Poison. Poison, now calling himself "I", chooses Mary Jane to be his companion. He fails to gain her affection and digs up the grave of Gwen Stacy instead. The last images reveals Poison watching over a new cocoon like his own, as it bursts forth showing a hand similar to Carnage's, even though the normal symbiotes are unable to bond with dead hosts. "Age of Apocalypse" In a "What if?" 
"Age of Apocalypse" reality, in which both Charles Xavier and Eric Lensherr were killed, Apocalypse is served by clones of a symbiote Spider-Man, although the clones seem to be more symbiote than man. Spider-Man: India In Spider-Man: India, the symbiotes are parasitic demons with outward tusk-like fangs, who had ruled the world in the past but got trapped inside an amulet. The amulet was eventually found by Nalin Oberoi and transformed him into the Green Goblin. During a fight with Spider-Man, the Green Goblin releases a demon to possess Spider-Man, but is expelled. After the defeat of Green Goblin, the amulet is thrown into ocean, leaving Venom the only demon alive. What The--?! In the What The--?!, "The Bee-Yonder" gives Spider-Ham a version of the black uniform, but Spider-Ham likes his classic suit more, so he gets rid of it. In #20, Pork Grind, a pig version of Venom, is introduced as an enemy of Spider-Ham. Contest of Champions In the 2016 Contest of Champions series, where Maestro and Collector use the heroes of different worlds to battle with each other, when this version of Venom was killed by Punisher 2099, the remnants fused with the remains of the Void, creating the Symbioids. Earth X In the universe of Earth-9997 / Earth X, the symbiotes, like all sentient life, were created by the Celestials as "antibodies" to protect the embryos which resided in the core of the planets. Like the Asgardians and Mephisto, the symbiotes eventually reached the third stage of metamorphosis and apotheosized into metaphysical entities, given physical form by what others believed them to be and required of them. The Venom symbiote was given form by Spider-Man, who believed it to be a symbiotic living costume; after being bonded to Eddie Brock for years, it bonded to Peter's daughter May Parker, who managed to tame and rehabilitate it to start her career as the superhero Venom. Spider-Man Unlimited In the Spider-Man Unlimited series, a Synoptic is introduced. Synoptics are parasites that can control organic beings via touch. Venom and Carnage, who act as double agents to the High Evolutionary, are able to revive the Synoptic. Spider-Man: Spider's Shadow In the 2021 miniseries Spider's Shadow, the symbiote manages to form a stronger bond with Peter after the Hobgoblin kills May Parker, which leads to Peter succumbing to its influence and killing several of his familiar rogues before the FF are able to expel the symbiote from him. Unfortunately, the symbiote is able to escape captivity and bond with Reed Richards, allowing its subsequent spawn to be altered so that they are immune to most of its traditional weaknesses. Despite these symbiotes managing to bond with various Avengers, X-Factor, and the rest of the FF, Peter and Johnny Storm are able to trick the original symbiote into trying to re-bond with Peter, only to reveal that it was pursuing Johnny while he was using an image inducer. The death of the prime symbiote destroys all of its spawn (although it kills Reed before its defeat). Marvel Adventures In the Marvel Adventures continuity, primarily aimed at younger readers, this universe's Spider-Man comes into contact with the Black Suit at the Tinkerer's junkyard. While trying to take down the combined efforts of Stilt-Man, Rocket Racer, and Leap Frog, Spider-Man comes into contact with a "stealth fabric", a liquid alloy that can quickly adapt to the user's body. 
Sometime later, Reed Richards of the Fantastic Four analyzes the suit, concluding that it is using Spider-Man's body to power itself and will eventually drain him of his energy. In other media Television The Venom and Carnage symbiotes appear in Spider-Man (1994). The Venom and Carnage symbiotes appear in Spider-Man Unlimited. The Venom symbiote appears in The Spectacular Spider-Man. Additionally, Carnage was also set to appear before the series was cancelled. The Venom, Carnage, and Anti-Venom symbiotes appear in Ultimate Spider-Man. A gamma variant of the Venom symbiote appears in the Hulk and the Agents of S.M.A.S.H. episode "The Venom Within". The Klyntar, Venom and Carnage symbiotes, and the Exolons appear in Guardians of the Galaxy. This version of the Klyntar's homeworld was destroyed by Thanos, who took them to Planet X to weaponize them by altering their genealogy. Additionally, the Exolons are referenced as inhabiting Wraith's body. The Venom, Anti-Venom, Scream, Scorn, and Mania symbiotes, the Klyntar, and a variation of All-Black appear in Spider-Man (2017). An original symbiote named Syphon8r appears in the Moon Girl and Devil Dinosaur episode "The Borough Bully". Upon detecting a boy named Angelo (voiced by Josh Keaton) who was envious of Moon Girl and Devil Dinosaur for upstaging him, Syphon8r bonds with him, takes on the form of a four-armed troll with four tentacles for legs, and becomes an internet troll to feed off of his and Moon Girl's anger. While attempting to destroy the George Washington Bridge to increase his popularity, Moon Girl depowers the symbiote before she and Devil defeat it, forcing it to retreat and abandon Angelo. Film The Venom symbiote appears in Spider-Man 3. The Venom symbiote makes a cameo appearance in trailers for The Amazing Spider-Man 2, but was replaced with the Rhino's armor in the theatrical cut. Sony's Spider-Man Universe The Symbiotes appear in Sony's Spider-Man Universe: The Venom and Riot symbiotes appear in Venom. Additionally, a blue symbiote designated SYM-A02 and a yellow symbiote designated SYM-A03 make minor appearances. The Venom and Carnage symbiotes appear in Venom: Let There Be Carnage. The Venom, Toxin, Lasher, Lava, Animal, Tendril, Jim and Agony symbiotes, and Knull appear in Venom: The Last Dance. Marvel Cinematic Universe Elements of symbiote-related characters serve as inspiration for media set in the Marvel Cinematic Universe (MCU): Exolon monks appear in the live-action film Guardians of the Galaxy as followers of Ronan the Accuser. A Necrosword inspired by All-Black appears in the live-action film Thor: Ragnarok as Hela's primary weapon. Two Necroswords appear in the Disney+ animated series What If...? episode "What If... T'Challa Became a Star-Lord?", wielded by an alternate reality version of the Collector. The SSU incarnation of the Venom symbiote makes an uncredited cameo appearance in the mid-credits scene of Spider-Man: No Way Home. A non-symbiote Necrosword appears in Thor: Love and Thunder, wielded by Gorr the God Butcher. Video games An infinite number of symbiote clones created by Doctor Doom serve as the collective final boss of Spider-Man: The Video Game. The Venom, Carnage, Scream, Agony, Riot, Lasher, and Phage symbiotes appear in Venom/Spider-Man: Separation Anxiety. Clones of the Carnage symbiote created by Doctor Octopus, in addition to the original, appear in Spider-Man (2000). The Venom symbiote appears in the Spider-Man 3 film tie-in game. 
Additionally, an unidentified symbiote bonded to Shriek appears in the PS2, PSP, and Wii versions of the game. The Venom symbiote, Klyntar, Snatchers, Zombies, Berserkers, Grapplers, Slashers, Electrolings, Vulturelings, and Symbiote Pods appear in Spider-Man: Web of Shadows. The Ultimate Marvel incarnations of the Venom and Carnage symbiotes appear in Spider-Man: Shattered Dimensions. The Anti-Venom symbiote appears in Spider-Man: Edge of Time. The Venom, Scream, Anti-Venom, and Hybrid symbiotes appear in Marvel: Avengers Alliance. The Venom symbiote appears in Lego Marvel Super Heroes. A nanite-based incarnation of the Venom symbiote appears in The Amazing Spider-Man 2 film tie-in game. Numerous symbiote-related characters appear in Spider-Man Unlimited (2014). Additionally, a "Symbiote Dimension" appears as a stage. Clones of the Venom symbiote created by the Green Goblin and Mysterio appear in Disney Infinity 2.0. The Venom, Carnage, and Anti-Venom symbiotes as well as symbiotes merged with Adaptoids called Symbioids appear in Marvel: Contest of Champions. The Venom and Carnage symbiotes as well as original symbiotes Carrier, Horror, Demolisher, and Mutation appear in Marvel Puzzle Quest. The Klyntar appear in Marvel Avengers Academy. The Venom symbiote, several unnamed symbiotes, and a giant, unnamed symbiote appear in Marvel vs. Capcom: Infinite. Jedah Dohma uses the Soul Stone to steal a million souls from Earth and feed them to the giant symbiote in addition to giving pieces of it to A.I.M.brella, who bond them to virus-infected subjects to stabilize them. Spider-Man, Chris Redfield, Frank West, and Mike Haggar defeat Dohma, but he unleashes the creature on New Metro City. Nonetheless, the Avengers and heroes from the Capcom universe gather three of the Infinity Stones and use them to destroy the giant symbiote. The Venom and Carnage symbiotes, along with a hybridization of them called Carnom, appears in Lego Marvel Super Heroes 2. The Venom symbiote makes a cameo appearance in the ending of Marvel's Spider-Man (2018). The Venom symbiote makes a cameo appearance in the mid-credits scene of Spider-Man: Miles Morales. The Venom, Scream, and Anti-Venom symbiotes as well as several unnamed symbiote foot soldiers and Symbiote Behemoths created by Venom appear in Marvel's Spider-Man 2. Miscellaneous The Scream symbiote appears in The Amazing Adventures of Spider-Man. The Carnage symbiote appears in Spider-Man: Turn Off the Dark. On October 10, 2022, Marvel Comics announced the Summer of Symbiotes upcoming event for New York Comic Con in summer 2023. References External links List of Venom Comics at TheVenomSite.com symbiotes at Comic Vine Marvel's most powerful symbiotes at IGN 16 symbiotes More Powerful Than Venom (And 9 Weaker) at Screenrant Characters created by Tom DeFalco Characters created by David Michelinie Characters created by Roger Stern Characters created by Mike Zeck Venom (character) Fictional amorphous creatures Fictional characters who can duplicate themselves Fictional species and races Fictional superhuman healers Fictional superorganisms Hive minds in fiction Marvel Comics alien species Marvel Comics shapeshifters Marvel Comics characters who can move at superhuman speeds Marvel Comics characters with accelerated healing Marvel Comics characters with superhuman durability or invulnerability Marvel Comics characters with superhuman strength
Symbiote (comics)
[ "Biology" ]
10,024
[ "Superorganisms", "Fictional superorganisms" ]
2,951,653
https://en.wikipedia.org/wiki/Supercritical%20carbon%20dioxide
Supercritical carbon dioxide (sCO2) is a fluid state of carbon dioxide where it is held at or above its critical temperature and critical pressure.
Carbon dioxide usually behaves as a gas in air at standard temperature and pressure (STP), or as a solid called dry ice when cooled and/or pressurised sufficiently. If the temperature and pressure are both increased from STP to be at or above the critical point for carbon dioxide, it can adopt properties midway between a gas and a liquid. More specifically, it behaves as a supercritical fluid above its critical temperature (approximately 31 °C) and critical pressure (approximately 7.38 MPa, or 72.8 atm), expanding to fill its container like a gas but with a density like that of a liquid.
Supercritical CO2 is becoming an important commercial and industrial solvent due to its role in chemical extraction, in addition to its relatively low toxicity and environmental impact. The relatively low temperature of the process and the stability of CO2 also allow compounds to be extracted with little damage or denaturing. In addition, the solubility of many extracted compounds in CO2 varies with pressure, permitting selective extractions.
Applications
Solvent
Carbon dioxide is gaining popularity among coffee manufacturers looking to move away from classic decaffeinating solvents. sCO2 is forced through green coffee beans, which are then sprayed with water at high pressure to remove the caffeine. The caffeine can then be isolated for resale (e.g., to pharmaceutical or beverage manufacturers) by passing the water through activated charcoal filters or by distillation, crystallization or reverse osmosis.
Supercritical carbon dioxide is used to remove organochloride pesticides and metals from agricultural crops without adulterating the desired constituents of the plant matter in the herbal supplement industry.
Supercritical carbon dioxide can be used as a solvent in dry cleaning.
Supercritical carbon dioxide is used as the extraction solvent for creation of essential oils and other herbal distillates. Its main advantages over solvents such as hexane and acetone in this process are that it is non-flammable and does not leave toxic residue. Furthermore, separation of the reaction components from the starting material is much simpler than with traditional organic solvents. The CO2 can evaporate into the air or be recycled by condensation into a recovery vessel. Its advantage over steam distillation is that it operates at a lower temperature, which can separate the plant waxes from the oils.
In laboratories, sCO2 is used as an extraction solvent, for example for determining total recoverable hydrocarbons from soils, sediments, fly-ash and other media, and for determining polycyclic aromatic hydrocarbons in soil and solid wastes. Supercritical fluid extraction has been used in determining hydrocarbon components in water.
Processes that use sCO2 to produce micro- and nano-scale particles, often for pharmaceutical uses, are under development. The gas antisolvent process, rapid expansion of supercritical solutions, and supercritical antisolvent precipitation (as well as several related methods) process a variety of substances into particles.
Due to its ability to selectively dissolve organic compounds and assist enzyme functioning, sCO2 has been suggested as a potential solvent to support biological activity on Venus- or super-Earth-type planets.
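A rough numerical illustration of why sCO2 works as a pressure-tunable solvent: the minimal sketch below, assuming the open-source CoolProp library and its PropsSI function are available, looks up the density of CO2 at a temperature a few degrees above the critical point, for pressures below and above the critical pressure.

# Minimal sketch (assumes CoolProp: pip install coolprop).
# Density of CO2 at 310 K for pressures below, near and above the
# critical pressure of roughly 7.38 MPa.
from CoolProp.CoolProp import PropsSI

T = 310.0                      # kelvin, a few degrees above the ~304 K critical temperature
for P in (5e6, 8e6, 15e6):     # pascals
    rho = PropsSI('D', 'T', T, 'P', P, 'CO2')   # density in kg/m^3
    print(f"{P / 1e6:5.1f} MPa -> {rho:7.1f} kg/m^3")

Below the critical pressure the density is gas-like; above it the density climbs toward liquid-like values, which is why solvent power can be tuned simply by adjusting pressure, as exploited in the selective extractions described in this section.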
Manufactured products
Environmentally beneficial, low-cost substitutes for rigid thermoplastic and fired ceramic are made using sCO2 as a chemical reagent. The sCO2 in these processes is reacted with the alkaline components of fully hardened hydraulic cement or gypsum plaster to form various carbonates. The primary byproduct is water.
sCO2 is used in the foaming of polymers. Supercritical carbon dioxide can saturate the polymer with solvent. Upon depressurization and heating, the carbon dioxide rapidly expands, causing voids within the polymer matrix, i.e., creating a foam. Research is ongoing on microcellular foams.
An electrochemical carboxylation of a para-isobutylbenzyl chloride to ibuprofen is promoted under sCO2.
Working fluid
sCO2 is chemically stable, reliable, low-cost, non-flammable and readily available, making it a desirable candidate working fluid for transcritical cycles. Supercritical CO2 is used as the working fluid in domestic water heat pumps. Manufactured and widely used, such heat pumps are available for domestic and business heating and cooling. While some of the more common domestic water heat pumps remove heat from the space in which they are located, such as a basement or garage, heat pump water heaters are typically located outside, where they remove heat from the outside air.
Power generation
The unique properties of sCO2 present advantages for closed-loop power generation and a range of power-cycle applications. Power generation systems that use traditional air Brayton and steam Rankine cycles can use sCO2 to increase efficiency and power output.
The relatively new Allam power cycle uses sCO2 as the working fluid in combination with fuel and pure oxygen. The CO2 produced by combustion mixes with the sCO2 working fluid, and a corresponding amount of pure CO2 must be removed from the process (for industrial use or sequestration). This process reduces atmospheric emissions to zero.
sCO2 promises substantial efficiency improvements. Due to its high fluid density, sCO2 enables compact and efficient turbomachinery. It can use simpler, single-casing body designs, while steam turbines require multiple turbine stages and associated casings, as well as additional inlet and outlet piping. The high density also allows more compact, microchannel-based heat exchanger technology.
For concentrated solar power, carbon dioxide's critical temperature is not high enough to obtain the maximum energy conversion efficiency: solar thermal plants are usually located in arid areas, so it is impossible to cool the heat sink to sub-critical temperatures. Supercritical carbon dioxide blends with higher critical temperatures are therefore in development to improve concentrated solar power electricity production.
Further, due to its superior thermal stability and non-flammability, direct heat exchange from high-temperature sources is possible, permitting higher working fluid temperatures and therefore higher cycle efficiency. Unlike two-phase flow, the single-phase nature of sCO2 eliminates the heat input needed for the phase change that the water-to-steam conversion requires, thereby also eliminating associated thermal fatigue and corrosion.
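A back-of-the-envelope estimate shows why the high density near the critical point matters for compression work. The minimal sketch below again assumes CoolProp's PropsSI, and the inlet and outlet conditions are only representative assumptions loosely based on published sCO2 Brayton-cycle studies, not design values; it compares the isentropic work of compressing CO2 from just above the critical point with the corresponding ideal-gas estimate.

# Minimal sketch: isentropic compression work for CO2 near the critical point,
# compared with an ideal-gas estimate at the same inlet state and pressure ratio.
from CoolProp.CoolProp import PropsSI

T1, P1, P2 = 305.0, 7.5e6, 20.0e6   # K and Pa; compressor inlet just above the critical point

# Real-fluid isentropic compression: enthalpy rise at constant entropy
h1 = PropsSI('H', 'T', T1, 'P', P1, 'CO2')
s1 = PropsSI('S', 'T', T1, 'P', P1, 'CO2')
h2 = PropsSI('H', 'P', P2, 'S', s1, 'CO2')
w_real = (h2 - h1) / 1e3             # kJ/kg

# Ideal-gas estimate with approximate room-temperature properties of CO2
cp, cv = 846.0, 657.0                # J/(kg K)
gamma = cp / cv
w_ideal = cp * T1 * ((P2 / P1) ** ((gamma - 1) / gamma) - 1) / 1e3

print(f"real-fluid isentropic work: {w_real:.0f} kJ/kg")
print(f"ideal-gas estimate:         {w_ideal:.0f} kJ/kg")

Because the fluid entering the compressor is nearly as dense as a liquid, the real-fluid work comes out well below the ideal-gas figure, which is the effect behind the compact turbomachinery and high cycle efficiencies discussed here.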
The use of sCO2 presents corrosion engineering, material selection and design issues. Materials in power generation components must display resistance to damage caused by high temperature, oxidation and creep. Candidate materials that meet these property and performance goals include incumbent alloys in power generation, such as nickel-based superalloys for turbomachinery components and austenitic stainless steels for piping. Components within sCO2 Brayton loops suffer from corrosion and erosion, specifically erosion in turbomachinery and recuperative heat exchanger components, and intergranular corrosion and pitting in the piping. Testing has been conducted on candidate Ni-based alloys, austenitic steels, ferritic steels and ceramics for corrosion resistance in sCO2 cycles. The interest in these materials derives from their formation of protective surface oxide layers in the presence of carbon dioxide; however, in most cases further evaluation of the reaction mechanics and corrosion/erosion kinetics and mechanisms is required, as none of the materials yet meet the necessary goals.
In 2016, General Electric announced an sCO2-based turbine that enabled a 50% efficiency of converting heat energy to electrical energy. In it the CO2 is heated to 700 °C. It requires less compression and allows heat transfer. It reaches full power in 2 minutes, whereas steam turbines need at least 30 minutes. The prototype generated 10 MW and is approximately 10% the size of a comparable steam turbine. The 10 MW, US$155-million Supercritical Transformational Electric Power (STEP) pilot plant was completed in 2023 in San Antonio. Its turbine is the size of a desk, and the plant can power around 10,000 homes.
Other
Work is underway to develop an sCO2 closed-cycle gas turbine to operate at temperatures near 550 °C. This would have implications for bulk thermal and nuclear generation of electricity, because the supercritical properties of carbon dioxide above 500 °C and 20 MPa enable thermal efficiencies approaching 45 percent. This could increase the electrical power produced per unit of fuel required by 40 percent or more. Given the volume of carbon fuels used in producing electricity, the environmental impact of cycle efficiency increases would be significant.
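As a quick sanity check on these efficiency figures, assuming heat rejection just above the CO2 critical temperature (roughly 305 K), the Carnot limits for the quoted turbine inlet temperatures work out as follows in a few lines of illustrative arithmetic:

# Carnot limits for the turbine inlet temperatures quoted above, assuming
# heat rejection just above the CO2 critical temperature (~305 K).
T_cold = 305.0                                     # kelvin
for T_hot_C, label in ((550.0, "550 C closed-cycle concept"), (700.0, "700 C GE prototype")):
    T_hot = T_hot_C + 273.15
    print(f"{label}: Carnot limit = {1 - T_cold / T_hot:.0%}")
# Roughly 63% and 69%; the quoted 45-50% cycle efficiencies are therefore a
# large fraction of the theoretical maximum at these temperatures.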
Supercritical CO2 is an emerging natural refrigerant, used in new, low-carbon solutions for domestic heat pumps. Supercritical CO2 heat pumps are commercially marketed in Asia. EcoCute systems from Japan, developed by Mayekawa, produce high-temperature domestic water with small inputs of electric power by moving heat into the system from the surroundings.
Supercritical CO2 has been used since the 1980s to enhance recovery in mature oil fields. "Clean coal" technologies are emerging that could combine such enhanced recovery methods with carbon sequestration. Using gasifiers instead of conventional furnaces, coal and water are reduced to hydrogen gas, carbon dioxide and ash. This hydrogen gas can be used to produce electrical power in combined cycle gas turbines, while the CO2 is captured, compressed to the supercritical state and injected into geological storage, possibly into existing oil fields to improve yields.
Supercritical CO2 can be used as a working fluid for geothermal electricity generation in both enhanced geothermal systems (EGS) and sedimentary geothermal systems (so-called CO2 Plume Geothermal, CPG). EGS systems utilize an artificially fractured reservoir in basement rock, while CPG systems utilize shallower, naturally permeable sedimentary reservoirs. Possible advantages of using CO2 in a geologic reservoir, compared to water, include higher energy yield resulting from its lower viscosity, better chemical interaction, and permanent CO2 storage, as the reservoir must be filled with large masses of CO2. As of 2011, the concept had not been tested in the field.
Aerogel production
Supercritical carbon dioxide is used in the production of silica-, carbon- and metal-based aerogels. For example, silicon dioxide gel is formed and then exposed to sCO2. When the CO2 goes supercritical, all surface tension is removed, allowing the liquid to leave the aerogel and produce nanometer-sized pores.
Sterilization of biomedical materials
Supercritical CO2 is an alternative to thermal sterilization of biological materials and medical devices when combined with the additive peracetic acid (PAA). Supercritical CO2 on its own does not sterilize the media, because it does not kill the spores of microorganisms. Moreover, this process is gentle, as the morphology, ultrastructure and protein profiles of inactivated microbes are preserved.
Cleaning
Supercritical CO2 is used in certain industrial cleaning processes.
See also
Caffeine
Dry cleaning
Perfume
Supercritical fluid
Atmosphere of Venus, nearly all carbon dioxide, supercritical at the surface
References
Further reading
Mukhopadhyay M. (2000). Natural extracts using supercritical carbon dioxide. United States: CRC Press, LLC. Free preview at Google Books.
Carbon dioxide Gas technologies Industrial gases Inorganic solvents
Supercritical carbon dioxide
[ "Chemistry" ]
2,212
[ "Greenhouse gases", "Carbon dioxide", "Industrial gases", "Chemical process engineering" ]
2,951,670
https://en.wikipedia.org/wiki/Matthew%20Krok
Matthew Krok (born 8 March 1982) is a former Australian child actor best known for playing the role of schoolboy Arthur McArthur on the Australian sitcom Hey Dad...! from 1991 to 1994. He also appeared in a popular Sorbent toilet paper advertising campaign at around the same time.
Career
During the peak of his stardom in the early 1990s, Krok appeared as a celebrity guest on Wheel of Fortune and was also frequently referred to as "the little fat kid from Hey Dad...!". In an infamous skit on The Late Show, "Arnold Schwarzenegger" (played by Tony Martin in heavy prosthetic makeup) jokingly revealed that the plot of the then yet-to-be-released Terminator 3 revolved around killing him and said, "Hasta la vista, little fat kid!". Krok's other credits include the children's films Paws and Joey. His last credited acting appearance was in the 2001 children's television series Outriders.
Personal life
In 2003, The Sydney Morning Herald revealed that the actor had begun a two-year stint as a Mormon missionary under the name Elder Krok. Prior to this, he had commenced studying for a degree in civil engineering at the University of Western Sydney; he has since transferred to the University of New South Wales, where he is undertaking a double degree in civil and environmental engineering.
Krok married Jade Bennallack on 5 July 2008 at the Sydney Australia Temple in Carlingford.
Filmography
References
External links
1982 births Living people Australian Latter Day Saints Male actors from Sydney Australian civil engineers Australian male child actors Australian male television actors Environmental engineers
Matthew Krok
[ "Chemistry", "Engineering" ]
332
[ "Environmental engineers", "Environmental engineering" ]
2,951,818
https://en.wikipedia.org/wiki/Mitomycins
The mitomycins are a family of aziridine-containing natural products isolated from Streptomyces caespitosus or Streptomyces lavendulae. They include mitomycin A, mitomycin B, and mitomycin C. When the name mitomycin occurs alone, it usually refers to mitomycin C, its international nonproprietary name. Mitomycin C is used as a medicine for treating various disorders associated with the growth and spread of cells.
Biosynthesis
In general, the biosynthesis of all mitomycins proceeds via combination of 3-amino-5-hydroxybenzoic acid (AHBA), D-glucosamine, and carbamoyl phosphate to form the mitosane core, followed by specific tailoring steps. The key intermediate, AHBA, is a common precursor to other anticancer drugs, such as rifamycin and ansamycin.
Specifically, the biosynthesis begins with the addition of phosphoenolpyruvate (PEP) to erythrose-4-phosphate (E4P) by a yet-undiscovered enzyme; the product is then ammoniated to give 4-amino-3-deoxy-D-arabino heptulosonic acid-7-phosphate (aminoDHAP). Next, DHQ synthase catalyzes a ring closure to give 4-amino-3-dehydroquinate (aminoDHQ), which then undergoes a double oxidation via aminoDHQ dehydratase to give 4-amino-dehydroshikimate (aminoDHS). The key intermediate, 3-amino-5-hydroxybenzoic acid (AHBA), is made via aromatization by AHBA synthase.
Synthesis of the key intermediate, 3-amino-5-hydroxybenzoic acid.
The mitosane core is synthesized via condensation of AHBA and D-glucosamine, although no specific enzyme has been characterized that mediates this transformation. Once this condensation has occurred, the mitosane core is tailored by a variety of enzymes. Both the sequence and the identity of these steps are yet to be determined:
Complete reduction of C-6 - likely via F420-dependent tetrahydromethanopterin (H4MPT) reductase and H4MPT:CoM methyltransferase
Hydroxylation of C-5, C-7 (followed by transamination), and C-9a - likely via cytochrome P450 monooxygenase or benzoate hydroxylase
O-Methylation at C-9a - likely via a SAM-dependent methyltransferase
Oxidation at C-5 and C-8 - unknown
Intramolecular amination to form the aziridine - unknown
Carbamoylation at C-10 - carbamoyl transferase, with carbamoyl phosphate (C4P) being derived from L-citrulline or L-arginine
Biological effects
In the bacterium Legionella pneumophila, mitomycin C induces competence for transformation. Natural transformation is a process of DNA transfer between cells, and is regarded as a form of bacterial sexual interaction. In the fruit fly Drosophila melanogaster, exposure to mitomycin C increases recombination during meiosis, a key stage of the sexual cycle. In the plant Arabidopsis thaliana, mutant strains defective in genes necessary for recombination during meiosis and mitosis are hypersensitive to killing by mitomycin C.
Medicinal uses and research
Mitomycin C has been shown to have activity against stationary-phase persister cells of Borrelia burgdorferi, a factor in Lyme disease. Mitomycin C is used to treat pancreatic and stomach cancer, and is under clinical research for its potential in treating gastrointestinal strictures, in modulating wound healing after glaucoma surgery and corneal excimer laser surgery, and in endoscopic dacryocystorhinostomy.
References
DNA replication inhibitors IARC Group 2B carcinogens Quinones Carbamates Ethers Aziridines Nitrogen heterocycles Heterocyclic compounds with 4 rings Enones Methoxy compounds
Mitomycins
[ "Chemistry" ]
910
[ "Organic compounds", "Functional groups", "Ethers" ]
2,951,953
https://en.wikipedia.org/wiki/Argo%20%28ROV%29
Argo is an unmanned deep-towed undersea video camera sled developed by Dr. Robert Ballard through the Woods Hole Oceanographic Institution's Deep Submergence Laboratory. Argo is most famous for its role in the discovery of the wreck of the RMS Titanic in 1985. Argo also played the key role in Ballard's discovery of the wreck of the battleship Bismarck in 1989. The towed sled, capable of operating at depths of 6,000 meters (20,000 feet), meant that 98% of the ocean floor was within reach.
The original Argo, used to find Titanic, was long, tall, and wide and weighed about in air. It had an array of cameras looking forward and down, as well as strobes and incandescent lighting to illuminate the ocean floor. It could acquire wide-angle film and television pictures while flying above the sea floor, towed from a surface vessel, and could also zoom in for detailed views.
See also
Acoustically Navigated Geological Underwater Survey (ANGUS)
References
Oceanographic instrumentation Unmanned underwater vehicles
Argo (ROV)
[ "Technology", "Engineering" ]
216
[ "Oceanographic instrumentation", "Measuring instruments" ]