Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items)
17,382,620
https://en.wikipedia.org/wiki/Supercritical%20angle%20fluorescence%20microscopy
Supercritical angle fluorescence microscopy (SAF) is a technique to detect and characterize fluorescent species (proteins, biomolecules, pharmaceuticals, etc.) and their behaviour close to, adsorbed on, or linked to surfaces. The method can observe molecules within less than 100 nanometres of the surface, even in the presence of high concentrations of fluorescent species in the surrounding medium. Using an aspheric lens for excitation of a sample with laser light, fluorescence emitted by the specimen is collected selectively above the critical angle of total internal reflection and directed by parabolic optics onto a detector. The method was invented in 1998 in the laboratories of Stefan Seeger at the University of Regensburg, Germany, and later at the University of Zurich, Switzerland.

SAF microscopy principle
SAF microscopy exploits the fact that a fluorescent specimen close to a surface does not emit fluorescence isotropically: approximately 70% of the emitted fluorescence is directed into the solid phase, and the main part of it enters the solid body above the critical angle. When the emitter is located just 200 nm above the surface, the fluorescent light entering the solid body above the critical angle decreases dramatically. Hence, SAF microscopy is ideally suited to discriminate between molecules and particles at or close to surfaces and all other species present in the bulk.

Typical SAF setup
The typical SAF setup consists of a laser line (typically 450-633 nm), which is reflected into the aspheric lens by a dichroic mirror. The lens focuses the laser beam in the sample, causing the fluorescent species to emit. The fluorescent light then passes through a parabolic lens before reaching a detector, typically a photomultiplier tube or an avalanche photodiode. It is also possible to arrange SAF elements as arrays and image the output onto a CCD, allowing the detection of multiple analytes.

Selected publications

Fluorescence techniques Microscopy Laser applications
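The critical angle that separates sub- and supercritical emission follows from Snell's law. A minimal sketch (the refractive indices are typical textbook values, not figures from the article):

import math

# Critical angle of total internal reflection at a water/glass interface;
# SAF detection collects only fluorescence emitted beyond this angle.
n_water = 1.33   # sample medium
n_glass = 1.52   # coverslip / solid phase

theta_c = math.degrees(math.asin(n_water / n_glass))
print(f"critical angle ~ {theta_c:.1f} degrees")  # ~61 degrees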
Supercritical angle fluorescence microscopy
[ "Chemistry", "Biology" ]
407
[ "Analytical chemistry stubs", "Fluorescence techniques", "Microscopy" ]
17,383,070
https://en.wikipedia.org/wiki/Participatory%203D%20modelling
Participatory 3D modelling (P3DM) is a community-based mapping method which integrates local spatial knowledge with data on land elevation and sea depth to produce stand-alone, scaled and geo-referenced relief models. Essentially based on local spatial knowledge, land use and cover and other features are depicted by informants on the model using pushpins (points), yarns (lines) and paints (polygons). On completion, a scaled and geo-referenced grid is applied to facilitate data extraction or importation. Data depicted on the model are extracted, digitised and plotted. On completion of the exercise, the model remains with the community.

Awards
On November 5, 2007, at a ceremony during the Global Forum 2007 at the Fondazione Giorgio Cini in Venice, Italy, the CTA-supported project Participatory 3D Modelling (P3DM) for Resource Use, Development Planning and Safeguarding Intangible Cultural Heritage in Fiji was granted the World Summit Award 2007 in the category e-culture. The product, based on the use of P3DM, was selected as one of the 40 best-practice examples of quality e-content in the world. The product was delivered by the following organizations: Fiji Locally-Managed Marine Area (FLMMA) Network, WWF South Pacific Programme, Native Lands Trust Board, Secretariat of the Pacific Community, National Trust of Fiji, Lomaiviti Provincial Council and the Technical Centre for Agricultural and Rural Cooperation ACP-EU (CTA).

See also
Geographic information system (GIS)
Neogeography
Participatory GIS (PGIS) or public participation geographic information system (PPGIS)
Raised-relief map
Traditional knowledge GIS

References

Further reading
Rambaldi G., Muchemi J., Crawhall N. and Monaci L. 2007. Through the Eyes of Hunter-gatherers: Participatory 3D Modelling among Ogiek Indigenous Peoples in Kenya. Information Development, Vol. 23, No. 2-3, 113–128.
Rambaldi G., Kwaku Kyem A. P., Mbile P., McCall M. and Weiner D. 2006. Participatory Spatial Information Management and Communication in Developing Countries. EJISDC 25, 1, 1–9.
Chambers R. 2006. Participatory Mapping and Geographic Information Systems: Whose Map? Who is Empowered and Who Disempowered? Who Gains and Who Loses? EJISDC 25, 2, 1–11.
Rambaldi G., Chambers R., McCall M. and Fox J. 2006. Practical ethics for PGIS practitioners, facilitators, technology intermediaries and researchers. PLA 54:106–113, IIED, London, UK.
Corbett J., Rambaldi G., Kyem P., Weiner D., Olson R., Muchemi J., McCall M. and Chambers R. 2006. Mapping for Change: The emergence of a new practice. PLA 54:13–19, IIED, London, UK.
Rambaldi G., Bugna S., Tiangco A. and de Vera D. 2002. Bringing the Vertical Dimension to the Negotiating Table. Preliminary Assessment of a Conflict Resolution Case in the Philippines. ASEAN Biodiversity, Vol. 2 No. 1, 17–26. ASEAN Centre for Biodiversity Conservation (ARCBC), Los Baños, Philippines.
Puginier O. 2002. "Participation" in a conflicting policy framework: Lessons learned from a Thai experience. ASEAN Biodiversity, Vol. 2 No. 1, 35–42. ASEAN Centre for Biodiversity Conservation (ARCBC), Los Baños, Philippines.
Rambaldi G. and Le Van Lanh. 2002. The Seventh Helper: the Vertical Dimension. Feedback from a training exercise in Vietnam. ASEAN Centre for Biodiversity Conservation (ARCBC), Los Baños, Philippines.
Martin C., Eguienta Y., Castella J.C., T.T. Hieu and Lecompte P. 2001.
Combination of participatory landscape analysis and spatial graphic models as a common language between researchers and local stakeholders. SAM Paper Series. IRRI-IRD.

External links

Networks
Open Forum on Participatory Geographic Information Systems and Technologies – a global network of PGIS/PPGIS practitioners with Spanish-, Portuguese- and French-speaking chapters.
Indigenous Peoples of Africa Coordinating Committee (IPACC)

Organizations
Integrated Approaches to Participatory Development (IAPAD) – provides information and case studies on Participatory 3-Dimensional Modelling (P3DM) practice.
The Philippine Association for Inter-Cultural Development (PAFID) uses Participatory 3D Modelling, GPS and GIS applications to support Indigenous Cultural Communities throughout the Philippines in claiming their rights over ancestral domains.
ERMIS Africa builds capacities among local communities and development practitioners in using participatory geo-spatial information management technologies.
The Technical Centre for Agricultural and Rural Cooperation ACP-EU (CTA) supports the dissemination of good PGIS practice, including P3DM, in ACP countries.

Bibliography
Community Mapping, PGIS, PPGIS and P3DM Virtual Library

Multimedia
Collection of community mapping and participatory GIS multimedia
Giving Voice to the Unspoken – a 20-minute video production showing the hands-on aspects of Participatory 3D Modelling (P3DM).
PGIS Channel on Vimeo, including several documentaries on P3DM in English, French, Spanish and Portuguese

Participatory democracy Participatory budgeting Geographic information systems Human geography Collaborative mapping Neogeography
Participatory 3D modelling
[ "Technology", "Environmental_science" ]
1,153
[ "Environmental social science", "Information systems", "Geographic information systems", "Human geography" ]
13,433,854
https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Switzerland
The geographical centre of Switzerland has the coordinates (Swiss Grid: 660158/183641). It is located at Älggi-Alp in the municipality of Sachseln, Obwalden. The point is the centre of mass of the country, determined in 1988 by Swisstopo. As the point itself is difficult to access, a stone was set 500 m further south-east on Älggi Alp (1645 m). This stone symbolizes the centre of Switzerland and is located at (Swiss Grid: 660557/183338). A plaque on the stone commemorates the winner of the "Swiss of the Year" award.

External links
https://web.archive.org/web/20120607113752/http://www.swisstopo.admin.ch/internet/swisstopo/en/home/topics/knowledge/center_ch.html
https://web.archive.org/web/20111113181702/http://www.ch.ch/schweiz/01865/01885/01904/02135/index.html?lang=en
https://web.archive.org/web/20160303224820/http://skatingland.myswitzerland.com/en/sightseeing_detail.cfm?id=340745
https://web.archive.org/web/20120308105359/http://www.wanderland.ch/en/orte_detail.cfm?id=315425

Switzerland Geography of Obwalden Centre Tourist attractions in Obwalden
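A centre of mass of this kind can be approximated as the area centroid of the region's border polygon. A minimal sketch using the shoelace formula (the triangle is made-up sample data, not Swiss border coordinates, and the real computation must also account for terrain and map projection):

def polygon_centroid(pts):
    # Shoelace-based area centroid of a simple (non-self-intersecting) polygon.
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

print(polygon_centroid([(0, 0), (4, 0), (0, 3)]))  # (1.333..., 1.0)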
Geographical centre of Switzerland
[ "Physics", "Mathematics" ]
352
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
13,434,545
https://en.wikipedia.org/wiki/Frequency%20coordination
Frequency coordination is a technical and regulatory process that removes or mitigates radio-frequency interference between different radio systems operating on the same frequency. Normally frequency coordination is a function of an administration, such as a governmental spectrum regulator, as part of a formal regulatory process under the procedures of the Radio Regulations (an intergovernmental treaty text that regulates the radio frequency spectrum). Before an administration lets an operator bring a new radio communications network into service, the network must undergo coordination in the following steps:

Inform other operators about the plans
Receive comments if appropriate
Conduct technical discussions with priority networks
Agree on technical and operational parameters
Gain international recognition and protection on the Master International Frequency Register
Bring the network into use

This coordination ensures that all administrations know the technical plans of other administrations, and that all operators (satellite and terrestrial) can determine whether unacceptable interference to existing and planned "priority" networks is likely, with an opportunity to object, to discuss and review, and to reach technical and operational sharing agreements.

Coordination is thus closely bound to the date of protection, or priority, defined by the date when the International Telecommunication Union receives complete coordination data. New planned networks must coordinate with all networks with an earlier date of protection but are protected against all networks with a later date of protection. Planned (but not implemented) networks acquire status under this procedure, but time limits ensure that protection does not last forever if networks are not implemented.

Congress Authorizes FCC
In 1982, the United States Congress provided the FCC with the authority to use frequency coordinators to assist in developing and managing spectrum and to recommend appropriate frequencies (designated under Part 90).

List of Coordinators
For public safety frequency coordination: AASHTO, APCO, FCCA, IMSA
For business and special emergency: AAA, AAR, EWA, FIT, PCIA, UTC
Micronet Communications, Inc. - since 1983
Comsearch - since 1977

References

Radio technology
Frequency coordination
[ "Technology", "Engineering" ]
365
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
13,435,393
https://en.wikipedia.org/wiki/Euler%20filter
In computer graphics, an Euler filter is a filter intended to prevent gimbal lock and related discontinuities in animation data sets in which rotation is expressed in terms of Euler angles. These discontinuities are caused by the many-to-one mapping from the Euler angle parameterization to the set of 3D rotations: the data set can flip between different Euler angle combinations that correspond to a single 3D rotation, and such flips, although continuous in the space of rotations, are discontinuous in the Euler angle parameter space. The Euler filter chooses, on a sample-by-sample basis, between the possible Euler angle representations of each 3D rotation in the data set in such a way as to preserve the continuity of the Euler angle time series, without changing the actual 3D rotations. Euler filtering is available in a number of 3D animation packages.

See also
Charts on SO(3)
Rotation formalisms in three dimensions

References

External links
http://fliponline.blogspot.com/2007/04/quick-trick-gimbal-lock-just-ignore-it.html
http://www.xsibase.com/forum/index.php?board=11;action=display;threadid=24434
http://sparks.discreet.com/knowledgebase/sdkdocs_v8/prog/main/sdk_trans_handling_sign_flips.html

Computer animation
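A common naive implementation for XYZ Tait–Bryan angles uses the identity R(x, y, z) = R(x + pi, pi - y, z + pi): for each frame it unwraps both the original and the "flipped" representation towards the previous frame and keeps whichever stays closer. A minimal sketch under those assumptions (illustrative, not the code of any particular animation package):

import numpy as np

def unwrap_toward(angles, prev):
    # Shift each angle by whole turns so it lands as close as possible to prev.
    return angles + 2 * np.pi * np.round((prev - angles) / (2 * np.pi))

def euler_filter(frames):
    # frames: (N, 3) array of XYZ Euler angles in radians.
    out = np.asarray(frames, dtype=float).copy()
    for i in range(1, len(out)):
        prev, cur = out[i - 1], out[i]
        # Candidate 1: same representation, each axis unwrapped independently.
        cand1 = unwrap_toward(cur, prev)
        # Candidate 2: the equivalent flipped representation, also unwrapped.
        flipped = np.array([cur[0] + np.pi, np.pi - cur[1], cur[2] + np.pi])
        cand2 = unwrap_toward(flipped, prev)
        # Keep whichever candidate deviates least from the previous frame.
        if np.abs(cand1 - prev).sum() <= np.abs(cand2 - prev).sum():
            out[i] = cand1
        else:
            out[i] = cand2
    return out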
Euler filter
[ "Mathematics" ]
308
[ "Geometry", "Geometry stubs" ]
13,437,576
https://en.wikipedia.org/wiki/Voltage-gated%20proton%20channel
Voltage-gated proton channels are ion channels with the unique property of opening with depolarization, but in a strongly pH-sensitive manner. The result is that these channels open only when the electrochemical gradient is outward, so that their opening only allows protons to leave the cell. Their function thus appears to be acid extrusion from cells. Another important function occurs in phagocytes (e.g. eosinophils, neutrophils, and macrophages) during the respiratory burst. When bacteria or other microbes are engulfed by phagocytes, the enzyme NADPH oxidase assembles in the membrane and begins to produce reactive oxygen species (ROS) that help kill the microbes. NADPH oxidase is electrogenic, moving electrons across the membrane, and proton channels open to allow a proton flux that electrically balances the electron movement. The functional expression of Hv1 in phagocytes has been well characterized in mammals, and more recently in zebrafish, suggesting that it plays important roles in the immune cells of both mammals and non-mammalian vertebrates. A group of small-molecule inhibitors of the Hv1 channel has been proposed as chemotherapeutic and anti-inflammatory agents.

When activated, the voltage-gated proton channel Hv1 can pass up to 100,000 hydrogen ions across the membrane each second. Whereas most voltage-gated ion channels contain a central pore that is surrounded by alpha helices and the voltage-sensing domain (VSD), voltage-gated proton channels contain no central pore, so their voltage-sensing domains themselves carry out the job of conducting protons across the membrane. Because the relative H+ concentrations on each side of the membrane produce a pH gradient, these channels only carry outward current, meaning they move protons out of the cell. As a result, the opening of voltage-gated proton channels usually hyperpolarizes the cell membrane, that is, makes the membrane potential more negative.

A recent discovery has shown that the voltage-gated proton channel Hv1 is highly expressed in human breast tumor tissues that are metastatic, but not in non-metastatic breast cancer tissues. Because it has also been found to be highly expressed in other cancer tissues, the study of the voltage-gated proton channel has led many scientists to ask what its role is in cancer metastasis. However, much is still being discovered concerning the structure and function of the voltage-gated proton channel.

Known types
HVCN1

References

Ion channels Immunology Voltage-gated ion channels
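The direction of proton flux is set by the proton reversal potential, which the Nernst equation gives from the pH difference across the membrane; the channel conducts outward current when the membrane potential is above it. A minimal sketch (the pH values below are illustrative assumptions, not figures from the article):

import math

R, F, T = 8.314, 96485.0, 310.0   # gas constant, Faraday constant, 37 C in K

def proton_reversal_mV(pH_in, pH_out):
    # E_H = (RT/F) * ln([H+]_out / [H+]_in), with pH = -log10[H+]
    return 1000.0 * (R * T / F) * math.log(10) * (pH_in - pH_out)

print(f"{proton_reversal_mV(7.2, 7.4):.1f} mV")  # about -12 mV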
Voltage-gated proton channel
[ "Chemistry", "Biology" ]
538
[ "Immunology", "Neurochemistry", "Ion channels" ]
13,438,665
https://en.wikipedia.org/wiki/Standing%20crop
A standing crop is the total biomass of the living organisms present in a given environment. The concept applies to both natural ecosystems and agriculture.

See also
Net primary production
Standing state

Bibliography
Boudouresque CF (1973). Les peuplements sciaphiles; Recherches de bionomie analytique, structurale et expérimentale sur les peuplements benthiques sciaphiles de Méditerranée occidentale (fraction algale). Bulletin du Muséum d'histoire naturelle, 33, 147. PDF, 80 pages.
Campbell, Reece, Urry, Cain, et al. (2011). Biology, 9th ed. Benjamin Cummings. p. 1221.
Fausch, K. D., Hawkes, C. L., & Parsons, M. G. (1988). Models that predict standing crop of stream fish from habitat variables: 1950-85. http://www.treesearch.fs.fed.us/pubs/8730
Jenkins, R. M. (1968). The influence of some environmental factors on standing crop and harvest of fishes in US reservoirs.

References

External links
Models that predict standing crop of stream fish from habitat variables: 1950-85

Habitats Ecosystems Ecological metrics
Standing crop
[ "Mathematics", "Biology" ]
262
[ "Symbiosis", "Metrics", "Ecological metrics", "Quantity", "Ecosystems" ]
13,439,619
https://en.wikipedia.org/wiki/Cable%20railing
Cable railings, or wire rope railings, are safety rails that use horizontal or vertical cables in place of spindles, glass, or mesh for infill.

Uses
Cable railings are often chosen over traditional pickets to achieve nearly unobstructed views, as the cable is much thinner than traditional pickets. They also have a more modern aesthetic and are often chosen for that reason. Cable assemblies can also be installed into an existing railing system (called cable infill), eliminating many maintenance headaches.

Post construction
Because of the high load requirements of this type of railing system, post construction is critical to the success of cable railings. Cable railing requires very rigid frames compared to many other types of railings because of the forces applied to the end posts when tensioning the cables. Cables must be tensioned to provide minimum cable deflection, measured with a 4-inch sphere, to satisfy building code requirements. Manufacturers use different methods to achieve the same result: one manufacturer uses a thicker wall and a webbed post in its aluminum systems, while using only thicker side walls in its stainless systems. Common frames are constructed of steel, stainless steel, extruded aluminum, or wood.

Post height
The total minimum height required varies per building code depending on the area and target use, either residential or commercial. Local city codes supersede state, national, and international codes. In most states, the residential code requires a height of 36 inches; there are exceptions, though: in California the required height for residential railing is 42 inches. The commercial International Building Code requires the railing to be at a minimum height of 42 inches. Posts can be floor-mounted or fascia/side-mounted, but the height of the railing is measured from the floor to the top of the railing.

Spacing between the cables
Guidelines for spacing between cable components are straightforward. According to the International Building Code (ICC), openings between cables should not exceed 4 inches; a 4-inch sphere should not be able to pass through the openings. Spacing between posts should be kept consistent (when possible) along the assembly. For 36-inch or 42-inch posts, 4 feet of spacing (center to center) is recommended to minimize deflection between the cables when pushing a 4-inch ball between two cables. To accommodate such standards, railing projects may use 3 1/2 inches or less of spacing between cables, taking into account the cable deflection caused by the post spacing. This configuration streamlines compliance with the 4-inch sphere requirement.

Cables and tensioning
Cable is very strong in tension, with a breaking strength in excess of 1000 lbs for these types of uses, and is a suitable infill material for a railing (a "guard" in ICC codes). Typical diameters are 1/8" and 3/16" for residential applications and 3/16" and 1/4" for commercial applications. There are many different types of cable and strand (also referred to as wire rope). Cable and strand are available in galvanized carbon steel, type 304 stainless steel, or the highly corrosion-resistant type 316 stainless steel (best for coastal areas). The most common construction is 1x19 strand, which is 19 wires twisted into a single bundle, whereas, for example, 7x7 is 7 bundles of 7 wires each. This type of stainless strand is designed to have limited stretch compared to galvanized strand, making it a good long-term cable railing solution.
It has long been used for yacht stays and guy wires, proving its outdoor durability and strength.

Cable flexibility
Cable flexibility is an important consideration in designing a cable railing. The old UBC (Uniform Building Code) and newer ICC (IBC and IRC) codes state that a 4-inch sphere shall not pass through any portion of a barrier on a guardrail. In a horizontal or vertical cable rail, the cables, once tensioned, must be rigid enough to prevent a 4-inch sphere from passing through. Factors influencing this rigidity are: the tension of the cable, the spacing of intermediate posts (or cable spacers), the diameter of the cable, the top rail cap material, and the cable-to-cable spacing. The application of the 4-inch sphere test is usually at the discretion of a code enforcement official, who will interpret the force behind the 4-inch sphere, so it is advised that cable spacing not be more than 3 inches over a 48-inch space between posts.

Cable tension: A substantial amount of tension is generated on the end posts when ten or more cables, each tensioned at 200-400 lbs, are strung over a height of 36 to 42 inches. Underestimating the tension of cables applied to end posts can create a safety hazard: cables can have too much deflection, allowing body parts to slip through, or cables can simply pull out of the end fittings, causing the cable rail to fail. Poorly designed end posts result in a railing whose cables cannot be properly tensioned without an unacceptable amount of cable deflection. End posts to which the tensioning hardware attaches must be constructed so that they will not deflect perceptibly.

Post spacing: Intermediate posts provide mounting for the top rail and have a vertical row of holes to support the cable as it passes through them. Since post-to-post spacing is a primary driver of cable rigidity, it is very important. It is generally recommended that post spacing be no more than 5 ft on center; some manufacturers require no more than 3 ft on center. The reason for the post spacing limit is mostly the machine-thread loading capacity of the cable end fittings (how much tension can be put on the threads before they fail): the more the cable drops in the middle, the more weight on the tensioning device, and therefore the more load on the threads. Proof strength must be greater than load.

Cable diameter and properties: The next variable is the diameter of the cable. Cables can be any wire rope that meets the ICC's load strength requirements. The most available types are 1x19 1/8", 1x19 5/32", 1x19 3/16", 7x7 3/16", and 1x19 1/4". 1x19 cable is the most rigid cable available, and per the above it has greater resistance in the 4-inch sphere test and likewise less chance of allowing objects 4 inches and over to slip through the cables. Type 316 stainless steel is preferred because of its resistance to stretching, which keeps long-term maintenance down, as well as its anti-corrosive properties.

Top rail: Top rail material must be strong, as it is compressed by the combined cable forces. Common top cap materials are the stronger species of wood, or metal. Composite lumber can be used if a support rail is used along with it. The support rail is used between the posts to lend strength to the system, both between the posts and to the top rail.

Cable-to-cable spacing: Spacing of the cables vertically is critical to minimizing deflection of the cables.
Most manufacturers recommend a maximum vertical spacing of no more than a 3-inch free opening between cables when installed, to meet the cable deflection requirements stated above. All of the above factors work together to minimize the deflection of the cables and prevent a 4-inch sphere from passing between them when they are properly tensioned in a well-designed frame. This is a requirement according to a number of building codes. Among the more stringent, including that of California, this requirement may be applied in conjunction with a weight being hung from the cable.

Cable end fittings
Cable end fittings are the pieces that tie the system together. The cable attaches into one side of the fitting, while the other side attaches to the post (frame structure). Cable ends may tension, or just attach to the frame, depending on the individual needs of the project. Whether to use tensioning or non-tensioning fittings generally depends on the manufacturer's system requirements, local building codes, and ICC requirements. To determine the type of cable end fittings needed, one needs to know the distance a single piece of cable is expected to run without stopping and the tensioning ability of the fitting to be used; individual manufacturers will help determine the rest. Most cable end fittings are made of type 316 stainless steel to avoid rust.

See also
Handrail
Guard rail
Baluster

References
Fine Homebuilding magazine

Metal fences Garden features Architectural elements Stairs Stairways
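The interplay of tension, span, and deflection discussed above can be estimated with a small-angle approximation: a transverse point load F at midspan of a cable carrying tension T over a clear span L deflects the cable by roughly d = F*L / (4*T). A rough sketch with made-up numbers, not a code-compliance tool:

def midspan_deflection(force_lbf, span_in, tension_lbf):
    # Small-angle estimate: two straight cable segments meeting at the load.
    return force_lbf * span_in / (4.0 * tension_lbf)

# e.g. a 50 lbf push at midspan of a 48 in span tensioned to 300 lbf:
print(f"{midspan_deflection(50, 48, 300):.1f} in")  # 2.0 in of deflection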
Cable railing
[ "Technology", "Engineering" ]
1,775
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
13,439,882
https://en.wikipedia.org/wiki/Universal%20conductance%20fluctuations
Universal conductance fluctuations (UCF) in mesoscopic physics is a phenomenon encountered in electrical transport experiments on mesoscopic samples. The measured electrical conductance varies from sample to sample, mainly because of inhomogeneous scattering sites. The fluctuations originate from coherence effects for the electronic wavefunctions, and thus the phase-coherence length needs to be larger than the momentum relaxation length. UCF is most pronounced when electrical transport is in the weak localization regime, in which the sample size lies between l and N*l, where N is the number of conduction channels and l is the momentum relaxation length, or mean free path, set by scattering events. For weakly localized samples the fluctuation in conductance is of order the conductance quantum e^2/h, regardless of the number of channels. Many factors influence the amplitude of UCF. At zero temperature and without decoherence, the UCF is influenced mainly by two factors: the symmetry and the shape of the sample. Recently, a third key factor, the anisotropy of the Fermi surface, has also been found to fundamentally influence the amplitude of UCF.

See also
Speckle patterns, the optical analogues of conductance fluctuation patterns.

References

General references
Akkermans and Montambaux, Mesoscopic Physics of Electrons and Photons, Cambridge University Press (2007)
Supriyo Datta, Electronic Transport in Mesoscopic Systems, Cambridge University Press (1995)
R. Saito, G. Dresselhaus and M. S. Dresselhaus, Physical Properties of Carbon Nanotubes, Imperial College Press (1998)
Boris Altshuler (1985), Pis'ma Zh. Eksp. Teor. Fiz. 41: 530 [JETP Lett. 41: 648]

Mesoscopic physics Quantum mechanics
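The universal scale of the fluctuations is the conductance quantum e^2/h. As a quick illustrative calculation (CODATA constants, not figures from the article):

e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

# Conductance quantum: the magnitude of UCF, independent of sample size
# and of the degree of disorder.
g0 = e**2 / h
print(f"e^2/h ~ {g0 * 1e6:.1f} microsiemens")  # ~38.7 uS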
Universal conductance fluctuations
[ "Physics", "Materials_science" ]
357
[ "Materials science stubs", "Theoretical physics", "Quantum mechanics", "Condensed matter physics", "Condensed matter stubs", "Mesoscopic physics", "Quantum physics stubs" ]
13,440,087
https://en.wikipedia.org/wiki/Capsanthin
Capsanthin is a natural red dye of the xanthophyll class of carotenoids. As a food coloring, it has the E number E160c(i). Capsanthin is the main carotenoid in the Capsicum annuum species of plants, including red bell pepper, New Mexico chile, and cayenne pepper, and a component of paprika oleoresin. Capsanthin is also found in some species of lily. Of all carotenoids, capsanthin is considered to have the greatest antioxidant capacity, due to the presence of eleven conjugated double bonds, a conjugated keto group, and a cyclopentane ring.

Research
Carotenoids such as β-carotene, lutein, and zeaxanthin have often been touted for their ability to support eye function. Capsanthin may also support eye health, and recent research has suggested its potential to help maintain intraocular pressure within a healthy range. A preclinical study on Wistar rats explored this effect over the course of 28 days: the rats were induced with elevated intraocular pressure and then given either a placebo or capsanthin. At the end of the trial, the rats that consumed capsanthin had normalized their eye pressure to a level comparable to that of the control group with normal pressure.

References

Carotenoids
Capsanthin
[ "Biology" ]
299
[ "Biomarkers", "Carotenoids" ]
13,440,094
https://en.wikipedia.org/wiki/Capsorubin
Capsorubin is a natural red dye of the xanthophyll class. As a food coloring, it has the E number E160c(ii). Capsorubin is a carotenoid found in red bell pepper (Capsicum annuum) and a component of paprika oleoresin. Capsorubin is also found in some species of lily. References Carotenoids
Capsorubin
[ "Chemistry", "Biology" ]
89
[ "Biomarkers", "Carotenoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
13,440,300
https://en.wikipedia.org/wiki/Too%20cheap%20to%20meter
Too cheap to meter refers to a commodity so inexpensive that it is cheaper and less bureaucratic to provide it for a flat fee, or even free, and make a profit from associated services. Originally applied to nuclear power, the phrase is also used for services that can be provided at such low cost that the additional cost of itemized billing would outweigh the benefits.

Origins
The phrase was coined by Lewis Strauss, then chairman of the United States Atomic Energy Commission, who, in a 1954 speech to the National Association of Science Writers, said:

It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age.

It was this statement that caught the eye of most reviewers and made the headline of a New York Times article covering the speech, subtitled "It will be too cheap for our children to meter, Strauss tells science writers." Only a few days later, Strauss was a guest on Meet the Press. When the reporters asked him about the quotation and the viability of "commercial power from atomic piles," Strauss replied that he expected his children and grandchildren would have power "too cheap to be metered, just as we have water today that's too cheap to be metered."

The statement was contentious from the start. The U.S. Atomic Energy Commission itself, in testimony to the U.S. Congress only months before, had lowered expectations for fission power, projecting only that the costs of reactors could be brought down to about the same as those for conventional sources. A later survey found dozens of statements from the period suggesting it was widely believed that nuclear energy would be more expensive than coal, at least for the foreseeable future. James T. Ramey, who would later become an AEC Commissioner, noted: "Nobody took Strauss' statement very seriously."

The phrase has also been attributed to Walter Marshall, a pioneer of nuclear power in the United Kingdom, but there is no documentary evidence that he invented or used the term.

Fusion or fission?
Strauss's prediction did not come true, and over time it became a target of those pointing to the industry's record of overpromising and underdelivering. In 1980, the Atomic Industrial Forum published an article quoting his son, Lewis H. Strauss, who claimed that his father had been talking not about nuclear fission but about nuclear fusion, and that his father was not specific about this in the speech because the AEC's Project Sherwood was still classified at the time, so he was not allowed to refer to the work directly. Since then, this claim has been widely repeated, including in 2003 comments by Donald Hintz, chairman of the Nuclear Energy Institute. To support the argument, the younger Strauss and the biographer Pfau point to this statement: "industry would have electrical power from atomic furnaces in five to fifteen years," claiming that the timeline implies that Strauss was referring to fusion, not fission. Although it is not a direct quote, this version of the statement appeared in the New York Times overview of the speech the next day. The statement in question is originally: Dr.
Lawrence Hafstad, whom all of you surely know, happens to be speaking, today, in Brussels before the Congress of Industrial Chemistry. He heads the Reactor Development Division of the Atomic Energy Commission. Therefore, he expects to be asked, "How soon will you have industrial atomic electric power in the United States?" His answer is "from 5 to 15 years depending on the vigor of the development effort."

Hafstad was in charge of the AEC's development of fission reactors, and this statement immediately precedes the "too cheap to meter" statement. The same is true of Strauss's statements on Meet the Press, which were in direct reply to a question about fission. The speech as a whole contains large sections about the development of fission power and the difficulties the Commission was having communicating this fact. He wryly notes receiving letters addressed to the "Atomic Bomb Commission" and then quotes a study demonstrating that the public was largely ignorant of the development of atomic power. He goes on to briefly recount the development of fission, noting a letter from Leo Szilard, sixteen years earlier, in which Szilard speaks of the possibility of a chain reaction. A later examination of the topic concluded: "there is no evidence in Strauss's papers at the Herbert Hoover Presidential Library to indicate fusion was the hidden subject of his speech."

Strauss viewed hydrogen fusion as the ultimate power source, was eager to develop the technology as quickly as possible, and urged the Project Sherwood researchers to make rapid progress, even suggesting a million-dollar prize for the individual or team that succeeded first. However, Strauss was not optimistic about the rapid commercialization of fusion power. In August 1955, after fusion research was made public, he cautioned that "there has been nothing in the nature of breakthroughs that would warrant anyone assuming that this [fusion power] was anything except a very long range—and I would accent the word 'very'—prospect."

Other uses
The phrase became famous enough that it has been used in other contexts, especially in post-scarcity discussions. For instance, landline (and cable) internet bandwidth is now often billed at a flat monthly fee with no usage limits, and it has been predicted that the introduction of 5G will do the same for mobile data, making it "too cheap to meter." The same has been said of technology as a whole. Before 1985, water meters were not required in New York City; water and sewage fees were assessed based on building size and number of water fixtures. Water metering was introduced as a conservation measure.

See also
Cornucopianism
Free public transport

References

Sources

External links
Steve Cohn (1997). Too cheap to meter: an economic and philosophical analysis of the nuclear dream

English-language idioms Nuclear power Commodities
Too cheap to meter
[ "Physics" ]
1,246
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
13,440,591
https://en.wikipedia.org/wiki/Onium
An onium (plural: onia) is a bound state of a particle and its antiparticle. These states are usually named by adding the suffix -onium to the name of one of the constituent particles (replacing an -on suffix when present), with one exception: because the name "muonium" was already taken under older nomenclature, a muon–antimuon bound pair is called "true muonium" to avoid confusion.

Examples
Positronium is an onium consisting of an electron and a positron bound together as a long-lived metastable state. Positronium has been studied since the 1950s to understand bound states in quantum field theory. A recent development called non-relativistic quantum electrodynamics (NRQED) used this system as a proving ground.

Pionium, a bound state of two oppositely charged pions, is interesting for exploring the strong interaction. The same should be true of protonium, a proton–antiproton bound state. The true analogs of positronium in the theory of strong interactions, however, are the quarkonium states: mesons made of a heavy quark and its antiquark (namely, charmonium and bottomonium). Exploration of these states through non-relativistic quantum chromodynamics (NRQCD) and lattice QCD provides increasingly important tests of quantum chromodynamics. Understanding bound states of hadrons such as pionium and protonium is also important for clarifying notions related to exotic hadrons such as mesonic molecules and pentaquark states.

See also
Exotic atom
Exciton, the solid-state analog of positronium

Footnotes

References

Particle physics
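Because onium binding energies scale hydrogen-like with the reduced mass of the pair, a one-line estimate is possible. A worked example with textbook values (illustrative, not figures from the article):

RYDBERG_EV = 13.606   # hydrogen ground-state binding energy, eV

# Positronium: electron and positron have equal mass, so the reduced mass
# is half the electron mass and the binding energy is half of hydrogen's.
mu_over_me = 0.5
print(f"positronium binding ~ {RYDBERG_EV * mu_over_me:.2f} eV")  # ~6.80 eV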
Onium
[ "Physics" ]
349
[ "Particle physics" ]
13,440,885
https://en.wikipedia.org/wiki/Resource%20contention
In computer science, resource contention is a conflict over access to a shared resource such as random-access memory, disk storage, cache memory, internal buses, or external network devices. A resource experiencing ongoing contention can be described as oversubscribed. Resolving resource contention problems is one of the basic functions of operating systems. Various low-level mechanisms can be used to aid this, including locks, semaphores, mutexes, and queues. Other techniques an operating system can apply include intelligent scheduling, application mapping decisions, and page coloring. Access to resources is also sometimes regulated by queuing; in the case of computing time on a CPU, the controlling algorithm of the task queue is called a scheduler. Failure to properly resolve resource contention problems may result in a number of problems, including deadlock, livelock, and thrashing.

Resource contention results when multiple processes attempt to use the same shared resource. Access to memory areas is often controlled by semaphores, which can lead to a pathological situation called a deadlock, in which different threads or processes try to allocate resources already allocated by each other. A deadlock usually leads to a program becoming partially or completely unresponsive. In recent years, research on contention has focused more on the resources in the memory hierarchy, e.g., last-level caches, the front-side bus, and memory socket connections.

See also
Bus contention
Cache coherence
Collision avoidance (networking)
Resource allocation

References

Computational resources
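One classic way to avoid the deadlock described above is to impose a global order in which locks may be acquired, so that no two threads can each hold one resource while waiting on the other. A minimal sketch (illustrative, not taken from any particular operating system):

import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(name):
    # Every thread acquires lock_a before lock_b; if one thread took them
    # in the opposite order, the two could deadlock waiting on each other.
    with lock_a:
        with lock_b:
            print(f"{name}: holding both resources")

threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()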
Resource contention
[ "Technology" ]
302
[ "Computing stubs", "Computer science", "Computer science stubs" ]
13,441,556
https://en.wikipedia.org/wiki/Soil%20biodiversity
Soil biodiversity refers to the relationship of soil to biodiversity and to aspects of the soil that can be managed in relation to biodiversity. Soil biodiversity also bears on some catchment management considerations.

Biodiversity
According to the Australian Department of the Environment and Water Resources, biodiversity is "the variety of life: the different plants, animals and micro-organisms, their genes and the ecosystems of which they are a part." Biodiversity and soil are strongly linked, because soil is the medium for a large variety of organisms and interacts closely with the wider biosphere. Conversely, biological activity is a primary factor in the physical and chemical formation of soil.

Soil provides a vital habitat, primarily for microbes (including bacteria and fungi), but also for microfauna (such as protozoa and nematodes), mesofauna (such as microarthropods and enchytraeids), and macrofauna (such as earthworms, termites, and millipedes). The primary role of soil biota is to recycle organic matter derived from the "above-ground plant-based food web". The maintenance of fertile soil is "one of the most vital ecological services the living world performs", and the "mineral and organic contents of soil must be replenished constantly as plants consume soil elements and pass them up the food chain".

The correlation of soil and biodiversity can be observed spatially. For example, both natural and agricultural vegetation boundaries correspond closely to soil boundaries, even at continental and global scales. A "subtle synchrony" is how Baskin (1997) describes the relationship between the soil and the diversity of life above and below the ground. It is not surprising, then, that soil management directly affects biodiversity. This includes practices that influence soil volume, structure, and biological and chemical characteristics, and whether soil exhibits adverse effects such as reduced fertility, soil acidification, or salinisation.

Process effects

Acidification
Soil acidity (or alkalinity) is the concentration of hydrogen ions (H+) in the soil. Measured on the pH scale, soil acidity is an invisible condition that directly affects soil fertility and toxicity by determining which elements in the soil are available for absorption by plants. Increases in soil acidity are caused by removal of agricultural product from the paddock, leaching of nitrogen as nitrate below the root zone, inappropriate use of nitrogenous fertilizers, and buildup of organic matter. Many of the soils in the Australian state of Victoria are naturally acidic; however, about 30,000 square kilometres, or 23% of Victoria's agricultural soils, suffer reduced productivity due to increased acidity.

Soil acidity has been seen to damage the roots of plants. Plants growing in more acid soils have smaller, less durable roots; some evidence shows that the acidity damages root tips, restricting further growth. The height of plants grown in acidic soils is also markedly restricted, as seen in American and Russian wheat populations, and far fewer seeds germinate in acidic soil than in soil of more neutral pH. These limitations on growth can have a very negative effect on plant health, leading to a decrease in the overall plant population. These effects occur regardless of the biome.
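Since pH is the negative base-10 logarithm of the hydrogen ion concentration, each whole pH unit represents a tenfold change in acidity. A quick worked example (the concentrations are illustrative, not measurements from the text):

import math

def pH(h_conc_mol_per_L):
    # pH = -log10 of the hydrogen ion concentration in mol/L
    return -math.log10(h_conc_mol_per_L)

print(pH(1e-5))   # 5.0 -- an acidic soil solution
print(pH(1e-7))   # 7.0 -- neutral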
A study in the Netherlands examined the correlation between soil pH and soil biodiversity in soils with pH below 5 and found a strong correlation: the lower the pH, the lower the biodiversity. The results were the same in grasslands and in heathlands. Particularly concerning is the evidence that this acidification is directly linked to the decline in endangered plant species, a trend recognized since 1950.

Soil acidification reduces soil biodiversity. It reduces the numbers of most macrofauna, including, for example, earthworms (important in maintaining the structural quality of the topsoil for plant growth). Rhizobium survival and persistence are also affected. Decomposition and nitrogen fixation may be reduced, which affects the survival of native vegetation. Biodiversity may decline further as certain weeds proliferate under declining native vegetation. In strongly acidic soils, the associated toxicity may lead to decreased plant cover, leaving the soil susceptible to erosion by water and wind. Extremely low pH soils may suffer from structural decline as a result of reduced microorganisms and organic matter; this brings susceptibility to erosion under high rainfall events, drought, and agricultural disturbance.

Some plants within a species show resistance to the soil acidity their population grows in, and selectively breeding the stronger plants is one way to guard against increasing soil acidity. Further success in combatting soil acidity has been seen in soybean and maize populations suffering from aluminium toxicity: soil nutrients were restored and acidity decreased when lime was added to the soil, and plant health and root biomass increased in response to the treatment. This is a possible solution for other plant populations on acidic soils.

Structure decline
Soil structure is the arrangement of particles and associated pores in soils across the size range from nanometres to centimetres. Biological influences can be demonstrated in the formation and stabilization of soil aggregates, but it is necessary to distinguish clearly between the forces or agencies that create aggregations of particles and those that stabilize or degrade such aggregations. Good soil has the following attributes: optimal soil strength and aggregate stability, which offer resistance to structural degradation (capping/crusting, slaking, and erosion, for example); optimal bulk density, which aids root development and contributes to other soil physical parameters, such as the movement of water and air within the soil; and optimal water-holding capacity and rate of water infiltration.

Well-developed, healthy soils are complex systems in which physical soil structure is as important as chemical content. Soil pores, which are maximized in a well-structured soil, allow oxygen and moisture to infiltrate to depth and plant roots to penetrate to obtain moisture and nutrients. Biological activity helps maintain a relatively open soil structure and facilitates decomposition and the transportation and transformation of soil nutrients. Changes in soil structure have been shown to reduce plants' access to necessary substances. It is now uncontested that microbial exudates dominate the aggregation of soil particles and the protection of carbon from further degradation. It has been suggested that microorganisms within the soil "engineer" a superior habitat and provide a sounder soil structure, leading to more productive soil systems.
Traditional agricultural practices have generally caused declining soil structure. For example, cultivation causes mechanical mixing of the soil, compacting and shearing of aggregates, and filling of pore spaces; organic matter is also exposed to a greater rate of decay and oxidation. Soil structure is essential to soil health and fertility; soil structure decline consequently has a direct effect on soil and surface food chains and on biodiversity. Continued crop cultivation eventually results in significant changes within the soil, such as in its nutrient status, pH balance, organic matter content, and physical characteristics. While some of these changes can be beneficial for food and crop production, they can also be harmful to other necessary systems. For example, studies have shown that tilling has negative consequences for soil organic matter (SOM), the organic component of soil composed of decomposing plant and animal matter and substances synthesized by soil organisms. SOM plays an integral role in preserving soil structure, but constant tilling shifts and redistributes SOM, causing soil structure to deteriorate and altering populations of soil organisms (such as earthworms). Yet in many parts of the world, where rampant poverty and a lack of food security drive the maximization of food production at all costs, the long-term ecological consequences tend to be overlooked, despite research and acknowledgment by the academic community. McDaniel et al. (2014) and Lori et al. (2017) found crop rotation, crop diversification, legume intercrops, and organic inputs to correlate with higher soil diversity.

Sodicity
Soil sodicity refers to the soil's content of sodium compared to its content of other cations, such as calcium. At high levels, sodium ions break apart clay platelets and cause swelling and dispersion in soil, which reduces soil sustainability. If this occurs repeatedly, the soil becomes cement-like, with little or no structure. Extended exposure to high sodium levels decreases the amount of water the soil retains and allows to flow through it, and decreases decomposition rates, leaving the soil infertile and prohibiting further growth. This issue is prominent in Australia, where a third of the land is affected by high salt levels. Sodicity occurs naturally, but farming practices such as overgrazing and cultivation have contributed to its rise. The options for managing sodic soils are minimal: one must either select sodicity-tolerant plants or change the soil. The latter is the more difficult process; it requires adding calcium to displace the excess exchangeable sodium that causes the disaggregation blocking water flow.

Salinisation
Soil salinity is the salt concentration within the soil profile or on the soil surface. Excessive salt directly affects the composition of plants and animals, through their varying salt tolerance, along with various physical and chemical changes to the soil, including structural decline and, in the extreme, denudation, exposure to soil erosion, and export of salts to waterways. At low soil salinity there is substantial microbial activity, which increases soil respiration and raises carbon dioxide levels in the soil, producing a healthier environment for plants. As soil salinity rises, microbes are increasingly stressed because less water is available to them, leading to less respiration.
Soil salinity has localised and regional effects on biodiversity, ranging, for example, from changes in plant composition and survival at a local discharge site to regional changes in water quality and aquatic life. While very saline soil is not preferred for growing crops, some crops can grow in more saline soils than others. This matters in countries where resources such as fresh water are scarce and needed for drinking, since saline water can then be used for agriculture. Soil salinity can vary between extremes within a relatively small area, which allows plants to seek out areas of lower salinity but also makes it hard to determine which plants can grow in a given soil, because salinity is not uniform even in small areas. Plants absorb nutrients from the areas of lower salinity.

Erosion
Soil erosion is the removal of the soil's upper layers by water, wind, or ice. Soil erosion occurs naturally, but human activities can greatly increase its severity. Healthy soil is fertile and productive, but soil erosion leads to a loss of topsoil, organic matter, and nutrients; it breaks down soil structure and decreases water storage capacity, in turn reducing fertility and the availability of water to plant roots. Soil erosion is therefore a major threat to soil biodiversity.

The effects of soil erosion can be lessened by means of various soil conservation techniques. These include changes in agricultural practice (such as moving to less erosion-prone crops), the planting of leguminous nitrogen-fixing trees, or the planting of trees known to replenish organic matter. Jute mats and jute geotextile nets can also be used to divert and store runoff and control soil movement. Misdirected soil conservation efforts can, however, result in an imbalance of soil chemical compounds; for example, attempts at afforestation in the northern Loess Plateau, China, have led to deprivation of organic nutrients such as carbon, nitrogen, and phosphorus.

Use of fertilizers
Potassium (K) is an essential macronutrient for plant development, and potassium chloride (KCl) is the most widely used source of K in agriculture. The use of KCl leads to high concentrations of chloride (Cl-) in soil, which increase soil salinity, affecting the development of plants and soil organisms. Chloride has a biocidal effect on the soil ecosystem, with negative effects on the growth, mortality, and reproduction of organisms, which in turn jeopardizes soil biodiversity. Excessive availability of chloride in soil can trigger physiological disorders in plants and microorganisms by decreasing cells' osmotic potential and stimulating the production of reactive oxygen species. In addition, this ion negatively affects nitrifying microorganisms, thus affecting nutrient availability in the soil.

Catchment scale impacts
Biological systems, both natural and artificial, depend heavily on healthy soils; it is the maintenance of soil health and fertility in all of its dimensions that sustains life. The interconnection spans vast spatial and temporal scales: the major degradation issues of salinity and soil erosion, for instance, can have anywhere from local to regional effects, and it may take decades for the consequences of management actions affecting soil to be realised in terms of biodiversity impact. Maintaining soil health is a regional or catchment-scale issue.
Because soils are a dispersed asset, the only effective way to ensure soil health in general is to encourage a broad, consistent, and economically appealing approach. Examples of such approaches in an agricultural setting include the application of lime (calcium carbonate) to reduce acidity, so as to increase soil health and production, and the transition from conventional farming practices employing cultivation to limited- or no-till systems, which has had a positive impact on soil structure.

Monitoring and mapping
Soils encompass a huge diversity of organisms, which makes biodiversity difficult to measure. The soil beneath a football pitch is estimated to contain organisms whose combined biomass equals that of roughly 500 sheep. A first step in identifying areas where soil biodiversity is most under pressure has been to find the main proxies that decrease soil biodiversity. Soil biodiversity will become more measurable in the future, especially thanks to the development of molecular approaches relying on direct DNA extraction from the soil matrix.

See also
Soil carbon
Soil degradation

References

Biodiversity Land management Soil Soil science
Soil biodiversity
[ "Biology" ]
2,883
[ "Biodiversity" ]
13,441,603
https://en.wikipedia.org/wiki/Situation%2C%20task%2C%20action%2C%20result
The situation, task, action, result (STAR) format is a technique used by interviewers to gather all the relevant information about a specific capability that the job requires.

Situation: The interviewer wants you to present a recent challenging situation in which you found yourself.
Task: What were you required to achieve? The interviewer will be looking to see what you were trying to achieve from the situation. Some performance development methods use "Target" rather than "Task". Job interview candidates who describe a "Target" they set themselves, instead of an externally imposed "Task", emphasize their own intrinsic motivation to perform and to develop their performance.
Action: What did you do? The interviewer will be looking for information on what you did, why you did it, and what the alternatives were.
Result: What was the outcome of your actions? What did you achieve through your actions? Did you meet your objective? What did you learn from this experience? Have you used this learning since?

The STAR technique is similar to the SOARA technique (Situation, Objective, Action, Result, Aftermath). The STAR technique is also often complemented with an additional R on the end (STARR or STAR(R)), with the last R standing for reflection. This R aims to gather insight into the interviewee's ability to learn and iterate. Whereas STAR reveals how, and what kind of, result on an objective was achieved, the additional R of STARR helps the interviewer understand what the interviewee learned from the experience and how they would assimilate such experiences. The interviewee can define what they would do (differently, the same, or better) the next time they face a similar situation. Common questions that the STAR technique can be applied to include conflict management, time management, problem solving, and interpersonal skills.

References

External links
The 'STAR' technique to answer behavioral interview questions
The STAR method explained

Job interview Logical consequence Schedule (project_management)
Situation, task, action, result
[ "Physics" ]
388
[ "Spacetime", "Physical quantities", "Time", "Schedule (project management)" ]
13,441,698
https://en.wikipedia.org/wiki/Mudcrete
Mudcrete is a structural material (employed, for example, as a basecourse in road construction) made by mixing mud (usually marine mud) with sand and concrete/cement. It is used as a cheaper and more sustainable alternative to rock fill. It is also used in such projects as land reclamation. References Cement Soil-based building materials
Mudcrete
[ "Engineering" ]
72
[ "Civil engineering", "Civil engineering stubs" ]
13,442,133
https://en.wikipedia.org/wiki/Transponder%20%28satellite%20communications%29
A communications satellite's transponder is the series of interconnected units that form a communications channel between the receiving and the transmitting antennas. It is mainly used in satellite communication to transfer the received signals. A transponder is typically composed of: an input band-limiting device (an input band-pass filter), an input low-noise amplifier (LNA), designed to amplify the signals received from the Earth station (normally very weak, because of the large distances involved), a frequency translator (normally composed of an oscillator and a frequency mixer) used to convert the frequency of the received signal to the frequency required for the transmitted signal, an output band-pass filter, a power amplifier (this can be a traveling-wave tube or a solid-state amplifier). Most communication satellites are radio relay stations in orbit and carry dozens of transponders, each with a bandwidth of tens of megahertz. Most transponders operate on a "bent pipe" (i.e., u-bend) principle, sending back to Earth what goes into the conduit with only amplification and a shift from uplink to downlink frequency. However, some modern satellites use on-board processing, where the signal is demodulated, decoded, re-encoded and modulated aboard the satellite. This type, called a "regenerative" transponder, is more complex, but has many advantages, such as improving the signal-to-noise ratio, as the signal is regenerated in the digital domain, and it also permits selective processing of the data in the digital domain. With data compression and multiplexing, several video (including digital video) and audio channels may travel through a single transponder on a single wideband carrier. Original analog video only had one channel per transponder, with subcarriers for audio and the automatic transmission-identification service (ATIS). Non-multiplexed radio stations can also travel in single channel per carrier (SCPC) mode, with multiple carriers (analog or digital) per transponder. This allows each station to transmit directly to the satellite, rather than paying for a whole transponder or using landlines to send it to an Earth station for multiplexing with other stations. NASA distinguishes between a "transceiver" and a "transponder". A transceiver has an independent transmitter and receiver packaged in the same unit. In a transponder the transmit carrier frequency is derived from the received signal. The frequency linkage allows an interrogating ground station to recover the Doppler shift and thus infer range and speed from a communication signal without allocating power to a separate ranging signal. Transponder equivalent A transponder equivalent (TPE) is a normalized way to refer to transponder bandwidth. It states how many transponders would be needed if the same total bandwidth were carried on 36 MHz transponders only. So, for example, the ARSAT-1 has 24 IEEE Ku band transponders: 12 with a bandwidth of 36 MHz, 8 with 54 MHz, and 4 with 72 MHz, which totals to 1152 MHz, or 32 TPE (i.e., 1152 MHz divided by 36 MHz). References Communications satellites Satellite broadcasting Radio electronics
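The TPE arithmetic above can be made concrete with a short Python sketch (not part of the original article; the function name is invented for illustration):

def transponder_equivalents(bandwidths_mhz):
    # Normalize a list of transponder bandwidths (in MHz) to 36 MHz units.
    return sum(bandwidths_mhz) / 36.0

# ARSAT-1 example from above: 12 x 36 MHz, 8 x 54 MHz and 4 x 72 MHz.
arsat1 = [36] * 12 + [54] * 8 + [72] * 4
print(sum(arsat1))                      # 1152 (MHz in total)
print(transponder_equivalents(arsat1))  # 32.0 TPE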
Transponder (satellite communications)
[ "Engineering" ]
667
[ "Radio electronics", "Telecommunications engineering", "Satellite broadcasting" ]
13,442,871
https://en.wikipedia.org/wiki/XML/EDIFACT
XML/EDIFACT is an Electronic Data Interchange (EDI) format used in Business-to-business transactions. It allows EDIFACT message types to be used by XML systems. EDIFACT is a formal machine-readable description of electronic business documents. It uses a syntax close to delimiter-separated files. This syntax was invented in the 1980s to keep files as small as possible. Because of the Internet boom around 2000, XML started to become the most widely supported file syntax. But, for example, an invoice is still an invoice, containing information about buyer, seller, product, due amount. EDIFACT works perfectly from the content viewpoint, but many software systems struggle to handle its syntax. So combining EDIFACT vocabulary and grammar with XML syntax makes XML/EDIFACT. The rules for XML/EDIFACT are defined by ISO TS 20625. Use-cases XML/EDIFACT is used in B2B scenarios as listed below: Newer EAI or B2B systems often cannot handle EDI (Electronic Data Interchange) syntax directly. Simple syntax converters do a 1:1 conversion beforehand. Their input is an EDIFACT transaction file, their output an XML/EDIFACT instance file. XML/EDIFACT keeps XML B2B transactions relatively small. XML element names derived from EDIFACT tags are much shorter and more formal than those derived from natural language since they are simply expressions of the EDIFACT syntax. A company may not want to invest in new vocabularies from scratch. XML/EDIFACT reuses business content defined in UN/EDIFACT. Since 1987, the UN/EDIFACT library has been enriched by global business needs for all sectors of industry, transport and public services. Large companies can order goods from small companies via XML/EDIFACT. The small companies use XSLT stylesheets to browse the message content in human-readable form, as shown in Example 3. Example 1: EDIFACT source code A name and address (NAD) segment, containing customer ID and customer address, expressed in EDIFACT syntax: NAD+BY+CST9955::91++Candy Inc+Sirup street 15+Sugar Town++55555' Example 2: XML/EDIFACT source code The same information content in an XML/EDIFACT instance file: <S_NAD> <D_3035>BY</D_3035> <C_C082><D_3039>CST9955</D_3039><D_3055>91</D_3055></C_C082> <C_C080><D_3036>Candy Inc</D_3036></C_C080> <C_C059><D_3042>Sirup street 15</D_3042></C_C059> <D_3164>Sugar Town</D_3164> <D_3251>55555</D_3251> </S_NAD> Example 3: XML/EDIFACT in a browser The same XML/EDIFACT instance can be presented with the help of an XSLT stylesheet. External links UN/EDIFACT Main Page ISO/TS 20625:2002 - This document by the ISO costs CHF 158,00 to access. www.edifabric.com - A .NET framework for converting EDIFACT and X12 messages into XML and vice versa. Edifact-XML - A free complete Java parser library for converting UN EDIFACT messages to XML. Edifact<->XML Converter plus Edifact xsd generator - UN/EDIFACT<->ISO/TS 20625 XML. XML-based standards Electronic data interchange
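As a rough illustration of the 1:1 syntax conversion described above, the following Python sketch turns the NAD segment of Example 1 into the XML of Example 2. It is only a sketch under stated assumptions: the element names are hard-coded from the examples above, and EDIFACT release characters, optional components and the full ISO TS 20625 mapping rules are ignored.

def nad_to_xml(segment):
    # Crude split on EDIFACT separators; no release-character (?) handling.
    parts = segment.rstrip("'").split("+")
    code, _, agency = parts[2].split(":")   # composite C082: id, blank, agency
    return "\n".join([
        "<S_NAD>",
        f"<D_3035>{parts[1]}</D_3035>",
        f"<C_C082><D_3039>{code}</D_3039><D_3055>{agency}</D_3055></C_C082>",
        f"<C_C080><D_3036>{parts[4]}</D_3036></C_C080>",
        f"<C_C059><D_3042>{parts[5]}</D_3042></C_C059>",
        f"<D_3164>{parts[6]}</D_3164>",
        f"<D_3251>{parts[8]}</D_3251>",
        "</S_NAD>",
    ])

print(nad_to_xml("NAD+BY+CST9955::91++Candy Inc+Sirup street 15+Sugar Town++55555'"))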
XML/EDIFACT
[ "Technology" ]
808
[ "Computer standards", "XML-based standards" ]
13,443,170
https://en.wikipedia.org/wiki/Chirikov%20criterion
The Chirikov criterion or Chirikov resonance-overlap criterion was established by the Russian physicist Boris Chirikov. In 1959, he published a seminal article in which he introduced the very first physical criterion for the onset of chaotic motion in deterministic Hamiltonian systems. He then applied the criterion to explain puzzling experimental results on plasma confinement in magnetic bottles obtained by Rodionov at the Kurchatov Institute. Description According to this criterion, a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner, in the parameter range

$K \approx S^2 \geq 1.$

Here $K$ is the perturbation parameter, while

$S = \Delta\omega_r / \Omega_d$

is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency $\Delta\omega_r$ (often computed in the pendulum approximation and proportional to the square root of the perturbation), and the frequency difference $\Omega_d$ between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border. See also Chirikov criterion at Scholarpedia Chirikov standard map and standard map Boris Chirikov and Boris Chirikov at Scholarpedia References B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969), (Engl. Trans., CERN Trans. 71-40 (1971)) B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979) Springer link References External links website dedicated to Boris Chirikov Special Volume dedicated to the 70th birthday of Boris Chirikov: Physica D 131:1-4 vii (1999) and arXiv Chaos theory Chaotic maps
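The onset of resonance overlap can be explored numerically with the Chirikov standard map mentioned under "See also". The following Python sketch (not part of the original article) estimates the largest Lyapunov exponent of the standard map via its tangent-space dynamics; the estimate becomes clearly positive once the perturbation parameter K exceeds the critical value near 1, in line with the criterion. Initial conditions and step counts are arbitrary choices.

import math

def lyapunov_standard_map(K, steps=50000, x=0.5, p=0.5):
    # Standard map: p' = p + K sin(x); x' = x + p' (mod 2*pi).
    dx, dp = 1.0, 0.0          # tangent vector
    total = 0.0
    for _ in range(steps):
        c = K * math.cos(x)    # Jacobian is [[1, c], [1, 1 + c]] in (p, x)
        dp, dx = dp + c * dx, dx + dp + c * dx
        p = p + K * math.sin(x)
        x = (x + p) % (2 * math.pi)
        norm = math.hypot(dx, dp)
        total += math.log(norm)
        dx, dp = dx / norm, dp / norm
    return total / steps

for K in (0.5, 0.9716, 2.0, 5.0):
    print(K, lyapunov_standard_map(K))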
Chirikov criterion
[ "Mathematics" ]
375
[ "Functions and mappings", "Mathematical objects", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
13,443,187
https://en.wikipedia.org/wiki/Galileo%20GDS
Galileo is a computer reservations system (CRS) owned by Travelport. As of 2000, it had a 26.4% share of worldwide CRS airline bookings. In addition to airline reservations, the Galileo CRS is also used to book train travel, cruises, car rental, and hotel rooms. The system was originally known as Apollo, launched in 1971 by United Airlines as their in-house booking system. In 1976, UA began installing Apollo terminals in travel agent offices. Apollo, and the competing American Airlines system Sabre, quickly took over much of the booking market. In response to possible government intervention due to antitrust concerns, UA spun off the system to become its own company, Covia, in 1992. That same year, Covia purchased a competitor, Galileo, which had been created by a consortium of European airlines, and merged operations under the Galileo name. UA remained a major customer of Galileo until 2012, when it introduced a new in-house booking system, SHARES. Galileo was later purchased by Travelport, which also purchased the competing Worldspan in 2007. On 28 September 2008, the Galileo system was moved from Denver, Colorado, to the Worldspan datacenter in Atlanta, Georgia. Although they now share the same datacenter, they continue to be run as separate systems. Galileo is subject to the CAPPS II and its successor Secure Flight program for the selection of passengers with a risk profile. Galileo is a member of the International Air Transport Association, of the OpenTravel Alliance and of SITA. History Galileo traces its roots back to 1971 when United Airlines created its first computerized central reservation system under the name Apollo. During the 1980s and early 1990s, a significant proportion of airline tickets were sold by travel agents. Flights by the airline owning the reservation system had preferential display on the computer screen. Due to the high market penetration of the Sabre and Apollo systems, owned by American Airlines and United Airlines, respectively, Worldspan and Galileo were created by other airline groups in an attempt to gain market share in the computer reservation system market and, by inference, the commercial airline market. Galileo was formed in 1987 by nine European carriers -- British Airways, KLM Royal Dutch Airlines, Alitalia, Swissair, Austrian Airlines, Olympic, Sabena, Air Portugal and Aer Lingus. In response, and to prevent possible government intervention, United Airlines spun off its Apollo reservation system, which was then controlled by Covia. Galileo International was born when Covia acquired Europe's Galileo and merged it with the Apollo system in 1992. The Apollo reservation system was used by United Airlines until 3 March 2012, when it switched to SHARES, a system used by its former Continental Airlines subsidiary. Apollo is still used by Galileo International (now part of Travelport GDS) travel agency customers in the United States, Canada, Mexico, and Japan. Galileo UK was originally created from Travicom, which was the world's first multi-access reservations system using the technology developed by Videcom. Travicom was a company launched by Videcom, British Airways, British Caledonian and CCL in 1976 which in 1988 became Galileo UK. Developments Travel agents can now also book Amtrak rail travel on the system and issue the tickets directly. Southwest Airlines has entered into a marketing agreement with Apollo/Galileo and travel agents are now able to book reservations on Southwest.
These direct connects offer the possibility to sell ancillary services and to differentiate oneself from the competition. The development team at Travelport has developed an online search tool called ASK Travelport, where registered users can find answers to their frequently asked questions and queries. See also Codeshare agreement Passenger Name Record References External links Travelport.com, "Galileo" Travelport ViewTrip.com, public site for viewing reservations made through Galileo computer reservations system. Galileo.co.in, Galileo in India ITQ.in Computer reservation systems
Galileo GDS
[ "Technology" ]
789
[ "Computer reservation systems", "Computer systems" ]
13,444,144
https://en.wikipedia.org/wiki/Virtual%20facility
A Virtual Facility (VF) is a highly realistic digital representation of a data center, used to model all relevant aspects of a physical data center with a high degree of precision. The term "virtual" in Virtual Facility refers to its use of virtual reality, rather than the abstraction of computer resources as seen in platform virtualization. The VF mirrors the characteristics of a physical facility over time and allows for detailed analysis and modeling. VF Model features A standard VF model includes: Three-dimensional physical facility layout Network connectivity of facility equipment Full inventory of facility equipment, including electronics and electrical systems such as power distribution units (PDUs) and uninterruptible power supplies (UPSs) Full air conditioning system (ACUs) and controls within the room The term Virtual Facility was introduced to address the emerging environmental problems facing modern Mission Critical Facilities (MCFs). This concept combines virtual reality (VR), computer simulation, and expert systems applied to the domain of facilities. The VF type of computer simulation allows for detailed analysis and prototyping of airflow in the data center using computational fluid dynamics (CFD) techniques. This enables the visualization and numerical analysis of airflow and temperatures within the facility, helping to predict real-world outcomes. VF applications The VF model can be used to assist with the following: Greenfield design Asset management Troubleshooting existing data centers Making existing data centers more resilient Making existing data centers more energy efficient Cost prediction Staff training Capacity planning Load growth management Many organizations use VF models to virtually assess scenarios before committing resources to physical changes. This allows for better decision-making regarding the addition or modification of equipment, helping to avoid logistical or thermal problems. References Data management
Virtual facility
[ "Technology" ]
350
[ "Data management", "Data" ]
13,444,838
https://en.wikipedia.org/wiki/S%C3%B6der%20Torn
South Tower (Swedish: "Söder Torn") is a high-rise building located on Fatburstrappan 18, next to Fatbursparken on Södermalm in Stockholm. The building has a height of about above the ground including the "crown" and consists of 25 floors. The Söder Torn complex contains three additional buildings, including one that abuts Medborgarplatsen. Collectively, the buildings contain 172 condominium apartments and 5 businesses. The South Tower itself has 85 apartments and one business. A garage contains parking for both cars and motorcycles. Site History and Building Description The Tower's site was previously a lake called Fatburen, which had formed due to isostatic uplift after the last glaciation. By the 1700s, the lake had become polluted due to urban expansion, and by 1860 the lake had been filled in order to create a rail yard and train station. The rail yard was closed in 1980 and the neighborhood re-developed into a residential district during the period 1985–1995, which features a large number of buildings in the post-modern style. A new train station was built underground in close proximity (300 meters) to the South Tower. The Tower was originally designed by Danish architect Henning Larsen to have 40 floors. However, Larsen left the project in protest after Stockholm's city planning office forced the removal of 16 floors from the building plan. The floor plan is octagonal with five apartments on each level. The tower tapers with increasing height. The facades are clad with red granite slabs. In the centrally located stairwell there are two elevators and a spiral staircase. The 23rd and 24th floors have three multi-story apartments, and the top floor is a common party room with glass walls and a panoramic view of the city. Built by construction company JM and finance company SBC, it opened in 1997. Gallery Residential Qualities The top floor of the building is a glass-enclosed party room and terrace with panoramic views of Stockholm. There is an indoor swimming pool and sauna on a lower level. A fountain sculpture at the Tower's base, La Fontaine aux quatre Nanas by French-American artist Niki de Saint Phalle, attracts many viewers due to its styling and location adjacent to a heavily used path between Stockholm South Station and the plaza Medborgarplatsen. Criticism The development at Fatburen district was poorly received by some architectural critics, with one review specifically highlighting the South Tower as "a monument to post-modernism as a playhouse for urban development." See also Bofills båge Medborgarplatsen Södermalm Stockholm South Station References Skyscrapers in Stockholm Residential skyscrapers Residential buildings in Sweden Postmodern architecture
Söder Torn
[ "Engineering" ]
557
[ "Postmodern architecture", "Architecture" ]
5,566,826
https://en.wikipedia.org/wiki/Sfumatura
The sfumatura or slow-folding process is a traditional technique for manually extracting the essential oils from citrus peel using sponges. Dating back to 18th-century Italy, the process is still carried out in Sicily today, although it is increasingly rare. Many claim, controversially, that modern machinery does not approach the quality of sfumatura-produced oil. Using a rastrello, a special spoon-shaped knife, the fresh peel is de-pulped. It is then thoroughly washed with limewater and drip-dried on woven mats or special baskets for 3 to 24 hours, depending on the ripeness of the fruit, the temperature, and the humidity. These steps harden the peel, causing the oil to spurt from the oil glands more easily, and the lime helps neutralize the acidity of the peel. A series of natural sponges is fixed upon a terracotta basin or concolina and held in place with a wooden bar laid across the rim. The dried peel is folded and pressed against the sponges several times in a circular motion, causing a mixture of essential oil and peel liquids to pass into the concolina. After finishing with the peel, the sponges are squeezed to recover additional oil and liquids. Finally, the oil is decanted away from the heavier watery phase, which contains detritus from breaking the peel. References Angela Di Giacomo. "Development of the citrus industry: historical note". pp. 63–70. Angela Di Giacomo and Giovanni Di Giacomo. "Essential oil production". pp. 114–147. Citrus Essential oils
Sfumatura
[ "Chemistry" ]
321
[ "Essential oils", "Natural products" ]
5,566,829
https://en.wikipedia.org/wiki/Gregor%20and%20the%20Curse%20of%20the%20Warmbloods
Gregor and the Curse of the Warmbloods is an epic fantasy children's novel by Suzanne Collins. It is the third book in The Underland Chronicles, and was first published by Scholastic in 2005. The novel takes place a few months after the events of the preceding book, in the same subterranean world known as the Underland. In this installment, the young protagonist Gregor is once again recruited by the Underland's inhabitants, this time to help cure a rapidly-spreading plague. Gregor and the Curse of the Warmbloods has been published as stand-alone hardcovers and paperbacks, as well as part of a boxed set. It was released as an audiobook on December 13, 2005, read by Paul Boehmer. In August 2010, it was released in ebook form. It has been lauded for "[addressing] a number of political issues ... in a manner accessible to upper elementary and middle school readers". Development Collins has listed two main sources of influence in her writing of The Underland Chronicles. First is her M.F.A. in dramatic writing and her experience as a screenwriter. This writing experience resulted in her structuring books "like a three-act play", and paying close attention to the plot's pacing. Gregor and the Curse of the Warmbloods came third in "a series of narratives that are interrelated yet can stand on their own", a fact not missed by reviewers. Collins' other source of inspiration was her father Michael Collins, a lieutenant colonel in the United States Air Force, who provided her with advice about the war tactics used in her books, and also instilled in her an "impulse to educate young people about the realities of war". Plot summary Despite the difficulties it has caused for his family, Gregor finds it hard to distance himself from the Underland. When he receives word that a plague has broken out and his bond Ares is one of the victims, he heads down to help with yet another of Bartholomew of Sandwich's prophecies. His mother, however, hates the Underland and only allows Boots and Gregor below on the condition that she comes with them. The humans' plague expert, Dr. Neveeve, explains that there is a plant called starshade growing deep in the Vineyard of Eyes which can be distilled into a cure. In the midst of the meeting, a dying bat infected with the plague inadvertently infects one of the delegates–Gregor's mother. Gregor immediately joins a group of creatures on a quest to find the starshade, as described in "The Prophecy of Blood". The current queen, Nerissa, has arranged for Hamnet–the estranged, pacifistic son of Solovet and Vikus–to be their guide. Hamnet, his Halflander son Hazard, and their hisser companion Frill lead the motley crew through the dangerous Jungle and numerous setbacks. During a near-death experience with a pool of quicksand, the group encounters Luxa, the heir apparent of Regalia who was assumed to be dead after the quest in Gregor and the Prophecy of Bane. She and her bond Aurora were trapped in the Jungle when Aurora dislocated her wing, and have been living there with a colony of nibblers (mice). After Hamnet fixes Aurora's wing, the bonds accompany the questers. They arrive at the Vineyard of Eyes, but an army of cutters (ants, who would like to see all warm-blooded creatures gone) destroys the starshade and kill both Hamnet and Frill. The group's hopes are crushed until they realize a new possibility: that the plague was developed by the humans as a biological agent to be used against the rats. 
The group hastens home, and finds its theory proved correct by the humans' new medication, developed without the supposed "cradle cure." Luxa furiously exposes the covert military project. Dr. Neveeve is executed for her participation and Solovet, the project's head, is imprisoned in preparation for a trial. Following up on a promise to Ripred, Luxa sends doses of the cure to the gnawers while the Regalian hospital treats as many human and bat victims as possible. Though she is healing, Gregor's mother is too weak to go home, and so the book ends with Gregor heading home with Boots. Realizing how much help his family needs, he decides to reveal their secret to Mrs. Cormaci. The Prophecy of Blood The "Prophecy of Blood" is unusual in two ways: it is the first of Bartholomew of Sandwich's prophecies to feature a repeating "refrain"; and it is carved backwards in a tight corner of the prophecy room, so that a mirror is required to read it. Nerissa tells Gregor she believes Sandwich purposely made it "difficult to read" in order to emphasize how difficult it is to understand, and Gregor later hypothesizes that Sandwich forced the humans to read it using mirrors so that, as a person read, "[they] would see [themselves]". Ripred similarly points out that an "annoying little dance" Boots makes up to go along with the prophecy's refrain echoes this theme, by forcing the questers to turn and see themselves before they realize that the plague originated with the humans. Boots's "help" deciphering this prophecy leads characters to rely on her to do the same in the series' later books. In Gregor and the Marks of Secret, Boots begins dancing to a song Sandwich carved "in the nursery, not the room of prophecies" after the characters witness the mass execution of a group of nibblers and becomes "totally convinced" that the song is actually yet another prophecy. In Gregor and the Code of Claw, when the "Prophecy of Time" calls for a "princess" to crack a cryptogram, Boots is immediately assigned the role because of her importance to the last two prophecies, despite the fact that she is still a toddler. The repeating refrain goes as follows: Turn and turn and turn again. You see the what but not the when. Remedy and wrong entwine, And so they form a single vine. Gregor hypothesizes that Sandwich included a cryptic repeating segment in the prophecy to drum the meaning of these lines into the heads of his readers, or to emphasize their importance. The prophecy's other stanzas describe the plague and who it affects, call for the warrior's return, explain how to find the cure and win allies amongst the nonhuman species, and warn strongly against allowing a war to start in the Underland. Gregor refers to this final point as "Sandwich's usual prediction that if things didn't work out, there would be total destruction and everybody would end up dead." As with other prophecies in The Underland Chronicles, its meaning "only becomes clear in the later stages of the book". Characters Quest members Gregor: A young Overlander and "rager", said to be the warrior mentioned in "The Prophecy of Gray". Boots (Margaret): Boots is Gregor's toddler sister. She is called the "princess" by the crawlers, and has a knack for recognizing different insects. Hamnet: A former soldier and son of Solovet and Vikus who leads the questers through the Jungle until his death. Hazard: The child of Hamnet and an unnamed Overlander woman. Hazard is gifted with languages. Ripred: A gnawer (rat) and rager like Gregor.
Mange and Lapblood: A male and female rat, respectively, who are trying to save their pups from the plague. Mange is eaten by a carnivorous plant, and his death deeply upsets Lapblood. Temp: The crawlers' representative on the quest. He is endlessly patient and brave, especially with his "princess." He also has an uncanny knack for recognizing danger before other questers, though his warnings are often ignored. Frill: A hisser who has been living with Hamnet and Hazard. She dies fighting the cutters. Nike: A black and white flier (bat) who helps Gregor while Ares is incapacitated. She is the daughter of the fliers' queen, and has a permanently optimistic disposition. Luxa and Aurora: Two unofficial members of the quest who join the group after learning of the plague. The two were trapped in the Jungle in Gregor and the Prophecy of Bane when Aurora's wing was dislocated. Publication The book was originally released as an individual hardcover in 2005, then as a paperback in July 2006. In 2013, a new edition of the novel was published as part of a paperback boxed set of the five books in The Underland Chronicles, featuring new cover art by Vivienne To. Other sets have been released by Scholastic as well. The first was in the US on September 1, 2009, and a second on August 1, 2013, in the UK, again with new art. Random House Audio released an audiobook version in December 2005. It was read by actor and narrator Paul Boehmer. A School Library Journal review praised Boehmer's "distinct voice" for each character and called the edition a "good purchase for both school and public libraries". A Booklist review also lauded Boehmer for "keeping his narrative pace even, [helping] listeners keep the complex story straight". The book's first ebook version was released in August 2010. Since its first printing in 2005, a number of alternate editions have been produced. Scholastic has signed rights to publishers working in a total of 19 different languages. Editions have appeared in German, French, Chinese, Polish, Swedish, Norwegian, Dutch, Italian, Finnish, Bulgarian, Spanish, Portuguese, and Turkish. Multiple editions with unique cover art have been published for most of these languages. Scholastic advertised the second English edition, released exactly one year after the first, as having "fresh new cover art" by August Hall. Reception Gregor and the Curse of the Warmbloods has been positively reviewed by professional and amateur critics alike. Many reviews focus on the book particularly as a sequel to the first two of the series. In the words of Tasha Saecker of School Library Journal, for example, "Collins maintains the momentum, charm, and vivid settings of the original title." The Horn Book Magazine review went further, saying that "This immensely readable installment won't disappoint fans of the first two books. In fact, Collins seems to have hit her stride with this page-turner." Kirkus mentioned the novel's more serious plot and themes with the review's comment, "This offering takes on an even darker tone than the earlier ones, delving into meaty questions of territorial expansion and its justification." A review in the Library Media Connection, on the other hand, said that "Collins's subtle messages about the horrors of war and the benefits of peace" make the book "worthy of discussion" by readers of all ages. Collins herself has said that she would "like to take topics like war and introduce them at an earlier age. If you look at 'Gregor', it has all kinds of topics. 
There's biological warfare, there's genocide, there's military intelligence. But it's in a fantasy." Collins has also stated that she approaches her books the same way her father approached explaining his military service to her as a child: at a level understandable to children, but not without the honest descriptions needed to show the true gravity of the situation. A review published in The Bulletin of the Center for Children's Books makes the claim that Gregor's "evolution from a scared, unwilling combatant in the first book to a morally responsible, talented warrior ... here ... makes his character realistic and appealing", and thus that the increased violence in Gregor and the Curse of the Warmbloods is a necessary part of his character development. Reviews published in Library Media Connection and VOYA also praise the novel's more serious nature as providing better insight into the politics of the Underland. The novel was a New York Times bestseller and a Book Sense bestseller and Top-Ten Children's pick. It was awarded an Oppenheim Toy Portfolio Gold Award in 2006. References 2005 American novels 2005 fantasy novels 2005 children's books American fantasy novels The Underland Chronicles American children's novels Novels set in New York (state) Biological weapons in popular culture Sequel novels Scholastic Corporation books Novels about diseases and disorders Children's books set in New York City
Gregor and the Curse of the Warmbloods
[ "Biology" ]
2,583
[ "Biological weapons in popular culture", "Biological warfare" ]
5,567,144
https://en.wikipedia.org/wiki/Law%20for%20the%20Prevention%20of%20Hereditarily%20Diseased%20Offspring
Law for the Prevention of Genetically Diseased Offspring (Gesetz zur Verhütung erbkranken Nachwuchses) or "Sterilisation Law" was a statute in Nazi Germany enacted on July 14, 1933 (and made active in January 1934), which allowed the compulsory sterilisation of any citizen who in the opinion of a "Genetic Health Court" (Erbgesundheitsgericht) suffered from a list of alleged genetic disorders – many of which were not, in fact, genetic. The elaborate interpretive commentary on the law was written by three dominant figures in the racial hygiene movement: Ernst Rüdin, Arthur Gütt and the lawyer Falk Ruttke. While it closely resembles the American Model Eugenical Sterilization Law developed by Harry H. Laughlin, the law itself was initially drafted in 1932, at the end of the Weimar Republic period, by a committee led by the Prussian health board. Operation of the law The basic provisions of the 1933 law stated that: The law applied to anyone in the general population, making its scope significantly larger than the compulsory sterilisation laws in the United States, which generally were only applicable to people in psychiatric hospitals or prisons. The 1933 law created a large number of "Genetic Health Courts" (Erbgesundheitsgerichte, EGG), each consisting of a judge, a medical officer, and a medical practitioner, which "shall decide at its own discretion after considering the results of the whole proceedings and the evidence tendered". If the court decided that the person in question was to be sterilised, the decision could be appealed to the "Higher Genetic Health Court" (Erbgesundheitsobergericht, EGOG). If the appeal failed, the sterilisation was to be carried out, with the law specifying that "the use of force is permissible". The law also required that people seeking voluntary sterilisations also go through the courts. There were three amendments by 1935, most making minor adjustments to how the statute operated or clarifying bureaucratic aspects (such as who paid for the operations). The most significant changes allowed the Higher Court to renounce a patient's right to appeal, and to fine physicians who did not report patients who they knew would qualify for sterilisation under the law. The law also enforced sterilisation on the so-called "Rhineland bastards", the mixed-race children of German civilians and French African soldiers who helped occupy the Rhineland. At the time of its enactment, the German government pointed to the success of sterilisation laws elsewhere, especially the work in California documented by the American eugenicists E. S. Gosney and Paul Popenoe, as evidence of the humaneness and efficacy of such laws. Eugenicists abroad admired the German law for its legal and ideological clarity. Popenoe himself wrote that "the German law is well drawn and, in form, may be considered better than the sterilization laws of most American states", and trusted in the German government's "conservative, sympathetic, and intelligent administration" of the law, praising the "scientific leadership" of the Nazis. The German mathematician Otfrid Mittmann defended the law against "unfavorable judgements". In the first year of the law's operation, 1934, 84,600 cases were brought to Genetic Health Courts, with 62,400 forced sterilisations. Nearly 4,000 people appealed against the decisions of sterilisation authorities; 3,559 of the appeals failed. In 1935, it was 88,100 trials and 71,700 sterilisations. By the end of the Nazi regime, over 200 "Genetic Health Courts" were created, and under their rulings over 400,000 people were sterilised against their will. 
Along with the law, Adolf Hitler personally decriminalised, for doctors, abortion in cases where the fetus had racial or hereditary defects, while the abortion of healthy "pure" German, "Aryan" fetuses remained strictly forbidden. See also Life unworthy of life Aktion T4 Nazi eugenics Eugenics in the United States Rhineland Bastard Notes External links "Eugenics in Germany : 'The Law for the Prevention of Hereditarily Diseased Offspring'" article from Facing History and Ourselves United States Holocaust Memorial Museum – The Biological State: Nazi Racial Hygiene, 1933–1939 1933 establishments in Germany 1933 in law Law of Nazi Germany Nazi eugenics Race and intelligence controversy Racial antisemitism Scientific racism Compulsory sterilization
Law for the Prevention of Hereditarily Diseased Offspring
[ "Biology" ]
880
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
5,568,223
https://en.wikipedia.org/wiki/Ecoprovince
An ecoprovince is a biogeographic unit smaller than an ecozone that contains one or more ecoregions. According to Demarchi (1996), an ecoprovince encompasses areas of uniform climate, geological history and physiography (i.e. mountain ranges, large valleys, plateaus). Their size and broad internal uniformity make them ideal units for the implementation of natural resource policies. See also Bioregion Ecological land classification References Biogeography Ecology terminology Ecoregions
Ecoprovince
[ "Biology" ]
103
[ "Ecology terminology", "Biogeography" ]
5,568,961
https://en.wikipedia.org/wiki/Leucine%20%28data%20page%29
References Chemical data pages Chemical data pages cleanup
Leucine (data page)
[ "Chemistry" ]
10
[ "Chemical data pages", "nan" ]
5,569,055
https://en.wikipedia.org/wiki/First-order%20hold
First-order hold (FOH) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH (or, more commonly, the zero-order hold) is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, xs(t), representing the discrete samples, x(nT), is low-pass filtered to recover the original signal that was sampled, x(t). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH. Even though this is not what is physically done, an identical output can be generated by applying the hypothetical sequence of Dirac impulses, xs(t), to a linear time-invariant system, otherwise known as a linear filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct piecewise linear function in the output.

Basic first-order hold First-order hold is the hypothetical filter or LTI system that converts the ideally sampled signal

$x_s(t) = x(t) \cdot T \sum_{n=-\infty}^{\infty} \delta(t - nT) = T \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)$

to the piecewise linear signal

$x_{\mathrm{FOH}}(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{tri}\!\left(\frac{t - nT}{T}\right)$

resulting in an effective impulse response of

$h_{\mathrm{FOH}}(t) = \frac{1}{T}\,\mathrm{tri}\!\left(\frac{t}{T}\right)$

where $\mathrm{tri}(x)$ is the triangular function. The effective frequency response is the continuous Fourier transform of the impulse response:

$H_{\mathrm{FOH}}(f) = \mathcal{F}\{h_{\mathrm{FOH}}(t)\} = \mathrm{sinc}^2(fT)$

where $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$ is the normalized sinc function. The Laplace transform transfer function of FOH is found by substituting s = i 2 π f:

$H_{\mathrm{FOH}}(s) = e^{sT}\left(\frac{1 - e^{-sT}}{sT}\right)^2$

This is an acausal system in that the linear interpolation function moves toward the value of the next sample before such sample is applied to the hypothetical FOH filter.

Delayed first-order hold Delayed first-order hold, sometimes called causal first-order hold, is identical to FOH above except that its output is delayed by one sample period, resulting in a delayed piecewise linear output signal

$x_{\mathrm{FOH}}(t - T) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{tri}\!\left(\frac{t - T - nT}{T}\right)$

and an effective impulse response of

$h_{\mathrm{FOH}}(t - T) = \frac{1}{T}\,\mathrm{tri}\!\left(\frac{t - T}{T}\right)$

where $\mathrm{tri}(x)$ is the triangular function. The effective frequency response is the continuous Fourier transform of the impulse response:

$H(f) = \mathrm{sinc}^2(fT)\, e^{-i 2 \pi f T}$

where $\mathrm{sinc}(x)$ is the sinc function. The Laplace transform transfer function of the delayed FOH is found by substituting s = i 2 π f:

$H(s) = \left(\frac{1 - e^{-sT}}{sT}\right)^2$

The delayed output makes this a causal system. The impulse response of the delayed FOH does not respond before the input impulse. This kind of delayed piecewise linear reconstruction is physically realizable by implementing a digital filter of gain H(z) = 1 − z−1, applying the output of that digital filter (which is simply x[n]−x[n−1]) to an ideal conventional digital-to-analog converter (that has an inherent zero-order hold as its model) and integrating (in continuous-time, H(s) = 1/(sT)) the DAC output.

Predictive first-order hold Lastly, the predictive first-order hold is quite different. This is a causal hypothetical LTI system or filter that converts the ideally sampled signal

$x_s(t) = x(t) \cdot T \sum_{n=-\infty}^{\infty} \delta(t - nT) = T \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)$

into a piecewise linear output such that the current sample and immediately previous sample are used to linearly extrapolate up to the next sampling instance. 
The output of such a filter would be

$x_{\mathrm{PFOH}}(t) = \sum_{n=-\infty}^{\infty} \left[ x(nT) + \big(x(nT) - x((n-1)T)\big)\,\frac{t - nT}{T} \right] \mathrm{rect}\!\left(\frac{t - nT}{T} - \frac{1}{2}\right)$

resulting in an effective impulse response of

$h_{\mathrm{PFOH}}(t) = \frac{1}{T}\left[ \mathrm{rect}\!\left(\frac{t}{T} - \frac{1}{2}\right) - \mathrm{rect}\!\left(\frac{t}{T} - \frac{3}{2}\right) + \mathrm{tri}\!\left(\frac{t - T}{T}\right) \right]$

where $\mathrm{rect}(x)$ is the rectangular function and $\mathrm{tri}(x)$ is the triangular function. The effective frequency response is the continuous Fourier transform of the impulse response:

$H_{\mathrm{PFOH}}(f) = (1 + i 2 \pi f T)\,\mathrm{sinc}^2(fT)\, e^{-i 2 \pi f T}$

where $\mathrm{sinc}(x)$ is the sinc function. The Laplace transform transfer function of the predictive FOH is found by substituting s = i 2 π f:

$H_{\mathrm{PFOH}}(s) = (1 + sT)\left(\frac{1 - e^{-sT}}{sT}\right)^2$

This is a causal system. The impulse response of the predictive FOH does not respond before the input impulse. This kind of piecewise linear reconstruction is physically realizable by implementing a digital filter of gain H(z) = 1 − z−1, applying the output of that digital filter (which is simply x[n]−x[n−1]) to an ideal conventional digital-to-analog converter (that has an inherent zero-order hold as its model) and applying that DAC output to an analog filter with transfer function H(s) = (1+sT)/(sT). See also Nyquist–Shannon sampling theorem Zero-order hold Bilinear interpolation External links Digital signal processing Electrical engineering Control theory Signal processing
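As a quick numerical illustration (not from the original article), the delayed FOH's piecewise linear reconstruction is exactly linear interpolation between successive samples, which NumPy provides directly; the signal, sample period and grid below are arbitrary choices.

import numpy as np

T = 0.1                                       # sampling period (s)
n = np.arange(20)
samples = np.sin(2 * np.pi * 1.0 * n * T)     # x[n] = sin(2*pi*f*nT), f = 1 Hz
t = np.linspace(0.0, n[-1] * T, 1000)         # fine "continuous" time grid
x_foh = np.interp(t, n * T, samples)          # piecewise linear reconstruction
print(x_foh[:5])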
First-order hold
[ "Mathematics", "Technology", "Engineering" ]
1,048
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Control theory", "Electrical engineering", "Dynamical systems" ]
5,569,282
https://en.wikipedia.org/wiki/Isomorph
An isomorph is an organism that does not change in shape during growth. The implication is that its volume is proportional to its cubed length, and its surface area to its squared length. This holds for any shape it might have; the actual shape determines the proportionality constants. The reason why the concept is important in the context of the Dynamic Energy Budget (DEB) theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. Since volume grows faster than surface area, this controls the ultimate size of the organism. Alfred Russel Wallace made this argument in a letter to E. B. Poulton in 1865. The surface area that is of importance is the part that is involved in substrate uptake (e.g. the gut surface), which is typically a fixed fraction of the total surface area in an isomorph. The DEB theory explains why isomorphs grow according to the von Bertalanffy curve if food availability is constant. Organisms can also change in shape during growth, which affects the growth curve and the ultimate size, see for instance V0-morphs and V1-morphs. Isomorphs can also be called V2/3-morphs. Most animals approximate isomorphy, but plants in a vegetation typically start as V1-morphs, then convert to isomorphs, and end up as V0-morphs (if neighbouring plants affect their uptake). See also Dynamic energy budget V0-morph V1-morph shape correction function References Developmental biology
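The size-control argument can be written out explicitly (a sketch, not part of the original text; $a$ and $b$ are hypothetical uptake and maintenance coefficients). With uptake proportional to surface area and maintenance proportional to volume, an isomorph with structural length $L = V^{1/3}$ obeys

$\frac{dV}{dt} = aV^{2/3} - bV \quad\Longrightarrow\quad \frac{dL}{dt} = \tfrac{1}{3}(a - bL),$

whose solution $L(t) = L_\infty - (L_\infty - L_0)\,e^{-bt/3}$ with $L_\infty = a/b$ is the von Bertalanffy growth curve mentioned above: the ultimate size is set by the ratio of uptake to maintenance.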
Isomorph
[ "Biology" ]
327
[ "Behavior", "Developmental biology", "Reproduction" ]
5,569,486
https://en.wikipedia.org/wiki/Definable%20set
In mathematical logic, a definable set is an n-ary relation on the domain of a structure whose elements satisfy some formula in the first-order language of that structure. A set can be defined with or without parameters, which are elements of the domain that can be referenced in the formula defining the relation. Definition Let $\mathcal{L}$ be a first-order language, $\mathcal{M}$ an $\mathcal{L}$-structure with domain $M$, $X$ a fixed subset of $M$, and $m$ a natural number. Then: A set $A \subseteq M^m$ is definable in $\mathcal{M}$ with parameters from $X$ if and only if there exists a formula $\varphi[x_1, \ldots, x_m, y_1, \ldots, y_n]$ and elements $b_1, \ldots, b_n \in X$ such that for all $a_1, \ldots, a_m \in M$, $(a_1, \ldots, a_m) \in A$ if and only if $\mathcal{M} \models \varphi[a_1, \ldots, a_m, b_1, \ldots, b_n]$. The bracket notation here indicates the semantic evaluation of the free variables in the formula. A set $A$ is definable in $\mathcal{M}$ without parameters if it is definable in $\mathcal{M}$ with parameters from the empty set (that is, with no parameters in the defining formula). A function is definable in $\mathcal{M}$ (with parameters) if its graph is definable (with those parameters) in $\mathcal{M}$. An element $a$ is definable in $\mathcal{M}$ (with parameters) if the singleton set $\{a\}$ is definable in $\mathcal{M}$ (with those parameters). Examples The natural numbers with only the order relation Let $\mathcal{N} = (\mathbb{N}, <)$ be the structure consisting of the natural numbers with the usual ordering. Then every natural number is definable in $\mathcal{N}$ without parameters. The number $0$ is defined by the formula $\varphi(x)$ stating that there exist no elements less than x:

$\varphi = \neg \exists y\, (y < x)$

and a natural number $n > 0$ is defined by the formula $\varphi(x)$ stating that there exist exactly $n$ elements less than x:

$\varphi = \exists y_1 \cdots \exists y_n \Big( y_1 < x \land \cdots \land y_n < x \land \bigwedge_{1 \le i < j \le n} y_i \neq y_j \land \forall z \big( z < x \rightarrow (z = y_1 \lor \cdots \lor z = y_n) \big) \Big)$

In contrast, one cannot define any specific integer without parameters in the structure $\mathcal{Z} = (\mathbb{Z}, <)$ consisting of the integers with the usual ordering (see the section on automorphisms below). The natural numbers with their arithmetical operations Let $\mathcal{N} = (\mathbb{N}, +, \cdot, <)$ be the first-order structure consisting of the natural numbers and their usual arithmetic operations and order relation. The sets definable in this structure are known as the arithmetical sets, and are classified in the arithmetical hierarchy. If the structure is considered in second-order logic instead of first-order logic, the definable sets of natural numbers in the resulting structure are classified in the analytical hierarchy. These hierarchies reveal many relationships between definability in this structure and computability theory, and are also of interest in descriptive set theory. The field of real numbers Let $\mathcal{R} = (\mathbb{R}, 0, 1, +, \cdot)$ be the structure consisting of the field of real numbers. Although the usual ordering relation is not directly included in the structure, there is a formula that defines the set of nonnegative reals, since these are the only reals that possess square roots:

$\varphi = \exists y\, (y \cdot y = x)$

Thus any $a \in \mathbb{R}$ is nonnegative if and only if $\mathcal{R} \models \varphi[a]$. In conjunction with a formula that defines the additive inverse of a real number in $\mathcal{R}$, one can use $\varphi$ to define the usual ordering in $\mathcal{R}$: for $a, b \in \mathbb{R}$, set $a \leq b$ if and only if $b - a$ is nonnegative. The enlarged structure $(\mathbb{R}, 0, 1, +, \cdot, \leq)$ is called a definitional extension of the original structure. It has the same expressive power as the original structure, in the sense that a set is definable over the enlarged structure from a set of parameters if and only if it is definable over the original structure from that same set of parameters. The theory of the enlarged structure has quantifier elimination. Thus the definable sets are Boolean combinations of solutions to polynomial equalities and inequalities; these are called semi-algebraic sets. Generalizing this property of the real line leads to the study of o-minimality. Invariance under automorphisms An important result about definable sets is that they are preserved under automorphisms. Let $\mathcal{M}$ be an $\mathcal{L}$-structure with domain $M$, $X \subseteq M$, and $A \subseteq M^m$ a set definable in $\mathcal{M}$ with parameters from $X$. 
Let $\sigma$ be an automorphism of $\mathcal{M}$ that is the identity on $X$. Then for all $a_1, \ldots, a_m \in M$, $(a_1, \ldots, a_m) \in A$ if and only if $(\sigma(a_1), \ldots, \sigma(a_m)) \in A$. This result can sometimes be used to classify the definable subsets of a given structure. For example, in the case of $\mathcal{Z} = (\mathbb{Z}, <)$ above, any translation of $\mathcal{Z}$ is an automorphism preserving the empty set of parameters, and thus it is impossible to define any particular integer in this structure without parameters in $\mathcal{Z}$. In fact, since any two integers are carried to each other by a translation and its inverse, the only sets of integers definable in $\mathcal{Z}$ without parameters are the empty set and $\mathbb{Z}$ itself. In contrast, there are infinitely many definable sets of pairs (or indeed n-tuples for any fixed n > 1) of elements of $\mathcal{Z}$: (in the case n = 2) Boolean combinations of the sets $\{(a_1, a_2) \in \mathbb{Z}^2 : a_2 = a_1 + n\}$ for $n \in \mathbb{Z}$. In particular, any automorphism (translation) preserves the "distance" between two elements. Additional results The Tarski–Vaught test is used to characterize the elementary substructures of a given structure. References Hinman, Peter. Fundamentals of Mathematical Logic, A K Peters, 2005. Marker, David. Model Theory: An Introduction, Springer, 2002. Rudin, Walter. Principles of Mathematical Analysis, 3rd. ed. McGraw-Hill, 1976. Slaman, Theodore A. and Woodin, W. Hugh. Mathematical Logic: The Berkeley Undergraduate Course. Spring 2006. Model theory Logic Mathematical logic
Definable set
[ "Mathematics" ]
1,010
[ "Mathematical logic", "Model theory" ]
5,569,576
https://en.wikipedia.org/wiki/Logic%20Control
Logic Control is a control surface originally designed by Emagic in cooperation with Mackie. History Logic Control was designed by Emagic as a dedicated control surface for their Logic digital audio workstation software. It was manufactured by Mackie, but distributed by Emagic. About 6 months later, Mackie introduced a physically identical product called "Mackie Control" which included support for most major DAW applications, but not Logic. The Emagic Logic Control was still available and would only work with Logic. Later, Mackie Control's firmware was revised to include compatibility with Logic, combining together Mackie Control, Logic Control and Human User Interface (HUI) into a single protocol. As a result, the name was changed to "Mackie Control Universal" (MCU). Out of the box, MCU included Lexan overlays with different button legends to support control of other DAWs such as Pro Tools and Cubase. Description Logic Control (and now MCU) allows control of almost all Logic parameters with hardware faders, buttons and "V-Pots" (rotary knobs). Its touch-sensitive, motorized faders react to track automation. All transport functions and wheel scrubbing are also available. The unit also controls plug-in parameters. Visual feedback including current parameters being edited, parameter values, project location (SMPTE time code or bars/beats/divisions/ticks) are conveyed by a two-line LCD and red 7-segment LED displays. See also Logic Pro Mackie References Computer peripherals Electronic musical instruments Music hardware
Logic Control
[ "Technology" ]
321
[ "Computer peripherals", "Components" ]
5,569,756
https://en.wikipedia.org/wiki/V1-morph
A V1-morph is an organism that changes in shape during growth such that its surface area is proportional to its volume. In most cases both volume and surface area are proportional to length. The reason the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake. Since uptake is proportional to maintenance for V1-morphs, there is no size control, and an organism grows exponentially at constant food (substrate) availability. Filaments, such as fungi that form hyphae growing in length, but not in diameter, are examples of V1-morphs. Sheets that extend, but do not change in thickness, like some colonial bacteria and algae, are another example. An important property of V1-morphs is that the distinction between the individual and the population level disappears; a single long filament grows as fast as many small ones of the same diameter and the same total length. See also Dynamic Energy Budget V0-morph isomorph shape correction function References Developmental biology Metabolism
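The absence of size control can be made explicit with a one-line sketch (not part of the original text; $a$ and $b$ are hypothetical uptake and maintenance coefficients). Since surface area is proportional to volume for a V1-morph,

$\frac{dV}{dt} = aV - bV = (a - b)\,V \quad\Longrightarrow\quad V(t) = V_0\, e^{(a-b)t},$

i.e. growth is exponential at constant substrate availability and has no intrinsic upper size limit, at both the individual and the population level.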
V1-morph
[ "Chemistry", "Biology" ]
244
[ "Behavior", "Developmental biology", "Reproduction", "Cellular processes", "Biochemistry", "Metabolism" ]
5,569,849
https://en.wikipedia.org/wiki/V0-morph
A V0-morph is an organism whose surface area remains constant as the organism grows. The reason why the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake. Biofilms on a flat solid substrate are examples of V0-morphs; they grow in thickness, but not in the surface area that is involved in nutrient exchange. Other examples are dinophyta and diatoms that have a cell wall that does not change during the cell cycle. During cell-growth, when the amounts of protein and carbohydrates increase, the vacuole shrinks. The outer membrane that is involved in nutrient uptake remains constant. At cell division, the daughter cells rapidly take up water, complete a new cell wall and the cycle repeats. Rods (bacteria that have the shape of a rod and grow in length, but not in diameter) are a static mixture between a V0- and a V1-morph, where the caps act as V0-morphs and the cylinder between the caps as a V1-morph. The mixture is called static because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function are constant during growth. Crusts, such as lichens that grow on a solid substrate, are a dynamic mixture between a V0- and a V1-morph, where the inner part acts as a V0-morph, and the outer annulus as a V1-morph. The mixture is called dynamic because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function change during growth. The Dynamic Energy Budget theory explains why the diameter of crusts grows linearly in time at constant substrate availability. References See also Dynamic energy budget isomorph V1-morph shape correction function Developmental biology Metabolism
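For a pure V0-morph the same bookkeeping yields a saturating growth curve (a sketch, not part of the original text; $a$ and $b$ are hypothetical uptake and maintenance coefficients and $A_0$ is the fixed uptake surface). With constant surface area,

$\frac{dV}{dt} = aA_0 - bV \quad\Longrightarrow\quad V(t) = \frac{aA_0}{b} + \Big(V_0 - \frac{aA_0}{b}\Big)\,e^{-bt},$

so the volume approaches the ceiling $aA_0/b$ at which maintenance consumes the entire uptake.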
V0-morph
[ "Chemistry", "Biology" ]
418
[ "Behavior", "Developmental biology", "Reproduction", "Cellular processes", "Biochemistry", "Metabolism" ]
5,569,863
https://en.wikipedia.org/wiki/GRE%20Biochemistry%2C%20Cell%20and%20Molecular%20Biology%20Test
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam; there were no computer-based versions of it. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test. Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS decided not to administer the test worldwide in 2016–17. Content specification Since many students who applied to graduate programs in biochemistry did so during the first half of their fourth year, the scope of most questions was largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g., membranes, ribosomes and multienzyme complexes) C Catalysis and Binding Enzyme reaction mechanisms and kinetics Ligand-protein interaction (e.g., hormone receptors, substrates and effectors, transport proteins and antigen-antibody interactions) D Major Metabolic Pathways Carbon, nitrogen and sulfur assimilation Anabolism Catabolism Synthesis and degradation of macromolecules E Bioenergetics (including respiration and photosynthesis) Energy transformations at the substrate level Electron transport Proton and chemical gradients Energy coupling (e.g., phosphorylation and transport) F Regulation and Integration of Metabolism Covalent modification of enzymes Allosteric regulation Compartmentalization Hormones G Methods Biophysical approaches (e.g., spectroscopy, x-ray, crystallography, mass spectroscopy) Isotopes Separation techniques (e.g., centrifugation, chromatography and electrophoresis) Immunotechniques Cell biology (28%) Methods of importance to cellular biology, such as fluorescence probes (e.g., FRAP, FRET and GFP) and imaging, will be covered as appropriate within the context of the content below. A. Cellular Compartments of Prokaryotes and Eukaryotes: Organization, Dynamics and Functions Cellular membrane systems (e.g., structure and transport across membrane) Nucleus (e.g., envelope and matrix) Mitochondria and chloroplasts (e.g., biogenesis and evolution) B. Cell Surface and Communication Extracellular matrix (including cell walls) Cell adhesion and junctions Signal transduction Receptor function Excitable membrane systems C. 
Cytoskeleton, Motility and Shape Regulation of assembly and disassembly of filament systems Motor function, regulation and diversity D. Protein, Processing, Targeting and Turnover Translocation across membranes Posttranslational modification Intracellular trafficking Secretion and endocytosis Protein turnover (e.g., proteosomes, lysosomes, damaged protein response) E. Cell Division, Differentiation and Development Cell cycle, mitosis and cytokinesis Meiosis and gametogenesis Fertilization and early embryonic development (including positional information, homeotic genes, tissue-specific expression, nuclear and cytoplasmic interactions, growth factors and induction, environment, stem cells and polarity) Molecular biology (36%) A. Genetic Foundations Mendelian and non-Mendelian inheritance Transformation, transduction and conjugation Recombination and complementation Mutational analysis Genetic mapping and linkage analysis B. Chromatin and Chromosomes Karyotypes Translocations, inversions, deletions and duplications Aneuploidy and polyploidy Structure Epigenetics C. Genomics Genome structure Physical mapping Repeated DNA and gene families Gene identification Transposable elements Bioinformatics Proteomics Molecular evolution D. Genome Maintenance DNA replication DNA damage and repair DNA modification DNA recombination and gene conversion E. Gene Expression/Recombinant DNA Technology The genetic code Transcription/transcriptional profiling RNA processing Translation F. Gene Regulation Positive and negative control of the operon Promoter recognition by RNA polymerases Attenuation and antitermination Cis-acting regulatory elements Trans-acting regulatory factors Gene rearrangements and amplifications Small non-coding RNA (e.g., siRNA, microRNA) G. Viruses Genome replication and regulation Virus assembly Virus-host interactions H. Methods Restriction maps and PCR Nucleic acid blotting and hybridization DNA cloning in prokaryotes and eukaryotes Sequencing and analysis Protein-nucleic acid interaction Transgenic organisms Microarrays See also Graduate Record Examination GRE Biology Test GRE Chemistry Test GRE Literature in English Test GRE Mathematics Test GRE Physics Test GRE Psychology Test Graduate Management Admission Test (GMAT) Graduate Aptitude Test in Engineering (GATE) References Biochemistry education GRE standardized tests
GRE Biochemistry, Cell and Molecular Biology Test
[ "Chemistry", "Biology" ]
1,233
[ "Biochemistry", "Biochemistry education" ]
5,570,077
https://en.wikipedia.org/wiki/Rich%20Page
Richard Page is an alumnus of Apple Inc. He was an Apple Fellow at Apple Computer in the 1980s, and later joined Steve Jobs at NeXT. Page was one of the first four Apple Fellows, a position awarded to him for his efforts in graphics software, development tools including compilers, and hardware development. As an Apple Fellow, Page prototyped Apple's first portable, color and 68020-based Macintosh computers. He was responsible for the decision to use the Motorola MC68000 family of microprocessors for Apple's Lisa and Macintosh computers and was instrumental in the initial design of the Lisa. Page was the second Fellow at Rambus, contributing to lighting, RRAM and new memory technologies. He was president and founder of Next Sierra, a fabless semiconductor company that developed display drivers for active-matrix OLEDs. Previously, Page was president and founder of Sierra Research and Technology, Inc., which provided 622M ATM, 10/100 Ethernet and Gigabit Ethernet designs to more than 50 semiconductor and system companies, along with a number of custom designs for large system companies. Sierra was acquired by TDK Semiconductor in 2000 to substantially expand TDK's networking engineering team. Before founding Sierra, Page was a co-founder and the Vice President of Hardware Engineering at NeXT Computer, Inc., where he led the development of the Cube, NeXTstation and Turbo NeXTstation products. He later became general manager of the NeXT Hardware Division, which included design engineering, materials management, manufacturing, service, order management and distribution. In 1992, Page left NeXT; within weeks of his resignation, several NeXT vice presidents also left. His experience in hardware and software design includes the development of microcode for Hewlett-Packard's HP 3000 minicomputer series, which is still in use today. At Fairchild Semiconductor, he developed test programs for Fairchild's microprocessors, memory products and custom chips. Page is chairman of the board at Chowbotics Inc., a start-up building food-robotics products; its first product is Sally, a salad-making robot. External links Big Mac Living people Apple Inc. employees Apple Fellows Year of birth missing (living people) NeXT people
Rich Page
[ "Technology" ]
454
[ "Computing stubs", "Computer specialist stubs" ]
5,570,119
https://en.wikipedia.org/wiki/Push%20Pin%20Studios
Push Pin Studios is a graphic design and illustration studio founded by the influential graphic designers Milton Glaser and Seymour Chwast in New York City in 1954. The firm's work and distinctive illustration style, featuring "bulgy" three-dimensional "interpretations of historical styles (Victorian, art nouveau, art deco)," made their mark by departing from what the firm refers to as the "numbing rigidity of modernism, and the rote sentimental realism of commercial illustration." Eye magazine contextualized the results in a 1995 article for their "Reputations" column: In an era dominated by Swiss rationalism, the Push Pin style celebrated the eclectic and eccentric design of the passé past while it introduced a distinctly contemporary design vocabulary, with a wide range of work that included record sleeves, books, posters, corporate logotypes, font design and magazine formats. History After graduating from Cooper Union, Edward Sorel and Seymour Chwast worked for a short time at Esquire magazine, both being fired on the same day. Joining forces to form an art studio, they called it "Push Pin" after a mailing piece, The Push Pin Almanack, which they had self-published during their time at Esquire. Sorel and Chwast used their unemployment checks to rent a cold-water flat on East 17th Street in Manhattan. A few months later, Glaser returned from a Fulbright Fellowship year in Italy and joined the studio. Sorel left Push Pin in 1956, the same day the studio moved into a much nicer space on East 57th Street. For twenty years Glaser and Chwast directed Push Pin, along with graphic designers and illustrators such as John Alcorn (in the late 1950s), Paul Davis (1959–1963), Barry Zaid (1969–1975) and Paul Degen (1970s), among others. Today, Chwast is principal of The Pushpin Group, Inc. Over the last six decades, the firm's work, and that of the founding designers, along with Reynold Ruffins, Edward Sorel and several other designers who have been associated with it, has led to several books, as well as publication in The New York Times, The New Yorker, The Wall Street Journal, Vanity Fair, The Atlantic, and Print (magazine), and traveling exhibitions, such as "The Push Pin Style," which traveled to the Museum of Decorative Arts of the Louvre, as well as numerous cities in Europe, Brazil, and Japan in 1970–72. Related publications The firm's in-house publications included The Push Pin Almanack and The Push Pin Graphic. Out of house, the founding team served as art directors of Audience magazine, a high-end, subscription-only bimonthly arts and literature periodical, for whom Glaser and Chwast "used photographs, drawings, big pictures and lavish colors to accompany articles by Donald Barthelme, Herbert Gold, Martin Mayer, Thomas Whiteside and Frank Capra, among others." Founded in 1971, under Glaser and Chwast's direction, the magazine won the top award of the Society of Publication Designers in 1972; in 1973, however, it folded due to lack of funding. Bibliography Chwast, Seymour. Push Pin Graphic: A Quarter Century of Innovative Design and Illustration. Chronicle Books, 2004. Exhibitions 1970 "The Push Pin Style" — Musée des Arts décoratifs, Paris (March 18 – May 18, 1970); later traveled to Brazil and Japan 2021 "The Push Pin Legacy" — Poster House (September 2, 2021 – February 6, 2022) References External links Advertisement for Audience magazine Graphic design studios 1954 establishments in New York City Design companies established in 1954 American companies established in 1954
Push Pin Studios
[ "Engineering" ]
777
[ "Design stubs", "Design" ]
5,570,175
https://en.wikipedia.org/wiki/Susan%20Barnes%20%28computing%29
Susan Kelly Barnes is an alumna of Apple Inc. She was Controller of the Macintosh Division at Apple Computer. When Steve Jobs left Apple Computer in 1985, she joined Jobs and other Apple managers to cofound NeXT Computer, Inc., where she served as Vice President and Chief Financial Officer from 1985 to 1991. As NeXT's Chief Financial Officer, Barnes helped raise the significant funding that allowed NeXT to weather its slow start. The most notable transaction was a $100 million investment by Canon Inc. in 1989 for a 16.7 percent stake in NeXT, giving NeXT an implied valuation of $600 million, astonishingly high for a company that was not yet shipping any products. After leaving NeXT Computer, Barnes was Chief Financial Officer of Intuitive Surgical from 1997 to 2005. References Living people Women chief financial officers American chief financial officers Year of birth missing (living people) Apple Inc. employees NeXT people
Susan Barnes (computing)
[ "Technology" ]
185
[ "Computing stubs", "Computer specialist stubs" ]
5,570,638
https://en.wikipedia.org/wiki/Tamil%20numerals
The Tamil language has number words and dedicated symbols for them in the Tamil script. Basic numbering in Tamil Zero Old Tamil possesses a special numerical character for zero (see Old Tamil numerals below), which is read as (literally, no/nothing); yet Modern Tamil renounces the use of its native character and uses the Indian symbol '0' for Shunya meaning nothingness in Indic thought. Modern Tamil words for zero include () or (). First ten numbers () Transcribing other numbers Reproductive and attributive prefixes Tamil has a numeric prefix for each number from 1 to 9, which can be added to the words for the powers of ten (ten, hundred, thousand, etc.) to form multiples of them. For instance, the word for fifty, () is a combination of (, the prefix for five) and (, which is ten). The prefix for nine changes with respect to the succeeding base 10. + the unvoiced consonant of the succeeding base 10 forms the prefix for nine. For instance, 90 is + ( being the unvoiced version of ), hence, ). These are typically void in the Tamil language except for some Hindu references; for example, (the eight Lakshmis). Even in religious contexts, the Tamil language is usually more preferred for its more poetic nature and relatively low incidence of consonant clusters. Specific characters Unlike other modern Indian number systems, Tamil has distinct digits for 10, 100, and 1000. It also has distinct characters for other number-based aspects of day-to-day life. Powers of ten () There are two numeral systems that can be used in the Tamil language: the Tamil system which is as follows The following are the traditional numbers of the region. Original Tamil system Current Tamil system Partitive numerals () Fractions () Proposals to encode Tamil fractions and symbols to Unicode were submitted. As of version 12.0, Tamil characters used for fractional values in traditional accounting practices were added to the Unicode Standard. Transcribing fractions () Any fraction can be transcribed by affixing - (-il) after the denominator followed by the numerator. For instance, 1/41 can be said as (). The suffixing of the - () requires the last consonant of the number to be changed to its () form. For example, + ( + ) becomes (); note the () has been omitted. Common fractions () have names already allocated to them, hence, these names are often used rather than the above method. Other fractions include: Aṇu was considered as the lowest fraction by ancient Tamils as size of smallest physical object (similar to an atom). Later, this term went to Sanskrit to refer directly to atoms. Decimals () Decimal point is called () in Tamil. For example, 1.1 would be read as (). In Sri Lankan Tamil, Thasam தசம். Percentage () Percentage is known as () in Tamil or (). These words are simply added after a number to form percentages. For instance, four percent is () or (). Percentage symbol (%) is also recognised and used. Ordinal numbers () Ordinal numbers are formed by adding the suffix - () after the number, except for 'First'. Collective numerals () As always, when blending two words into one, an unvoiced form of the consonant as the one that the second starts with, is placed in between to blend. Traditional Tamil counting song This song is a list of each number with a concept its primarily associated with. Influence on other dravidian languages As the ancient classical language of the Dravidian languages, Tamil numerals influenced and shaped the numerals of the others in the family. 
The following table compares the main Dravidian languages. Also, Tamil through the Pallava script which itself through the Kawi script, Khmer script and other South-east Asian scripts has shaped the numeral grapheme of most South-east Asian languages. History Before the Government of India unveiled as the new rupee symbol, people in Tamil Nadu used the Tamil letter as the symbol. This symbol continues to be used occasionally as rupee symbol by Indian Tamils. It is also used by Tamils in Sri Lanka. The symbol is also known as the (), a symbol that most Tamil Hindus will start off any auspicious document with. It is written to invoke the god , known otherwise as Ganesha, who is the remover of obstacles. See also Kaṇita Tīpikai Tamil script Tamil units of measurement References Tamil culture Tamil language Tamil numerals
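Because the traditional system described above has dedicated signs for ten, hundred and thousand, its additive-multiplicative notation can be sketched in a few lines of Python using the Unicode Tamil digits (U+0BE6 to U+0BEF) and the number signs ௰ (10), ௱ (100) and ௲ (1000). The rendering rule used here (a multiplier of one is omitted before a place sign) follows common descriptions of the traditional system; treat this as a rough illustration rather than a complete transliteration scheme.

```python
DIGITS = "௦௧௨௩௪௫௬௭௮௯"                         # U+0BE6 .. U+0BEF
PLACES = [(1000, "௲"), (100, "௱"), (10, "௰")]  # U+0BF2, U+0BF1, U+0BF0

def to_traditional_tamil(n: int) -> str:
    """Render 1 <= n <= 9999 in the traditional (non-positional) Tamil notation.

    Each non-zero place is written as <digit><place sign>; a multiplier of one
    is conventionally omitted, and the traditional system needs no zero sign.
    """
    assert 1 <= n <= 9999, "simplified sketch: small positive integers only"
    parts = []
    for value, sign in PLACES:
        q, n = divmod(n, value)
        if q == 1:
            parts.append(sign)
        elif q > 1:
            parts.append(DIGITS[q] + sign)
    if n:  # remaining units digit
        parts.append(DIGITS[n])
    return "".join(parts)

print(to_traditional_tamil(3467))  # ௩௲௪௱௬௰௭  (3x1000 + 4x100 + 6x10 + 7)
print(to_traditional_tamil(10))    # ௰
```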
Tamil numerals
[ "Mathematics" ]
973
[ "Numeral systems", "Numerals" ]
5,570,754
https://en.wikipedia.org/wiki/Terricolous%20lichen
A terricolous lichen is a lichen that grows on the soil as a substrate. An example is some members of the genus Peltigera. References Lichenology
Terricolous lichen
[ "Biology" ]
38
[ "Lichenology" ]
5,571,005
https://en.wikipedia.org/wiki/Compatibility%20%28geochemistry%29
Compatibility is a term used by geochemists to describe how elements partition themselves in the solid and melt within Earth's mantle. In geochemistry, compatibility is a measure of how readily a particular trace element substitutes for a major element within a mineral. The compatibility of an ion is controlled by two things: its valence and its ionic radius. Both must approximate those of the major element for the trace element to be compatible in the mineral. For instance, olivine (an abundant mineral in the upper mantle) has the chemical formula (Mg,Fe)2SiO4. Nickel, with very similar chemical behaviour to iron and magnesium, substitutes readily for them and hence is very compatible in the mantle. Compatibility controls the partitioning of different elements during melting. The compatibility of an element in a rock is a weighted average of its compatibility in each of the minerals present. By contrast, an incompatible element is one that is least stable within its crystal structure. If an element is incompatible in a rock, it partitions into a melt as soon as melting begins. In general, when an element is referred to as being "compatible" without mentioning what rock it is compatible in, the mantle is implied. Thus incompatible elements are those that are enriched in the continental crust and depleted in the mantle; examples include rubidium, barium, uranium, and lanthanum. Compatible elements are depleted in the crust and enriched in the mantle; examples include nickel and titanium. Compatibility is commonly described by an element's distribution coefficient, which describes how an element distributes itself between the solid and liquid phases of a mineral-melt system. Current studies of Earth's rare trace elements seek to quantify and examine the chemical composition of elements in the Earth's crust. There are still uncertainties in the understanding of the lower crust and upper mantle region of Earth's interior. In addition, numerous studies have focused on the partition coefficients of certain elements in basaltic magma to characterize the composition of oceanic crust. By providing a way to measure the composition of elements in the crust and mantle from a mineral sample, compatibility allows the relative concentrations of a particular trace element to be determined. From a petrological point of view, understanding how major and rare trace elements differentiate in the melt provides deeper understanding of Earth's chemical evolution over the geologic time scale. Quantifying compatibility Distribution (Partition) coefficient In a mineral, nearly all elements distribute unevenly between the solid and liquid phase. This phenomenon, known as chemical fractionation, can be described by an equilibrium constant, which sets a fixed distribution of an element between any two phases at equilibrium. A distribution constant is used to define the relationship between the solid and liquid phases of a reaction. This value is essentially the ratio of the concentrations of an element in two phases, typically the solid and the liquid in this context. This constant is often referred to as Kd when dealing with trace elements, where for trace elements Kd = Cs/Cl, the ratio of the element's concentration in the solid (Cs) to its concentration in the liquid (Cl). The equilibrium constant is an empirically determined value. These values depend on temperature, pressure, and the composition of the mineral melt. Kd values differ considerably between major elements and trace elements. 
By definition, incompatible trace elements have an equilibrium constant of less than one (Kd < 1), because they have higher concentrations in the melt than in the solid; compatible elements, correspondingly, have Kd > 1. Thus, incompatible elements are concentrated in the melt, whereas compatible elements tend to be concentrated in the solid. Compatible elements with Kd much greater than one are strongly fractionated and have very low concentrations in the liquid phase. Bulk distribution coefficient The bulk distribution coefficient is used to calculate the elemental composition for any element that makes up a mineral in a rock. The bulk distribution coefficient for element i, Di, is defined as Di = Σj Wj·Kd(i,j), where Wj is the weight fraction of mineral j in the rock and Kd(i,j) is the distribution coefficient for element i in mineral j. This constant can be used to describe how individual elements in a mineral are concentrated in two different phases. During chemical fractionation, certain elements may become more or less concentrated, which allows geochemists to quantify the different stages of magma differentiation. Ultimately, these measurements can be used to provide further understanding of elemental behavior in different geologic settings. Applications One of the main sources of information about the Earth's composition comes from understanding the relationship between peridotite and basalt melting. Peridotite makes up most of Earth's mantle. Basalt, which is highly concentrated in the Earth's oceanic crust, is formed when magma reaches the Earth's surface and cools at a very fast rate. When magma cools, different minerals crystallize at different times depending on the crystallization temperature of each mineral. This progressively changes the chemical composition of the melt as different minerals begin to crystallize. Fractional crystallization of elements in basaltic liquids has also been studied to observe the composition of lava in the upper mantle. This concept can be applied to give insight into the evolution of Earth's mantle and how concentrations of lithophile trace elements have varied over the last 3.5 billion years. Understanding the Earth's interior Previous studies have used the compatibility of trace elements to see the effect it would have on the melt structure of the peridotite solidus. In such studies, the partition coefficients of specific elements were examined, and the magnitudes of these values gave researchers some indication of the degree of polymerization of the melt. A study conducted in East China in 1998 looked at the chemical composition of various elements found in the crust in that region. One of the parameters used to characterize and describe the crustal structure in this region was the compatibility of various element pairs. Essentially, studies like this showed how the compatibility of certain elements can change and be affected by the chemical compositions and conditions of Earth's interior. Oceanic volcanism is another topic that commonly incorporates the use of compatibility. Since the 1960s, the structure of Earth's mantle has been studied by geochemists. The oceanic crust, which is rich in basalts from volcanic activity, shows distinct components that provide information about the evolution of the Earth's interior over the geologic timescale. Incompatible trace elements become depleted when the mantle melts and become enriched in oceanic or continental crust through volcanic activity. Other times, volcanism can bring enriched mantle melt onto the crust. 
These phenomena can be quantified by looking at the radioactive decay records of isotopes in these basalts, which are a valuable tool for mantle geochemists. More specifically, the geochemistry of serpentinites along the ocean floor, particularly at subduction zones, can be examined using the compatibility of specific trace elements. The compatibility of lead (Pb) in zircon is another informative case: because Pb is largely incompatible in the zircon crystal structure, zircons incorporate very little non-radiogenic lead when they form, and observing the levels of non-radiogenic lead in zircons makes them a useful tool for radiometric dating. References Geochemistry Geology
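To make the bulk distribution coefficient concrete, here is a minimal Python sketch of the weighted average defined above. The mineral modes and per-mineral Kd values are hypothetical placeholders chosen only to illustrate the arithmetic, not measured data.

```python
def bulk_distribution_coefficient(modes, kds):
    """D = sum over minerals j of W_j * Kd_j, where W_j is the weight fraction
    of mineral j in the rock and Kd_j the element's distribution coefficient in j."""
    return sum(w * kds[mineral] for mineral, w in modes.items())

# Hypothetical peridotite mode and illustrative Kd values for one trace element.
modes = {"olivine": 0.60, "orthopyroxene": 0.25, "clinopyroxene": 0.15}
kds = {"olivine": 0.01, "orthopyroxene": 0.05, "clinopyroxene": 0.30}

D = bulk_distribution_coefficient(modes, kds)
print(D)  # 0.0635: D < 1, so this element is incompatible and concentrates in the melt
```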
Compatibility (geochemistry)
[ "Chemistry" ]
1,383
[ "nan" ]
5,571,012
https://en.wikipedia.org/wiki/Indefinite%20inner%20product%20space
In mathematics, in the field of functional analysis, an indefinite inner product space is an infinite-dimensional complex vector space equipped with both an indefinite inner product and a positive semi-definite inner product where the metric operator is an endomorphism of obeying The indefinite inner product space itself is not necessarily a Hilbert space; but the existence of a positive semi-definite inner product on implies that one can form a quotient space on which there is a positive definite inner product. Given a strong enough topology on this quotient space, it has the structure of a Hilbert space, and many objects of interest in typical applications fall into this quotient space. An indefinite inner product space is called a Krein space (or -space) if is positive definite and possesses a majorant topology. Krein spaces are named in honor of the Soviet mathematician Mark Grigorievich Krein. Inner products and the metric operator Consider a complex vector space equipped with an indefinite hermitian form . In the theory of Krein spaces it is common to call such an hermitian form an indefinite inner product. The following subsets are defined in terms of the square norm induced by the indefinite inner product: ("neutral") ("positive") ("negative") ("non-negative") ("non-positive") A subspace lying within is called a neutral subspace. Similarly, a subspace lying within () is called positive (negative) semi-definite, and a subspace lying within () is called positive (negative) definite. A subspace in any of the above categories may be called semi-definite, and any subspace that is not semi-definite is called indefinite. Let our indefinite inner product space also be equipped with a decomposition into a pair of subspaces , called the fundamental decomposition, which respects the complex structure on . Hence the corresponding linear projection operators coincide with the identity on and annihilate , and they commute with multiplication by the of the complex structure. If this decomposition is such that and , then is called an indefinite inner product space; if , then is called a Krein space, subject to the existence of a majorant topology on (a locally convex topology where the inner product is jointly continuous). The operator is called the (real phase) metric operator or fundamental symmetry, and may be used to define the Hilbert inner product : On a Krein space, the Hilbert inner product is positive definite, giving the structure of a Hilbert space (under a suitable topology). Under the weaker constraint , some elements of the neutral subspace may still be neutral in the Hilbert inner product, but many are not. For instance, the subspaces are part of the neutral subspace of the Hilbert inner product, because an element obeys . But an element () which happens to lie in because will have a positive square norm under the Hilbert inner product. We note that the definition of the indefinite inner product as a Hermitian form implies that: (Note: This is not correct for complex-valued Hermitian forms. It only gives the real part.) Therefore the indefinite inner product of any two elements which differ only by an element is equal to the square norm of their average . Consequently, the inner product of any non-zero element with any other element must be zero, lest we should be able to construct some whose inner product with has the wrong sign to be the square norm of . 
Similar arguments about the Hilbert inner product (which can be demonstrated to be a Hermitian form, therefore justifying the name "inner product") lead to the conclusion that its neutral space is precisely , that elements of this neutral space have zero Hilbert inner product with any element of , and that the Hilbert inner product is positive semi-definite. It therefore induces a positive definite inner product (also denoted ) on the quotient space , which is the direct sum of . Thus is a Hilbert space (given a suitable topology). Properties and applications Krein spaces arise naturally in situations where the indefinite inner product has an analytically useful property (such as Lorentz invariance) which the Hilbert inner product lacks. It is also common for one of the two inner products, usually the indefinite one, to be globally defined on a manifold and the other to be coordinate-dependent and therefore defined only on a local section. In many applications the positive semi-definite inner product depends on the chosen fundamental decomposition, which is, in general, not unique. But it may be demonstrated (e.g., cf. Propositions 1.1 and 1.2 in the paper of H. Langer below) that any two metric operators and compatible with the same indefinite inner product on result in Hilbert spaces and whose decompositions and have equal dimensions. Although the Hilbert inner products on these quotient spaces do not generally coincide, they induce identical square norms, in the sense that the square norms of the equivalence classes into which a given element falls are equal. All topological notions in a Krein space, like continuity, closedness of sets, and the spectrum of an operator on , are understood with respect to this Hilbert space topology. Isotropic part and degenerate subspaces Let , , be subspaces of . The subspace for all is called the orthogonal companion of , and is the isotropic part of . If , is called non-degenerate; otherwise it is degenerate. If for all , then the two subspaces are said to be orthogonal, and we write . If where , we write . If, in addition, this is a direct sum, we write . Pontryagin space If the dimension of one of the components of a fundamental decomposition is finite, the Krein space is called a Pontryagin space or -space. (Conventionally, the indefinite inner product is given the sign that makes the dimension of the positive component finite.) In this case that finite dimension is known as the number of positive squares of the inner product. Pontryagin spaces are named after Lev Semenovich Pontryagin. Pesonen operator A symmetric operator A on an indefinite inner product space K with domain K is called a Pesonen operator if (x,x) = 0 = (x,Ax) implies x = 0. References Azizov, T.Ya.; Iokhvidov, I.S. : Linear operators in spaces with an indefinite metric, John Wiley & Sons, Chichester, 1989, . Bognár, J. : Indefinite inner product spaces, Springer-Verlag, Berlin-Heidelberg-New York, 1974, . Langer, H. : Spectral functions of definitizable operators in Krein spaces, Functional Analysis Proceedings of a conference held at Dubrovnik, Yugoslavia, November 2–14, 1981, Lecture Notes in Mathematics, 948, Springer-Verlag Berlin-Heidelberg-New York, 1982, 1-46, . Hassibi B, Sayed AH, Kailath T. Indefinite-Quadratic estimation and control: a unified approach to H 2 and H∞ theories. Society for Industrial and Applied Mathematics; 1999, . Topological vector spaces Operator theory
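The simplest finite-dimensional toy model of these definitions is C^2 with the metric operator J = diag(1, -1). The NumPy sketch below is entirely illustrative (the convention chosen here makes the form linear in the first argument and conjugate-linear in the second) and checks the article's point that a vector which is neutral for the indefinite inner product need not be neutral for the induced Hilbert inner product.

```python
import numpy as np

# Metric operator (fundamental symmetry) on C^2 with signature (1, 1).
J = np.diag([1.0, -1.0]).astype(complex)
print(np.allclose(J @ J @ J, J))  # True: J^3 = J

def indefinite(x, y):
    """Indefinite inner product <x, y>: linear in x, conjugate-linear in y."""
    return x @ (J @ np.conj(y))

def hilbert(x, y):
    """Hilbert inner product (x, y) = <x, Jy> induced by the metric operator J."""
    return indefinite(x, J @ y)

x = np.array([1.0, 1.0], dtype=complex)  # <x, x> = 1 - 1 = 0: a neutral vector
y = np.array([2.0, 1.0], dtype=complex)  # <y, y> = 4 - 1 = 3: a positive vector

print(indefinite(x, x).real)  # 0.0: neutral for the indefinite inner product
print(hilbert(x, x).real)     # 2.0: but not neutral for the Hilbert inner product
print(indefinite(y, y).real)  # 3.0
```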
Indefinite inner product space
[ "Mathematics" ]
1,441
[ "Topological vector spaces", "Vector spaces", "Space (mathematics)" ]
5,571,489
https://en.wikipedia.org/wiki/Instability%20strip
The unqualified term instability strip usually refers to a region of the Hertzsprung–Russell diagram largely occupied by several related classes of pulsating variable stars: Delta Scuti variables, SX Phoenicis variables, and rapidly oscillating Ap stars (roAps) near the main sequence; RR Lyrae variables where it intersects the horizontal branch; and the Cepheid variables where it crosses the supergiants. RV Tauri variables are also often considered to lie on the instability strip, occupying the area to the right of the brighter Cepheids (at lower temperatures), since their stellar pulsations are attributed to the same mechanism. Position on the HR diagram The Hertzsprung–Russell diagram plots the real luminosity of stars against their effective temperature (their color, given by the temperature of their photosphere). The instability strip intersects the main sequence (the prominent diagonal band that runs from the upper left to the lower right) in the region of A and F stars (1–2 solar masses) and extends to G and early K bright supergiants (early M if RV Tauri stars at minimum are included). Above the main sequence, the vast majority of stars in the instability strip are variable. Where the instability strip intersects the main sequence, the vast majority of stars are stable, but there are some variables, including the roAp stars and the Delta Scuti variables. Pulsations Stars in the instability strip pulsate due to He III (doubly ionized helium), in a process known as the kappa mechanism. In normal A-F-G class stars, He in the stellar photosphere is neutral. Deeper below the photosphere, where the temperature reaches about 25,000–30,000 K, begins the He II layer (first He ionization). Second ionization of helium (He III) starts at depths where the temperature is about 35,000–50,000 K. When the star contracts, the density and temperature of the He II layer increase. The increased energy is sufficient to remove the lone remaining electron from He II, transforming it into He III (second ionization). This causes the opacity of the He layer to increase, and the energy flux from the interior of the star is effectively absorbed. The trapped energy heats the layer, and the increased pressure pushes the outer layers of the star to expand. After expansion, the He III cools and begins to recombine with free electrons to form He II, and the opacity of the star decreases. This allows the trapped heat to propagate to the surface of the star. Once sufficient energy has been radiated away, the weight of the overlying stellar material causes the He II layer to contract once again, and the cycle starts from the beginning. This results in the observed increase and decrease in the surface temperature of the star. In some stars, the pulsations are caused by the opacity peak of metal ions at about 200,000 K. The phase shift between a star's radial pulsations and its brightness variations depends on the distance of the He II zone from the stellar surface in the stellar atmosphere. For most Cepheids, this creates a distinctly asymmetrical observed light curve, increasing rapidly to maximum and slowly decreasing back down to minimum. Other pulsating stars There are several types of pulsating star not found on the instability strip, with pulsations driven by different mechanisms. At cooler temperatures are the long-period variable AGB stars. At hotter temperatures are the Beta Cephei and PV Telescopii variables. Right at the edge of the instability strip near the main sequence are Gamma Doradus variables. 
The white dwarf band has three separate regions and types of variable: DOV, DBV, and DAV (= ZZ Ceti variables) white dwarfs. Each of these types of pulsating variable has an associated instability strip, created by variable-opacity partial ionisation regions of elements other than helium. Most high-luminosity supergiants are somewhat variable, including the Alpha Cygni variables. In the specific region of more luminous stars above the instability strip are found the yellow hypergiants, which have irregular pulsations and eruptions. The hotter luminous blue variables may be related and show similar short- and long-term spectral and brightness variations with irregular eruptions. References Hertzsprung–Russell classifications Stellar evolution
Instability strip
[ "Physics" ]
892
[ "Astrophysics", "Stellar evolution" ]
5,571,591
https://en.wikipedia.org/wiki/Hertzsprung%20gap
The Hertzsprung gap is a feature of the Hertzsprung–Russell diagram for a star cluster. This diagram is a plot of effective temperature versus luminosity for a population of stars. The gap is named after Ejnar Hertzsprung, who first noticed the absence of stars in the region of the Hertzsprung–Russell diagram between spectral types A5 and G0 and between absolute magnitudes +1 and −3. This gap lies between the top of the main sequence and the base of the red giants for stars above roughly 1.5 solar masses. A star crossing the Hertzsprung gap during its evolution has finished burning hydrogen in its core. Stars do exist in the Hertzsprung gap region, but because they move through this section of the Hertzsprung–Russell diagram very quickly compared to the lifetime of the star (thousands of years, versus millions or billions of years for the star's lifetime), that portion of the diagram is sparsely populated. Full Hertzsprung–Russell diagrams of the 11,000 Hipparcos mission targets show a handful of stars in that region. Well-known stars inside of or towards the end of the Hertzsprung gap include Epsilon Pegasi, Pi Puppis, Epsilon Geminorum, Beta Arae, Gamma Cygni and Capella B. Canopus, Iota Carinae, and Upsilon Carinae are also starting to enter the gap. See also Subgiant References Stellar evolution Subgiant stars
Hertzsprung gap
[ "Physics", "Astronomy" ]
320
[ "Astronomy stubs", "Astrophysics", "Stellar evolution", "Stellar astronomy stubs", "Astrophysics stubs" ]
5,571,751
https://en.wikipedia.org/wiki/Universal%20wavefunction
The universal wavefunction or the wavefunction of the universe is the wavefunction or quantum state of the entire universe. It is regarded as the basic physical entity in the many-worlds interpretation of quantum mechanics, and finds applications in quantum cosmology. It evolves deterministically according to a wave equation. The concept of the universal wavefunction was introduced by Hugh Everett in his 1956 PhD thesis draft The Theory of the Universal Wave Function. It was later investigated by James Hartle and Stephen Hawking, who derived the Hartle–Hawking solution to the Wheeler–DeWitt equation to explain the initial conditions of the Big Bang cosmology. Role of observers Hugh Everett's universal wavefunction supports the idea that observed and observer are all mixed together; Eugene Wigner and John Archibald Wheeler took issue with this stance in their own writings. See also Heisenberg cut References Quantum measurement
Universal wavefunction
[ "Physics" ]
187
[ "Quantum measurement", "Quantum mechanics" ]
5,571,869
https://en.wikipedia.org/wiki/Capital%20recovery%20factor
A capital recovery factor is the ratio of a constant annuity to the present value of receiving that annuity for a given length of time. Using an interest rate i, the capital recovery factor is CRF = i(1 + i)^n / ((1 + i)^n − 1), where n is the number of annuities received. This is related to the annuity formula, which gives the present value in terms of the annuity, the interest rate, and the number of annuities. If n = 1, the CRF reduces to 1 + i. Also, as n goes to infinity, the CRF tends to i. Example With an interest rate of i = 10% and n = 10 years, the CRF = 0.163. This means that a loan of $1,000 at 10% interest will be paid back with 10 annual payments of $163. Another reading that can be obtained is that the net present value of 10 annual payments of $163 at a 10% discount rate is $1,000. References External links Wolfram|Alpha Capital Recovery Factor Calculator Financial ratios Annuities
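As a worked check of the formula above, the following Python sketch computes the capital recovery factor and reproduces the article's example; the function name is illustrative, not from any standard library.

```python
def capital_recovery_factor(i: float, n: int) -> float:
    """Capital recovery factor: CRF = i(1+i)^n / ((1+i)^n - 1)."""
    growth = (1 + i) ** n
    return i * growth / (growth - 1)

# Worked example from the article: i = 10%, n = 10 years.
crf = capital_recovery_factor(0.10, 10)
print(round(crf, 3))         # 0.163
print(round(1000 * crf, 2))  # 162.75, i.e. ten annual payments of about $163

# Limiting behaviour: CRF = 1 + i when n = 1, and CRF approaches i as n grows.
print(capital_recovery_factor(0.10, 1))    # ~1.1
print(capital_recovery_factor(0.10, 500))  # ~0.1
```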
Capital recovery factor
[ "Mathematics" ]
196
[ "Financial ratios", "Quantity", "Metrics" ]
5,572,019
https://en.wikipedia.org/wiki/Water%20reducer
Water reducers are special chemical products added to a concrete mixture before it is poured. They are from the same family of products as retarders. The first class of water reducers was the lignosulfonates, which have been used since the 1930s. These inexpensive products were derived from the wood and paper industry, but have now been advantageously replaced by synthetic sulfonates and polycarboxylates, also known as superplasticizers. Water reducers offer several advantages in their use, listed below: reduces the water content by 5–10% decreases the concrete porosity increases the concrete strength by up to 25% (as less water is required for the concrete mixture to remain workable) increases the workability (assuming the amount of free water remains constant) reduces the water permeability (due to less water being used) reduces the diffusivity of aggressive agents in the concrete and so improves the durability of concrete gives a better finish to surfaces (due to all of the above) See also Plasticizer Superplasticizer Cement Admixture Building materials Cement Concrete Concrete admixtures
Water reducer
[ "Physics", "Engineering" ]
230
[ "Structural engineering", "Building engineering", "Materials stubs", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
5,572,410
https://en.wikipedia.org/wiki/Sabantuy
Sabantuy is a Tatar, Idel-Uralian, Bashkir and Kazakh ('Sabantoy') summer festival, that dates back to the Volga Bulgarian epoch. At first Sabantuy was a festival of farmers in rural areas, but it later became a national holiday and now is widely celebrated in the cities. In 2012, Kazan Sabantuy was celebrated on June 23. Nomenclature Tatar-speakers call the holiday Sabantuy (Сабантуй, ), or, more correctly, Saban tuyı (Сабан туе, ) - plural form: Sabantuylar . Other Turkic peoples living along the Volga also celebrate the holiday. Bashkir-speakers call it Habantuy (Һабантуй), Chuvash-speakers — Akatuy (Акатуй). The holiday's name means "plough's feast" in Turkic languages. The synonym "plough's holiday", or Saban bäyräme (Сабан бәйрәме ) also occurs. History Sabantuy traces its origins to the pre-Islamic epoch, when it was celebrated before the sowing season. The presence of Sabantuy was noticed by ibn Fadlan as early as in 921. Traditional songs and other customs of the Sabantuy probably had a religious connotation at that time. Later, with the spread of Islam among Tatars and Bashkirs and Christianity among Chuvashs, it became a secular holiday. In each region, villages took turns to celebrate the holiday. In the beginning of the 20th century Sabantuy gained recognition as the national festival of the Tatars. The Soviet authorities approved of this festival probably due to its humble rural origin. However, they moved Sabantuy to the after-sowing season, thus merging it with the ancient summer festival Cıyın (Cyrillic: Җыен, ). Recently, Moscow announced plans to nominate Sabantuy for the inclusion into the Masterpieces of the Oral and Intangible Heritage of Humanity list in 2007. Traditions The main distinctive elements of Sabantuy include the traditional sporting competitions such as köräş (Tatar wrestling), horse racing, race-in-sack, pillar-climbing, egg-in-spoon-in-mouth-racing, sacks-battle on the crossbar, pot smashing , finding a coin in a qatıq (a beverage made from sour milk), and other contests. Such activities take place on the mäydan, which would usually be located at the edge of a forest. A tradition, called sörän, was held to collect a fare for guests of the festival and prizes for the winners of the contests. Qarğa botqası (Rook's porridge), a ritual porridge, was cooked before the Sabantuy to treat children in the village. Another tradition was praying at the cemetery. In the recent years Sabantuy is also often combined with the folk and pop music festivals, as well as accordion music festivals, named Play, accordion! (Uyna, ğarmun!). Kurash The wrestling Kurash, is the main competition of Sabantuy. Wrestlers use towels and the aim is to knock down the opponent. Usually young boys start the competition. At the end of Sabantuy, the main event of the festival is the final of köräş. The winner becomes the batır, the hero of the Sabantuy. The prize varies from a ram in small villages to a car at big cities' celebrations. Calendar of the festival Sabantuylar do not have a set date. The festivities take place approximately from June 15 to July 1, and usually fall on a Sunday. Initially, Sabantuylar are arranged in villages, followed by Sabantuylar in rural districts, and the final ones taking place in major cities. The last Sabantuy is held in Kazan, the capital of Tatarstan. A similar schedule is applied for Akatuy in Chuvashia and Habantuy in Bashkortostan. In the last few years the Russian government arranged federal Sabantuylar in Moscow. 
Many cities in Europe and Asia that have major Tatar diasporas, such as Moscow, Saint Petersburg, Tallinn, Prague, Istanbul, Kyiv and Tashkent, also hold Sabantuylar. Today Sabantuy can be characterized as an international festival attracting many people of various ethnicities who participate in Sabantuylar, both in Tatarstan, and all over the world. Political traditions Sabantuy is a symbol of Tatarstan. This is why every Russian president visiting the republic takes part in the Sabantuy held in Kazan. During his visit to Kazan in the mid-1990s Boris Yeltsin became the center of attention at a Sabantuy when he took part in a traditional competition in which the participants try to crash a clay pot while being blindfolded. Vladimir Putin took part in a humorous competition during which he tried to dip his face into a jar full of sour milk in order to fish out a coin without using his hands. Notes References 2012 Sabantuy celebrations in Kazan Traditions of Sabantuy, Ogonyok, photos Photos of Sabantuy History of Habantuy Photos of Akatuy Традиции Сабантуя, Огонёк Video on YouTube: Sabantui in Prague Bashkir culture Tatar culture Indigenous peoples days Summer festivals June observances Cultural festivals in Russia Summer holidays (Northern Hemisphere) Summer solstice
Sabantuy
[ "Astronomy" ]
1,134
[ "Time in astronomy", "Summer solstice" ]
5,573,084
https://en.wikipedia.org/wiki/Macrophage%20inflammatory%20protein
Macrophage Inflammatory Proteins (MIP) belong to the family of chemotactic cytokines known as chemokines. In humans, there are two major forms, MIP-1α and MIP-1β, which since 2000 have been named CCL3 and CCL4, respectively. Other names are sometimes encountered in older literature, such as LD78α, AT 464.1 and GOS19-1 for human CCL3, and AT 744, Act-2, LAG-1, HC21 and G-26 for human CCL4. Other macrophage inflammatory proteins include MIP-2, MIP-3 and MIP-5. MIP-1 MIP-1α and MIP-1β are major factors produced by macrophages and monocytes after they are stimulated with bacterial endotoxin or proinflammatory cytokines such as IL-1β. However, they appear to be expressed by all hematopoietic cells and, upon activation, by some tissue cells such as fibroblasts, epithelial cells, vascular smooth muscle cells and platelets. They are crucial for immune responses towards infection and inflammation. CCL3 and CCL4 can bind to extracellular proteoglycans; this is not necessary for their function but can enhance their bioactivity. The biological effect is carried out through ligation of the chemokine receptors CCR1 (ligand CCL3) and CCR5 (ligands CCL3 and CCL4), and the signal is then transferred into the cell; thus these cytokines affect any cell that carries these receptors. The main effect is inflammatory, consisting mainly of chemotaxis and transendothelial migration, but cells can also be activated to release bioactive molecules. These chemokines affect monocytes, T lymphocytes, dendritic cells, NK cells and platelets. They, too, activate human granulocytes (neutrophils, eosinophils and basophils), which can lead to acute neutrophilic inflammation. They also induce the synthesis and release of other pro-inflammatory cytokines such as interleukin 1 (IL-1), IL-6 and TNF-α from fibroblasts and macrophages. The genes for CCL3 and CCL4 are both located on human chromosome 17 and on murine chromosome 11. They are produced by many cells, particularly macrophages, dendritic cells, and lymphocytes. The MIP-1 chemokines are best known for their chemotactic and proinflammatory effects but can also promote homeostasis. Biophysical analyses and mathematical modelling have shown that MIP-1 reversibly forms a polydisperse distribution of rod-shaped polymers in solution. Polymerization buries the receptor-binding sites of MIP-1, so depolymerizing mutations enhance the ability of MIP-1 to arrest monocytes on activated human endothelium. MIP-1γ is another macrophage inflammatory protein and according to the new nomenclature is named CCL9. It is produced mainly by follicle-associated epithelial cells and is responsible for the chemotaxis of dendritic cells and macrophages into Peyer's patches in the gut through binding of CCR1. MIP-1δ or MIP-5 (CCL15) also binds CCR1 and CCR3. MIP-2 MIP-2 belongs to the CXC chemokine family, is named CXCL2 and acts through binding of CXCR1 and CXCR2. It is produced mainly by macrophages, monocytes and epithelial cells and is responsible for the chemotaxis of neutrophils to the source of inflammation and for their activation. MIP-3 There are two chemokines in the MIP-3 group: MIP-3α (CCL20) and MIP-3β (CCL19). MIP-3α binds to the receptor CCR6. CCL20 is produced in mucosa and skin by activated epithelial cells and attracts Th17 cells to the site of inflammation; it is also produced by Th17 cells themselves. It further attracts activated B cells, memory T cells and immature dendritic cells, and plays a part in the migration of these cells into secondary lymphoid organs. 
Mature dendritic cells down-regulate CCR6 and up-regulate CCR7, which is the receptor for MIP-3β. MIP-3β (CCL19) is produced by stromal cells in the T-cell zones of secondary lymphoid organs and binds to the CCR7 receptor, through which it attracts mature dendritic cells to lymph nodes. It is also produced by dendritic cells and likewise attracts naive T lymphocytes and B lymphocytes, which home to the lymph node, where antigens can be presented to them by dendritic cells. MIP-5 MIP-5 (sometimes called MIP-1δ) or CCL15 binds to the receptors CCR1 and CCR3. It has chemotactic properties for monocytes and eosinophils and is expressed by macrophages, basophils and some tissue cells. It has been proposed to have a role in the pathology of asthma. See also Chemokine References External links Cytokines
Macrophage inflammatory protein
[ "Chemistry" ]
1,146
[ "Cytokines", "Signal transduction" ]
5,573,115
https://en.wikipedia.org/wiki/Siltation
Siltation is water pollution caused by particulate terrestrial clastic material, with a particle size dominated by silt or clay. It refers both to the increased concentration of suspended sediments and to the increased accumulation (temporary or permanent) of fine sediments on bottoms where they are undesirable. Siltation is most often caused by soil erosion or sediment spill. It is sometimes referred to by the ambiguous term "sediment pollution", which can also refer to a chemical contamination of sediments accumulated on the bottom, or to pollutants bound to sediment particles. Although the term "siltation" is not perfectly precise, since it also covers particle sizes other than silt, it is preferred for its lack of ambiguity. Causes The origin of the increased sediment transport into an area may be erosion on land or activities in the water. In rural areas, the erosion source is typically soil degradation by intensive or inadequate agricultural practices, leading to soil erosion, especially in fine-grained soils such as loess. The result will be an increased amount of silt and clay in the water bodies that drain the area. In urban areas, the erosion source is typically construction activities, which involve clearing the original land-covering vegetation and temporarily creating something akin to an urban desert, from which fines are easily washed out during rainstorms. In water, the main pollution source is sediment spill from dredging, from the transportation of dredged material on barges, and from the deposition of dredged material in or near water. Such deposition may be made to get rid of unwanted material, such as the offshore dumping of material dredged from harbours and navigation channels. The deposition may also serve to build up the coastline, to create artificial islands, or to replenish beaches. Climate change also affects siltation rates. Another important cause of siltation is the septage and other sewage sludge discharged into bodies of water from households or business establishments that lack septic tanks or wastewater treatment facilities. Vulnerabilities While the sediment in transport is in suspension, it acts as a pollutant for users that require clean water, such as cooling or other industrial processes, and for aquatic life that is sensitive to suspended material in the water. While nekton have been found to avoid spill plumes in the water (e.g. in the environmental monitoring project during the building of the Øresund Bridge), filtering benthic organisms have no way to escape. Among the most sensitive organisms are coral polyps. Generally speaking, hard-bottom communities and mussel banks (including oysters) are more sensitive to siltation than sand and mud bottoms. Unlike in the sea, in a stream the plume will cover the entire channel, except possibly for backwaters, and so fish will also be directly affected in most cases. Siltation can also affect navigation channels or irrigation channels, where it refers to the undesired accumulation of sediments in channels intended for vessels or for distributing water. Measurement and monitoring One may distinguish between measurements at the source, during transport, and within the affected area. Source measurements of erosion may be very difficult, since the lost material may amount to a fraction of a millimeter per year. Therefore, the approach taken is typically to measure the sediment in transport in the stream, by measuring the sediment concentration and multiplying it by the discharge; for example, a concentration in kg/m³ multiplied by a discharge in m³/s gives a sediment flux in kg/s. 
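The flux estimate just described (concentration times discharge, with background subtracted in the case of a spill plume) is simple arithmetic, sketched below in Python. The linear turbidity-to-concentration calibration and all numbers here are made up purely for illustration; a real calibration must come from the site-specific regression on filtered water samples described next.

```python
def concentration_from_turbidity(ntu, slope=1.8, intercept=0.0):
    """Hypothetical linear calibration: suspended sediment in mg/L from turbidity in NTU.

    The slope and intercept are placeholders; in practice they are fitted by
    regression against filtered, dried and weighed water samples."""
    return slope * ntu + intercept

def sediment_flux_kg_per_s(plume_ntu, background_ntu, discharge_m3_per_s):
    """Excess sediment flux: (plume - background) concentration times discharge."""
    excess_mg_per_l = (concentration_from_turbidity(plume_ntu)
                       - concentration_from_turbidity(background_ntu))
    excess_kg_per_m3 = excess_mg_per_l / 1000.0  # 1 mg/L = 1 g/m^3 = 0.001 kg/m^3
    return excess_kg_per_m3 * discharge_m3_per_s

# Made-up example: 60 NTU in the plume over a 12 NTU background, at 250 m^3/s.
print(sediment_flux_kg_per_s(60, 12, 250))  # 21.6 kg/s of spill-attributable sediment
```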
Also, sediment spill is better measured in transport than at the source. The sediment transport in open water is estimated by measuring the turbidity, correlating turbidity to sediment concentration (using a regression developed from water samples that are filtered, dried, and weighed), multiplying the concentration with the discharge as above, and integrating over the entire plume. To distinguish the spill contribution, the background turbidity is subtracted from the spill plume turbidity. Since the spill plume in open water varies in space and time, an integration over the entire plume is required, and repeated many times to get acceptably low uncertainty in the results. The measurements are made close to the source, in the order of a few hundred meters. Anything beyond a work area buffer zone for sediment spill is considered the potential impact area. In the open sea, the impact of concern is almost exclusively with the sessile bottom communities since empirical data show that fish effectively avoid the impacted area. The siltation affects the bottom community in two main ways. The suspended sediment may interfere with the food gathering of filtering organisms, and the sediment accumulation on the bottom may bury organisms to the point that they starve or even die. It is only if the concentration is extreme that it decreases the light level sufficiently for impacting primary productivity. An accumulation of as little as may kill coral polyps. While the effect of the siltation on the biota (once the harm is already done) can be studied by repeated inspection of selected test plots, the magnitude of the siltation process in the impact area may be measured directly by monitoring in real time. Parameters to measure are sediment accumulation, turbidity at the level of the filtering biota, and optionally incident light. Siltation of the magnitude that it affects shipping can also be monitored by repeated bathymetric surveys. Mitigation In rural areas, the first line of defense is to maintain land cover and prevent soil erosion in the first place. The second line of defense is to trap the material before it reaches the stream network (known as sediment control). In urban areas, the defenses are to keep land uncovered for as short a time as possible during construction and to use silt screens to prevent the sediment from getting released in water bodies. During dredging, the spill can be minimized but not eliminated completely by the way the dredger is designed and operated. If the material is deposited on land, efficient sedimentation basins can be constructed. If it is dumped into relatively deep water, there will be a significant spill during dumping but not thereafter, and the spill that arises has minimal impact if there are only fine-sediment bottoms nearby. One of the most difficult conflicts of interest to resolve, as regards siltation mitigation, is perhaps beach nourishment. When sediments are placed on or near beaches in order to replenish an eroding beach, any fines in the material will continue to be washed out for as long as the sand is being reworked. Since all replenished beaches are eroding or they would not need replenishment, they will contribute to nearshore siltation almost for as long as it takes to erode away what was added, albeit with somewhat decreasing intensity over time. Since the leakage is detrimental to coral reefs, the practice leads to a direct conflict between the public interest of saving beaches, and preserving any nearshore coral reefs. 
To minimize the conflict, beach replenishment should not be done with sand containing any silt or clay fractions. In practice the sand is often taken from offshore areas, and since the proportion of fines in sediments typically increases in the offshore direction, the deposited sand will inevitably contain a significant percentage of siltation-contributing fines. It is desirable to minimize the siltation of irrigation channels by hydrologic design, the objective being not to create zones with falling sediment transport capacity, as that is conducive to sedimentation. Once sedimentation has occurred, in irrigation or navigation channels, dredging is often the only remedy. References Earth sciences Sediments Water pollution
Siltation
[ "Chemistry", "Environmental_science" ]
1,521
[ "Water pollution" ]
5,573,710
https://en.wikipedia.org/wiki/Witt%20vector
In mathematics, a Witt vector is an infinite sequence of elements of a commutative ring. Ernst Witt showed how to put a ring structure on the set of Witt vectors, in such a way that the ring of Witt vectors over the finite field of prime order p is isomorphic to , the ring of p-adic integers. They have a highly non-intuitive structure upon first glance because their additive and multiplicative structure depends on an infinite set of recursive formulas which do not behave like addition and multiplication formulas for standard p-adic integers. The main idea behind Witt vectors is that instead of using the standard p-adic expansionto represent an element in , an expansion using the Teichmüller character can be considered instead;,which sends each element in the solution set of in to an element in the solution set of in . That is, the elements in can be expanded out in terms of roots of unity instead of as profinite elements in . A p-adic integer can then be expressed as an infinite sum,which gives a Witt vector.Then, the non-trivial additive and multiplicative structure in Witt vectors comes from using this map to give an additive and multiplicative structure such that induces a commutative ring homomorphism. History In the 19th century, Ernst Eduard Kummer studied cyclic extensions of fields as part of his work on Fermat's Last Theorem. This led to the subject known as Kummer theory. Let be a field containing a primitive -th root of unity. Kummer theory classifies degree cyclic field extensions of . Such fields are in bijection with order cyclic groups , where corresponds to . But suppose that has characteristic . The problem of studying degree extensions of , or more generally degree extensions, may appear superficially similar to Kummer theory. However, in this situation, cannot contain a primitive -th root of unity. If is a -th root of unity in , then it satisfies . But consider the expression . By expanding using binomial coefficients, the operation of raising to the -th power, known here as the Frobenius homomorphism, introduces the factor to every coefficient except the first and the last, and so modulo these equations are the same. Therefore . Consequently, Kummer theory is never applicable to extensions whose degree is divisible by the characteristic. The case where the characteristic divides the degree is today called Artin–Schreier theory because the first progress was made by Artin and Schreier. Their initial motivation was the Artin–Schreier theorem, which characterizes the real closed fields as those whose absolute Galois group has order two. This inspired them to ask what other fields had finite absolute Galois groups. In the midst of proving that no other such fields exist, they proved that degree extensions of a field of characteristic were the same as splitting fields of Artin–Schreier polynomials. These are by definition of the form By repeating their construction, they described degree extensions. Abraham Adrian Albert used this idea to describe degree extensions. Each repetition entailed complicated algebraic conditions to ensure that the field extension was normal. Schmid generalized further to non-commutative cyclic algebras of degree . In the process of doing so, certain polynomials related to the addition of -adic integers appeared. Witt seized on these polynomials. By using them systematically, he was able to give simple and unified constructions of degree field extensions and cyclic algebras. 
Specifically, he introduced a ring now called , the ring of -truncated -typical Witt vectors. This ring has as a quotient, and it comes with an operator which is called the Frobenius operator since it reduces to the Frobenius operator on . Witt observed that the degree analog of Artin–Schreier polynomials is , where . To complete the analogy with Kummer theory, define to be the operator Then the degree extensions of are in bijective correspondence with cyclic subgroups of order , where corresponds to the field . Motivation Any -adic integer (an element of , not to be confused with ) can be written as a power series , where the are usually taken from the integer interval . It can be difficult to provide an algebraic expression for addition and multiplication using this representation, as one faces the problem of carrying between digits. However, taking representative coefficients is only one of many choices, and Hensel himself (the creator of -adic numbers) suggested the roots of unity in the field as representatives. These representatives are therefore the number together with the roots of unity; that is, the solutions of in , so that . This choice extends naturally to ring extensions of in which the residue field is enlarged to with , some power of . Indeed, it is these fields (the fields of fractions of the rings) that motivated Hensel's choice. Now the representatives are the solutions in the field to . Call the field , with an appropriate primitive root of unity (over ). The representatives are then and for . Since these representatives form a multiplicative set they can be thought of as characters. Some thirty years after Hensel's works, Teichmüller studied these characters, which now bear his name, and this led him to a characterisation of the structure of the whole field in terms of the residue field. These Teichmüller representatives can be identified with the elements of the finite field of order by taking residues modulo in , and elements of are taken to their representatives by the Teichmüller character . This operation identifies the set of integers in with infinite sequences of elements of . Taking those representatives, the expressions for addition and multiplication can be written in closed form. The following problem (stated for the simplest case: ): given two infinite sequences of elements of , describe their sum and product as p-adic integers explicitly. This problem was solved by Witt using Witt vectors. Detailed motivational sketch the ring of -adic integers is derived from the finite field using a construction which naturally generalizes to the Witt vector construction. The ring of p-adic integers can be understood as the inverse limit of the rings taken along the projections. Specifically, it consists of the sequences with , such that for . That is, each successive element of the sequence is equal to the previous elements modulo a lower power of p; this is the inverse limit of the projections . The elements of can be expanded as (formal) power series in , where the coefficients are taken from the integer interval . This power series usually will not converge in using the standard metric on the reals, but it will converge in with the p-adic metric. Letting be denoted by , the following definition can be considered for addition: and a similar definition for multiplication can be made. However, this is not a closed formula, since the new coefficients are not in the allowed set . 
Representing elements in Fp as elements in the ring of Witt vectors W(Fp) There is a coefficient subset of Zp which does yield closed formulas, the Teichmüller representatives: zero together with the (p − 1)-st roots of unity. They can be explicitly calculated (in terms of the original coefficient representatives in [0, p − 1]) as roots of x^(p−1) − 1 = 0 through Hensel lifting, the p-adic version of Newton's method. For example, in Z5, to calculate the representative of 2, one starts by finding the unique solution of x^5 = x in Z/25 with x ≡ 2 mod 5; one gets 7. Repeating this in Z/125, with the conditions x^5 = x and x ≡ 7 mod 25, gives 57, and so on; the resulting Teichmüller representative of 2, denoted ω(2), is the sequence (2, 7, 57, 182, …). The existence of a lift in each step is guaranteed by the fact that x^p − x and its derivative px^(p−1) − 1 are coprime in every Z/p^k. This algorithm shows that for every j ∈ {0, 1, …, p − 1}, there is exactly one Teichmüller representative with a_0 = j, which is denoted ω(j). This defines the Teichmüller character ω : Fp^* → Zp^* as a (multiplicative) group homomorphism, which moreover satisfies m(ω(j)) = j if one lets m : Zp → Fp denote the canonical projection. Note however that ω is not additive, as the sum ω(i) + ω(j) need not be a representative. Despite this, if ω(i) + ω(j) ≡ ω(k) mod p in Zp, then i + j = k in Fp. Representing elements in Zp as elements in the ring of Witt vectors W(Fp) Because of this one-to-one correspondence given by ω, one can expand every p-adic integer as a power series in p with coefficients taken from the Teichmüller representatives. An explicit algorithm can be given, as follows. For an arbitrary p-adic integer x, write the Teichmüller representative of its first digit t_0 (the residue of x modulo p) as ω(t_0). Then, one takes the difference x − ω(t_0), leaving a value divisible by p. Hence, x − ω(t_0) = x_1 p. The process is then repeated on x_1, subtracting ω(t_1) where t_1 is the residue of x_1 modulo p, and one proceeds likewise. This yields a sequence of congruences x ≡ ω(t_0) + ω(t_1)p + ⋯ + ω(t_i)p^i mod p^(i+1), so that x = Σ_{i≥0} ω(t_i)p^i, and x ≡ x′ mod p^(i+1) implies t_j = t′_j for j ≤ i. This obtains a power series for each residue of x modulo powers of p, but with coefficients in the Teichmüller representatives rather than in [0, p − 1]. The series converges, since for all i the partial sum agrees with x modulo p^(i+1), so the difference tends to 0 with respect to the p-adic metric. The resulting coefficients will typically differ from the digits a_i, except the first one. Additional properties of elements in the ring of Witt vectors motivating general definition The Teichmüller coefficients have the key additional property that ω(j)^p = ω(j), which is missing for the numbers in [0, p − 1]. This can be used to describe addition, as follows. Consider the equation c = a + b in Zp and let the coefficients a_i, b_i and c_i now be as in the Teichmüller expansion. Since the Teichmüller character is not additive, c_0 = a_0 + b_0 is not true in Zp, but it holds in Fp, as the first congruence implies. In particular, c_0 ≡ a_0 + b_0 mod p, and thus c_0 = c_0^p ≡ (a_0 + b_0)^p mod p^2. Since each intermediate binomial coefficient C(p, i) is divisible by p, a congruence modulo p may be raised to the p-th power in this way to give a congruence modulo p^2; this completely determines c_0 by the lift. Moreover, the congruence modulo p^2 indicates that the calculation can actually be done in Z/p^2, satisfying the basic aim of defining a simple additive structure. For c_1 this step can be cumbersome. Write c_1 p = c − c_0 = a + b − c_0. Just as for c_0, a single p-th power is not enough: one must take p^2-th powers. However, the binomial coefficient C(p^2, i) is not in general divisible by p^2, but it is divisible when p does not divide i, in which case the remaining terms, combined with similar monomials in the a_i and b_i, will make a multiple of p^2. At this step, one works with addition of expressions of the form x_0^(p^2) + x_1^p p + x_2 p^2. This motivates the definition of Witt vectors. Construction of Witt rings Fix a prime number p. A Witt vector over a commutative ring R (relative to the prime p) is a sequence (X_0, X_1, X_2, …) of elements of R. The Witt polynomials W_i can be defined by W_0 = X_0, W_1 = X_0^p + pX_1, W_2 = X_0^(p^2) + pX_1^p + p^2 X_2, and in general W_n = Σ_{i=0}^{n} p^i X_i^(p^(n−i)). The W_n are called the ghost components of the Witt vector (X_0, X_1, …), and are usually denoted by X^(n); taken together, the X^(n) define the ghost map to R^N, the ring of sequences of elements of R.
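Since the ghost components are plain polynomial expressions, they are easy to evaluate. The following Python sketch (illustrative only; the function and variable names are ad hoc, and the sum polynomial it checks anticipates the formula quoted in the next section) computes the first ghost components for p = 5 and shows that componentwise addition of Witt vectors fails to add ghost components, whereas the Witt sum succeeds.

# Illustrative sketch: evaluating the ghost components W_n defined above.
def ghost_components(xs, p):
    """W_n = sum_{i=0}^{n} p**i * xs[i]**(p**(n - i)), for n = 0..len(xs)-1."""
    return [sum(p**i * xs[i]**(p**(n - i)) for i in range(n + 1))
            for n in range(len(xs))]

p = 5
a, b = [2, 1], [3, 4]
ga, gb = ghost_components(a, p), ghost_components(b, p)
print([x + y for x, y in zip(ga, gb)])        # [5, 300]: the target ghost values
print(ghost_components([2 + 3, 1 + 4], p))    # [5, 3150]: componentwise sum misses
# The Witt sum's second component is 1 + 4 + (2**5 + 3**5 - 5**5) // 5 = -565:
print(ghost_components([5, -565], p))         # [5, 300]: the ghost map now adds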
If R is p-torsionfree, then the ghost map is injective and the ghost components can be thought of as an alternative coordinate system for the R-module of sequences (though note that the ghost map is not surjective unless R is p-divisible). The ring of (p-typical) Witt vectors W(R) is defined by componentwise addition and multiplication of the ghost components. That is, there is a unique way to make the set of Witt vectors over any commutative ring R into a ring such that: the sum and product are given by polynomials with integer coefficients that do not depend on R, and projection to each ghost component is a ring homomorphism from the Witt vectors over R to R. In other words, (X + Y)_i and (XY)_i are given by polynomials with integer coefficients that do not depend on R, and X^(i) + Y^(i) = (X + Y)^(i) and X^(i) · Y^(i) = (XY)^(i). The first few polynomials giving the sum and product of Witt vectors can be written down explicitly. For example, (X_0, X_1, …) + (Y_0, Y_1, …) = (X_0 + Y_0, X_1 + Y_1 + (X_0^p + Y_0^p − (X_0 + Y_0)^p)/p, …) and (X_0, X_1, …) · (Y_0, Y_1, …) = (X_0 Y_0, X_0^p Y_1 + X_1 Y_0^p + pX_1 Y_1, …). These are to be understood as shortcuts for the actual formulas: if for example the ring R has characteristic p, the division by p in the first formula above, the one by p^2 that would appear in the next component and so forth, do not make sense. However, if the p-th power of the sum is developed, the terms X_0^p and Y_0^p are cancelled with the previous ones and the remaining ones are simplified by p, no division by p remains and the formula makes sense. The same consideration applies to the ensuing components. Examples of addition and multiplication As would be expected, the identity element in the ring of Witt vectors W(R) is the element 1 = (1, 0, 0, …). Adding this element to itself gives a non-trivial sequence, for example in W(F5), (1, 0, 0, …) + (1, 0, 0, …) = (2, −6, …), since the second component is (1^5 + 1^5 − 2^5)/5 = −6, which is not the expected behavior, since it doesn't equal (2, 0, 0, …). But, when the result is reduced modulo 5, one gets (2, 4, …). Note if there is an element a = (a_0, 0, 0, …) and an element b = (0, b_1, 0, …), then a · b = (0, a_0^p b_1, 0, …), showing that multiplication also behaves in a highly non-trivial manner. Examples The Witt ring of any commutative ring R in which p is invertible is isomorphic to R^N (the product of a countable number of copies of R). The Witt polynomials always give a homomorphism from the ring of Witt vectors to R^N, and if p is invertible this homomorphism is an isomorphism. The Witt ring of the finite field of order p is the ring of p-adic integers Zp written in terms of the Teichmüller representatives, as demonstrated above. The Witt ring of a finite field of order p^n is the ring of integers of the unique unramified extension of degree n of the ring of p-adic numbers. Note that this extension is Qp(η) for η a primitive (p^n − 1)-st root of unity, hence W(F_(p^n)) = Zp[η]. The truncated Witt ring W_n(R) can be described as the set of n-tuples (X_0, …, X_(n−1)) with the ring structure given by the first n Witt polynomials. The Witt vectors are the inverse limit W(R) = lim W_n(R) along the canonical projections W_(n+1)(R) → W_n(R). Here the transition homomorphisms are induced by reduction, forgetting the last component: (X_0, …, X_n) ↦ (X_0, …, X_(n−1)). Universal Witt vectors The Witt polynomials for different primes p are special cases of universal Witt polynomials, which can be used to form a universal Witt ring (not depending on a choice of prime p). Define the universal Witt polynomials W_n for n ≥ 1 by W_1 = X_1, W_2 = X_1^2 + 2X_2, W_3 = X_1^3 + 3X_3, W_4 = X_1^4 + 2X_2^2 + 4X_4, and in general W_n = Σ_(d|n) d X_d^(n/d). Again, (W_1, W_2, W_3, …) is called the vector of ghost components of the Witt vector (X_1, X_2, X_3, …), and is usually denoted by (X^(1), X^(2), X^(3), …). These polynomials can be used to define the ring of universal Witt vectors or big Witt ring of any commutative ring R in much the same way as above (so the universal Witt polynomials are all homomorphisms to the ring R). Generating functions Witt also provided another approach using generating functions. Definition Let X = (X_1, X_2, …) be a Witt vector and define f_X(t) = ∏_(n≥1) (1 − X_n t^n) = Σ_(n≥0) A_n t^n. For n ≥ 1 let P_n denote the collection of subsets of {1, 2, …, n} whose elements add up to n. Then A_n = Σ_(S ∈ P_n) (−1)^|S| ∏_(i∈S) X_i. One can get the ghost components by taking the logarithmic derivative: −t (d/dt) log f_X(t) = Σ_(n≥1) X^(n) t^n. Sum Now one can see that f_(X+Y)(t) = f_X(t) f_Y(t) if the ghost components of X + Y are to be the sums of those of X and Y. So that C_n = Σ_(i=0)^(n) A_i B_(n−i), if A_n, B_n and C_n are the respective coefficients in the power series f_X(t), f_Y(t) and f_(X+Y)(t).
Then, equating coefficients, Σ_(S ∈ P_n) (−1)^|S| ∏_(i∈S) (X + Y)_i = C_n. Since A_n is a polynomial in X_1, …, X_n, and likewise for B_n, one can show by induction that (X + Y)_n is a polynomial in X_1, …, X_n, Y_1, …, Y_n. Product If W = XY is set, then f_W(t) ≠ f_X(t) f_Y(t), but the ghost components are required to satisfy W^(n) = X^(n) Y^(n). Now 3-tuples (n, d, e) with d | n and e | n are in bijection with 3-tuples (m, d, e) with m ≥ 1, via n = m · lcm(d, e) (lcm is the least common multiple), and the series becomes f_W(t) = ∏_(d,e ≥ 1) (1 − X_d^(lcm(d,e)/d) Y_e^(lcm(d,e)/e) t^(lcm(d,e)))^(de/lcm(d,e)) = Σ_(n≥0) D_n t^n, so that the D_n are polynomials of the X_i and Y_i. So by similar induction, W_n can be solved as polynomials of X_1, …, X_n, Y_1, …, Y_n. Ring schemes The map taking a commutative ring R to the ring of Witt vectors over R (for a fixed prime p) is a functor from commutative rings to commutative rings, and is also representable, so it can be thought of as a ring scheme, called the Witt scheme, over Spec(Z). The Witt scheme can be canonically identified with the spectrum of the ring of symmetric functions. Similarly, the rings of truncated Witt vectors, and the rings of universal Witt vectors, correspond to ring schemes, called the truncated Witt schemes and the universal Witt scheme. Moreover, the functor taking the commutative ring R to the set R^n is represented by the affine space A^n, and the ring structure on R^n makes A^n into a ring scheme denoted O^n. From the construction of truncated Witt vectors, it follows that their associated ring scheme is the scheme A^n with the unique ring structure such that the morphism A^n → O^n given by the Witt polynomials is a morphism of ring schemes. Commutative unipotent algebraic groups Over an algebraically closed field of characteristic 0, any unipotent abelian connected algebraic group is isomorphic to a product of copies of the additive group G_a. The analogue of this for fields of characteristic p is false: the truncated Witt schemes are counterexamples. (They are made into algebraic groups by using the additive structure instead of multiplication.) However, these are essentially the only counterexamples: over an algebraically closed field of characteristic p, any unipotent abelian connected algebraic group is isogenous to a product of truncated Witt group schemes. Universal property André Joyal explicated the universal property of the (p-typical) Witt vectors. The basic intuition is that the formation of Witt vectors is the universal way to deform a characteristic p ring to characteristic 0 together with a lift of its Frobenius endomorphism. To make this precise, define a δ-ring to consist of a commutative ring R together with a map of sets δ : R → R that is a p-derivation, so that δ satisfies the relations δ(1) = 0; δ(x + y) = δ(x) + δ(y) + (x^p + y^p − (x + y)^p)/p; δ(xy) = x^p δ(y) + y^p δ(x) + p δ(x)δ(y). The definition is such that given a δ-ring (R, δ), if one defines the map φ : R → R by the formula φ(x) = x^p + p δ(x), then φ is a ring homomorphism lifting Frobenius on R/p. Conversely, if R is p-torsionfree, then this formula uniquely defines the structure of a δ-ring on R from that of a Frobenius lift. One may thus regard the notion of δ-ring as a suitable replacement for a Frobenius lift in the non-p-torsionfree case. The collection of δ-rings and ring homomorphisms thereof respecting the δ-structure assembles to a category. One then has a forgetful functor to commutative rings whose right adjoint identifies with the functor W of Witt vectors. The forgetful functor creates limits and colimits and admits an explicitly describable left adjoint as a type of free functor; from this, it can be shown that the category of δ-rings inherits local presentability from that of commutative rings, so that one can construct the functor W by appealing to the adjoint functor theorem. One further has that W restricts to a fully faithful functor on the full subcategory of perfect rings of characteristic p. Its image then consists of those δ-rings that are perfect (in the sense that the associated map φ is an isomorphism) and whose underlying ring is p-adically complete.
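In the simplest case R = Z, the map δ(x) = (x − x^p)/p is a p-derivation (it is integer-valued by Fermat's little theorem) whose associated Frobenius lift φ is the identity. The following Python sketch, a minimal numerical check rather than a general implementation, verifies the three relations above on a sample of integers.

# Illustrative sketch: checking the delta-ring axioms on Z with p = 5.
from itertools import product

p = 5

def delta(x):
    # Integer-valued because x**p ≡ x (mod p) by Fermat's little theorem.
    q, r = divmod(x - x**p, p)
    assert r == 0
    return q

def phi(x):
    return x**p + p * delta(x)  # the induced Frobenius lift; here the identity

for x, y in product(range(-3, 4), repeat=2):
    # Sum rule: delta(x+y) = delta(x) + delta(y) + (x^p + y^p - (x+y)^p)/p
    assert delta(x + y) == delta(x) + delta(y) + (x**p + y**p - (x + y)**p) // p
    # Product rule: delta(xy) = x^p delta(y) + y^p delta(x) + p delta(x) delta(y)
    assert delta(x * y) == x**p * delta(y) + y**p * delta(x) + p * delta(x) * delta(y)
    assert phi(x) == x  # phi(x) ≡ x^p (mod p), trivially a Frobenius lift here
assert delta(1) == 0
print("delta-ring axioms hold on the sampled integers")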
See also p-derivation Formal group Artin–Hasse exponential Necklace ring References Introductory Notes on Witt vectors: a motivated approach - Basic notes giving the main ideas and intuition. Best to start here! The Theory of Witt Vectors - Elementary introduction to the theory. Complexe de de Rham-Witt et cohomologie cristalline - Note he uses a different but equivalent convention as in this article. Also, the main points in the introduction are still valid. Applications , section II.6 References Ring theory Algebraic groups Combinatorics on words
Witt vector
[ "Mathematics" ]
3,781
[ "Fields of abstract algebra", "Ring theory", "Combinatorics on words", "Combinatorics" ]
5,574,263
https://en.wikipedia.org/wiki/Carboxypeptidase
A carboxypeptidase (EC number 3.4.16 - 3.4.18) is a protease enzyme that hydrolyzes (cleaves) a peptide bond at the carboxy-terminal (C-terminal) end of a protein or peptide. This is in contrast to aminopeptidases, which cleave peptide bonds at the N-terminus of proteins. Humans, animals, bacteria and plants contain several types of carboxypeptidases that have diverse functions ranging from catabolism to protein maturation. At least two mechanisms have been discussed. Functions Initial studies on carboxypeptidases focused on pancreatic carboxypeptidases A1, A2, and B in the digestion of food. Most carboxypeptidases are not, however, involved in catabolism. Instead they help to mature proteins, for example through post-translational modification. They also regulate biological processes; for example, the biosynthesis of neuroendocrine peptides such as insulin requires a carboxypeptidase. Carboxypeptidases also function in blood clotting, growth factor production, wound healing, reproduction, and many other processes. Mechanism Carboxypeptidases hydrolyze peptides at the first amide or polypeptide bond on the C-terminal end of the chain. Carboxypeptidases act through the attack of a water molecule at the carbonyl (C=O) group of the scissile bond. The carboxypeptidase A hydrolysis reaction has two mechanistic hypotheses: via a nucleophilic water and via an anhydride. In the first proposed mechanism, a promoted-water pathway is favoured as Glu270 deprotonates the nucleophilic water. The Zn2+ ion, along with positively charged residues, decreases the pKa of the bound water to approximately 7. Glu270 has a dual role in this mechanism, as it acts as a base to allow for the attack at the amide carbonyl group during nucleophilic addition. It acts as an acid during elimination, when the water proton is transferred to the leaving nitrogen group. The oxygen on the amide carbonyl group does not coordinate to the Zn2+ until the addition of the water. The deprotonation of the Zn2+-coordinated water by Glu270 provides an activated hydroxide nucleophile which attacks the amide carbonyl group in the peptide bond in a nucleophilic addition. The negatively charged intermediates that are formed during hydrolysis are stabilized by the Zn2+ ion. The interaction between the carbonyl group and the neighbouring arginine, Arg217, also stabilizes the negatively charged intermediates. The zinc-bound hydroxide interacts with the amide, with the electrostatic stabilization of the transition state provided by the Zn2+ ion and the neighbouring arginine. The second proposed mechanism, via an anhydride, has similar steps, but there is a direct attack of Glu270 on the carbonyl group; the interaction of Glu270 with the Zn2+-bound amide instead forms an anhydride, which can subsequently be hydrolyzed by water. Classifications By active site mechanism Carboxypeptidases are usually classified into one of several families based on their active site mechanism. Enzymes that use a metal in the active site are called "metallo-carboxypeptidases" (EC number 3.4.17). Other carboxypeptidases that use active site serine residues are called "serine carboxypeptidases" (EC number 3.4.16). Those that use an active site cysteine are called "cysteine carboxypeptidases" (or "thiol carboxypeptidases") (EC number 3.4.18). These names do not refer to the selectivity of the amino acid that is cleaved. By substrate preference Another classification system for carboxypeptidases refers to their substrate preference.
In this classification system, carboxypeptidases that have a stronger preference for those amino acids containing aromatic or branched hydrocarbon chains are called carboxypeptidase A (A for aromatic/aliphatic). Carboxypeptidases that cleave positively charged amino acids (arginine, lysine) are called carboxypeptidase B (B for basic). A metallo-carboxypeptidase that cleaves a C-terminal glutamate from the peptide N-acetyl-L-aspartyl-L-glutamate is called "glutamate carboxypeptidase". A serine carboxypeptidase that cleaves the C-terminal residue from peptides containing the sequence -Pro-Xaa (Pro is proline, Xaa is any amino acid on the C-terminus of a peptide) is called "prolyl carboxypeptidase". Activation Some, but not all, carboxypeptidases are initially produced in an inactive form; this precursor form is referred to as a procarboxypeptidase. In the case of pancreatic carboxypeptidase A, the inactive zymogen form - pro-carboxypeptidase A - is converted to its active form - carboxypeptidase A - by the enzyme trypsin. This mechanism ensures that the cells wherein pro-carboxypeptidase A is produced are not themselves digested. See also Carboxypeptidase E Carboxypeptidase A Enzyme category EC number 3.4 Thrombin-activatable fibrinolysis inhibitor aka plasma carboxypeptidase B2 Bacterial transpeptidase, an alanine carboxypeptidase Bradykinin is broken down among other enzymes by carboxypeptidase N DD-Ala carboxypeptidase is a penicillin-binding protein Phenylalanine might inhibit carboxypeptidase A Martha L. Ludwig References Further reading External links Proteins Enzymes Metabolism
Carboxypeptidase
[ "Chemistry", "Biology" ]
1,313
[ "Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Metabolism" ]
5,574,320
https://en.wikipedia.org/wiki/Respect%20agenda
The Respect agenda was launched in September 2005 by Tony Blair, then Prime Minister of the United Kingdom. Tony Blair described it as being about "putting the law-abiding majority back in charge of their communities". Its aim was to help central government, local agencies, local communities, and citizens to tackle anti-social behaviour collaboratively and more effectively. In late December 2007, shortly after Gordon Brown succeeded Blair as prime minister, it was reported that the government had effectively ended the Respect programme by closing down the Respect Task Force and moving its head to another job inside the Cabinet Office. However, much of the Respect Agenda was incorporated into a Youth Taskforce Action Plan in the Department for Children, Schools and Families. Respect Task Force The agenda was co-ordinated by the Respect Task Force, a cross-governmental unit based at the Home Office. Louise Casey, former director of the Anti-Social Behaviour Unit, headed the Task Force. Respect Action Plan The key policies of the Task Force were published in the Respect Action Plan in January 2006. The report advised tackling the underlying causes of anti-social behaviour, intervening early where problems occur and broadening efforts to address other areas of poor behaviour. Anti-social behaviour The agenda promoted a range of tools including Anti-Social Behaviour Orders (ASBOs), Parenting Orders, Family Intervention Projects and Dispersal Orders. The Task Force claimed use of a combination of the available tools can be effective when tackling the problem, although ASBOs have encountered some controversy. References External links Respect website Tony Blair 2005 introductions Programmes of the Government of the United Kingdom Anti-social behaviour Criminology Social policy
Respect agenda
[ "Biology" ]
331
[ "Anti-social behaviour", "Behavior", "Human behavior" ]
5,574,420
https://en.wikipedia.org/wiki/GPS%C2%B7C
GPS·C (GPS Correction) was a Differential GPS data source for most of Canada maintained by the Canadian Active Control System, part of Natural Resources Canada. When used with an appropriate receiver, GPS·C improved real-time accuracy to about 1–2 meters, from a nominal 15 m accuracy. Real-time data was collected at fourteen permanent ground stations spread across Canada, and forwarded to the central station, "NRC1", in Ottawa for processing. As of 4 November 2011, the external webpage for the service contained only a note saying that the service had been discontinued on 1 April 2011, together with a PDF link to possible alternatives. CDGPS GPS·C information was broadcast Canada-wide on MSAT by the Canada-Wide DGPS Correction Service (CDGPS). CDGPS required a separate MSAT receiver, which output correction information in the RTCM format for input into any suitably equipped GPS receiver. The need for a separate receiver made it less cost-effective than solutions like WAAS or StarFire, which receive their correction information using the same antenna and receiver. Shutdown On April 9, 2010, it was announced that the service would be discontinued by March 31, 2011. The service was decommissioned on March 31, 2011 and finally terminated on April 1, 2011, at 9:00 EDT. References External links CDGPS (Canada-Wide DGPS Correction Service) GPS·C Distribution Using NTRIP — PDF format Global Positioning System Natural Resources Canada Lists of coordinates Satellite-based augmentation systems
GPS·C
[ "Technology", "Engineering" ]
325
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
5,574,517
https://en.wikipedia.org/wiki/MSAT
MSAT (Mobile Satellite) is a satellite-based mobile telephony service developed by the National Research Council Canada (NRC). Supported by a number of companies in the US and Canada, MSAT hosts a number of services, including the broadcast of CDGPS signals. The MSAT satellites were built by Hughes (now owned by Boeing) with a 3 kilowatt solar array power capacity and sufficient fuel for a design life of twelve years. TMI Communications of Canada referred to its MSAT satellite as MSAT-1, while American Mobile Satellite Consortium (now Ligado Networks) referred to its MSAT as AMSC-1, with each satellite providing backup for the other. History April 7, 1995 - MSAT-2 (a.k.a. AMSC-1, COSPAR 1995-019A, SATCAT 23553) launched from Cape Canaveral, Launch Complex 36, Pad A, aboard Atlas IIA May 1995 - testing causes overheating and damage to one of eight hybrid matrix amplifier output ports aboard MSAT-2 April 20, 1996 - MSAT-1 (sometimes AMSC 2, COSPAR 1996-022A, SATCAT 23846) launched from Kourou, French Guiana aboard Ariane 42P May 15, 1996 - Reported failures of two solid state power amplifiers (SSPAs) and one L-band receiver on separate occasions aboard MSAT-2. May 4, 2003 - MSAT-1 loses two power amplifiers. Phaseout MSAT-1 and MSAT-2 have had their share of problems. Mobile Satellite Ventures placed the AMSC-1 satellite into a 2.5 degree inclined orbit operations mode in November 2004, reducing station-keeping fuel usage and extending the satellite's useful life. On January 11, 2006, Mobile Satellite Ventures (MSVLP) (which later changed its name to SkyTerra, became LightSquared by acquisition, and emerged from bankruptcy as Ligado Networks) announced plans to launch a new generation of satellites (in a 3 satellite configuration) to replace the MSAT satellites by 2010. MSV has said that all old MSAT gear would be compatible with the new satellites. MSV-1 (U.S.) MSV-2 (Canada) MSV-SA (South America) Services Delivered via MSAT The following services are singularly dependent upon the continued operation of the MSAT satellite: CDGPS - a differential correction signal system for improved GPS navigation accuracy Trailer Tracking - by SkyWave Mobile Communications Trailer Tracking - by SkyBitz EMERGNET - by Glentel See also Mobile-satellite service Satellite phone References External links Mobile Satellite Ventures Mobile Satellite System for Canada, U.S. Mobile Satellite Systems Communications satellites Global Positioning System
MSAT
[ "Technology", "Engineering" ]
559
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
5,574,547
https://en.wikipedia.org/wiki/Cathepsin%20C
Cathepsin C (CTSC) also known as dipeptidyl peptidase I (DPP-I) is a lysosomal exo-cysteine protease belonging to the peptidase C1 protein family, a subgroup of the cysteine cathepsins. In humans, it is encoded by the CTSC gene. Function Cathepsin C appears to be a central coordinator for activation of many serine proteases in immune/inflammatory cells. Cathepsin C catalyses excision of dipeptides from the N-terminus of protein and peptide substrates, except if (i) the amino group of the N-terminus is blocked, (ii) the site of cleavage is on either side of a proline residue, (iii) the N-terminal residue is lysine or arginine, or (iv) the structure of the peptide or protein prevents further digestion from the N-terminus. Structure The cDNAs encoding rat, human, murine, bovine, dog and two Schistosome cathepsin Cs have been cloned and sequenced and show that the enzyme is highly conserved. The human and rat cathepsin C cDNAs encode precursors (prepro-cathepsin C) comprising signal peptides of 24 residues, pro-regions of 205 (rat cathepsin C) or 206 (human cathepsin C) residues and catalytic domains of 233 residues which contain the catalytic residues and are 30-40% identical to the mature amino acid sequences of papain and a number of other cathepsins including cathepsins, B, H, K, L, and S. The translated prepro-cathepsin C is processed into the mature form by at least four cleavages of the polypeptide chain. The signal peptide is removed during translocation or secretion of the pro-enzyme (pro-cathepsin C) and a large N-terminal proregion fragment (also known as the exclusion domain), which is retained in the mature enzyme, is separated from the catalytic domain by excision of a minor C-terminal part of the pro-region, called the activation peptide. A heavy chain of about 164 residues and a light chain of about 69 residues are generated by cleavage of the catalytic domain. Unlike the other members of the papain family, mature cathepsin C consists of four subunits, each composed of the N-terminal proregion fragment, the heavy chain and the light chain. Both the pro-region fragment and the heavy chain are glycosylated. Clinical significance Defects in the encoded protein have been shown to be a cause of Papillon-Lefevre disease, an autosomal recessive disorder characterized by palmoplantar keratosis and periodontitis. Cathepsin C functions as a key enzyme in the activation of granule serine peptidases in inflammatory cells, such as elastase and cathepsin G in neutrophils cells and chymase and tryptase in mast cells. In many inflammatory diseases, such as rheumatoid arthritis, chronic obstructive pulmonary disease (COPD), inflammatory bowel disease, asthma, sepsis, and cystic fibrosis, a significant portion of the pathogenesis is caused by increased activity of some of these inflammatory proteases. Once activated by cathepsin C, the proteases are capable of degrading various extracellular matrix components, which can lead to tissue damage and chronic inflammation. References Further reading External links The MEROPS online database for peptidases and their inhibitors: C01.070 Proteins Proteases EC 3.4.14 Cathepsins
Cathepsin C
[ "Chemistry" ]
783
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
5,574,761
https://en.wikipedia.org/wiki/Microsoft%20Interface%20Definition%20Language
Microsoft Interface Definition Language (MIDL) is a text-based interface description language from Microsoft, based on the DCE/RPC IDL, which it extends for use with the Microsoft Component Object Model. Its compiler is also called MIDL. Version History MIDL 1.0 is a standard DCE/RPC IDL with enhancements made for defining COM coclasses and interfaces. MIDL 2.0 (also known as MIDLRT) is an updated version of the syntax, developed in-house by Microsoft for use on the Windows platform, that allows declaring Windows Runtime APIs. Various built-in Windows Runtime APIs are written with MIDL 2.0 syntax and are available in the Windows SDK folder. The most recent version of MIDL is MIDL 3.0; its reference documentation was last updated on December 30, 2021. Version 3.0 is a more streamlined version of MIDL 2.0, using modern, simplified syntax familiar from C, C++, C#, or Java. MIDL 3.0 is also more concise than the previous versions, allowing programs to be reduced by almost two thirds in length thanks to built-in reasonable defaults for attributes. References stevewhims (2021-10-21). "Microsoft Interface Definition Language - Win32 apps". learn.microsoft.com. Retrieved 2024-10-29. stevewhims (2022-07-12). "Introduction to Microsoft Interface Definition Language 3.0 - Windows UWP applications". learn.microsoft.com. Retrieved 2024-10-29. stevewhims (2021-12-30). "Microsoft Interface Definition Language 3.0 reference - Windows UWP applications". learn.microsoft.com. Retrieved 2024-10-29. See also Object Description Language External links Microsoft Docs reference Interface Definition Language Component-based software engineering Microsoft application programming interfaces Object-oriented programming Object models
Microsoft Interface Definition Language
[ "Technology" ]
406
[ "Component-based software engineering", "Components" ]
5,574,880
https://en.wikipedia.org/wiki/Minix%203
Minix 3 is a small, Unix-like operating system. It is published under a BSD-3-Clause license and is a successor project to the earlier versions, Minix 1 and 2. The project's main goal is for the system to be fault-tolerant by detecting and repairing its faults on the fly, with no user intervention. The main uses of the system are envisaged to be embedded systems and education. Minix 3 supports IA-32 and ARM architecture processors. It can also run on emulators or virtual machines, such as Bochs, VMware Workstation, Microsoft Virtual PC, Oracle VirtualBox, and QEMU. A port to the PowerPC architecture is in development. The distribution comes on a live CD and does not support live USB installation. The project has been dormant since late 2018, and the latest release is 3.4.0 rc6 from 2017, although the Minix 3 discussion group is still active. Minix 3 is believed to have inspired the Intel Management Engine (ME) OS found in Intel's Platform Controller Hub, starting with the introduction of ME 11, which is used with Skylake and Kaby Lake processors. It has been argued that, because of its use in the Intel ME, Minix could be the most widely used OS on x86/AMD64 processors, with more installations than Microsoft Windows, Linux, or macOS. Goals of the project Reflecting on the nature of monolithic kernel based systems, where a driver (which has, according to Minix creator Tanenbaum, approximately 3–7 times as many bugs as a usual program) can bring down the whole system, Minix 3 aims to create an operating system that is a "reliable, self-healing, multiserver Unix clone". To achieve that, the code running in the kernel must be minimal, with the file server, process server, and each device driver running as separate user-mode processes. Each driver is carefully monitored by a part of the system named the reincarnation server. If a driver fails to respond to pings from this server, it is shut down and replaced by a fresh copy of the driver. In a monolithic system, a bug in a driver can easily crash the whole kernel. This is far less likely to occur in Minix 3. History Minix 3 was publicly announced on 24 October 2005 by Andrew Tanenbaum during his keynote speech at the Association for Computing Machinery (ACM) Symposium on Operating Systems Principles. Although it still serves as an example for the new edition of Tanenbaum and Woodhull's textbook, it is comprehensively redesigned to be "usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability." It was initially released under the same BSD-3-Clause license under which Minix had been distributed since 2000; in late 2005, the copyright owner was changed and a fourth clause was added. Reliability policies One of the main goals of Minix 3 is reliability. Below, some of the more important principles that enhance its reliability are discussed. Reduce kernel size Monolithic operating systems such as Linux and FreeBSD and hybrids like Windows have millions of lines of kernel code. In contrast, Minix 3 has about 6,000 lines of executable kernel code, which can make problems easier to find in the code. Cage the bugs In monolithic kernels, device drivers reside in the kernel. Thus, when a new peripheral is installed, unknown, untrusted code is inserted in the kernel. One bad line of code in a driver can bring down the system. Instead, in Minix 3, each device driver is a separate user-mode process.
Drivers cannot execute privileged instructions, change the page tables, perform arbitrary input/output (I/O), or write to absolute memory. They must make kernel calls for these services and the kernel checks each call for authority. Limit drivers' memory access In monolithic kernels, a driver can write to any word of memory and thus accidentally corrupt user programs. In Minix 3, when a user expects data from, for example, the file system, it builds a descriptor telling who has access and at what addresses. It then passes an index to this descriptor to the file system, which may pass it to a driver. The file system or driver then asks the kernel to write via the descriptor, making it impossible for them to write to addresses outside the buffer. Survive bad pointers Dereferencing a bad pointer within a driver will crash the driver process, but will have no effect on the system as a whole. The reincarnation server will restart the crashed driver automatically. Users will not notice recovery for some drivers (e.g., disk and network) but for others (e.g., audio and printer), they might. In monolithic kernels, dereferencing a bad pointer in a driver normally leads to a system crash. Tame infinite loops If a driver gets into an infinite loop, the scheduler will gradually lower its priority until it becomes idle. Eventually the reincarnation server will see that it is not responding to status requests, so it will kill and restart the looping driver. In a monolithic kernel, a looping driver could hang the system. Limit damage from buffer overflows Minix 3 uses fixed-length messages for internal communication, which eliminates certain buffer overflows and buffer management problems. Also, many exploits work by overrunning a buffer to trick the program into returning from a function call using an overwritten stack return address pointing into attacker controlled memory, usually the overrun buffer. In Minix 3, this attack is mitigated because instruction and data space are split and only code in (read-only) instruction space can be executed, termed executable space protection. However, attacks which rely on running legitimately executable memory in a malicious way (return-to-libc, return-oriented programming) are not prevented by this mitigation. Restrict access to kernel functions Device drivers obtain kernel services (such as copying data to users' address spaces) by making kernel calls. The Minix 3 kernel has a bit map for each driver specifying which calls it is authorized to make. In monolithic kernels, every driver can call every kernel function, authorized or not. Restrict access to I/O ports The kernel also maintains a table telling which I/O ports each driver may access. Thus, a driver can only touch its own I/O ports. In monolithic kernels, a buggy driver can access I/O ports belonging to another device. Restrict communication with OS components Not every driver and server needs to communicate with every other driver and server. Accordingly, a per-process bit map determines which destinations each process may send to. Reincarnate dead or sick drivers A special process, called the reincarnation server, periodically pings each device driver. If the driver dies or fails to respond correctly to pings, the reincarnation server automatically replaces it with a fresh copy. Detecting and replacing non-functioning drivers is automatic, with no user action needed. 
This feature does not work for disk drivers at present, but in the next release the system will be able to recover even disk drivers, which will be shadowed in random-access memory (RAM). Driver recovery does not affect running processes. Integrate interrupts and messages When an interrupt occurs, it is converted at a low level to a notification sent to the appropriate driver. If the driver is waiting for a message, it gets the interrupt immediately; otherwise it gets the notification the next time it does a RECEIVE to get a message. This scheme eliminates nested interrupts and makes driver programming easier. Architecture At the bottom level is the microkernel, which is about 4,000 lines of code (mostly in C, plus a small amount of assembly language). It handles interrupts, scheduling, and message passing. It also supports an application programming interface (API) of about 30 kernel calls that authorized servers and drivers can make. User programs cannot make these calls. Instead, they can issue POSIX system calls which send messages to the servers. The kernel calls perform functions such as setting interrupts and copying data between address spaces. At the next level up, there are the device drivers, each one running as a separate userland process. Each one controls some I/O device, such as a disk or printer. The drivers do not have access to the I/O port space and cannot issue I/O instructions directly. Instead, they must make kernel calls giving a list of I/O ports to write to and the values to be written. While there is a small amount of overhead in doing this (typically 500 ns), this scheme makes it possible for the kernel to check authorization, so that, for example, the audio driver cannot write on the disk. At the next level there are the servers. This is where nearly all the operating system functionality is located. User processes obtain file service, for example, by sending messages to the file server to open, close, read, and write files. In turn, the file server gets disk I/O performed by sending messages to the disk driver, which controls the disk. One of the key servers is the reincarnation server. Its job is to poll all the other servers and drivers to check on their health periodically. If a component fails to respond correctly, or exits, or gets into an infinite loop, the reincarnation server (which is the parent process of the drivers and servers) kills the faulty component and replaces it with a fresh copy. In this way the system is automatically made self-healing without interfering with running programs. Currently the reincarnation server, the process server, and the microkernel are part of the trusted computing base. If any of them fail, the system crashes. Nevertheless, reducing the trusted computing base from 3-5 million lines of code, as in Linux and Windows systems, to about 20,000 lines greatly enhances system reliability.
Minix 1.5, released in 1991, included support for MicroChannel IBM PS/2 systems and was also ported to the Motorola 68000 and SPARC architectures, supporting the Atari ST, Commodore Amiga, Apple Macintosh and Sun Microsystems SPARCstation computer platforms. A version of Minix running as a user process under SunOS was also available. Minix 2.0, released in 1997, was only available for the x86 and Solaris-hosted SPARC architectures. Minix-vmd was created by two Vrije Universiteit researchers, and added virtual memory and support for the X Window System. Minix 3 does the same, and provides a modern operating system with many newer tools and many Unix applications. Many improvements have also been made in the structure of the kernel since the Minix 2 release, making the system more reliable. Minix version 3.1.5 was released on 5 November 2009. It contains X11, Emacs, vi, cc, GCC, Perl, Python, Almquist shell, Bash, Z shell, FTP client, SSH client, Telnet client, Pine, and over 400 other common Unix utility programs. With the addition of X11, this version marks the transition away from a text-only system. Another feature of this version, which will be improved in future ones, is the ability of the system to withstand device driver crashes, and in many cases having them automatically replaced without affecting running processes. In this way, Minix is self-healing and can be used in applications demanding high reliability. Minix 3.2.0 was released in February 2012. This version has many new features, including the Clang compiler, experimental symmetric multiprocessing support, procfs and ext2fs filesystem support, and GNU Debugger (GDB). Several parts of NetBSD are also integrated in the release, including the bootloader, libc and various utilities and other libraries. Minix 3.3.0 was released in September 2014. This release is the first version to support the ARM architecture in addition to x86. It also supports a NetBSD userland, with thousands of NetBSD packages running right out of the box. Mascot Rocky Raccoon is the mascot of Minix 3. MINIXCon MINIXCon is a conference for sharing talks, efforts and research related to Minix. It was held once, in 2016; MINIXCon2017 was cancelled due to a lack of submitted talks. See also MINIX file system Xinu xv6 Comparison of operating system kernels List of computing mascots :Category:Computing mascots Notes References Further reading Building a dependable operating system: fault tolerance in MINIX 3 by Jorrit N. Herder (PDF) Reorganizing Unix for Reliability by Jorrit N. Herder, Herbert Bos, Ben Gras, Philip Homburg, and Andrew S. Tanenbaum (PDF) Modular system programming in MINIX 3 by Jorrit N. Herder, Herbert Bos, Ben Gras, Philip Homburg, and Andrew S. Tanenbaum (PDF) J. N. Herder et al., Modular System Programming in MINIX 3, ;login:, April 2006 (PDF) Pablo A Pessolani.
MINIX4RT: A Real-Time Operating System Based on MINIX Building Performance Measurement Tools for the MINIX 3 Operating System, by Rogier Meurs (PDF) Design and implementation of the MINIX virtual file system (PDF) Reference manual for MINIX 3 Kernel API (PDF) Towards a true microkernel operating system (PDF) Construction of a Highly Dependable Operating System (PDF) Minix 3 and the microkernel experience: Smart Kernel by Rüdiger Weis (PDF) Safe and Automatic Live Update by Cristiano Giuffrida (PDF) External links 2005 software Computer science in the Netherlands Computing platforms Educational operating systems Information technology in the Netherlands Microkernels Minix Operating system distributions bootable from read-only media
Minix 3
[ "Technology" ]
3,014
[ "Computing platforms" ]
5,574,966
https://en.wikipedia.org/wiki/Split-cycle%20engine
The split-cycle engine is a type of internal combustion engine. Design In a conventional Otto cycle engine, each cylinder performs four strokes per cycle: intake, compression, power, and exhaust. This means that two revolutions of the crankshaft are required for each power stroke. The split-cycle engine divides these four strokes between two paired cylinders: one for intake and compression, and another for power and exhaust. Compressed air is transferred from the compression cylinder to the power cylinder through a crossover passage. Fuel is then injected and fired to produce the power stroke. History The Backus Water Motor Company of Newark, New Jersey was producing an early example of a split cycle engine as far back as 1891. The engine, of "a modified A form, with the crank-shaft at the top", was water-cooled and consisted of one working cylinder and one compressing cylinder of equal size and utilized a hot-tube ignitor system. It was produced in sizes ranging from 1/2 to and the company had plans to offer a scaled-up version capable of or more. The Atkinson differential engine was a two piston, single cylinder four-stroke engine that also used a displacer piston to provide the fuel air mixture for use by the power piston. However, the power piston did the compression. The twingle engine (U.S. English) or split-single engine (British English) is a twin cylinder (or more) two-stroke engine; more precisely, it has one or more U-tube cylinders that each use a pair of pistons, one in each arm of the U. However, both pistons in each pair are used for power (and the underside of both supplies fuel air mixture, if crankcase scavenging is used), and they only differ in that one piston works the transfer port to provide the fuel air mixture for use in both cylinders and the other piston works the exhaust port, so that the burnt mixture is exhausted via that cylinder. Unlike the Scuderi, both cylinders are connected to the combustion chamber. As neither piston works as a displacer piston at all, this engine has nothing whatsoever to do with the split cycle engine, apart from a purely coincidental similarity of the names. The Scuderi engine is a design of a split-cycle, internal combustion engine invented by Carmelo J. Scuderi. The Scuderi Group, an engineering and licensing company based in West Springfield, Massachusetts and founded by Carmelo Scuderi's children, said that the prototype was completed and was unveiled to the public on April 20, 2009. The Tour Engine is an opposed-cylinder split-cycle internal combustion engine that uses a novel Spool Shuttle Crossover Valve (SSCV) to transfer the fuel/air charge from the cold to the hot cylinder. The first prototype was completed in June 2008. Tour Engine was funded by grants from the Israel Ministry of National Infrastructures, Energy and Water Resources, and by ARPA-E. Another split-cycle design, using an external combustion chamber, is the Zajac engine. New Zealand scam - Rick Mayne's Split Cell engine In 2009 investigative journalist Gerard Ryle reported a scam by New Zealander Rick Mayne that cost investors hundreds of millions of New Zealand dollars. Rick Mayne claimed success with a Split Cycle engine that used a multitude of small cylinders arranged radially, with pistons operated by a Geneva mechanism. This scam engine was never successfully run in a meaningful demonstration, but significant capital was raised from unsuspecting investors through a share plan, and lost.
Ryle reported on the Rick Mayne scam, along with other scams involving fuel saving, in his book Firepower, and on ABC radio in 2009. Even the British newspaper The Independent was taken in by the scam, as was British racing driver Jack Brabham. References Engine technology Piston engines
Split-cycle engine
[ "Technology" ]
777
[ "Engine technology", "Piston engines", "Engines" ]
5,575,030
https://en.wikipedia.org/wiki/WURFL
WURFL (Wireless Universal Resource FiLe) is a set of proprietary application programming interfaces (APIs) and an XML configuration file which contains information about device capabilities and features for a variety of mobile devices, focused on mobile device detection. Until version 2.2, WURFL was released under an "open source / public domain" license. Prior to version 2.2, device information was contributed by developers around the world and the WURFL was updated frequently, reflecting new wireless devices coming on the market. In June 2011, the founder of the WURFL project, Luca Passani, and Steve Kamerman, the author of Tera-WURFL, a popular PHP WURFL API, formed ScientiaMobile, Inc to provide commercial mobile device detection support and services using WURFL. As of August 30, 2011, the ScientiaMobile WURFL APIs are licensed under a dual-license model, using the AGPL license for non-commercial use and a proprietary commercial license. The current version of the WURFL database itself is no longer open source. Solution approaches There have been several approaches to this problem, including developing very primitive content and hoping it works on a variety of devices, limiting support to a small subset of devices or bypassing the browser solution altogether and developing a Java ME or BREW client application. WURFL solves this by allowing development of content pages using abstractions of page elements (buttons, links and textboxes for example). At run time, these are converted to the appropriate, specific markup types for each device. In addition, the developer can specify other content decisions be made at runtime based on device specific capabilities and features (which are all in the WURFL). WURFL Cloud In March 2012, ScientiaMobile has announced the launch of the WURFL Cloud. While the WURFL Cloud is a paid service, a free offer is made available to hobbyists and micro-companies for use on mobile sites with limited traffic. Currently, the WURFL Cloud supports Java, Microsoft .NET, PHP, Ruby, Python, Node.js and the Perl programming languages WURFL and Apache, NGINX, Varnish Cache, and HAProxy In October 2012, ScientiaMobile has announced the availability of a C++ API, an Apache module, an NGINX module and Varnish Cache module. Later in November 2016, ScientiaMobile provided a module for the HAProxy load balancer. Differently from other WURFL APIs, the C++ API and the modules are distributed commercially exclusively. Several popular Linux distribution are supported through RPM and DEB packages. WURFL.io In 2014, WURFL.io was launched. WURFL.io features non-commercial products and services from ScientiaMobile: WURFL.js: a JavaScript device detection service that makes Server-Side detected properties (WURFL capabilities) available to the JavaScript in web pages. ImageEngine: a WURFL-based Image CDN for optimizing image delivery on the web. The MOVR (Mobile OverView Report) providing the latest in mobile and web statistics. WALL, Wireless Abstraction Library WALL (Wireless Abstraction Library by Luca Passani) is a JSP tag library that lets a developer author mobile pages similar to plain HTML, while delivering WML, C-HTML and XHTML Mobile Profile to the device from which the HTTP request originates, depending on the actual capabilities of the device itself. Device capabilities are queried dynamically using the WURFL API. A WALL port to PHP (called WALL4PHP) is also available. Supported implementations WURFL is currently supported using the following. 
Java (via WALL) PHP (via Tera-WURFL (database driven), the new WURFL PHP API and WALL4PHP) .NET Framework (via Visual Basic / C# / any .NET language API and Somms.NWURFL(C#)) Perl Ruby Python (via Python Tools) XSLT C++ Apache Mobile Filter The PHP/MySQL based Tera-WURFL API comes with a remote webservice that allows you to query the WURFL from any language that supports XML webservices and includes clients for the following languages out of the box: PHP Perl Python JavaScript ActionScript 3 (Flash / Flex / AIR / ECMAScript) License update The August 29, 2011 update of WURFL included a new set of licensing terms. These terms set forth a number of licenses under which WURFL could be used. The free version of the license does not allow derivative works, and prevents direct access to the wurfl.xml file. As a result of the "no-derivatives" clause, users are no longer permitted to add new device capabilities to the WURFL file either directly or through the submission of "patches". A commercial license is required to utilize third-party APIs with the WURFL Repository. On January 3, 2012, ScientiaMobile filed a DMCA takedown notice against the open-source device database OpenDDR, which contains data from a previous version of WURFL. According to OpenDDR, these data were available under the GPL. On March 22, 2012 it was announced by Matthew Weier O'Phinney that Zend Framework would be dropping support for WURFL as of version 1.12. This was due to the licence change, which made it incompatible with the Zend Framework's licensing, as the new licensing requires that you "open-source the full source code of your web site, irrespective of the fact that you may modify the WURFL API or not." See also UAProf User agent References External links Wireless networking Web development Software using the GNU Affero General Public License
WURFL
[ "Technology", "Engineering" ]
1,226
[ "Software engineering", "Wireless networking", "Computer networks engineering", "Web development" ]
5,575,149
https://en.wikipedia.org/wiki/Iodine%20monochloride
Iodine monochloride is an interhalogen compound with the formula ICl. It is a red-brown chemical compound that melts near room temperature. Because of the difference in the electronegativity of iodine and chlorine, this molecule is highly polar and behaves as a source of I+. Discovered in 1814 by Gay-Lussac, iodine monochloride is the first interhalogen compound discovered. Preparation Iodine monochloride is produced simply by combining the halogens in a 1:1 molar ratio, according to the equation I2 + Cl2 → 2 ICl. When chlorine gas is passed through iodine crystals, one observes the brown vapor of iodine monochloride. Dark brown iodine monochloride liquid is collected. Excess chlorine converts iodine monochloride into iodine trichloride in a reversible reaction: ICl + Cl2 ⇌ ICl3. Polymorphs ICl has two polymorphs: α-ICl, which exists as black needles (red by transmitted light) with a melting point of 27.2 °C, and β-ICl, which exists as black platelets (red-brown by transmitted light) with a melting point of 13.9 °C. In the crystal structures of both polymorphs the molecules are arranged in zigzag chains. β-ICl is monoclinic with the space group P21/c. Reactions and uses Iodine monochloride is soluble in acids such as HF and HCl but reacts with pure water to form HCl, iodine, and iodic acid: 5 ICl + 3 H2O → 5 HCl + 2 I2 + HIO3. ICl is a useful reagent in organic synthesis. It is used as a source of electrophilic iodine in the synthesis of certain aromatic iodides. It also cleaves C–Si bonds. ICl will also add to the double bond in alkenes to give chloro-iodo alkanes. RCH=CHR′ + ICl → RCH(I)–CH(Cl)R′ When such reactions are conducted in the presence of sodium azide, the iodo-azide RCH(I)–CH(N3)R′ is obtained. The Wijs solution, iodine monochloride dissolved in acetic acid, is used to determine the iodine value of a substance. It can also be used to prepare iodates, by reaction with a chlorate. Chlorine is released as a byproduct. References Iodine compounds Chlorides Interhalogen compounds Diatomic molecules Oxidizing agents
Iodine monochloride
[ "Physics", "Chemistry" ]
565
[ "Chlorides", "Inorganic compounds", "Redox", "Molecules", "Interhalogen compounds", "Oxidizing agents", "Salts", "Diatomic molecules", "Matter" ]
5,575,192
https://en.wikipedia.org/wiki/Social%20psychology%20%28sociology%29
In sociology, social psychology (also known as sociological social psychology) studies the relationship between the individual and society. Although studying many of the same substantive topics as its counterpart in the field of psychology, sociological social psychology places relatively more emphasis on the influence of social structure and culture on individual outcomes, such as personality, behavior, and one's position in social hierarchies. Researchers broadly focus on higher levels of analysis, directing attention mainly to groups and the arrangement of relationships among people. This subfield of sociology is broadly recognized as having three major perspectives: symbolic interactionism, social structure and personality, and structural social psychology. Some of the major topics in this field include social status, structural power, sociocultural change, social inequality and prejudice, leadership and intra-group behavior, social exchange, group conflict, impression formation and management, conversation structures, socialization, social constructionism, social norms and deviance, identity and roles, and emotional labor. The primary methods of data collection are sample surveys, field observations, vignette studies, field experiments, and controlled experiments. History Sociological social psychology is understood to have emerged in 1902 with a landmark study by sociologist Charles Cooley, entitled Human Nature and the Social Order, in which he introduces the concept of the looking-glass self. Sociologist Edward Alsworth Ross would subsequently publish the first sociological textbook in social psychology, known as Social Psychology, in 1908. A few decades later, in 1937, Jacob L. Moreno founded the field's major academic journal, entitled Sociometry; its name changed to Social Psychology in 1978 and to its current title, Social Psychology Quarterly, the year after. Foundational concepts Symbolic interactionism In the 1920s, William and Dorothy Thomas introduced what would become not only a basic tenet of sociological social psychology, but of sociology in general. In 1923, the two proposed the concept of the definition of the situation, followed in 1928 by the Thomas theorem (or Thomas axiom): "If men define situations as real, they are real in their consequences." This subjective definition of the situation by social actors, groups, or subcultures would be interpreted by Robert K. Merton as a 'self-fulfilling prophecy' (re 'mind over matter'), becoming a core concept of what would form the theory of symbolic interactionism. Generally credited as the founder of symbolic interactionism is University of Chicago philosopher and sociologist George Herbert Mead, whose work greatly influences the area of social psychology in general. However, it would be sociologist Herbert Blumer, Mead's colleague and disciple at Chicago, who coined the name of the framework in 1937. Action theory At Harvard University, sociologist Talcott Parsons began developing a cybernetic theory of action in 1927, which would subsequently be adapted to small group research by Parsons' student and colleague, Robert Freed Bales. Bales' behavior coding scheme, interaction process analysis, would result in a body of observational studies of social interaction in groups. During his 41-year tenure at Harvard, Bales mentored a distinguished group of sociological social psychologists concerned with group processes and other topics in sociological social psychology.
Major frameworks Symbolic interactionism The contemporary notion of symbolic interactionism originates from the work of George Herbert Mead and Max Weber. In this circular framework, social interactions are considered to be the basis from which meanings are constructed, and these meanings then influence the process of social interaction itself. Many symbolic interactionists see the self as a core meaning that is both constructed through and influential in social relations. The structural school of symbolic interactionism uses shared social knowledge from a macro-level culture, natural language, social institution, or organization to explain relatively enduring patterns of social interaction and psychology at the micro-level, typically investigating these matters with quantitative methods. The Iowa School, identity theory, and affect control theory are major programs of research in this tradition. The latter two theories, in particular, focus on the ways in which actions control mental states, which demonstrates the underlying cybernetic nature of the approach that is also evident in Mead's writings. Moreover, affect control theory provides a mathematical model of role theory and of labeling theory. Stemming from the Chicago School, process symbolic interactionism considers the meanings that underlie social interactions to be situated, creative, fluid, and often contested. As such, researchers in this tradition frequently use qualitative and ethnographic methods. Symbolic Interaction, an academic journal founded by the Society for the Study of Symbolic Interaction, emerged in 1977 as a central outlet for the empirical research and conceptual studies produced by scholars in this area. Postmodern symbolic interactionism, which understands the notion of self and identity as increasingly fragmented and illusory, considers attempts at theory to be meta-narrative with no more authority than other conversations. The approach is presented in detail by The SAGE Handbook of Qualitative Research. Social structure and personality This research perspective deals with relationships between large-scale social systems and individual behaviors and mental states including feelings, attitudes and values, and mental faculties. Some researchers focus on issues of health and how social networks bring useful social support to the ill. Another line of research deals with how education, occupation, and other components of social class impact values. Some studies assess emotional variations, especially in happiness versus alienation and anger, among individuals in different structural positions. Structural social psychology Structural social psychology diverges from the other two dominant approaches to sociological social psychology in that its theories seek to explain the emergence and maintenance of social structures by actors (whether people, groups, or organizations), generally assuming greater stability in social structure (especially compared to symbolic interactionism), and most notably assuming minimal differences between individual actors. Whereas the other two approaches to social psychology attempt to model social reality closely, structural social psychology strives for parsimony, aiming to explain the widest range of phenomena possible while making the fewest assumptions possible. Structural social psychology makes greater use of formal theories with explicitly stated propositions and scope conditions, to specify the intended range of application. 
Social exchange Social exchange theory emphasizes the notion that social action is the result of personal choices that are made in order to maximize benefit while minimizing cost. A key component of this theory is the postulation of the "comparison level of alternatives": an actor's sense of the best possible alternative in a given situation (i.e. the choice with the highest net benefits or lowest net costs; similar to the concept of a "cost-benefit analysis"). Theories of social exchange share many essential features with classical economic theories, such as rational choice theory. However, social exchange theories differ from classical economics in that social exchange makes predictions about the relationships between persons, rather than just the evaluation of goods. For example, social exchange theories have been used to predict human behavior in romantic relationships by taking into account each actor's subjective sense of cost (e.g., financial dependence), benefit (e.g. attraction, chemistry, attachment), and comparison level of alternatives (e.g. whether or not there are any viable alternative mates available). Expectation states and status characteristics Expectation states theory—as well as its popular sub-theory, status characteristics theory—proposes that individuals use available social information to form expectations for themselves and others. Group members, for instance, use stereotypes about competence in attempting to determine who will be comparatively more skilled in a given task, which then indicates one's authority and status in the group. In order to determine everyone else's relative ability and assign rank accordingly, such members use one's membership in social categories (e.g. race, gender, age, education, etc.); their known ability on immediate tasks; and their observed dominant behaviors (e.g. glares, rate of speech, interruptions, etc.). Although exhibiting dominant behaviors and, for example, belonging to a certain race has no direct connection to actual ability, implicit cultural beliefs about who possesses how much social value will drive group members to "act as if" they believe some people have more useful contributions than others. As such, the theory has been used to explain the rise, persistence, and enactment of status hierarchies. Substantive topics Social influence Social influence is a factor in every individual's life. Social influence takes place when one's thoughts, actions and feelings are affected by other people. It is a way of interaction that affects individual behavior and can occur within groups and between groups. It is a fundamental process that affects ways of socialization, conformity, leadership and social change. Dramaturgy Another aspect of microsociology focuses on individual behavior in social settings. One researcher in the field, Erving Goffman, argues that humans behave as actors on a stage, a view he develops in the book The Presentation of Self in Everyday Life. As a result, individuals tailor their actions to the responses of their 'audience', in other words, the people to whom they are speaking. Much as in a play, Goffman believes that rules of conversation and communication exist: display confidence, display sincerity, and avoid infractions, which are otherwise known as embarrassing situations. Breaches of such rules are what make social situations awkward. 
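The exchange calculus described under Social exchange above can be made concrete with a toy model. The following is a minimal sketch only (Python); the actors, payoff numbers, and function names are illustrative inventions, not part of any published sociological formalism:

# Toy social-exchange decision: choose the action whose net benefit
# (rewards minus costs) exceeds the comparison level of alternatives.
options = {
    "stay": {"benefit": 8.0, "cost": 3.0},   # e.g. attachment vs. friction
    "leave": {"benefit": 4.0, "cost": 1.0},  # e.g. best available alternative
}

def net(option):
    """Net benefit of an option: rewards minus costs."""
    return option["benefit"] - option["cost"]

cl_alt = net(options["leave"])  # comparison level of alternatives
choice = "stay" if net(options["stay"]) > cl_alt else "leave"
print(choice)  # 'stay', since 5.0 > 3.0

In this hypothetical, the relationship persists because its net benefit (5.0) exceeds the comparison level of alternatives (3.0); shifting either payoff reverses the prediction, which is the core of the theory's claim.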
Group dynamics (group processes) From a sociological perspective, group dynamics refers to the ways in which power, status, justice, and legitimacy impact the structure and interactions that take place within groups. A particular area of study, in which scholars examine how group size affects the type and quality of interactions that take place between group members, was introduced by the work of German social theorist, Georg Simmel. Those who study group processes also study interactions between groups, such as in the case of Muzafer Sherif's Robbers Cave Experiment. Initially, groups can be characterized as either dyads (two people) or triads (three people), where the essential difference is that, if one person were to leave a dyad, that group would dissolve completely, while the same is not true of a triad. What this difference indicates is the fundamental nature of group size: every additional member of a group increases the group's stability while decreasing the possible amount of intimacy or interactions between any two members. A group can also be distinguished in terms of how and why its members know each other. In this sense, individual group members belong to one of the following: Primary group: Consists of close friends and family who are held together by expressive ties; Secondary group: Consists of coworkers, colleagues, classmates, and so on, who are held together by instrumental ties; or Reference group: Consists of people who do not necessarily know or interact with each other, but who use each other for standards of comparison for appropriate behaviors. See also Behavioral economics List of social psychologists Political psychology Social psychology (discipline within psychology) Socialization Sociobiology Sociology Socionics References External links Social Psychology Network Society for Personality and Social Psychology Society of Experimental Social Psychology Journal of Personality and Social Psychology Current Research in Social Psychology Social Psychology - brief introduction Social Psychology basics Social Psychology forum Scapegoating Processes in Groups Introduction to Social Psychology PsychWiki Social philosophy Interdisciplinary subfields of sociology Behavioural sciences Social constructionism
Social psychology (sociology)
[ "Biology" ]
2,266
[ "Behavioural sciences", "Behavior" ]
4,167,519
https://en.wikipedia.org/wiki/Biological%20effects%20of%20high-energy%20visible%20light
High-energy visible light (HEV light) is short-wave light in the violet/blue band from 400 to 450 nm in the visible spectrum, which has a number of purported negative biological effects, namely on circadian rhythm and retinal health (blue-light hazard), which can lead to age-related macular degeneration. Increasingly, blue blocking filters are being designed into glasses to avoid blue light's purported negative effects. However, there is no good evidence that filtering blue light with spectacles has any effect on eye health, eye strain, sleep quality or vision quality. Background Blue LEDs are often the target of blue-light research due to the increasing prevalence of LED displays and solid-state lighting (e.g. LED illumination), as well as the blue appearance (higher color temperature) compared with traditional sources. However, natural sunlight has a relatively high spectral density of blue light, so exposure to high levels of blue light is not a new or unique phenomenon despite the relatively recent emergence of LED display technologies. While LED displays produce white by combining red, green, and blue LEDs, white light from lighting is generally produced by pairing a blue LED emitting primarily near 450 nm with a phosphor for down-conversion of some of the blue light to longer wavelengths, which then combine to form white light. This is often considered "the next generation of illumination" as SSL technology dramatically reduces energy resource requirements. Blue LEDs, particularly those used in white LEDs, operate at around 450 nm, where V(λ) = 0.038. This means that blue light at 450 nm requires about 25 times the radiant flux (energy) for one to perceive the same luminous flux as green light at 555 nm. For comparison, UV-A at 380 nm (V(λ) = 0.000039) requires 25,641 times the amount of radiometric energy to be perceived at the same intensity as green, three orders of magnitude more than for blue LEDs. Studies often compare animal trials using identical luminous flux rather than radiant flux, meaning they match perceived brightness at different wavelengths rather than total emitted energy. Physiological effects Blue light hazard A 2019 report by France's Agency for Food, Environmental and Occupational Health & Safety (ANSES) highlights short-term effects on the retina linked to intense exposure to blue LED light, and long-term effects linked to the onset of age-related macular degeneration. Although few studies have examined occupational causes of macular degeneration, they show that long-term sunlight exposure, specifically its blue-light component, is associated with macular degeneration in outdoor workers. However, the CIE published its position on the low risk of blue-light hazard resulting from the use of LED technology in general lighting bulbs in April 2019. The international standard IEC 62471 assesses the photobiological safety of light sources. A proposed standard, IEC 62778, provides additional guidance in the assessment of blue-light hazard of all lighting products. Circadian rhythm The circadian rhythm is a mechanism that regulates sleep patterns. One of the primary factors affecting the circadian rhythm is the excitation of melanopsin, a light-sensitive protein that absorbs maximally at 480 nm, but has at least 10% efficiency in the range of 450–540 nm. The periodic (daily) exposure to sunlight generally tunes the circadian rhythm to a 24-hour cycle. However, exposure to light sources that excite melanopsin in the retina during nighttime can interfere with the circadian rhythm. 
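The photometric comparison in the Background section above can be reproduced directly from the quoted luminosity-function values. A minimal sketch (Python), using only the V(λ) samples given in the text; the function name and structure are illustrative, not from any standard library:

# Radiant flux required at a wavelength to match the perceived brightness
# (luminous flux) of green light at 555 nm, using the CIE photopic V(lambda)
# samples quoted in the text above.
V = {
    555: 1.0,       # green, peak of the photopic curve
    450: 0.038,     # blue LED emission
    380: 0.000039,  # UV-A
}

def relative_radiant_flux(wavelength_nm, reference_nm=555):
    return V[reference_nm] / V[wavelength_nm]

print(round(relative_radiant_flux(450)))  # ~26, the "about 25 times" quoted above
print(round(relative_radiant_flux(380)))  # 25641

This is why equal-luminous-flux comparisons between light sources can conceal large differences in the radiant energy actually delivered to the eye.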
Harvard Health Publishing asserts that exposure to blue light at night has a strong negative effect on sleep. The aforementioned ANSES report "highlights [the] disruptive effects to biological rhythms and sleep, linked to exposure to even very low levels of blue light in the evening or at night, particularly via screens". A 2016 press release by the American Medical Association concludes that there are negative effects on the circadian rhythm from the unrestrained use of LED street lighting, and that white LED lamps have five times greater impact on circadian sleep rhythms than conventional street lamps. However, they also indicate that street lamp brightness is more strongly correlated with sleep outcomes. Blue light is essential for regulating the circadian rhythm, because it stimulates melanopsin receptors in the eye. This suppresses daytime melatonin, enabling wakefulness. Working in blue-free light (i.e. yellow light) for long periods of time disrupts circadian patterns because there is no melatonin suppression during the day, and reduced melatonin rebound at night. Eye strain Blue light has been implicated as the cause of digital eye strain, but there is no robust evidence to support this hypothesis. Dermatology As with other types of light therapy, there is no good evidence that blue light is of use in treating acne vulgaris. Blue light blocking Concerns over exposure to blue light have prompted several approaches to decreasing it, including disabling or attenuating blue LEDs in displays, color-shifting displays towards yellow, or wearing glasses that filter out blue light. Digital filters Apple's and Microsoft's operating systems and even the preset settings of standalone computer monitors include options to reduce blue-light emissions by adjusting color temperature to a warmer gamut. However, these settings dramatically reduce the size of the color gamut of the display, as they essentially simulate tritan color blindness, thereby sacrificing the usability of the displays. The filters can be set on a schedule to activate only when the sun is down. Intraocular lenses During cataract surgery, the opaque natural crystalline lens is replaced with a synthetic intraocular lens (IOL). The IOL may be designed to filter out as much, more, or less UV light than the natural lens (i.e. have a higher or lower cutoff), and therefore attenuate or accentuate the blue-light hazard function. The effects of long-term exposure to UV, violet and blue light on the retina can then be studied. However, it has been argued that IOLs that remove more blue light than natural lenses negatively affect color vision and the circadian rhythm while not offering significant photoprotection. Systematic reviews found no evidence of any effect in IOLs filtering blue light, and none provided any reliable statistical evidence to suggest any effect regarding contrast sensitivity, macular degeneration, vision, color discrimination or sleep disturbances. One study claimed a large difference in observed fluorescein angiography examinations and observed markedly less "progression of abnormal fundus autofluorescence"; however, the authors failed to discuss the fact that the excitation beam (filtered light between 465 and 490 nm) is largely blocked by blue-light-filtering IOLs but not by the clear IOLs present in the control patients. Blue light blocking lenses Lenses that filter blue light have been on the market for a long time in the form of brown-, orange-, and yellow-tinted sunglasses. 
These tinted lenses were popular for the belief that they enhanced contrast and depth perception, but after early research suggested health risks from blue-light exposure, they became more popular for the purported health benefits of blocking blue light. The efficacy of blue-blocking lenses in blocking blue light is not disputed, but whether typical exposure to blue light is hazardous enough to require blue-blocking lenses is highly disputed. One problem with the glasses is that they cannot achieve positive outcomes in blue-light hazard and sleep simultaneously. To be effective against blue-light hazard, the glasses must be worn continuously, especially during the day when exposure is higher. However, to force blue-light exposure that mimics the normal daylight cycle, the glasses must only be worn at night, when the exposure is already quite low from a photoprotective perspective. Regardless, some evidence shows that lenses that block blue light before bedtime may be particularly useful for people with insomnia, bipolar disorder, delayed sleep phase disorder, or ADHD, though less beneficial for healthy sleepers. The small number of studies contributing to those conclusions to date have methodological flaws or risks of bias, so further research is warranted. Aggressive advertisements may contribute to the incorrect public perception of the purported dangers of blue light. Even though research has shown no evidence to support the use of blue-blocking filters as a clinical treatment for digital eye strain, ophthalmic lens manufacturers continue to market them as lenses that reduce digital eye strain. The UK's General Optical Council has criticised Boots Opticians for their unsubstantiated claims regarding their line of blue-light filtering lenses, and the Advertising Standards Authority fined them £40,000. Boots Opticians sold the lenses for a £20 markup. Trevor Warburton, speaking on behalf of the UK Association of Optometrists, stated: "...current evidence does not support making claims that they prevent eye disease." In July 2022, a Gamer Advantage advert on Twitch channel BobDuckNWeave was banned by the Advertising Standards Authority for making unsubstantiated claims that blue light glasses could improve sleep. See also Fluorescent lamps and health Phase response curve Ultraviolet light References Ophthalmology Optical spectrum Technology hazards Circadian rhythm Sleeplessness and sleep deprivation
Biological effects of high-energy visible light
[ "Physics", "Technology", "Biology" ]
1,878
[ "Behavior", "Spectrum (physical sciences)", "Sleeplessness and sleep deprivation", "Electromagnetic spectrum", "Optical spectrum", "Circadian rhythm", "nan", "Sleep" ]
4,167,561
https://en.wikipedia.org/wiki/Federico%20Capasso
Federico Capasso (born 1949) is an Italian-American applied physicist, one of the inventors of the quantum cascade laser during his work at Bell Laboratories. He is currently on the faculty of Harvard University. Biography Federico Capasso received the Doctor of Physics degree, summa cum laude, from the University of Rome, Italy, in 1973 and, after doing research in fiber optics at Fondazione Ugo Bordoni in Rome, joined Bell Labs in 1976. In 1984, he was made a distinguished member of technical staff and in 1997 a Bell Labs Fellow. In addition to his research activity, Capasso has held several management positions at Bell Labs, including head of the quantum phenomena and device research department and the semiconductor physics research department (1987–2000) and vice president of physical research (2000–2002). He joined Harvard on 1 January 2003. He and his collaborators made many wide-ranging contributions to semiconductor devices, pioneering the design technique known as band-structure engineering. He applied it to novel low-noise quantum well avalanche photodiodes, heterojunction transistors, memory devices and lasers. He and his collaborators invented and demonstrated the quantum cascade laser (QCL). Unlike conventional semiconductor lasers, known as diode lasers, which rely on the band gap of the semiconductor to emit light, the wavelength of QCLs is determined by the energy separation between conduction band quantized states in quantum wells. In 1971, researchers postulated that such an emission process could be used for laser amplification in a superlattice. The QCL wavelength can be tailored across a wide range from the mid-infrared to the far infrared by changing the quantum well thickness. The mature technology of the QCL is now finding commercial applications. QCLs have become the most widely used sources of mid-infrared radiation for chemical sensing and spectroscopy and are commercially available. They operate at temperatures in excess of 100 °C and emit up to several watts of power in continuous wave. Capasso's current research in quantum electronics deals with very high power continuous-wave QCLs, the design of new light sources based on giant optical nonlinearities in quantum wells (such as widely tunable sources of terahertz radiation based on difference frequency generation), and with plasmonics. He and his group at Harvard have demonstrated a new class of optical antennas and plasmonic collimators that they have used to design the near-field and far-field of semiconductor lasers, achieving ultrahigh-intensity deep-subwavelength laser spots, laser beams with greatly reduced divergence, and multibeam lasers. His group showed that suitably designed plasmonic interfaces consisting of optically thin arrays of optical nano-antennas lead to a powerful generalization of the centuries-old laws of reflection and refraction. They form the basis of "flat optics" based on metasurfaces. Federico Capasso has made major contributions to the study of quantum electrodynamical forces known as Casimir forces. He used the Casimir effect (the attraction between metal surfaces in vacuum due to zero-point energy) to control the motion of microelectromechanical systems (MEMS). He demonstrated novel devices (Casimir actuators and oscillators), setting limits to the scaling of MEMS technology, and, with his collaborators Jeremy Munday and Adrian Parsegian, was the first to measure a repulsive Casimir force. 
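Because the QCL's emission is set by the engineered subband spacing rather than the semiconductor band gap, its wavelength follows directly from λ = hc/ΔE. A minimal sketch of that relation (Python; the specific energy values are hypothetical examples, not any particular Capasso design):

# Emission wavelength of an intersubband (quantum cascade) transition:
# lambda = h*c / delta_E, with h*c ~= 1239.84 eV*nm.
HC_EV_NM = 1239.84

def qcl_wavelength_um(delta_e_mev):
    """Wavelength in micrometres for a subband energy spacing given in meV."""
    return HC_EV_NM / (delta_e_mev / 1000.0) / 1000.0

print(qcl_wavelength_um(124))  # ~10 um: mid-infrared
print(qcl_wavelength_um(31))   # ~40 um: toward the far infrared

Thinning or thickening the quantum wells changes ΔE, which is why a single materials system can be tailored across the mid- to far-infrared range described above.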
Awards and honors His honors include membership in the National Academy of Sciences, the American Academy of Arts and Sciences, the European Academy of Sciences and honorary membership in the Franklin Institute. He was also elected a member of the National Academy of Engineering (1995) for contributions to solid-state electronics and optoelectronics through semiconductor 'bandgap engineering.' In 2004, he received the Chisesi-Tomassoni award for his pioneering work on the quantum-cascade laser. In 2005 he received, jointly with Nobel Laureates Frank Wilczek (MIT) and Anton Zeilinger (University of Vienna), the King Faisal International Prize for Science for his research on quantum cascade lasers. The citation called him "one of the most creative and influential physicists in the world." On behalf of the American Physical Society, he was awarded the 2004 Arthur L. Schawlow Prize in Laser Science, endowed by the NEC Corporation, for "seminal contributions to the invention and demonstration of the quantum cascade laser and the elucidation of its physics, which bridges quantum electronics, solid-state physics, and materials science." SPIE, the international society of optics and photonics, selected Capasso to receive the 2013 SPIE Gold Medal, the highest honor the society bestows. In addition, the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization, named Capasso the recipient of the 2004 IEEE Edison Medal with the following citation, "For a career of highly creative and influential contributions to heterostructure devices and materials." He is also recipient of the John Price Wetherill Medal of the Franklin Institute, the R. W. Wood Prize of the Optical Society of America, the IEEE Lasers and Electro-Optics Society W. Streifer Award for Scientific Achievement, the Materials Research Society Medal, the Rank Prize in Optoelectronics (UK), the Duddell Medal and Prize of the Institute of Physics (UK), The Willis Lamb Medal for Laser Science and Quantum Optics, the Newcomb Cleveland Prize of the American Association for the Advancement of Science, the 1995 Moet Hennessy-Louis Vuitton "Leonardo da Vinci" Prize (France), the Welker Memorial Medal (Germany), the New York Academy of Sciences Award, the IEEE David Sarnoff Award in Electronics, and the Goff Smith prize of the University of Michigan. In 2010 he received the Berthold Leibinger Zukunftspreis for research in applied laser technology and the Julius Springer Prize in Applied Physics. In 2011 he received the Jan Czochralski Medal of the European Materials Research society for his lifetime achievements in Materials Science. In 2016 he was awarded the Balzan Prize for Applied Photonics "For his pioneering work in the quantum design of new materials with specific electronic and optical features, which led to the realization of a fundamentally new class of laser, the Quantum Cascade Laser; for his major contributions in plasmonics and metamaterials at the forefront of photonics science and technology". He received the Matteucci Medal in 2019 from the Italian National Academy of Sciences for his invention of the quantum cascade laser. He is a Fellow of the American Physical Society, the Institute of Physics (UK), the Optical Society of America, the American Association for the Advancement of Science, IEEE and SPIE. 
He holds honorary doctorates from Lund University, Sweden, the Diderot University (Paris VII), France, the University of Bologna, Italy, and the University of Rome Tor Vergata (Roma II), Italy. In 2021 Capasso received the Frederic Ives Medal/Jarus W. Quinn Prize from the Optical Society of America for seminal and wide-ranging contributions to optical physics, quantum electronics and nanophotonics. Bibliography References External links Federico Capasso Federico Capasso at Harvard School of Engineering and Applied Sciences "Gold Medal Award for World-Changing Science," in SPIE Professional magazine 1949 births Living people Members of the United States National Academy of Sciences Scientists at Bell Labs Harvard University faculty Members of the United States National Academy of Engineering IEEE Edison Medal recipients Fellows of the IEEE Fellows of the American Association for the Advancement of Science Italian emigrants to the United States Scientists from Rome Optical physicists Metamaterials scientists 21st-century American physicists 20th-century Italian physicists Laser researchers Fellows of the American Physical Society Recipients of the Matteucci Medal
Federico Capasso
[ "Materials_science" ]
1,604
[ "Metamaterials scientists", "Metamaterials" ]
4,167,607
https://en.wikipedia.org/wiki/1%2C5-Cyclooctadiene
1,5-Cyclooctadiene (also known as cycloocta-1,5-diene) is a cyclic hydrocarbon with the chemical formula C8H12, specifically [−(CH2)2−CH=CH−]2. There are three configurational isomers with this structure, which differ in the arrangement of the four C–C single bonds adjacent to the double bonds. Each pair of single bonds can be on the same side (cis) or on opposite sides (trans) of the double bond's plane; the three possibilities are denoted cis,cis, cis,trans, and trans,trans, or (Z,Z), (Z,E), and (E,E). (Because of overall symmetry, (Z,E) is the same configuration as (E,Z).) Generally abbreviated COD, the (Z,Z) isomer of this diene is a useful precursor to other organic compounds and serves as a ligand in organometallic chemistry. It is a colorless liquid with a strong odor. 1,5-Cyclooctadiene can be prepared by dimerization of butadiene in the presence of a nickel catalyst, a coproduct being vinylcyclohexene. Approximately 10,000 tons were produced in 2005. Organic reactions COD reacts with borane to give 9-borabicyclo[3.3.1]nonane, commonly known as 9-BBN, a reagent in organic chemistry used in hydroborations. COD adds SCl2 (or similar reagents) to give 2,6-dichloro-9-thiabicyclo[3.3.1]nonane. The resulting dichloride can be further modified as the diazide or dicyano derivative in a nucleophilic substitution aided by anchimeric assistance. COD is used as an intermediate in one of the syntheses of disparlure, a gypsy moth pheromone. Metal complexes 1,5-COD binds to low-valent metals via both alkene groups. Metal–COD complexes are attractive because they are sufficiently stable to be isolated, often being more robust than related ethylene complexes. The stability of COD complexes is attributable to the chelate effect. The COD ligands are easily displaced by other ligands, such as phosphines. Ni(COD)2 is prepared by reduction of anhydrous nickel acetylacetonate in the presence of the ligand, using triethylaluminium. The related Pt(COD)2 is prepared by a more circuitous route involving dilithium cyclooctatetraene. Extensive work has been reported on complexes of COD, much of which has been described in volumes 25, 26, and 28 of Inorganic Syntheses. The platinum complex is a precursor to a 16-electron complex of ethylene: Pt(COD)2 + 3 C2H4 → Pt(C2H4)3 + 2 COD COD complexes are useful as starting materials; one noteworthy example is the reaction: Ni(COD)2 + 4 CO → Ni(CO)4 + 2 COD The product is highly toxic, thus it is advantageous to generate it in the reaction vessel upon demand. Other low-valent metal complexes of COD include cyclooctadiene rhodium chloride dimer, cyclooctadiene iridium chloride dimer, and Crabtree's catalyst. The complexes with nickel, palladium, and platinum have tetrahedral geometry, whereas complexes of rhodium and iridium are square planar. (E,E)-COD The highly strained trans,trans isomer of 1,5-cyclooctadiene is a known compound. (E,E)-COD was first synthesized by George M. Whitesides and Arthur C. Cope in 1969 by photoisomerization of the cis,cis compound. Another synthesis (double elimination reaction from a cyclooctane ring) was reported by Rolf Huisgen in 1987. The molecular conformation of (E,E)-COD is twisted rather than chair-like. The compound has been investigated as a click chemistry mediator. References Cycloalkenes Dienes Ligands Eight-membered rings Foul-smelling chemicals
1,5-Cyclooctadiene
[ "Chemistry" ]
801
[ "Ligands", "Coordination chemistry" ]
4,168,072
https://en.wikipedia.org/wiki/Information%20ethics
Information ethics has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society". It examines the morality that comes from information as a resource, a product, or a target. It provides a critical framework for considering moral issues concerning informational privacy, moral agency (e.g. whether artificial agents may be moral), new environmental issues (especially how agents should behave in the infosphere), and problems arising from the life-cycle (creation, collection, recording, distribution, processing, etc.) of information (especially ownership and copyright, digital divide, and digital rights). It is vital that librarians, archivists, and other information professionals understand the importance of disseminating accurate information and of acting responsibly when handling it. Information ethics has evolved to relate to a range of fields such as computer ethics, medical ethics, journalism and the philosophy of information. As the use and creation of information and data form the foundation of machine learning, artificial intelligence and many areas of mathematics, information ethics also plays a central role in the ethics of artificial intelligence, big data ethics and ethics in mathematics. History The term information ethics was first coined by Robert Hauptman and used in the book Ethical Challenges in Librarianship. The field of information ethics has a relatively short but progressive history, having been recognized in the United States for nearly 20 years. The origins of the field are in librarianship, though it has now expanded to the consideration of ethical issues in other domains including computer science, the internet, media, journalism, management information systems, and business. Evidence of scholarly work on this subject can be traced to the 1980s, when an article authored by Barbara J. Kostrewski and Charles Oppenheim and published in the Journal of Information Science discussed issues relating to the field, including confidentiality, information biases, and quality control. Hauptman has also written extensively about information ethics in the library field and founded the Journal of Information Ethics in 1992. One of the first schools to introduce an information ethics course was the University of Pittsburgh in 1990, with a master's-level course on the concept. Soon after, Kent State University also introduced a master's-level course called "Ethical Concerns For Library and Information Professionals." Eventually, the term "information ethics" became more associated with the computer science and information technology disciplines in universities. Still, it is uncommon for universities to devote entire courses to the subject. Due to the nature of technology, the concept of information ethics has spread to other realms of the industry. Thus emerged concepts such as "cyberethics", which discusses topics including the ethics of artificial intelligence and its ability to reason, and media ethics, which applies to lies, censorship, and violence in the press. Therefore, with the advent of the internet, the concept of information ethics has spread to fields beyond librarianship, now that information has become so readily available. 
Information is more relevant now than ever, since the credibility of online information is harder to assess than that of print articles due to the ease of publishing online. All of these different concepts have been embraced by the International Center for Information Ethics (ICIE), established by Rafael Capurro in 1999. Dilemmas regarding the life of information are becoming increasingly important in a society that is defined as "the information society". The explosion of technology has brought information ethics to the forefront of ethical consideration. Information transmission and literacy are essential concerns in establishing an ethical foundation that promotes fair, equitable, and responsible practices. Information ethics broadly examines issues related to ownership, access, privacy, security, and community. It is also concerned with relational issues such as "the relationship between information and the good of society, the relationship between information providers and the consumers of information". Information technology affects common issues such as copyright protection, intellectual freedom, accountability, privacy, and security. Many of these issues are difficult or impossible to resolve due to fundamental tensions between Western moral philosophies (based on rules, democracy, individual rights, and personal freedoms) and the traditional Eastern cultures (based on relationships, hierarchy, collective responsibilities, and social harmony). The multi-faceted dispute between Google and the government of the People's Republic of China reflects some of these fundamental tensions. Professional codes offer a basis for making ethical decisions and applying ethical solutions to situations involving information provision and use which reflect an organization's commitment to responsible information service. Evolving information formats and needs require continual reconsideration of ethical principles and how these codes are applied. Considerations regarding information ethics influence "personal decisions, professional practice, and public policy". Therefore, ethical analysis must provide a framework to take into consideration "many, diverse domains" (ibid.) regarding how information is distributed. Censorship Censorship is an issue commonly involved in the discussion of information ethics because it describes the inability to access or express opinions or information based on the belief that it is bad for others to view them. Commonly censored sources include books, articles, speeches, artwork, data, music and photos. Censorship can be perceived as both ethical and unethical in the field of information ethics. Those who believe censorship is ethical say the practice prevents readers from being exposed to offensive and objectionable material. Topics such as sexism, racism, homophobia, and anti-semitism are present in public works and are widely seen as unethical in the public eye. There is concern regarding the exposure of these topics to the world, especially to the younger generation. The Australian Library Journal states that proponents of censorship in libraries (the practice of librarians deciding which books and resources to keep) argue that censorship is an ethical way to provide the public with information considered morally sound, allowing positive rather than negative ethics to be dispersed. 
According to the same journal, librarians have an "ethical duty" to protect the minds of those who read their books, particularly young people, through the lens of censorship, to prevent readers from adopting the unethical ideas and behaviors portrayed in the books. However, others in the field of information ethics argue the practice of censorship is unethical because it fails to provide all available information to the community of readers. British philosopher John Stuart Mill argued censorship is unethical because it goes directly against the moral concept of utilitarianism. Mill believed humans are unable to have true beliefs when information is withheld from the population via censorship, and that acquiring true beliefs without censorship leads to greater happiness. According to this argument, true beliefs and happiness (both of which are considered ethical) cannot be obtained through the practice of censorship. Librarians and others who disperse information to the public also face the dilemma of the ethics of censorship through the argument that censorship harms students and is morally wrong because it prevents them from knowing the full extent of the knowledge available to the world. The debate over censorship in information ethics was especially contested when schools removed information about evolution from libraries and curriculums because the topic conflicted with religious beliefs. In this case, opponents of censorship argue it is more ethical to include multiple sources of information on a subject, such as creation, to allow readers to learn and form their own beliefs. Ethics of downloading Illegal downloading has also caused some ethical concerns and raised the question of whether digital piracy is equivalent to stealing. When asked the question "Is it ethical to download copyrighted music for free?" in a survey, 44 percent of a group of primarily college-aged students responded "Yes." Christian Barry believes that understanding illegal downloading as equivalent to common theft is problematic, because clear and morally relevant differences can be shown "between stealing someone's handbag and illegally downloading a television series". On the other hand, he thinks consumers should try to respect intellectual property unless doing so imposes unreasonable cost on them. In an article titled "Download This Essay: A Defence of Stealing Ebooks", Andrew Forcehimes argues that the way we think about copyrights is inconsistent, because every argument for (physical) public libraries is also an argument for illegally downloading ebooks and every argument against downloading ebooks would also be an argument against libraries. In a reply, Sadulla Karjiker argues that "economically, there is a material difference between permitting public libraries making physical books available and allowing such online distribution of ebooks." Ali Pirhayati has proposed a thought experiment based on a high-tech library to neutralize the magnitude problem (suggested by Karjiker) and to justify Forcehimes' main idea. Security and privacy Ethical concerns regarding international security, surveillance, and the right to privacy are on the rise. The issues of security and privacy commonly overlap in the field of information, due to the interconnectedness of online research and the development of information technology (IT). Some of the areas surrounding security and privacy are identity theft, online economic transfers, medical records, and state security. 
Companies, organizations, and institutions use databases to store, organize, and distribute users' information—with or without their knowledge. Individuals are far more likely to part with personal information when it seems that they will have some sort of control over the use of the information or if the information is given to an entity with which they already have an established relationship. In these specific circumstances, subjects will be more inclined to believe that their information has been collected for pure collection's sake. An entity may also be offering goods or services in exchange for the client's personal information. This type of collection method may seem valuable to a user because the transaction appears to be free in the monetary sense. This forms a type of social contract between the entity offering the goods or services and the client. The client may continue to uphold their side of the contract as long as the company continues to provide them with a good or service that they deem worthy. The concept of procedural fairness indicates an individual's perception of fairness in a given scenario. Circumstances that contribute to procedural fairness are providing the customer with the ability to voice their concerns or input, and control over the outcome of the contract. Best practice for any company collecting information from customers is to consider procedural fairness. This concept is a key component of ethical consumer marketing and is the basis of United States privacy laws, the European Union's privacy directive from 1995, and the Clinton Administration's June 1995 guidelines for personal information use by all National Information Infrastructure participants. Allowing an individual to remove their name from a mailing list is considered a best information-collecting practice. In a few Equifax surveys conducted in the years 1994–1996, it was found that a substantial portion of the American public was concerned about business practices using private consumer information, believing they cause more harm than good. Throughout the course of a customer-company relationship, the company can accumulate a wealth of information about its customer. Flourishing data-processing technology allows the company to create specific marketing campaigns for each individual customer. Data collection and surveillance infrastructure has allowed companies to micro-target specific groups and tailor advertisements for certain populations. Medical records A recent trend is the digitization of medical records. The sensitive information secured within medical records makes security measures vitally important. Ethical concern over medical-record security is greatest in the context of emergency wards, where patient records must be available for quick access; this means that all medical records can be accessed at any moment within emergency wards, with or without the patient present. Ironically, the donation of one's body organs "to science" is easier in most western jurisdictions than donating one's medical records for research. International security Warfare has also changed the security of countries in the 21st century. After the events of 9/11 and other terrorist attacks on civilians, surveillance by states has raised ethical concerns about the individual privacy of citizens. The USA PATRIOT Act 2001 is a prime example of such concerns. 
Many other countries, especially European nations facing the current climate of terrorism, are looking for a balance between stricter security and surveillance that does not raise the same ethical concerns associated with the USA PATRIOT Act. International security is moving towards cybersecurity and unmanned systems, which involve the military application of IT. Ethical concerns of political entities regarding information warfare include the unpredictability of response, difficulty differentiating civilian and military targets, and conflict between state and non-state actors. Journals The main, peer-reviewed, academic journals reporting on information ethics are the Journal of the Association for Information Systems, the flagship publication of the Association for Information Systems, and Ethics and Information Technology, published by Springer. Branches Bioinformatics Business ethics Computer ethics Cyberethics Information ecology Library Bill of Rights Media ethics Notes Further reading Floridi, Luciano (2013). The Ethics of Information. Oxford: Oxford University Press. Froehlich, Thomas (2017). "A Not-So-Brief Account of Current Information Ethics: The Ethics of Ignorance, Missing Information, Misinformation, Disinformation and Other Forms of Deception or Incompetence". BiD: textos universitaris de biblioteconomia i documentacio. Num. 39. Himma, Kenneth E.; and Tavani, Herman T. (eds.) (2008). The Handbook of Information and Computer Ethics. New Jersey: John Wiley and Sons, Inc. Moore, Adam D. (ed.) (2005). Information Ethics: Privacy, Property, and Power. University of Washington Press. Spinello, Richard A.; and Herman T. Tavani (eds.) (2004). Readings in Cyberethics, second ed. Mass.: Jones and Bartlett Publishers. Tavani, Herman T. (2004). Ethics & Technology: Ethical Issues in an Age of Information and Communication Technology. New Jersey: John Wiley and Sons, Inc. External links IRIE, The International Review of Information Ethics Computer Professionals for Social Responsibility IEG, the Information Ethics research Group at Oxford University Information Ethicist International Center for Information Ethics Computing and society Ethics of science and technology
Information ethics
[ "Technology" ]
2,936
[ "Ethics of science and technology", "Computing and society", "Information ethics" ]
4,168,493
https://en.wikipedia.org/wiki/Chromium%28II%29%20chloride
Chromium(II) chloride describes inorganic compounds with the formula CrCl2(H2O)n. The anhydrous solid is white when pure; however, commercial samples are often grey or green. It is hygroscopic and readily dissolves in water to give bright blue air-sensitive solutions of the tetrahydrate Cr(H2O)4Cl2. Chromium(II) chloride has no commercial uses but is used on a laboratory scale for the synthesis of other chromium complexes. Synthesis CrCl2 is produced by reducing chromium(III) chloride either with hydrogen at 500 °C: 2 CrCl3 + H2 → 2 CrCl2 + 2 HCl or by electrolysis. On the laboratory scale, LiAlH4, zinc, and related reductants produce chromous chloride from chromium(III) precursors: 4 CrCl3 + LiAlH4 → 4 CrCl2 + LiCl + AlCl3 + 2 H2 2 CrCl3 + Zn → 2 CrCl2 + ZnCl2 CrCl2 can also be prepared by treating a solution of chromium(II) acetate with hydrogen chloride: Cr2(OAc)4 + 4 HCl → 2 CrCl2 + 4 AcOH Treatment of chromium powder with concentrated hydrochloric acid gives a blue hydrated chromium(II) chloride, which can be converted to a related acetonitrile complex. Cr + n H2O + 2 HCl → CrCl2(H2O)n + H2 Structure and properties Anhydrous CrCl2 is white; however, commercial samples are often grey or green. It crystallizes in the Pnnm space group, which is an orthorhombically distorted variant of the rutile structure, making it isostructural to calcium chloride. The Cr centres are octahedral, being distorted by the Jahn–Teller effect. The hydrated derivative, CrCl2(H2O)4, forms monoclinic crystals with the P21/c space group. The molecular geometry is approximately octahedral, consisting of four short Cr—O bonds (2.078 Å) arranged in a square planar configuration and two longer Cr—Cl bonds (2.758 Å) in a trans configuration. Reactions The reduction potential for Cr3+ + e− ⇄ Cr2+ is −0.41 V. Since the reduction potential of H+ to H2 in acidic conditions is +0.00 V, the chromous ion has sufficient potential to reduce acids to hydrogen, although this reaction does not occur without a catalyst. Organic chemistry Chromium(II) chloride is used as a precursor to other inorganic and organometallic chromium complexes. Alkyl halides and nitroaromatics are reduced by CrCl2. The moderate electronegativity of chromium and the range of substrates that CrCl2 can accommodate make organochromium reagents very synthetically versatile. It is a reagent in the Nozaki–Hiyama–Kishi reaction, a useful method for preparing medium-size rings. It is also used in the Takai olefination to form vinyl iodides from aldehydes in the presence of iodoform. References Chromium(II) compounds Chlorides Metal halides Reducing agents
Chromium(II) chloride
[ "Chemistry" ]
710
[ "Chlorides", "Inorganic compounds", "Redox", "Reducing agents", "Salts", "Metal halides" ]
4,169,615
https://en.wikipedia.org/wiki/Real-time%20data
Real-time data (RTD) is information that is delivered immediately after collection. There is no delay in the timeliness of the information provided. Real-time data is often used for navigation or tracking. Such data is usually processed using real-time computing, although it can also be stored for later or off-line data analysis. Real-time data is not the same as dynamic data. Real-time data can be dynamic (e.g. a variable indicating current location) or static (e.g. a fresh log entry indicating location at a specific time). In economics Real-time economic data, and other official statistics, are often based on preliminary estimates, and therefore are frequently adjusted as better estimates become available. These later adjusted data are called "revised data". The terms real-time economic data and real-time economic analysis were coined by Francis X. Diebold and Glenn D. Rudebusch. Macroeconomist Glenn D. Rudebusch defined real-time analysis as 'the use of sequential information sets that were actually available as history unfolded.' Macroeconomist Athanasios Orphanides has argued that economic policy rules may have very different effects when based on error-prone real-time data (as they inevitably are in reality) than they would if policy makers followed the same rules but had more accurate data available. In order to better understand the accuracy of economic data and its effects on economic decisions, some economic organizations, such as the Federal Reserve Bank of St. Louis, the Federal Reserve Bank of Philadelphia and the Euro-Area Business Cycle Network (EABCN), have made databases available that contain both real-time data and subsequent revised estimates of the same data. In auctions Real-time bidding is a programmatic, real-time auction process for selling digital-ad impressions. Entities on both the buying and selling sides require almost instantaneous access to data in order to make decisions, forcing real-time data to the forefront of their needs. To support these needs, new strategies and technologies, such as Druid, have arisen and are quickly evolving. See also Datafication Data mining Geographic information system Information privacy Management information system Online analytical processing Personal data service Personal Information Agent Real-time business intelligence Social information processing User activity monitoring References External links ALFRED: Archival Federal Reserve Economic Data, real-time data series at the Federal Reserve Bank of St. Louis Real-time data set for macroeconomists at the Federal Reserve Bank of Philadelphia Real-time database of the EABCN Data Data analysis Data mining Data processing Collective intelligence Information technology Real-time computing
Real-time data
[ "Technology" ]
525
[ "Information and communications technology", "Real-time computing", "Information technology", "Data" ]
4,169,622
https://en.wikipedia.org/wiki/Dynamic%20data
In data management, dynamic data or transactional data is information that is periodically updated, meaning it changes asynchronously over time as new information becomes available. The concept is important in data management, since the time scale of the data determines how it is processed and stored. Data that is not dynamic is considered either static (unchanging) or persistent, which is data that is infrequently accessed and not likely to be modified. Dynamic data is also different from streaming data, which is a constant flow of information. Dynamic data may be updated at any time, with periods of inactivity in between. Examples In enterprise data management, dynamic data is likely to be transactional, but it is not limited to financial or business transactions. It may also include engineering transactions, such as a revised schematic diagram or architectural document. In this context static data is either unchanged or so rarely changed that it can be kept in remote ("basement" or far) storage, whereas dynamic data is reused or changed frequently and therefore requires online ("office" or near) storage. An original copy of a wiring schematic will change from dynamic to static as new versions make it obsolete. It is still possible to reuse the original, but in the normal course of business there is rarely a need to access obsolete data. The current version of the wiring schematic is considered dynamic or changeable. These two different contexts for "dynamic" are similar, but differ in their time scale. Dynamic data can become static. Persistent data is unchanged, or likely to be unchanged, in the context of the execution of a program. Static data is unchanged in the context of the business's historical data, regardless of any one application or program. The "dynamic" data is the new/updated/revised/deleted data in both cases, but again over different time horizons. A paycheck stub is dynamic data for a day or a week; it then becomes read-only and rarely read, which makes it static, persistent, or both. See also Transaction data Computer data
Dynamic data
[ "Technology" ]
419
[ "Computer data", "Computer science stubs", "Computer science", "Data", "Computing stubs" ]
4,169,718
https://en.wikipedia.org/wiki/Round-trip%20translation
Round-trip translation (RTT), also known as back-and-forth translation, recursive translation and bi-directional translation, is the process of translating a word, phrase or text into another language (forward translation), then translating the result back into the original language (back translation), using machine translation (MT) software. It is often used by laypeople to evaluate a machine translation system, or to test whether a text is suitable for MT when they are unfamiliar with the target language. Because the resulting text can often differ substantially from the original, RTT can also be a source of entertainment. Software quality To compare the quality of different machine translation systems, users perform RTT and compare the resulting text to the original. The theory is that the closer the result of the RTT is to the original text, the higher the quality of the machine translation system. One of the problems with this technique is that, if there is a problem with the resulting text, it is impossible to know whether the error occurred in the forward translation, in the back translation, or in both. In addition, it is possible to get a good back translation from a bad forward translation. A study using the automatic evaluation methods BLEU and F-score compared five different free online translation programs, evaluating the quality of both the forward translation and the back translation, and found no correlation between the quality of the forward translation and the quality of the back translation (i.e., a high quality forward translation did not always correspond to a high quality back translation). The author concluded that RTT was a poor method of predicting the quality of machine translation software. This conclusion was reinforced by a more in-depth study also using automatic evaluation methods. A subsequent study which included human evaluation of the back translation in addition to automatic evaluation methods found that RTT might have some ability to predict the quality of a machine translation system not on a sentence-by-sentence basis but for larger texts. Suitability of text for machine translation It is also suggested that RTT can be used to determine whether a text is suitable for machine translation. The idea is that if RTT results in a text that is close to the original, the text is suitable for MT. If, after using RTT, the resulting text is inaccurate, the source text can then be edited until a satisfactory result is achieved. One of the studies looking at RTT as a means of measuring MT system quality also looked at its ability to predict whether a text was suitable for machine translation. It found that using different types of text also did not result in any correlation between the quality of the forward translation and the quality of the back translation. In contrast, another study using human evaluation found that there was a correlation between the quality of the forward translation and the back translation, and that this correlation could be used to estimate the quality of the forward translation and, by simplifying the source text, to improve it. Entertainment Although the use of RTT for assessing MT system quality or the suitability of a text for MT is in doubt, it is a way to have fun with machine translation. The text produced from an RTT can be comically bad. At one time websites existed for the sole purpose of performing RTT for fun. 
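Whatever it is used for, the round-trip procedure itself is simple to express in code. A minimal sketch (Python); the translator interface is a hypothetical stand-in for any MT client, and the character-level similarity ratio is a crude stand-in for the automatic metrics (BLEU, F-score) discussed above:

# Round-trip translation: translate into a pivot language and back, then
# compare the back translation with the original text.
from difflib import SequenceMatcher

def round_trip_score(text, pivot, translate, source="en"):
    """`translate(text, src, tgt)` is any machine-translation callable.
    Returns a similarity in [0, 1]; 1.0 means the round trip is lossless."""
    forward = translate(text, source, pivot)  # forward translation
    back = translate(forward, pivot, source)  # back translation
    return SequenceMatcher(None, text, back).ratio()

# Demo with an identity "translator" (a placeholder, not a real MT system):
echo = lambda text, src, tgt: text
print(round_trip_score("The spirit is willing.", "ru", echo))  # 1.0

As the studies cited above note, a high score from such a probe does not reliably indicate a good forward translation, since errors can cancel out on the return trip.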
Entertainment Although the use of RTT for assessing MT system quality or the suitability of a text for MT is in doubt, it is a way to have fun with machine translation. The text produced from an RTT can be comically bad. At one time, websites existed for the sole purpose of performing RTT for fun. Other variations send the text through several languages before translating it back into the original, or continue translating the text back and forth until it reaches equilibrium (i.e., the result of the back translation is identical to the text used for the forward translation). RTT as entertainment appeared in Philip K. Dick's novel Galactic Pot-Healer, whose main character runs book titles and sayings through RTT and then has his friends try to guess the original. The Australian television show Spicks and Specks had a contest called "Turning Japanese" which used RTT on song lyrics; contestants had to guess the title of the song from which the lyrics were taken. See also References Further reading Gaspari, F. (2006) "Look Who's Translating. Impersonations, Chinese Whispers and Fun with Machine Translation on the Internet" in Proceedings of the 11th Annual Conference of the European Association of Machine Translation Machine translation Translation Evaluation of machine translation Language games
Round-trip translation
[ "Technology" ]
868
[ "Machine translation", "Natural language and computing" ]
4,169,860
https://en.wikipedia.org/wiki/Rail%20integration%20system
A rail integration system (RIS; also called a rail accessory system (RAS), rail interface system, rail system, mount, base, gun rail, or simply a rail) is a generic term for any standardized attachment system for mounting firearm accessories via bar-like straight brackets (i.e. "rails"), often with regularly spaced slots. Rail systems are usually made of strips of metal or polymer screw-fastened onto the gun's receiver, handguard, or fore-end stock to allow variable-position attachments. An advantage of multiple rail slots is that each item's position can be adjusted to suit the user's preference, and items can be swapped between placements to accommodate the varying eye relief of different gun sights. Firearm accessories commonly compatible with or intended for rail systems include tactical lights, laser sights, vertical forward grips, telescopic sights, holographic sights, reflex sights, backup iron sights, bipods/tripods, slings, and bayonets. The common types of rail systems for firearms are the dovetail rail (including the Soviet variant known as the Warsaw Pact rail), the Weaver rail, the Picatinny rail, the SOPMOD, the KeyMod and the M-LOK. There are also various non-military designs used in shooting sports to attach slings and bipods, such as the UIT rail, Zeiss rail and Freeland rail. History Original rails were a raised metal strip with the sides undercut, less standardized than the dovetail design, allowing hardware to slide on and be secured by compression only. Design Rail systems are usually based on the handguard of a weapon and/or the upper receiver. Modern pistols usually have rail systems on the underside of the barrel. Rails on rifles usually start at top dead center ("12 o'clock"), with other common placements at the bottom 180° ("6 o'clock") and on the sides at 90° ("3 o'clock" and "9 o'clock"); some rails are also diagonal at 45° angles as opposed to 90° angles, though these are less common. There may be additional attachment rails or holes at each 45° angle position running partially or entirely the length of the handguard. On the Kalashnikov rifles, the Warsaw rail is attached to the left side of the receiver when viewed from the rear, with more modern versions adding Picatinny-style rails to the sides of the handguard for mounting additional equipment. Due to updated equipment, both styles may be found on some Warsaw Pact weapons. Modern-designed firearms often include rails made into the body, instead of as an added-on modification. Older firearms may need permanent modification, with holes drilled and tapped for screw threads to fasten the rail sections to the firearm; this is easier than milling out a dovetail slot for the placement of a gun sight's parts. Optics such as telescopic sights, reflector sights, holographic sights, red dot magnifiers, night vision sights, or thermal sights may be placed between the iron sights. Rail sections also come in various heights to help align equipment: the original iron sights may line up with an illuminated optic's center dot, ring or chevron (absolute co-witness) or sit below it (lower-1/3 co-witness). In addition to height variations, some rail brackets may be offset at various angles.
Offsets of 22.5°, 45°, and 90° are the most common; they place accessories and/or folding backup iron sights out of the line of sight along the top of the firearm and/or reduce the weapon's outer profile. The original sights then serve as a backup should the electronic optic fail. A rail section may also move weapon-mounted lights forward so the beam does not reflect off the firearm itself and cast shadows. The amount of rail space allows adjustment and personal optimization of each attached device and tool. As designs have advanced, the amount of rail space offered has come to exceed the actual need for placement space; thus, rail covers and protectors may be added to prevent snagging on gear and/or plant foliage. Future rail systems may have the option of carrying batteries or other electrical systems to supply the growing number of electronics mounted to aid the shooter. Standards are still being determined for these types of systems. An example is the NATO Accessory Rail standard, a continued improvement and standardization of the Picatinny rail. Types Most RIS equipment is compatible with one or more of the most common rail systems, all of which are broadly similar: Dovetail rail: one of the earliest rail systems, relies primarily on friction from the side unit set screw on the mounted accessory to stop longitudinal shifting Warsaw Pact rail: a Soviet-designed dovetail rail variant with cut-outs that allow quick side-mounting of optics (e.g. PSO-1 and USP-1) on Dragunov sniper rifles, RPG-7 and RPG-29 grenade launchers, as well as some versions of AKM and AK-74 assault rifles and PK family machine guns. Weaver rail: an early improvement design upon the dovetail rail, invented by William Ralph Weaver (1905–1975). This system is still popular in the civilian market. Picatinny rail: the mil-spec standardized rail system evolved from the Weaver rail. Also known as MIL-STD-1913, Picatinny rails date from the mid-1990s and have very strict dimensions and tolerance standards. The Picatinny has a rail of a very similar profile to the Weaver, but the slot width is 0.206 in (5.23 mm), and the spacing of slot centers is consistent at 0.394 in (10.01 mm). Many rail-grabber-mounted accessories can be used on either type of rail. The Weaver's locking slots are narrower, at 0.180 in (4.57 mm), and their spacing is not standardized. Due to this, with devices that use only one locking slot, Weaver devices will fit on Picatinny rails, but Picatinny devices will not always fit on Weaver rails. NATO Accessory Rail: a metric standardized upgrade from the Picatinny rail. KeyMod: open source "negative space" (hollow slot) design introduced by VLTOR to replace the Picatinny rail for mounting accessories (except for scope mounts). M-LOK: a free licensed "negative space" design introduced by Magpul Industries to compete with KeyMod. These systems are used primarily in the military and by firearm enthusiasts to improve the usability of the weapon, allowing it to be accessorized quickly and efficiently without requiring the operator to field-strip it. Basic systems such as small rails (20 mm is standard) with holes machined in them to be screwed onto the existing handguard of a rifle can cost as little as US $25 to US $40. More advanced systems allow for numerous accessories to be mounted simultaneously and can cost upwards of US $200.
Compatibility Adapters to other types of rail interfaces may be used for legacy compatibility and/or to change the surface texture, abrasiveness, and/or overall outer circumference of the rail system for a better fit in the hand. Dovetail, Weaver, and Picatinny are all outward, raised attachment surfaces, while M-LOK and KeyMod present smooth surfaces with different standards and styles of holes cut into their assemblies, placing the attachment hardware internally. Both of these styles are often used in handguards. All make the mounting and dismounting of accessories significantly easier. Items may be fastened by threaded bolts, requiring the use of a screwdriver or Allen wrench. Some tool-free variations use thumb screws or thumb nuts, or a threaded quick-disconnect lever that pulls the hardware and plates together against the rails. During firearm recoil, an accessory may slide within its section of the rail. To avoid this, when tightening, slide the device forward in the placement slots and ensure that the bolt is positioned against the vertical, forward face of the rail slot. Adoption Though not particularly common on firearms until the late 20th century, most modern firearms in military service and the civilian market have rail integration systems that may replace original parts. The prevalence of rails on modern firearms compared to past designs is largely owed to the increasing popularity and availability of attachments such as sights. The most common weapons to have rails are individual firearms, particularly long guns and service rifles such as the rifle, carbine, submachine gun, personal defense weapon, shotgun, designated marksman rifle, sniper rifle, and squad automatic weapon, though some larger or crew-served weapons such as the heavy machine gun, anti-materiel rifle, and rocket launcher have been designed or refreshed to include rails for compatibility. Even ranged weapons that are not firearms, such as bows, crossbows, airsoft guns, and paintball markers, may carry rails. Heavy machine guns have started to include rail sections and options for attaching optics. Civilian clone rifles are the main weapons to adopt this, while crossbows, hunting rifles, shotguns, and handguns may be produced with rail sections either attached and/or made structurally part of the firearm. Airsoft and paintball clone weapons may also have rails. See also Sling (firearms) Zeiss rail Notes References Magpul Industries - M-LOK DESCRIPTION AND FAQ DOCUMENT KeyMod vs. M-Lok: The Next AR Rail Standard by Chris Baker, November 19, 2014 KeyMod vs. M-LOK Modular Rail System Comparison, Presented by Caleb McGee, Naval Special Warfare Center Crane Division, 4 May 2017 M-LOK vs KeyMod comparison 2017 MLok and KeyMod Comparison 3 years later 2017 External links KeyMod vs. M-LOK comparison Firearm components Mechanical standards
Rail integration system
[ "Technology", "Engineering" ]
2,050
[ "Firearm components", "Mechanical standards", "Components", "Mechanical engineering" ]
4,170,290
https://en.wikipedia.org/wiki/Cell%20cortex
The cell cortex, also known as the actin cortex, cortical cytoskeleton or actomyosin cortex, is a specialized layer of cytoplasmic proteins on the inner face of the cell membrane. It functions as a modulator of membrane behavior and cell surface properties. In most eukaryotic cells lacking a cell wall, the cortex is an actin-rich network consisting of F-actin filaments, myosin motors, and actin-binding proteins. The actomyosin cortex is attached to the cell membrane via membrane-anchoring proteins called ERM proteins, and it plays a central role in cell shape control. The protein constituents of the cortex undergo rapid turnover, making the cortex both mechanically rigid and highly plastic, two properties essential to its function. In most cases, the cortex is in the range of 100 to 1000 nanometers thick. In some animal cells, the protein spectrin may be present in the cortex. Spectrin helps to create a network of cross-linked actin filaments. The proportions of spectrin and actin vary with cell type. Spectrin proteins and actin microfilaments are attached to transmembrane proteins by intervening attachment proteins. The cell cortex is attached to the inner cytosolic face of the plasma membrane, where the spectrin proteins and actin microfilaments form a mesh-like structure that is continuously remodeled by polymerization, depolymerization and branching. Many proteins are involved in cortex regulation and dynamics, including formins, with roles in actin polymerization, Arp2/3 complexes that give rise to actin branching, and capping proteins. Due to the branching process and the density of the actin cortex, the cortical cytoskeleton can form a highly complex meshwork, in some cases resembling a fractal structure. Specialized cells are usually characterized by a very specific cortical actin cytoskeleton. For example, in red blood cells, the cell cortex consists of a two-dimensional cross-linked elastic network with pentagonal or hexagonal symmetry, tethered to the plasma membrane and formed primarily by spectrin, actin and ankyrin. In neuronal axons, the actin and spectrin cytoskeleton forms an array of periodic rings, and in the sperm flagellum it forms a helical structure. In plant cells, the cell cortex is reinforced by cortical microtubules underlying the plasma membrane. The direction of these cortical microtubules determines which way the cell elongates when it grows. Functions The cortex mainly functions to produce tension under the cell membrane, allowing the cell to change shape. This is primarily accomplished through myosin II motors, which pull on the filaments to generate stress. These changes in tension are required for the cell to change its shape as it undergoes cell migration and cell division. In mitosis, F-actin and myosin II form a highly contractile and uniform cortex to drive mitotic cell rounding. The surface tension produced by the actomyosin cortex activity generates intracellular hydrostatic pressure capable of displacing surrounding objects to facilitate rounding. Thus, the cell cortex serves to protect the microtubule spindle from external mechanical disruption during mitosis. When external forces are applied to a mitotic cell at a sufficiently large rate and magnitude, loss of cortical F-actin homogeneity occurs, leading to herniation of blebs and a temporary loss of the ability to protect the mitotic spindle.
Genetic studies have shown that the cell cortex in mitosis is regulated by diverse genes such as RhoA, WDR1, ERM proteins, Ect2, Pbl, Cdc42, Par6, DJ-1 and FAM134A. In cytokinesis, the cell cortex plays a central role by producing a myosin-rich contractile ring that constricts the dividing cell into two daughter cells. Cell cortex contractility is key for amoeboid-type cell migration, characteristic of many cancer-cell metastasis events. In addition, the cell cortex plays essential roles in the formation of tissues, organs and organisms. By pulling on adhesion complexes, the cortex promotes the expansion of contacts with other cells or with the extracellular matrix. Notably, during early mammalian development, the cortex pulls cells together to drive compaction and the formation of the morula. Also, differences in cortical tension drive the sorting of the inner cell mass and trophectoderm progenitors during the formation of the morula, the sorting of germ layer progenitors during zebrafish gastrulation, the invagination of the mesoderm, and the elongation of the germ band during Drosophila gastrulation. Research Basic research into the cell cortex is done with immortalised cell lines, typically HeLa cells, S2 cells, normal rat kidney cells, and M2 cells. In M2 cells in particular, cellular blebs – which form without a cortex, then form one as they retract – are often used to model cortex formation and composition. References Further reading Cytoskeleton Cell biology
Cell cortex
[ "Biology" ]
1,091
[ "Cell biology" ]
4,170,346
https://en.wikipedia.org/wiki/Bezold%E2%80%93Br%C3%BCcke%20shift
The Bezold–Brücke shift or luminance-on-hue effect is a change in hue perception as light intensity changes. As intensity increases, spectral colors shift more towards blue (if below 500 nm) or yellow (if above 500 nm). At lower intensities, the red/green axis dominates; this means that reds become more yellow with increasing brightness. A light may thus change in perceived hue as its brightness changes, despite retaining a constant spectral composition. The effect was discovered by Wilhelm von Bezold and M. E. Brücke. The hue of most colors shifts materially as the intensity of the stimulus is increased, except for certain invariant hues (approximating the psychologically primary hues), which remain stable. Both Bezold and Brücke worked on the Bezold–Brücke effect and made important contributions to the study of optical illusions. This effect is a problem for simple HSV-style color models, which treat hue and intensity as independent parameters. In contrast, color appearance models try to factor in this effect. The shift in hue is also accompanied by changes in perceived saturation. As the brightness of a color stimulus increases, its perceived color strength also increases to a maximum point and then decreases again, in a wavelength-specific way. This can, to an extent, be considered an inverse of the Helmholtz–Kohlrausch effect, in which a more saturated stimulus is seen as brighter than a desaturated or achromatic stimulus of the same luminance. See also Opponent process Purkinje shift Abney effect Bibliography W. von Bezold: Die Farbenlehre in Hinblick auf Kunst und Kunstgewerbe. Braunschweig 1874. Full text scan "Über das Gesetz der Farbenmischung und die physiologischen Grundfarben", Annalen der Physiologischen Chemie, 1873, 226: 221–247. M. E. Brücke, “Über einige Empfindungen im Gebiet der Sehnerven,” Sitz. Ber. d. K. K. Akad. d. Wissensch. Math. Nat. Wiss. 1878, 77:39–71. References Color appearance phenomena
Bezold–Brücke shift
[ "Physics" ]
501
[ "Optical phenomena", "Physical phenomena", "Color appearance phenomena" ]
4,170,806
https://en.wikipedia.org/wiki/Corrosion%20fatigue
Corrosion fatigue is fatigue in a corrosive environment. It is the mechanical degradation of a material under the joint action of corrosion and cyclic loading. Nearly all engineering structures experience some form of alternating stress, and are exposed to harmful environments during their service life. The environment plays a significant role in the fatigue of high-strength structural materials like steel, aluminum alloys and titanium alloys. Materials with high specific strength are being developed to meet the requirements of advancing technology. However, their usefulness depends to a large extent on the degree to which they resist corrosion fatigue. The effects of corrosive environments on the fatigue behavior of metals were studied as early as 1930. The phenomenon should not be confused with stress corrosion cracking, where corrosion (such as pitting) leads to the development of brittle cracks, growth and failure. The only requirement for corrosion fatigue is that the sample be under cyclic tensile stress. Effect of corrosion on S-N diagram The effect of corrosion on a smooth-specimen S-N diagram is shown schematically on the right. Curve A shows the fatigue behavior of a material tested in air. A fatigue threshold (or limit) is seen in curve A, corresponding to the horizontal part of the curve. Curves B and C represent the fatigue behavior of the same material in two corrosive environments. In curve B, fatigue failure at high stress levels is retarded, and the fatigue limit is eliminated. In curve C, the whole curve is shifted to the left; this indicates a general lowering in fatigue strength, accelerated initiation at higher stresses and elimination of the fatigue limit. To meet the needs of advancing technology, higher-strength materials are developed through heat treatment or alloying. Such high-strength materials generally exhibit higher fatigue limits, and can be used at higher service stress levels even under fatigue loading. However, the presence of a corrosive environment during fatigue loading eliminates this stress advantage, since the fatigue limit becomes almost insensitive to the strength level for a particular group of alloys. This effect is schematically shown for several steels in the diagram on the left, which illustrates the debilitating effect of a corrosive environment on the functionality of high-strength materials under fatigue. Corrosion fatigue in aqueous media is an electrochemical behavior. Fractures are initiated either by pitting or by persistent slip bands. Corrosion fatigue may be reduced by alloy additions, inhibition and cathodic protection, all of which reduce pitting. Since corrosion-fatigue cracks initiate at a metal's surface, surface treatments like plating, cladding, nitriding and shot peening have been found to improve the material's resistance to this phenomenon. Crack-propagation studies in corrosion fatigue In normal fatigue testing of smooth specimens, about 90 percent of the fatigue life is spent in crack nucleation and only the remaining 10 percent in crack propagation. However, in corrosion fatigue, crack nucleation is facilitated by corrosion; typically, about 10 percent of life is sufficient for this stage. The rest (90 percent) of life is spent in crack propagation. Thus, it is more useful to evaluate crack-propagation behavior during corrosion fatigue. Fracture mechanics uses pre-cracked specimens, effectively measuring crack-propagation behavior.
For this reason, emphasis is given to crack-propagation velocity measurements (using fracture mechanics) to study corrosion fatigue. Since a fatigue crack grows in a stable fashion below the critical stress-intensity factor for fracture (the fracture toughness), the process is called sub-critical crack growth. The diagram on the right shows typical fatigue-crack-growth behavior. In this log-log plot, the crack-propagation velocity is plotted against the applied stress-intensity range. Generally there is a threshold stress-intensity range, below which the crack-propagation velocity is insignificant. Three stages may be visualized in this plot. Near the threshold, crack-propagation velocity increases with increasing stress-intensity range. In the second region, the curve is nearly linear and follows Paris' law, da/dN = C(ΔK)^m, where C and m are constants that depend on the material and the environment; in the third region, crack-propagation velocity increases rapidly as the stress-intensity range approaches the fracture-toughness value, at which fracture occurs. Crack propagation under corrosion fatigue may be classified as a) true corrosion fatigue, b) stress-corrosion fatigue, or c) a combination of the two. True corrosion fatigue In true corrosion fatigue, the fatigue-crack-growth rate is enhanced by corrosion; this effect is seen in all three regions of the fatigue-crack growth-rate diagram. The diagram on the left is a schematic of crack-growth rate under true corrosion fatigue; the curve shifts to a lower stress-intensity-factor range in the corrosive environment. The threshold is lower (and the crack-growth velocities higher) at all stress-intensity factors. Specimen fracture occurs when the stress-intensity-factor range is equal to the applicable threshold-stress-intensity factor for stress-corrosion cracking. When attempting to analyze the effects of corrosion fatigue on crack growth in a particular environment, both corrosion type and fatigue load levels affect crack growth in varying degrees. Common types of corrosion include filiform, pitting, exfoliation, and intergranular; each will affect crack growth in a particular material in a distinct way. For instance, pitting is often the most damaging type of corrosion, degrading a material's performance (by increasing the crack-growth rate) more than any other kind; even pits on the order of a material's grain size may substantially degrade a material. The degree to which corrosion affects crack-growth rates also depends on fatigue-load levels; for instance, corrosion can cause a greater increase in crack-growth rates at low loads than at high loads. Stress-corrosion fatigue In materials where the maximum applied stress-intensity factor exceeds the stress-corrosion cracking threshold value, stress corrosion adds to the crack-growth velocity. This is shown in the schematic on the right. In a corrosive environment, the crack grows due to cyclic loading at a lower stress-intensity range; above the threshold stress intensity for stress corrosion cracking, additional crack growth (the red line) occurs due to SCC. The lower stress-intensity regions are not affected, and the threshold stress-intensity range for fatigue-crack propagation is unchanged in the corrosive environment. In the most general case, corrosion-fatigue crack growth may exhibit both of the above effects; crack-growth behavior is represented in the schematic on the left.
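The region-II relationship lends itself to a quick numerical life estimate. Below is a minimal sketch that integrates Paris' law; the constants C, m and the geometry factor Y are illustrative assumptions (they are material-, geometry- and environment-specific, and a corrosive environment typically raises the effective growth rate), not data for any real alloy.

```python
import math

# Illustrative Paris-law constants, da/dN = C * (dK)^m -- assumptions only.
C = 1e-11   # m/cycle per (MPa*sqrt(m))^m, assumed
m = 3.0     # Paris exponent, assumed
Y = 1.0     # geometry factor, assumed constant here

def cycles_to_grow(a0, af, delta_sigma, steps=100_000):
    """Estimate cycles for a crack to grow from a0 to af (metres)
    under a stress range delta_sigma (MPa), by numerically integrating
    dN = da / (C * dK^m) with dK = Y * delta_sigma * sqrt(pi * a)."""
    n, a = 0.0, a0
    da = (af - a0) / steps
    for _ in range(steps):
        delta_k = Y * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        n += da / (C * delta_k ** m)
        a += da
    return n

# Example: crack growing from 1 mm to 10 mm under a 100 MPa stress range
print(f"{cycles_to_grow(1e-3, 1e-2, 100):.3g} cycles")
```

In this framework, a corrosive environment shows up as a larger effective C, a lower threshold, or both, which is why crack-growth testing in the service environment matters.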
See also Corrosion Cyclic corrosion testing Metal Fatigue Stress corrosion cracking Stress References Structural engineering Corrosion Materials degradation Fracture mechanics
Corrosion fatigue
[ "Chemistry", "Materials_science", "Engineering" ]
1,337
[ "Structural engineering", "Fracture mechanics", "Metallurgy", "Materials science", "Corrosion", "Construction", "Electrochemistry", "Civil engineering", "Materials degradation" ]
4,171,106
https://en.wikipedia.org/wiki/Sessho-seki
The Sessho-seki, or "Killing Stone", is a stone in the volcanic mountains of Nasu, an area of Tochigi Prefecture, Japan, that is famous for sulphurous hot springs. In Japanese mythology, the stone is said to kill anyone who comes into contact with it. In Japan, rocks and large stones in areas where toxic volcanic gases are emitted are often named Sessho-seki (殺生石), meaning "Killing Stone"; the best known of these is the stone associated with the legend of Tamamo-no-Mae and the nine-tailed fox. Legend The stone is believed to be the transformed corpse of Tamamo-no-Mae, a beautiful woman who was exposed as a nine-tailed fox working for an evil daimyō plotting to kill Emperor Konoe and take his throne. According to the otogi-zōshi, when the nine-tailed fox was killed by the famous warrior named Miura-no-suke, her body became the Sessho-seki. Later, a Buddhist priest called Genno stopped for a rest near the stone and was threatened by the spirit of Tamamo-no-Mae. Genno performed exorcism rituals and begged the spirit to consider her salvation. Tamamo-no-Mae relented and swore never to haunt the stone again. Split It was reported on March 5, 2022 that the stone had split into two parts, likely as a result of natural weathering. Some people expressed fear that the exorcised kitsune had been released. On March 26, 2022, the local government had priests host a ceremony at the site to appease the spirit and pacify the beast, with prayers, offerings, and the waving of haraegushi over the split rock. In literature A Noh play about the stone is attributed to Hiyoshi Sa'ami. Matsuo Bashō visited the stone in the 17th century and tells of his visit in Oku no Hosomichi (The Narrow Road to the Deep North). Tamamo-no-Mae, a novel by Kido Okamoto, was based on the legend of the stone; a film adaptation, Kyuubi no Kitsune to Tobimaru (Sesshouseki), followed. In chapter 123 of the manga Bakidou, the author used the story of the rock breaking to show the kick power of one of the characters. Gallery See also List of individual rocks References Japanese mythology Mythological objects Noh plays Stones
Sessho-seki
[ "Physics" ]
514
[ "Stones", "Physical objects", "Matter" ]
4,171,333
https://en.wikipedia.org/wiki/Hyperboloid%20structure
Hyperboloid structures are architectural structures designed using a hyperboloid of one sheet. Often these are tall structures, such as towers, where the hyperboloid geometry's structural strength is used to support an object high above the ground. Hyperboloid geometry is often used for decorative effect as well as structural economy. The first hyperboloid structures were built by Russian engineer Vladimir Shukhov (1853–1939), including the Shukhov Tower in Polibino, Dankovsky District, Lipetsk Oblast, Russia. Properties Hyperboloid structures have negative Gaussian curvature, meaning the surface curves in opposite directions along its two principal directions (a saddle shape) rather than bulging outward like a sphere. As doubly ruled surfaces, they can be made with a lattice of straight beams, hence are easier to build than curved surfaces that do not have a ruling and must instead be built with curved beams. Hyperboloid structures are superior in stability against outside forces compared with "straight" buildings, but have shapes often creating large amounts of unusable volume (low space efficiency). Hence they are more commonly used in purpose-driven structures, such as water towers (to support a large mass), cooling towers, and aesthetic features. A hyperbolic structure is beneficial for cooling towers. At the bottom, the widening of the tower provides a large area for installation of fill to promote thin-film evaporative cooling of the circulated water. As the water first evaporates and rises, the narrowing effect helps accelerate the laminar flow, and then as it widens out, contact between the heated air and atmospheric air supports turbulent mixing. Work of Shukhov In the 1880s, Shukhov began to work on the problem of the design of roof systems to use a minimum of materials, time and labor. His calculations were most likely derived from mathematician Pafnuty Chebyshev's work on the theory of best approximations of functions. Shukhov's mathematical explorations of efficient roof structures led to his invention of a new system that was innovative both structurally and spatially. By applying his analytical skills to the doubly curved surfaces Nikolai Lobachevsky named "hyperbolic", Shukhov derived a family of equations that led to new structural and constructional systems, known as hyperboloids of revolution and hyperbolic paraboloids. The steel gridshells of the exhibition pavilions of the 1896 All-Russian Industrial and Handicrafts Exposition in Nizhny Novgorod were the first publicly prominent examples of Shukhov's new system. Two pavilions of this type were built for the Nizhni Novgorod exposition, one oval in plan and one circular. The roofs of these pavilions were doubly curved gridshells formed entirely of a lattice of straight angle-iron and flat iron bars. Shukhov himself called them azhurnaia bashnia ("lace tower", i.e., lattice tower). The patent of this system, for which Shukhov applied in 1895, was awarded in 1899. Shukhov also turned his attention to the development of an efficient and easily constructed structural system (gridshell) for a tower carrying a large load at the top, the problem of the water tower. His solution was inspired by observing the action of a woven basket supporting a heavy weight. Again, it took the form of a doubly curved surface constructed of a light network of straight iron bars and angle iron. Over the next 20 years, he designed and built nearly 200 of these towers, no two exactly alike, most with heights in the range of 12 m to 68 m.
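The doubly ruled geometry underlying these lattice towers is easy to compute with: every straight member lies entirely in the surface x²/a² + y²/a² − z²/c² = 1. The minimal sketch below uses the standard ruling parametrization to list member endpoints for both families; the dimensions and member counts are arbitrary illustrative choices, not taken from any Shukhov design.

```python
import math

def lattice_members(a=5.0, c=10.0, n=24, z_bottom=-10.0, z_top=10.0):
    """Endpoint pairs for the straight members of a hyperboloid lattice.

    For each angle u, the two ruling families of the one-sheet hyperboloid
    x^2/a^2 + y^2/a^2 - z^2/c^2 = 1 are the straight lines
        ( a*(cos u - s*t*sin u), a*(sin u + s*t*cos u), c*t ),  s = +1 or -1,
    parametrized by t.  Substituting back in gives (1 + t^2) - t^2 = 1,
    so every point of every member lies exactly on the surface.
    """
    members = []
    for k in range(n):
        u = 2 * math.pi * k / n
        for s in (+1, -1):                       # the two ruling families
            def point(t, u=u, s=s):
                return (a * (math.cos(u) - s * t * math.sin(u)),
                        a * (math.sin(u) + s * t * math.cos(u)),
                        c * t)
            members.append((point(z_bottom / c), point(z_top / c)))
    return members

print(len(lattice_members()))  # 48 straight members (24 per family)
```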
At least as early as 1911, Shukhov began experimenting with the concept of forming a tower out of stacked sections of hyperboloids. Stacking the sections permitted the form of the tower to taper more at the top, with a less pronounced "waist" between the shape-defining rings at bottom and top. Increasing the number of sections would increase the tapering of the overall form, to the point that it began to resemble a cone. By 1918 Shukhov had developed this concept into the design of a nine-section stacked hyperboloid radio broadcasting tower in Moscow. Shukhov designed a 350m tower, which would have surpassed the Eiffel Tower in height by 50m, while using less than a quarter of the amount of material. His design, as well as the full set of supporting calculations analyzing the hyperbolic geometry and sizing the network of members, was completed by February 1919. However, the 2200 tons of steel required to build the tower to 350m were not available. In July 1919, Lenin decreed that the tower should be built to a height of 150m, and the necessary steel was to be made available from the army's supplies. Construction of the smaller tower with six stacked hyperboloids began within a few months, and Shukhov Tower was completed by March 1922. Other architects Antoni Gaudi and Shukhov carried out experiments with hyperboloid structures nearly simultaneously, but independently, in 1880–1895. Antoni Gaudi used structures in the form of hyperbolic paraboloid (hypar) and hyperboloid of revolution in the Sagrada Família in 1910. In the Sagrada Família, there are a few places on the nativity facade – a design not equated with Gaudi's ruled-surface design, where the hyperboloid crops up. All around the scene with the pelican, there are numerous examples (including the basket held by one of the figures). There is a hyperboloid adding structural stability to the cypress tree (by connecting it to the bridge). The "bishop's mitre" spires are capped with hyperboloids. In the Palau Güell, there is one set of interior columns along the main facade with hyperbolic capitals. The crown of the famous parabolic vault is a hyperboloid. The vault of one of the stables at the Church of Colònia Güell is a hyperboloid. There is a unique column in the Park Güell that is a hyperboloid. The famous Spanish engineer and architect Eduardo Torroja designed a thin-shell water tower in Fedala and the roof of Hipódromo de la Zarzuela in the form of hyperboloid of revolution. Le Corbusier and Félix Candela used hyperboloid structures (hypar). A hyperboloid cooling tower by Frederik van Iterson and Gerard Kuypers was patented in the Netherlands on August 16, 1916. The first Van Iterson cooling tower was built and put to use at the Dutch State Mine (DSM) Emma in 1918. A whole series of the same and later designs would follow. The Georgia Dome (1992) was the first Hypar-Tensegrity dome to be built. Gallery See also Geodesic dome Lattice mast List of thin-shell structures Sam Scorer Tensile structure World's first hyperboloid structure Notes References "The Nijni-Novgorod exhibition: Water tower, room under construction, springing of 91 feet span", "The Engineer", № 19.3.1897, pp. 292–294, London, 1897. William Craft Brumfield, "The Origins of Modernism in Russian Architecture", University of California Press, 1991, . 
Elizabeth Cooper English: “Arkhitektura i mnimosti”: The origins of Soviet avant-garde rationalist architecture in the Russian mystical-philosophical and mathematical intellectual tradition”, a dissertation in architecture, 264p., University of Pennsylvania, 2000. "Vladimir G. Suchov 1853–1939. Die Kunst der sparsamen Konstruktion.", Rainer Graefe, Jos Tomlow und andere, 192 pp., Deutsche Verlags-Anstalt, Stuttgart, 1990, . External links The research of the Shukhov's World's First Hyperboloid structure, Prof. Dr. Armin Grün International campaign to save the Shukhov Tower Anticlastic hyperboloid shells Shells: Hyperbolic paraboloids (hypar) Hyperbolic Paraboloids & Concrete Shells Special Structures Rainer Graefe: “Vladimir G. Šuchov 1853–1939 – Die Kunst der sparsamen Konstruktion.”, Geometric shapes Structural system Russian inventions
Hyperboloid structure
[ "Mathematics", "Technology", "Engineering" ]
1,691
[ "Structural engineering", "Geometric shapes", "Building engineering", "Mathematical objects", "Hyperboloid structures", "Structural system", "Geometric objects" ]
4,171,640
https://en.wikipedia.org/wiki/A%20Man%20on%20the%20Moon
A Man on the Moon: The Voyages of the Apollo Astronauts is a 1994 book by Andrew Chaikin. It describes the 1968–1972 voyages of the Apollo program astronauts in detail, from Apollo 8 through Apollo 17. "A decade in the making, this book is based on hundreds of hours of in-depth interviews with each of the twenty-four moon voyagers, as well as those who contributed their brain power, training and teamwork on Earth." This book formed the basis of the 1998 television miniseries From the Earth to the Moon. It was released in paperback in 2007 by Penguin Books. See also First Man: The Life of Neil A. Armstrong Carrying the Fire, the autobiography of Gemini 10 and Apollo 11 astronaut Michael Collins One Giant Leap, a 2019 book Moon Shot: The Inside Story of America's Race to the Moon References 1994 non-fiction books Spaceflight books Books about the Apollo program
A Man on the Moon
[ "Astronomy" ]
185
[ "Outer space stubs", "Astronomy book stubs", "Outer space", "Astronomy stubs" ]
4,171,659
https://en.wikipedia.org/wiki/Lug%20nut
A lug nut or wheel nut is a fastener, specifically a nut, used to secure a wheel on a vehicle. Typically, lug nuts are found on automobiles, trucks (lorries), and other large vehicles using rubber tires. Design A lug nut is a nut fastener with one rounded or conical (tapered) end, used on steel and most aluminum wheels. A set of lug nuts is typically used to secure a wheel to threaded wheel studs and thereby to a vehicle's axles. Some designs (Audi, BMW, Mercedes-Benz, Saab, Volkswagen) use lug bolts or wheel bolts instead of nuts, which screw into a tapped (threaded) hole in the wheel's hub or brake drum or brake disc. The conical lug's taper is normally 60 degrees (although 45 degrees is common for wheels designed for racing applications), and is designed to help center the wheel accurately on the axle, and to reduce the tendency for the nut to loosen due to fretting-induced precession as the car is driven. One popular alternative to the conical lug seating design is the rounded, hemispherical, or ball seat. Automotive manufacturers such as Audi, BMW, and Honda use this design rather than a tapered seat, but the nut performs the same function. Older-style (non-ferrous) alloy wheels use nuts with a cylindrical shank slipping into the wheel to center it and a washer that applies pressure to clamp the wheel to the axle. Wheel lug nuts may have different shapes. Aftermarket alloy and forged wheels often require specific lug nuts to match their mounting holes, so it is often necessary to get a new set of lug nuts when the wheels are changed. There are four common lug nut types: cone seat, bulge cone seat, under hub cap, and spline drive. The lug nut thread type varies between car brands and models. Examples of commonly used metric threads include: M10×1.25 mm M12 (1.25, 1.5 or 1.75 mm thread pitch, with M12×1.5 mm being the most common) M14 (1.25, 1.5 or 2 mm pitch, with M14×1.5 mm being the most common) M16×1.5 mm Some older American cars use inch threads, for example 7/16″-20 (11.1 mm), 1/2″-20 (12.7 mm), or 9/16″-20 (14.3 mm). Removal and installation Lug nuts may be removed using a lug wrench, socket wrench, or impact wrench. If the wheel is to be removed, an automotive jack to raise the vehicle and wheel chocks are used as well. Wheels that have hubcaps or wheel covers need these removed beforehand, typically with a screwdriver, flatbar, or prybar. Lug nuts can be difficult to remove, as they may become frozen to the wheel stud. In such cases a breaker bar or repeated blows from an impact wrench can be used to free them. Alternating between tightening and loosening can free especially stubborn lug nuts. Lug nuts must be installed in an alternating pattern, commonly referred to as a star or criss-cross pattern (see the sketch below). This ensures a uniform distribution of load across the wheel mounting surface. When installing lug nuts, it is recommended to tighten them with a calibrated torque wrench. While a lug, socket, or impact wrench may be used to tighten lug nuts, the final tightening should be performed with a torque wrench, ensuring an accurate and adequate load is applied. Torque specifications vary by vehicle and wheel type. Both vehicle and wheel manufacturers provide recommended torque values, which should be consulted when an installation is done. Failure to abide by the recommended torque value can result in damage to the wheel and brake rotor/drum. Additionally, under-tightened lug nuts may come loose over time.
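The star pattern can be described algorithmically: each successive nut should lie roughly across the wheel from the previous one. The following is a small illustrative sketch of one common convention (for an actual vehicle, the service manual's specified sequence takes precedence):

```python
def star_order(n):
    """Return a criss-cross ("star") tightening order for n lug nuts,
    numbered 0..n-1 around the wheel."""
    if n % 2 == 1:
        # Odd counts: stepping by 2 around the circle visits every lug
        # and always jumps across the wheel (the classic 5-lug star).
        return [(2 * i) % n for i in range(n)]
    # Even counts: alternate between opposite pairs, working around the wheel.
    order = []
    for i in range(n // 2):
        order += [i, i + n // 2]
    return order

print(star_order(5))  # [0, 2, 4, 1, 3]    -> the familiar 5-lug star
print(star_order(6))  # [0, 3, 1, 4, 2, 5] -> opposite pairs
```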
The tool size needed for removal and installation depends on the type of lug nut. The three most common hex sizes for lug nuts are 17 mm, 19 mm, and 21 mm, while 22 mm, 23 mm, 11/16 inch (17.5 mm), and 13/16 inch (20.6 mm) are less commonly used. Detecting loose nuts In order to allow early detection of loose lug nuts, some large vehicles are fitted with loose wheel nut indicators. The indicator spins with the nut, so that loosening can be detected with a visual inspection. Anti-theft nuts or bolts In countries where the theft of alloy wheels is a serious problem, locking nuts (or bolts, as applicable) are available — or already fitted by the vehicle manufacturer — which require a special adaptor ("key") between the nut and the wrench to fit and remove. The key is normally unique to each set of nuts. Only one locking nut per wheel is normally used, so they are sold in sets of four. Most designs can be defeated using a hardened removal tool which uses a left-hand self-cutting thread to grip the locking nut, although more advanced designs have a spinning outer ring to frustrate such techniques. An older removal technique was simply to hammer a slightly smaller socket over the locking wheel nut; with the newer designs of locking wheel nuts this is no longer possible. Removal now requires special equipment that is not available to the general public, which helps prevent thieves from obtaining the tools to remove the lock nuts themselves. History In the United States, vehicles manufactured prior to 1975 by the Chrysler Corporation used left-hand and right-hand screw threads for different sides of the vehicle to prevent loosening. Most Buicks, Pontiacs, and Oldsmobiles used both left-handed and right-handed lug nuts prior to model year 1965. It was later realized that the taper seat performed the same function. Most modern vehicles use right-hand threads on all wheels. See also Center cap Wheel sizing References External links Nuts (hardware) Vehicle parts
Lug nut
[ "Technology" ]
1,230
[ "Vehicle parts", "Components" ]
4,171,738
https://en.wikipedia.org/wiki/Silver%28I%2CIII%29%20oxide
Silver(I,III) oxide or tetrasilver tetroxide is the inorganic compound with the formula Ag4O4. It is a component of silver-zinc batteries. It can be prepared by the slow addition of a silver(I) salt to a persulfate solution, e.g. AgNO3 to a Na2S2O8 solution. It adopts an unusual structure, being a mixed-valence compound. It is a dark brown solid that decomposes with evolution of O2 in water. It dissolves in concentrated nitric acid to give brown solutions containing the Ag2+ ion. Structure Although its empirical formula, AgO, suggests that the compound contains silver in the +2 oxidation state, each Ag4O4 unit has two monovalent silver atoms bonded to one oxygen atom and two trivalent silver atoms bonded to three oxygen atoms, and the compound is in fact diamagnetic. X-ray diffraction studies show that the silver atoms adopt two different coordination environments, one having two collinear oxide neighbours and the other four coplanar oxide neighbours. Tetrasilver tetroxide is therefore formulated as Ag(I)Ag(III)O2 or Ag2O·Ag2O3. It has previously been called silver peroxide, which is incorrect since it does not contain the peroxide ion, O2^2−. Uses Tetrasilver tetroxide has been marketed under the trade name "Tetrasil". In 2010, the FDA issued a warning letter to an American company concerning the firm's marketing of Tetrasil and Genisil, ointments of tetrasilver tetroxide, for herpes and similar conditions. References Silver compounds Mixed valence compounds Transition metal oxides
Silver(I,III) oxide
[ "Chemistry" ]
358
[ "Mixed valence compounds", "Inorganic compounds" ]
4,171,813
https://en.wikipedia.org/wiki/Solar-powered%20pump
Solar-powered pumps run on electricity generated by photovoltaic (PV) panels, or on the radiated thermal energy available from collected sunlight, as opposed to water pumps run on grid electricity or diesel. Generally, solar-powered pumps consist of a solar panel array, solar charge controller, DC water pump, fuse box/breakers, electrical wiring, and a water storage tank. The operation of solar-powered pumps is more economical, mainly due to lower operation and maintenance costs, and has less environmental impact than pumps powered by an internal combustion engine. Solar pumps are useful where grid electricity is unavailable or impractical, and alternative sources (in particular wind) do not provide sufficient energy. Components A PV solar-powered pump system has three main parts: one or more solar panels, a controller, and a pump. The solar panels make up most (up to 80%) of the system's cost. The size of the PV system is directly dependent on the size of the pump, the amount of water that is required, and the solar irradiance available. The purpose of the controller is twofold. Firstly, it matches the output power that the pump receives with the input power available from the solar panels. Secondly, a controller usually provides low- and high-voltage protection, whereby the system is switched off if the voltage falls outside the operating voltage range of the pump. This increases the service life of the pump, thus reducing the need for maintenance. Other ancillary functions include automatically shutting down the system when the water source level is low or when the storage tank is full, regulating water output pressure, blending power input between the solar panels and an alternate power source such as the grid or an engine-powered generator, and remotely monitoring and managing the system through an online portal offered as a cloud service by the manufacturer. Solar pump motors may run on alternating current (AC) or direct current (DC). DC motors are used for small to medium applications up to about 4 kW rating, and are suitable for applications such as garden fountains, landscaping, drinking water for livestock, or small irrigation projects. Since DC systems tend to have overall higher efficiency levels than AC pumps of a similar size, the costs are reduced, as smaller solar panels can be used. Finally, if an AC solar pump is used, an inverter is necessary to change the DC power from the solar panels into AC for the pump. The supported power range of inverters extends from 0.15 to 55 kW, and can be used for larger irrigation systems. The panels and inverters must be sized accordingly, though, to accommodate the inrush characteristic of an AC motor. To aid in proper sizing, leading manufacturers provide proprietary sizing software tested by third-party certifying companies. The sizing software may include the projected monthly water output, which varies due to seasonal change in insolation. Water pumping Solar-powered water pumps can deliver drinking water, water for livestock, or irrigation water. Solar water pumps may be especially useful in small-scale or community-based irrigation, as large-scale irrigation requires large volumes of water that in turn require a large solar PV array. As the water may only be required during some parts of the year, a large PV array would provide excess energy that is not necessarily required, thus making the system inefficient, unless an alternative use can be found.
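The sizing relationship described above (array size follows from daily water volume, total head, and available irradiance) can be sketched with the standard hydraulic-power formula, energy = density × gravity × volume × head. The efficiency, derating, and sun-hour figures below are illustrative assumptions, not values from any manufacturer's sizing software.

```python
RHO_G = 1000 * 9.81  # water density (kg/m^3) times gravity (m/s^2)

def pv_array_watts(liters_per_day, total_head_m,
                   peak_sun_hours=5.5,    # site-dependent assumption
                   pump_efficiency=0.45,  # wire-to-water efficiency, assumed
                   derating=0.80):        # wiring/temperature losses, assumed
    """Rough estimate of the peak-watt PV array needed to lift a given
    daily water volume against a given total head."""
    volume_m3 = liters_per_day / 1000.0
    hydraulic_joules = RHO_G * volume_m3 * total_head_m   # joules per day
    hydraulic_kwh = hydraulic_joules / 3.6e6
    electrical_kwh = hydraulic_kwh / pump_efficiency
    return 1000 * electrical_kwh / (peak_sun_hours * derating)  # watts-peak

# Example: 20,000 L/day against a 40 m total head
print(round(pv_array_watts(20_000, 40)))  # ~1100 Wp under these assumptions
```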
Solar PV water pumping systems are used for irrigation and drinking water in India. Most of the pumps are fitted with a 2.0–3.7 kW motor that receives energy from a 4.8 kWp PV array. The 3.7 kW systems can deliver about 124,000 liters of water per day from a total of 50 meters setoff head and 70 meters dynamic head. By 30 August 2016, a total of 120,000 solar PV water pumping systems had been installed around the world. For solar water pumps, energy storage in the form of stored water is better than storage in batteries, because no intermediary transformation from one form of energy to another is needed. The most common pump mechanisms used are centrifugal pumps, multistage pumps, borehole pumps, and helical pumps. Fluid-dynamics concepts such as pressure versus head, pump heads, pump curves, system curves, and net positive suction head are important for the successful design and deployment of solar-powered pumps. Oil and gas To combat negative publicity related to the environmental impacts of fossil fuels, including fracking, the oil and gas industry is embracing solar-powered pumping systems. Many oil and gas wells require the accurate injection (metering) of various chemicals under pressure to sustain their operation and to improve extraction rates. Historically, these chemical injection pumps (CIPs) have been driven by gas-powered reciprocating motors using the pressure of the well's own gas, exhausting the raw gas into the atmosphere. Solar-powered electrical pumps (solar CIPs) can reduce these greenhouse gas emissions. Solar arrays (PV cells) not only provide a sustainable power source for the CIPs, but can also provide electricity to run remote SCADA-type diagnostics, with remote control and satellite/cell communications from very remote locations to a desktop or notebook monitoring computer. Stirling engine Instead of generating electricity to turn a motor, sunlight can be concentrated on the heat exchanger of a Stirling engine and used to drive a pump mechanically. This dispenses with the cost of solar panels and electric equipment. In some cases, the Stirling engine may be suitable for local fabrication, eliminating the difficulty of importing equipment. One form of Stirling engine is the fluidyne engine, which operates directly on the pumped fluid as a piston. Fluidyne solar pumps have been studied since 1987. At least one manufacturer has conducted tests with a Stirling solar-powered pump. See also List of solar powered products List of photovoltaic power stations Notes References Solar-powered devices Pumps Applications of photovoltaics
Solar-powered pump
[ "Physics", "Chemistry" ]
1,220
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
4,171,950
https://en.wikipedia.org/wiki/Constrained%20optimization
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied. Relation to constraint-satisfaction problems The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part. General form A general constrained minimization problem may be written as follows: minimize f(x) subject to g_i(x) = c_i for i = 1, …, n and h_j(x) ≥ d_j for j = 1, …, m, where g_i(x) = c_i and h_j(x) ≥ d_j are constraints that are required to be satisfied (these are called hard constraints), and f(x) is the objective function that needs to be optimized subject to the constraints. In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated. Solution methods Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect. Equality constraints Substitution method For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution. The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize f(x, y) = x·y subject to the constraint x + y = 10. The constraint implies y = 10 − x, which can be substituted into the objective function to create p(x) = x(10 − x). The first-order necessary condition gives p′(x) = 10 − 2x = 0, which can be solved for x = 5 and, consequently, y = 10 − 5 = 5. Lagrange multiplier If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables. Inequality constraints With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.
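As a concrete illustration of a small problem with both kinds of constraints, the minimal sketch below uses SciPy's SLSQP solver, one of many methods that handle equality and inequality constraints together; the particular objective and constraints are invented for the example.

```python
from scipy.optimize import minimize

# Minimize f(x, y) = (x - 1)^2 + (y - 2.5)^2
# subject to x + y = 3 (equality) and x >= 0, y >= 0 (inequalities).
def objective(v):
    x, y = v
    return (x - 1.0) ** 2 + (y - 2.5) ** 2

constraints = [
    {'type': 'eq',   'fun': lambda v: v[0] + v[1] - 3.0},  # x + y - 3 = 0
    {'type': 'ineq', 'fun': lambda v: v[0]},               # x >= 0
    {'type': 'ineq', 'fun': lambda v: v[1]},               # y >= 0
]

result = minimize(objective, x0=[0.0, 0.0], method='SLSQP',
                  constraints=constraints)
print(result.x)  # approximately [0.75, 2.25]
```

The same answer can be checked by substitution: on the line y = 3 − x, the objective reduces to a one-variable quadratic whose minimum is at x = 0.75.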
Linear programming If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods, which are guaranteed to work in polynomial time. Nonlinear programming If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem. Quadratic programming If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard. KKT conditions Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity. Branch and bound Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution. Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best cost of solution found so far. On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible. A variation of this approach called Hansen's method uses interval methods. It inherently implements rectangular constraints. First-choice bounding functions One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. However, it is not necessarily exact, because the maximal values of different soft constraints may derive from different evaluations: a soft constraint may be maximal for x = a while another constraint is maximal for x = b.
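The following is a minimal, illustrative sketch of branch and bound with first-choice bounding for a toy weighted constraint problem, maximizing the summed value of soft constraints over small discrete domains. The problem data and names are invented for the example; a production solver would add better variable ordering and avoid re-enumerating constraint values at every node.

```python
from itertools import product

def branch_and_bound(domains, constraints):
    """Maximize the total value of soft constraints over discrete variables.

    domains: dict mapping each variable name to a list of values.
    constraints: list of (scope, fn) pairs, where scope is a tuple of
    variable names and fn returns the constraint's value for an assignment.
    """
    order = list(domains)
    best = {'value': float('-inf'), 'assignment': None}

    def bound(partial):
        # First-choice bound: maximize each constraint independently
        # over its still-unassigned variables, then sum the maxima.
        total = 0.0
        for scope, fn in constraints:
            free = [v for v in scope if v not in partial]
            best_c = float('-inf')
            for choice in product(*(domains[v] for v in free)):
                args = [partial[v] if v in partial
                        else choice[free.index(v)] for v in scope]
                best_c = max(best_c, fn(*args))
            total += best_c
        return total

    def search(i, partial):
        if i == len(order):
            value = bound(partial)      # nothing free: the bound is exact
            if value > best['value']:
                best['value'], best['assignment'] = value, dict(partial)
            return
        var = order[i]
        for a in domains[var]:
            partial[var] = a
            if bound(partial) > best['value']:  # prune hopeless branches
                search(i + 1, partial)
            del partial[var]

    search(0, {})
    return best['value'], best['assignment']

domains = {'x': [0, 1, 2], 'y': [0, 1, 2]}
constraints = [(('x', 'y'), lambda x, y: -(x + y - 3) ** 2),  # prefer x+y = 3
               (('x',), lambda x: x)]                         # prefer large x
print(branch_and_bound(domains, constraints))  # (2.0, {'x': 2, 'y': 1})
```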
Russian doll search This method runs a branch-and-bound algorithm on $n$ problems, where $n$ is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables $x_1, \ldots, x_i$ from the original problem, along with the constraints containing them. After the problem on variables $x_{i+1}, \ldots, x_n$ is solved, its optimal cost can be used as an upper bound while solving the other problems. In particular, the cost estimate of a solution having $x_{i+1}, \ldots, x_n$ as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point. There is similarity between the Russian Doll Search method and dynamic programming. Like dynamic programming, Russian Doll Search solves sub-problems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian Doll Search only uses them as bounds during its search. Bucket elimination The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if $x$ is the variable to be removed, $C_1, \ldots, C_n$ are the soft constraints containing it, and $y_1, \ldots, y_m$ are their variables except $x$, the new soft constraint is defined by: $$C(y_1 = a_1, \ldots, y_m = a_m) = \max_{a} \sum_{i=1}^{n} C_i(x = a, y_1 = a_1, \ldots, y_m = a_m).$$ Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints having the variable as the highest in the order. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of the bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
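The following is a minimal sketch of the elimination step just described: every soft constraint mentioning a chosen variable is replaced by a single new constraint on the remaining variables, obtained by maximizing the summed cost over the eliminated variable. Constraints are represented here as explicit value tables, and the two-constraint instance is an invented toy example.

```python
# A minimal sketch of one bucket-elimination step for soft constraints.
# Constraints are (scope, table) pairs, with tables keyed by value tuples
# in scope order.
from itertools import product

domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

def eliminate(var, constraints):
    """Replace all constraints containing `var` with one new constraint
    on the remaining variables: C(y...) = max over var of the summed cost."""
    touching = [c for c in constraints if var in c[0]]
    rest = [c for c in constraints if var not in c[0]]
    # Union of the other variables appearing in the touched constraints.
    new_scope = tuple(sorted({v for scope, _ in touching for v in scope} - {var}))
    new_table = {}
    for vals in product(*(domains[v] for v in new_scope)):
        partial = dict(zip(new_scope, vals))
        new_table[vals] = max(
            sum(table[tuple(dict(partial, **{var: xv})[v] for v in scope)]
                for scope, table in touching)
            for xv in domains[var]
        )
    return rest + [(new_scope, new_table)]

# Two soft constraints touching x, given as explicit value tables.
c1 = (("x", "y"), {(0, 0): 2, (0, 1): 0, (1, 0): 1, (1, 1): 3})
c2 = (("x", "z"), {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 2})
print(eliminate("x", [c1, c2]))  # one new constraint on (y, z)
```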
See also Constrained least squares Distributed constraint optimization Constraint satisfaction problem (CSP) Constraint programming Integer programming Metric projection Penalty method Superiorization References Further reading Mathematical optimization Constraint programming
Constrained optimization
[ "Mathematics" ]
1,644
[ "Mathematical optimization", "Mathematical analysis" ]
4,172,697
https://en.wikipedia.org/wiki/Doping%20at%20the%20Olympic%20Games
Competitors at the Olympic Games have used banned athletic performance-enhancing drugs. History The use of performance-enhancing drugs (PEDs), and more broadly the use of any external means to illicitly influence the outcome of a sporting event, has been a part of the Olympics since its inception in Ancient Greece. One speculation as to why men were required to compete naked was to prevent the use of extra accoutrements and to keep women from competing in events specifically designed for men. Athletes were also known to drink "magic" potions and eat exotic meats in the hopes of giving them an athletic edge on their competition. If they were caught cheating, their likenesses were often engraved into stone and placed in a pathway that led to the Olympic stadium. In the modern Olympic era, chemically enhancing one's performance has evolved into a sophisticated science, but in the early years of the Modern Olympic movement the use of performance-enhancing drugs was almost as crude as its ancient predecessors. For example, the winner of the marathon at the 1904 Games, Thomas Hicks, was given strychnine and brandy by his coach, even during the race. During the early 20th century, many Olympic athletes discovered ways to improve their athletic abilities by boosting testosterone. As their methods became more extreme, it became increasingly evident that the use of performance-enhancing drugs was not only a threat to the integrity of sport but could also have potentially fatal side effects on the athlete. The only Olympic death linked to athletic drug use occurred at the Rome Games of 1960. During the cycling road race, Danish cyclist Knud Enemark Jensen fell from his bicycle and later died. A coroner's inquiry found that he was under the influence of amphetamine, which had caused him to lose consciousness during the race. Jensen's death exposed to the world how endemic drug use was among elite athletes. By the mid-1960s, sports federations were starting to ban the use of performance-enhancing drugs, and the IOC followed suit in 1967. The first Olympic athlete to test positive for the use of performance-enhancing drugs was Hans-Gunnar Liljenwall, a Swedish pentathlete at the 1968 Summer Olympics, who lost his bronze medal for alcohol use, "two beers" to steady his nerves. Liljenwall was the only athlete to test positive for a banned substance at the 1968 Olympics, but as the technology and testing techniques improved, the number of athletes discovered to be chemically enhancing their performance increased as well. The most systematic case of drug use for athletic achievement is that of the East German Olympic teams of the 1970s and 1980s. In 1990, documents were discovered that showed many East German female athletes, especially swimmers, had been administered anabolic steroids and other drugs by their coaches and trainers. Girls as young as eleven were started on the drug regimen without consent from their parents. American female swimmers, including Shirley Babashoff, accused the East Germans of using performance-enhancing drugs as early as the 1976 Summer Games. Babashoff's comments were dismissed by the international and domestic media as sour grapes since Babashoff, a clear favorite to win multiple gold medals, won three silver medals – losing each time to one of the two East Germans Kornelia Ender or Petra Thümer – and one gold medal in a relay.
There was no suspicion of cheating on the part of the East German female swimmers even though their medal tally increased from four silvers and one bronze in 1972 to ten golds (out of a possible 12), six silvers, and one bronze in 1976. No clear evidence was discovered until after the fall of the Berlin Wall, when the aforementioned documents proved that East Germany had embarked on a state-sponsored drug regimen to dramatically improve their competitiveness at the Olympic Games and other international sporting events. Many of the East German authorities responsible for this program have been subsequently tried and found guilty of various crimes in the German penal system. A 2013 report, titled "Doping in Germany from 1950 to today", details how the West German government helped fund a wide-scale doping program. West Germany encouraged and covered up a culture of doping across many sports for decades. Doping of West German athletes was prevalent at the Munich Games of 1972, and at the 1976 Montreal Olympics. According to British journalist Andrew Jennings, a KGB colonel stated that the agency's officers had posed as anti-doping authorities from the International Olympic Committee to undermine doping tests and that Soviet athletes were "rescued with [these] tremendous efforts". On the topic of the 1980 Summer Olympics, a 1989 Australian study said "There is hardly a medal winner at the Moscow Games, certainly not a gold medal winner, who is not on one sort of drug or another: usually several kinds. The Moscow Games might as well have been called the Chemists' Games." Documents obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated prior to the country's decision to boycott the Games, the document detailed the existing steroid operations of the program, along with suggestions for further enhancements. The communication, directed to the Soviet Union's head of track and field, was prepared by Dr. Sergei Portugalov of the Institute for Physical Culture. Portugalov was also one of the main figures involved in the implementation of the Russian doping program prior to the 2016 Summer Olympics. China was accused of conducting a state-sanctioned doping programme on athletes in the 1980s and 1990s. In a July 2012 interview published by the Sydney Morning Herald newspaper, Chen Zhanghao, the lead doctor for the Chinese Olympic team at the Los Angeles, Seoul and Barcelona Olympics, told of how he had tested hormones, blood doping and steroids on about fifty elite athletes. Chen also accused the United States, the Soviet Union and France of using performance-enhancing drugs at the same time as China. A highly publicized steroid-related disqualification at an Olympic Games was the case of Canadian sprinter Ben Johnson, who won the Men's 100 metres at the 1988 Seoul Olympics, but tested positive for stanozolol. His gold medal was subsequently stripped and awarded to runner-up Carl Lewis, who had tested positive for stimulants at the U.S. Olympic Trials. The highest level of stimulant Lewis recorded was 6 ppm, which was regarded as a positive test in 1988 but is now regarded as a negative test. The acceptable level was later raised to ten parts per million for ephedrine and twenty-five parts per million for other substances. According to the IOC rules at the time, positive tests with levels lower than 10 ppm were cause for further investigation but not immediate ban.
Neal Benowitz, a professor of medicine at UC San Francisco who is an expert on ephedrine and other stimulants, agreed that "These [levels] are what you'd see from someone taking cold or allergy medicines and are unlikely to have any effect on performance." The IAAF acknowledged that at the 1988 Olympic Trials the USOC followed the correct procedures in dealing with positive findings for ephedrine and ephedrine-related compounds in low concentration. Response In the late 1990s, the IOC took the initiative in a more organized battle against doping, leading to the formation of the World Anti-Doping Agency (WADA) in 1999. The 2000 Summer Olympics and 2002 Winter Olympics have shown that the effort to eliminate performance-enhancing drugs from the Olympics is not over, as several medalists in weightlifting and cross-country skiing were disqualified due to failing a drug test. During the 2006 Winter Olympics, only one athlete failed a drug test and had a medal revoked. The IOC-established drug testing regimen (now known as the "Olympic Standard") has set the worldwide benchmark that other sporting federations attempt to emulate. During the Beijing games, 3,667 athletes were tested by the IOC under the auspices of the World Anti-Doping Agency. Both urine and blood testing were used in a coordinated effort to detect banned substances and recent blood transfusions. While several athletes were barred from competition by their National Olympic Committees prior to the Games, six athletes failed drug tests while in competition in Beijing. Prohibited drugs Summer Olympic Games What follows is a list of all the athletes that have tested positive for a banned substance either during or after an Olympic Games in which they competed. Any medals listed were revoked by the International Olympic Committee (IOC). In 1967 the IOC banned the use of performance-enhancing drugs, instituted a Medical Commission, and created a list of banned substances. Mandatory testing began at the following year's Games. In a few cases the IOC has reversed earlier rulings that stripped athletes of medals. 1968 Mexico City In addition, the Bulgarian Greco-Roman wrestler Hristo Traykov was disqualified from his bout against David Hazewinkel for using concealed smelling salts. 1972 Munich As a 16-year-old, Rick DeMont qualified to represent the United States at the 1972 Summer Olympics in Munich, Germany. He originally won the gold medal in the men's 400-meter freestyle, but following the race, the International Olympic Committee (IOC) disqualified DeMont after his post-race urinalysis tested positive for traces of the banned substance ephedrine contained in his prescription asthma medication, Marax. The positive test following the 400-meter freestyle final also deprived him of a chance at multiple medals, as he was barred from any other events at the Olympics, including the 1,500-meter freestyle, for which he was the then-current world record-holder. Before the Olympics, DeMont had properly declared his asthma medications on his medical disclosure forms, but the U.S. Olympic Committee (USOC) had not cleared them with the IOC's medical committee. In 2001, his gold medal performance in the 1972 Summer Olympics was recognised by the USOC. However, only the IOC has the power to restore his medal, and it has, as of 2019, refused to do so.
1976 Montreal The Canadian sailor Lorne Leibel was disqualified from the race that took place on the day that he provided the positive sample but was allowed to continue in the event. 1980 Moscow Though no athletes were caught doping at the 1980 Summer Olympics, it has been revealed that athletes had begun using testosterone and other drugs for which tests had not been yet developed. According to British journalist Andrew Jennings, a KGB colonel stated that the agency's officers had posed as anti-doping authorities from the International Olympic Committee (IOC) to undermine doping tests and that Soviet athletes were "rescued with [these] tremendous efforts". A 1989 report by a committee of the Australian Senate claimed that "there is hardly a medal winner at the Moscow Games, certainly not a gold medal winner... who is not on one sort of drug or another: usually several kinds. The Moscow Games might as well have been called the Chemists' Games". A member of the IOC Medical Commission, Manfred Donike, privately ran additional tests with a new technique for identifying abnormal levels of testosterone by measuring its ratio to epitestosterone in urine. Twenty percent of the specimens he tested, including those from sixteen gold medalists, would have resulted in disciplinary proceedings had the tests been official. The results of Donike's unofficial tests later convinced the IOC to add his new technique to their testing protocols. The first documented case of "blood doping" occurred at the 1980 Summer Olympics as a runner was transfused with two pints of blood before winning medals in the 5000 m and 10,000 m. 1984 Los Angeles The organizers of the Los Angeles games had refused to provide the IOC doping authorities with a safe prior to the start of the games. Due to a lack of security, medical records were subsequently stolen. A 1994 letter from IOC Medical Commission chair Alexandre de Mérode claimed that Tony Daly, a member of the Los Angeles organizing committee, had destroyed the records. Dick Pound later wrote of his frustration that the organizing committee had removed evidence before it could be acted on by the IOC. Pound also claimed that IOC President Juan Antonio Samaranch and Primo Nebiolo, President of the International Association of Athletics Federations (IAAF), had conspired to delay the announcement of positive tests so that the games could pass without controversy. The American cyclist Pat McDonough later admitted to "blood doping" at the 1984 Los Angeles Games. Following the games it was revealed that one-third of the U.S. cycling team had received blood transfusions before the games, where they won nine medals, their first medal success since the 1912 Summer Olympics. "Blood doping" was banned by the IOC in 1985 (it was not banned at the time of the Olympics), though no test existed for it at the time. 1988 Seoul 1992 Barcelona 1996 Atlanta Five athletes tested positive for the stimulant bromantan and were disqualified by the IOC, but later reinstated after an appeal to the Court of Arbitration for Sport: swimmers Andrey Korneyev and Nina Zhivanevskaya, Greco-Roman wrestler Zafar Guliev and sprinter Marina Trandenkova, all from Russia, and the Lithuanian track cyclist Rita Razmaitė. Dr. Vitaly Slionssarenko, physician to the Lithuanian cycling team, and team coach Boris Vasilyev were expelled from the games by the IOC for their role in the scandal. The athletes and officials were reprimanded.
The Irish long-distance runner Marie McMahon (Davenport) received a reprimand after testing positive for the stimulant phenylpropanolamine, and Cuban judoka Estella Rodriguez Villanueva received a reprimand after she tested positive for the diuretic furosemide. 2000 Sydney 2004 Athens 2008 Beijing "Zero Tolerance for Doping" was adopted as an official slogan for the Beijing Olympic Games. A number of athletes were already eliminated by testing prior to coming to Beijing. Out of the 4,500 samples that were collected from participating athletes at the games, six athletes with positive specimens were ousted from the competition. The quality of the original testing was questioned when the BBC reported that samples positive for EPO were labeled as negative by Chinese laboratories in July 2008. The initial rate of positive findings was lower than at Athens in 2004, but the prevalence of doping had not necessarily decreased; the technology for creating and concealing drugs had become more sophisticated, and a number of drugs could not be detected. Chinese crackdowns on doping athletes in 2010 included a two-year ban on 2008 Olympic judo champion Tong Wen after she tested positive for clenbuterol. In August 2015, the Turkish Athletics Federation confirmed that an in-competition test of Elvan Abeylegesse at the 2007 IAAF World Championships in Athletics had been retested and found to be positive for a controlled substance, and that she had been temporarily suspended. On 29 March 2017, the IAAF confirmed the positive test, announced retroactive disqualifications and voided all of her results from 25 August 2007 until 25 August 2009, including the 2008 Summer Olympics. As a result, she was stripped of two silver medals she had won in the women's 5,000 and 10,000 meter races. In May 2016, following the Russian doping scandal, the IOC announced that 32 targeted retests had come back positive for performance-enhancing drugs, of which Russian News Agency TASS announced that 14 were from Russian athletes, 11 of them track and field athletes, including 2012 Olympic champion high jumper Anna Chicherova. Authorities sent the B-samples for confirmation testing. Those confirmed as having taken doping agents stand to lose records and medals from the 2008 Games through 2016 under IOC and WADA rules. On 18 June 2016, the IWF reported that as a consequence of the IOC's reanalyses of samples from the 2008 Olympic Games, the samples of the following seven weightlifters had returned positive results: Hripsime Khurshudyan (Armenia), Intigam Zairov (Azerbaijan), Alexandru Dudoglo (Moldova), gold medalist Ilya Ilyin (Kazakhstan), bronze medalist Nadezda Evstyukhina and silver medalist Marina Shainova (both from Russia), and Nurcan Taylan (Turkey). In line with the relevant rules and regulations, the IWF imposed mandatory provisional suspensions upon the athletes. Zairov and Ilyin had been serving previous suspensions. In November 2016, Ilyin was stripped of the gold medal. On 22 July 2016, Sibel Özkan (TUR) was disqualified due to an anti-doping rule violation and stripped of her silver medal. The medals have not yet been reallocated. On 28 July 2016, it was announced that retests of samples from the 2008 Summer Olympics had detected a positive sample for performance-enhancing drugs from Aksana Miankova of Belarus, who won a gold medal in the women's hammer throw. No decision on stripping or reallocating the medal had been announced at that time.
On 16 August 2016, the Russian women's 4 × 100 metres relay team was disqualified for doping. Russian teammates were stripped of their gold Olympic medals, as Yuliya Chermoshanskaya had her samples reanalyzed and tested positive for two prohibited substances. The IAAF was requested to modify the results accordingly and to consider any further action within its own competence. On 19 August 2016, the Russian women's 4 × 400 metres relay team was disqualified for doping. Russian teammates were stripped of their silver Olympic medals, as Anastasiya Kapachinskaya had her samples reanalyzed and tested positive for the same two prohibited substances as Chermoshanskaya. On 24 August 2016, the IWF reported that as a consequence of the IOC's reanalyses of samples from the 2008 Olympic Games, the samples of the following athletes had returned positive results: Nizami Pashayev (Azerbaijan), Iryna Kulesha, Nastassia Novikava, Andrei Rybakou (all from Belarus), Cao Lei, Chen Xiexia, Liu Chunhong (all from China), Mariya Grabovetskaya, Maya Maneza, Irina Nekrassova, Vladimir Sedov (all from Kazakhstan), Khadzhimurat Akkaev, Dmitry Lapikov (both from Russia), and Natalya Davydova and Olha Korobka (both from Ukraine). In line with the relevant rules and regulations, the IWF imposed mandatory provisional suspensions upon the athletes, who remain provisionally suspended in view of potential anti-doping rule violations until their cases are closed. On 29 August 2016, some non-official reports indicated that Artur Taymazov of Uzbekistan had been stripped of the 2008 Olympic gold medal in the freestyle wrestling 120 kg event due to a positive test for doping. On 31 August 2016, the IOC disqualified six sportspeople for failing doping tests at the 2008 Games. They included three Russian medalists: weightlifters Nadezhda Evstyukhina (bronze medal in the women's 75 kg event), Marina Shainova (silver medal in the women's 58 kg event), and Tatyana Firova, who finished second with teammates in the 4 × 400 m relay. Bronze medal weightlifter Tigran Martirosyan of Armenia (men's 69 kg event) and fellow weightlifters Alexandru Dudoglo (9th place) of Moldova and Intigam Zairov (9th place) of Azerbaijan were also disqualified. On 1 September 2016, the IOC disqualified a further two athletes. Cuban discus thrower Yarelys Barrios, who won a silver medal in the women's discus, was disqualified after testing positive for acetazolamide and ordered to return her medal. Qatari sprinter Samuel Francis, who finished 16th in the 100 meters, was also disqualified after testing positive for stanozolol. On 13 September 2016, four more Russian athletes were disqualified for doping offenses. Two of those were medalists from the 2008 Summer Olympics: silver medalist Mariya Abakumova in the women's javelin throw and Denis Alekseyev, who was part of the bronze medal team in the men's 4 × 400 m relay. Inga Abitova, who finished 6th in the 10,000 meters, and cyclist Ekaterina Gnidenko also tested positive for a banned substance and were disqualified. On 23 September 2016, some non-official reports indicated that wrestler Vasyl Fedoryshyn of Ukraine had been stripped of the 2008 Olympic silver medal in the freestyle 60 kg event due to a positive test for doping. On 6 October 2016, the IOC disqualified Anna Chicherova of the Russian Federation for testing positive for performance-enhancing drugs. She won a bronze medal in the women's high jump.
Russia would likely keep the bronze medal, as the fourth-place athlete in the competition was also from Russia. Through 6 October 2016, the IOC had reported Adverse Analytical Findings for 25 weightlifters from its 2016 retests of samples from the 2008 Beijing Olympic Games, all but three of whom tested positive for anabolic agents (three Chinese weightlifters were positive for growth hormones). On 26 October 2016, the IOC disqualified nine more athletes for failing drugs tests at the 2008 Games. Among them were six medal winners: weightlifters Andrei Rybakou and Nastassia Novikava, both from Belarus, and Olha Korobka of Ukraine; women's steeplechase bronze medalist Ekaterina Volkova of Russia; and freestyle wrestlers Soslan Tigiev of Uzbekistan and Taimuraz Tigiyev of Kazakhstan. The others were men's 62 kg weightlifter Sardar Hasanov of Azerbaijan, long jumper Wilfredo Martinez of Cuba, and 100m-hurdler Josephine Nnkiruka Onyia of Spain. On 17 November 2016, the IOC disqualified 16 more athletes for failing drugs tests at the 2008 games. Among them were 10 medal winners: weightlifters Khadzhimurat Akkaev and Dmitry Lapikov and wrestler Khasan Baroev from the Russian Federation, weightlifters Mariya Grabovetskaya and Irina Nekrassova and wrestler Asset Mambetov from Kazakhstan, weightlifter Nataliya Davydova and pole vaulter Denys Yurchenko from Ukraine, long/triple jumper Hrysopiyí Devetzí of Greece and wrestler Vitaliy Rahimov of Azerbaijan. The others were women's 75 kg weightlifter Iryna Kulesha of Belarus, women's +63 kg weightlifter Maya Maneza of Kazakhstan, women's high jumper Vita Palamar of Ukraine, men's 94 kg weightlifter Nizami Pashayev of Azerbaijan, men's 85 kg weightlifter Vladimir Sedov of Kazakhstan, and women's high jumper Elena Slesarenko of the Russian Federation. On 25 November 2016, the IOC disqualified five more athletes for failing drugs tests at the 2008 games. Among them were three medal winners: gold medalists 94 kg weightlifter Ilya Ilin of Kazakhstan and hammer thrower Aksana Miankova of Belarus, and silver medalist shot putter Natallia Mikhnevich of Belarus. The others were shot putter Pavel Lyzhyn and 800m runner Sviatlana Usovich, both of Belarus. On 12 January 2017, the IOC disqualified five more athletes for failing drug tests at the 2008 Games. These included three Chinese women's weightlifting gold medalists: Lei Cao (75 kg), Xiexia Chen (48 kg) and Chunhong Liu (69 kg). Two women athletes from Belarus were disqualified: bronze medalist shot putter Nadzeya Ostapchuk and hammer thrower Darya Pchelnik, who did not medal. On 25 January 2017, the IOC stripped Jamaica of the athletics gold medal in the men's 4 × 100 m relay due to Nesta Carter testing positive for the prohibited substance methylhexaneamine. The IOC also stripped Russian jumper Tatyana Lebedeva of two silver medals in the women's triple jump and long jump due to use of turinabol. On 1 March 2017, the IOC disqualified Victoria Tereshchuk of Ukraine due to use of turinabol and stripped her of the bronze medal in modern pentathlon. By April 2017, the 2008 Summer Olympics had had the most Olympic medals (50) stripped for doping violations. Russia is the leading country with 14 medals stripped. Disqualified Did not start Athletes who were selected for the Games, but provisionally suspended before competing.
2012 London It was announced prior to the Summer games that half of all competitors would be tested for drugs, with 150 scientists set to take 6,000 samples between the start of the games and the end of the Paralympic games at GlaxoSmithKline's New Frontiers Science Park site in Harlow, Essex. All medalists would also be tested. The Olympic anti-doping laboratory would test up to 400 samples every day for more than 240 prohibited substances. The head of the World Anti-Doping Agency (WADA), John Fahey, announced on 24 July that 107 athletes had been sanctioned for doping offences in the six months to 19 June. The "In-competition" period began on 16 July; during this period, Olympic competitors could be tested at any time without notice. British sprinter Dwain Chambers, cyclist David Millar and shot putter Carl Myerscough competed in London after the British Olympic Association's policy of punishing drug cheats with lifetime bans was overturned by the Court of Arbitration for Sport. Russian Darya Pishchalnikova participated in the 2012 Olympics and was awarded a silver medal. However, she tested positive for the anabolic steroid oxandrolone in the samples taken in May 2012. In December 2012, she sent an email to WADA containing details on an alleged state-run doping program in Russia. According to The New York Times, the email reached three top WADA officials, but the agency decided not to open an inquiry and instead sent her email to Russian sports officials. In April 2013 Pishchalnikova was banned by the Russian Athletics Federation for ten years, and her results from May 2012 were annulled, meaning she was set to lose her Olympic medal. Her ban by the Russian Athletics Federation was likely in retaliation for her whistleblowing. Gold medalists at the games who had been involved in previous doping offences included Alexander Vinokourov, the winner of the men's road race, Tatyana Lysenko, the winner of the women's hammer throw, Aslı Çakır Alptekin, winner of the women's 1500 meters, and Sandra Perković, winner of the women's discus throw. Other competitors at the Summer games involved in previous doping cases included American athletes Justin Gatlin and LaShawn Merritt, and Jamaican sprinter Yohan Blake. Spanish athlete Ángel Mullera was first selected for the 3000 m steeplechase and later removed when emails were published in which he discussed EPO use with a trainer. Mullera appealed to the CAS, which ordered the Spanish Olympic Committee to allow him to participate. Prior to the Olympic competition, several prominent track and field athletes were ruled out of the competition due to failed tests. World indoor medallists Dimitrios Chondrokoukis, Debbie Dunn, and Mariem Alaoui Selsouli were withdrawn from their Olympic teams in July for doping, as was 2004 Olympic medallist Zoltán Kővágó. At the Olympic competition, Tameka Williams admitted to taking a banned stimulant and was removed from the games. Ivan Tsikhan did not compete in the hammer throw as a retest of his sample from the 2004 Athens Olympics, where he won silver, was positive. Amine Laâlou, Marina Marghieva, Diego Palomeque, and defending 50 km walk champion Alex Schwazer were also suspended before taking part in their events. Syrian hurdler Ghfran Almouhamad became the first track-and-field athlete to be suspended following a positive in-competition doping sample. Nadzeya Astapchuk was stripped of the women's shot put title after her sample came back positive for the banned anabolic agent metenolone.
Karin Melis Mey was withdrawn before the long jump final when an earlier failed doping test was confirmed. A WADA report released in 2015 detailed an extensive Russian state-sponsored doping program implicating athletes, coaches, various Russian institutions, doctors and labs. The report stated that the London Olympic Games "were, in a sense, sabotaged by the admission of athletes who should have not been competing" and detailed incidents of bribery and bogus urine samples. The report recommended that Russia be barred from track and field events for the 2016 Olympics. It also recommended lifetime bans for five coaches and five athletes from the country, including runners Mariya Savinova, Ekaterina Poistogova, Anastasiya Bazdyreva, Kristina Ugarova, and Tatjana Myazina. On 15 June 2016, it was announced that four London 2012 Olympic weightlifting champions had tested positive for performance-enhancing drugs. They included Kazakhstan's Ilya Ilyin (94 kg), Zulfiya Chinshanlo (53 kg), Maiya Maneza (63 kg) and Svetlana Podobedova (75 kg). If confirmed, Kazakhstan would drop from 12th to 23rd in the 2012 medal standings. Six other lifters who competed at the 2012 Games also tested positive after hundreds of samples were reanalysed. Among them were Russia's Apti Aukhadov (silver at 85 kg), Ukraine's Yuliya Kalina (bronze at 58 kg), Belarusian Maryna Shkermankova (bronze at 69 kg), Azerbaijan's Boyanka Kostova and Belarus duo Dzina Sazanavets and Yauheni Zharnasek. On 27 July 2016, the IWF reported, in a second wave of re-sampling, that three silver medalists from Russia, namely Natalya Zabolotnaya (at 75 kg), Aleksandr Ivanov (at 94 kg) and Svetlana Tsarukaeva (at 63 kg), together with bronze medalists Armenian Hripsime Khurshudyan (at 75+ kg), Belarusian Iryna Kulesha (at 75 kg) and Moldovan Cristina Iovu (at 53 kg), had tested positive for the steroid dehydrochlormethyltestosterone. Aukhadov was stripped of his silver medal by the IOC on 18 October 2016. On 27 October 2016 Maiya Maneza was stripped of her gold medal. In November 2016, Ilyin was stripped of the London gold medal. On 13 July 2016, the IOC announced that Yuliya Kalina of Ukraine had been disqualified from the 2012 Summer Olympics and ordered to return the bronze medal from the 58 kg weightlifting event. Reanalysis of Kalina's samples from London 2012 resulted in a positive test for the prohibited substance dehydrochlormethyltestosterone (turinabol). The positions were adjusted accordingly. On 9 August 2016, the IOC announced that Oleksandr Pyatnytsya of Ukraine would be stripped of his silver medal in the javelin throw after he tested positive for the prohibited substance dehydrochlormethyltestosterone (turinabol). Redistribution of medals has not yet been announced, but the silver and bronze medals will likely be given to Finland and the Czech Republic instead. On 20 August 2016, the IOC announced that Yevgeniya Kolodko of Russia would be stripped of her silver medal in shot put after she tested positive for dehydrochlormethyltestosterone (turinabol) and ipamorelin. The medals have not yet been reallocated. On 29 August 2016, a report indicated that a retested sample from Besik Kudukhov of Russia, the silver medalist in the men's 60 kg freestyle wrestling event, had returned a positive result (later disclosed as dehydrochlormethyltestosterone). Kudukhov died in a car crash in December 2013.
On 27 October 2016, the IOC dropped all disciplinary proceedings against Kudukhov, stating that such proceedings cannot be conducted against a deceased person. As a result, it said, Olympic results that would have been reviewed will remain uncorrected, which is the unavoidable consequence of the fact that the proceedings cannot move forward. On 13 September 2016, the IWF reported that the men's 94 kg weightlifting bronze medalist, Moldova's Anatolie Cîrîcu, had tested positive for dehydrochlormethyltestosterone. On 6 October 2016, the IWF reported that as a consequence of the IOC's reanalyses of samples from the 2012 Olympic Games, a sample of Norayr Vardanyan, who represented Armenia, had returned a positive result. In line with the relevant rules and regulations, the IWF imposed a mandatory provisional suspension upon Vardanyan, who remains provisionally suspended until his case is closed. On 12 January 2017, the IOC disqualified Vardanyan. Through 6 October 2016, the IOC had reported Adverse Analytical Findings for 23 weightlifters from its 2016 retests of samples from the 2012 London Olympic Games, all of whom tested positive for anabolic agents. On 11 October 2016, Tatyana Lysenko of the Russian Federation was disqualified from the women's hammer throw, in which she won the gold medal. She had tested positive for a banned substance. The IOC requested the IAAF to modify the results of this event accordingly. The silver medalist Anita Włodarczyk of Poland would likely take the gold medal in her place. On 18 October 2016, the IOC disqualified Apti Aukhadov of the Russian Federation for doping and stripped him of the silver medal. The IOC requested the IWF to modify the results of this event accordingly; it has not yet published modified results. On 18 October 2016, the IOC reported that Maksym Mazuryk of Ukraine, who competed in the men's pole vault event, was disqualified from the 2012 London Games, in which he ranked 18th. Re-analysis of Mazuryk's samples resulted in a positive test for dehydrochlormethyltestosterone. On 27 October 2016 the IOC disqualified a further eight athletes for failing doping tests at the games. This included four medal winners in weightlifting: Zulfiya Chinshanlo, Maiya Maneza and Svetlana Podobedova, all from Kazakhstan, and Maryna Shkermankova of Belarus. The others were hammer thrower Kirill Ikonnikov of Russia, women's 69 kg weightlifter Dzina Sazanavets of Belarus, pole vaulter Dmitry Starodubtsev of Russia, and men's +105 kg weightlifter Yauheni Zharnasek of Belarus. On 21 November 2016 the IOC disqualified a further 12 athletes for failing doping tests at the games. This included six medal winners in weightlifting: Alexandr Ivanov (Russia), Anatoli Ciricu (Moldova), Cristina Iovu (Moldova), Natalya Zabolotnaya (Russia), Iryna Kulesha (Belarus), and Hripsime Khurshudyan (Armenia). Moldova has lost all its 2012 London medals. The others were hammer thrower Oleksandr Drygol and long jumper Margaryta Tverdokhlib, both of Ukraine, 85 kg weightlifter Rauli Tsirekidze of Georgia, 94 kg weightlifter Almas Uteshov of Kazakhstan, 94 kg weightlifter Andrey Demanov of Russia and 3000m steeplechaser Yuliya Zaripova of Russia, who had previously been sanctioned in March 2016 by the Court of Arbitration for Sport. On 25 November 2016, the IOC disqualified four more athletes for failing drug tests at the 2012 games.
They were gold medalist 94 kg weightlifter Ilya Ilin of Kazakhstan, hammer thrower Aksana Miankova and long jumper Nastassia Mironchyk-Ivanova, both of Belarus, and 58 kg weightlifter Boyanka Kostova of Azerbaijan. On 29 November 2016 the Court of Arbitration for Sport issued a decision that all results achieved by 2012 Olympic heptathlon bronze medalist Tatyana Chernova of Russia between 15 August 2011 and 22 July 2013 were annulled. It also annulled all of Yekaterina Sharmina's results between 17 June 2011 and 5 August 2015, including her 33rd-place finish in the 2012 women's 1500m. The CAS ruled that they "have been found to have committed an anti-doping rule violation ... of the International Athletic Association Federation (IAAF) Competition Rules after analysis of their Athlete Biological Passports (ABP) showed evidence of blood doping." On 12 January 2017, the IOC disqualified three weightlifters for failing drug tests at the 2012 games. Two competed in men's 94 kg weightlifting: Intigam Zairov of Azerbaijan and Norayr Vardanyan of Armenia. The third was women's 63 kg weightlifter Sibel Simsek of Turkey. None was a medalist at these games. On 1 February 2017, the IOC disqualified three athletes due to failed doping tests, all of whom tested positive for turinabol. Russian women's discus thrower Vera Ganeeva, who finished 23rd, Turkish boxer Adem Kilicci, who ranked 5th in men's 69–75 kg boxing, and Russian 400m runner Antonina Krivoshapka, who finished 6th, were disqualified. Krivoshapka also was part of the Russian silver medal-winning women's 4 × 400 m relay team, which was stripped of the silver medals. In December 2014, a documentary aired on German TV in which 800m gold medalist Mariya Savinova allegedly admitted to using banned substances on camera. In November 2015, Savinova was one of five Russian runners the World Anti-Doping Agency recommended to receive a lifetime ban for doping during the London Olympics, along with 800m bronze medalist Ekaterina Poistogova. On 10 February 2017, the Court of Arbitration for Sport upheld a four-year ban that effectively stripped Savinova of her Olympic gold and other medals. On 7 April 2017, the CAS declined to disqualify Ekaterina Poistogova's results from 2012, disqualifying her results only from 2015; Poistogova thus retained her Olympic 2012 medal in the women's 800 metres event. In 2024, the Russian Athletics Federation cancelled Poistogova's results from July 2012 to October 2014 after analysing old samples. Poistogova stands to lose the Olympic 800m silver medal. As of December 2022, 40 Olympic medals from the 2012 Summer Olympics had been stripped for doping violations. Russia is the leading country with 17 medals stripped. On 21 March 2022, the Athletics Integrity Unit of World Athletics issued a two-year ban for Russian racewalker Elena Lashmanova, starting from 9 March 2021, and also disqualified her results from 18 February 2012 to 3 January 2014, thus stripping her of her gold medal. Disqualified Did not start Athletes who were selected for the Games, but provisionally suspended before competing. 2016 Rio de Janeiro Originally, Russia submitted a list of 389 athletes for competition. On 7 August 2016, the IOC cleared 278 athletes, and 111 were removed because of the state-sponsored doping scandal. The Taiwanese weightlifter Lin Tzu-chi was withdrawn from the games hours before her event by her team's delegation after an abnormal drugs test.
The Kenyan athletics coach John Anzrah, who travelled to Rio independently of his country's delegation, was sent home after being caught posing as an athlete during a doping test. He was followed by Kenya's track and field manager, Michael Rotich, who was filmed by a newspaper offering to give athletes advance notice of any pending drugs test in return for a one-off payment. On 13 October 2016, the IWF reported that weightlifter Gabriel Sincraian of Romania, who won bronze in the men's 85 kg event, had tested positive for excess testosterone in a test connected to the Rio Olympics. On 8 December 2016, the CAS affirmed the disqualification of Sincraian and stripped him of the bronze medal. The CAS also disqualified silver medalist 52 kg boxer Misha Aloian of Russia after he tested positive for tuaminoheptane. Disqualified Did not start Athletes who were selected for the Games, but provisionally suspended before competing. 2020 Tokyo Disqualified Did not start Athletes who were selected for the Games, but provisionally suspended before competing. 2024 Paris Disqualified María José Ribera (swimming, women's 50 m freestyle) was disqualified following an adverse analytical finding for furosemide. Eleni-Klaoudia Polak (athletics, women's pole vault) was provisionally suspended following an adverse analytical finding for a substance not publicly disclosed. Dominique Lasconi Mulamba (athletics, men's 100 metres) was provisionally suspended following an adverse analytical finding for stanozolol. Mohammad Samim Faizad (judo, men's 81 kg class) was disqualified following an adverse analytical finding for stanozolol. Tine Magnus (eventing) was provisionally suspended following an adverse analytical finding for trazodone in the sample of the horse Dia Van Het Lichterveld Z; the Belgian eventing team was disqualified. Did not start Athletes who were selected for the Games, but provisionally suspended before competing. Winter Olympic Games 1968 Grenoble No athletes were caught doping at these Games. 1972 Sapporo 1976 Innsbruck 1980 Lake Placid No athletes tested positive at these Games. 1984 Sarajevo The Finnish cross-country skier Aki Karvonen admitted in 1994 that he had received blood transfusions for the Sarajevo Games. Blood transfusions were not formally banned by the IOC until 1986. Karvonen won a silver and two bronzes at the Games. 1988 Calgary 1992 Albertville No athletes were caught using performance-enhancing drugs at these Games. The Russian biathlete Sergei Tarasov admitted in 2015 that the Russian biathlon team had carried out illegal blood transfusions at the Games. Tarasov's own transfusion went badly wrong, and he was rushed to a hospital, where his life was saved. 1994 Lillehammer No athletes were caught using performance-enhancing drugs at these Games. 1998 Nagano No athletes were caught using performance-enhancing drugs at these Games. The Canadian snowboarder Ross Rebagliati, winner of the men's giant slalom, was initially disqualified and stripped of his gold medal by the International Olympic Committee's executive board after testing positive for marijuana. Marijuana was not then on the IOC's list of prohibited substances, and the decision was reversed by the Court of Arbitration for Sport and Rebagliati's medal reinstated.
2002 Salt Lake City 2006 Turin On 25 April 2007, six Austrian athletes were banned for life from the Olympics for their involvement in a doping scandal at the 2006 Turin Olympics, the first time the IOC punished athletes without a positive or missed doping test. The Austrians were found guilty of possessing doping substances and taking part in a conspiracy, based on materials seized by Italian police during a raid on the athletes' living quarters. The Austrians also had their competition results from Turin annulled. A seventh athlete, cross-country skier Christian Hoffmann, had his case referred to the International Ski Federation for further investigation, but IOC charges were dismissed. The IOC has retested nearly 500 doping samples that were collected at the 2006 Turin Games. In 2014, the Estonian Olympic Committee was notified by the IOC that a retested sample from cross-country skier Kristina Šmigun had tested positive. On 24 October 2016, the World Anti-Doping Agency Athletes' Commission stated that Šmigun, who won two gold medals at the Turin Games, faced a Court of Arbitration for Sport hearing before the end of October. If Šmigun were to be stripped of her gold medals, Kateřina Neumannová of the Czech Republic could be elevated to gold in the 7.5 + 7.5 km double pursuit event. Marit Bjørgen of Norway could acquire a seventh gold medal in the 10 km classical event. The case against Šmigun was dropped on 13 December 2017 without any charges being raised. Did not start On 13 February 2006, the Brazilian Olympic Committee announced that Armando dos Santos' preventive antidoping test, which had been done in Brazil on 4 January 2006, was positive for the forbidden substance nandrolone. Santos was ejected from the team, being replaced by former sprinter Claudinei Quirino, the team's substitute athlete. Disqualified during the Games Disqualified after the Games 2010 Vancouver On 23 December 2016, the IOC stated that it would re-analyse all samples from Russian athletes at the Olympic Winter Games of Vancouver 2010. In October 2017, the IOC stated that a single athlete had been caught by retests of doping samples from the Vancouver 2010 Winter Olympic Games. Biathlete Teja Gregorin was confirmed as this athlete by the International Biathlon Union. A total of 1195 samples from Vancouver 2010 (70% of the 1700 available) were reanalyzed. This included all medalists and all of the 170 Russian athletes. The IOC requested all Russian samples from the 2010 Games be retested after the publication of the McLaren Report. Russia's disappointing performance at Vancouver (11th in the gold medal table with a total of three golds) has been cited as the reason behind the implementation of a doping scheme alleged to have been in operation at major events such as the 2014 Games at Sochi. Did not start Disqualified after the Games 2014 Sochi According to the director of the country's antidoping laboratory at the time, Grigory Rodchenkov, dozens of Russian athletes at the 2014 Winter Olympics in Sochi, including at least 15 medal winners, were part of a state-run doping program, meticulously planned for years to ensure dominance at the Games. In December 2016, following the release of the McLaren report on Russian doping at the Sochi Olympics, the International Olympic Committee announced the initiation of an investigation of 28 Russian athletes (the number later rose to 46) at the Sochi Olympic Games. La Gazzetta dello Sport reported the names of 17 athletes, of whom 15 were among the 28 under investigation.
Three female figure skaters were named as being under investigation. They are Adelina Sotnikova, the singles gold medalist, as well as pairs skaters Tatiana Volosozhar and Ksenia Stolbova. Volosozhar and Stolbova won gold and silver medals, respectively, in pairs skating. Both also won gold medals in the team event, which also puts the other eight team medalists at risk of losing their golds. In November 2017, the proceedings against Sotnikova were dropped. Six cross-country skiers were suspended from competition on the basis of the McLaren Report: Evgeniy Belov, Alexander Legkov, Alexey Petukhov, Maxim Vylegzhanin, Yulia Ivanova and Evgenia Shapovalova. Legkov won a gold and a silver medal, and Vylegzhanin won three silver medals. The IOC disqualified all six from Sochi, imposed lifetime bans and, in the process, stripped Legkov and Vylegzhanin of the medals they had won in four events (three individual medals and one team medal). Nikita Kryukov, Alexander Bessmertnykh and Natalya Matveyeva were also disqualified on 22 December 2017. The International Biathlon Union suspended two Russian biathletes who were in the Sochi games: Olga Vilukhina and Yana Romanova. Vilukhina won silver in the sprint, and both women were on a relay team that won the silver medal. They were disqualified and stripped of their medals on 27 November 2017. The International Bobsleigh and Skeleton Federation suspended four Russian skeleton sliders. They were Alexander Tretyakov, Elena Nikitina, Maria Orlova and Olga Potylitsina. Tretyakov won a gold medal, and Nikitina won a bronze. On 22 November 2017, the IOC stripped these medals and imposed lifetime Olympic bans on all four. Skeleton racer Sergei Chudinov was sanctioned on 28 November 2017. Seven Russian female ice hockey players were to have hearings before the Oswald Commission on 22 November 2017. Two of the seven were accused of submitting samples whose readings were physiologically impossible for a female athlete. The Russian women's ice hockey team finished sixth at Sochi 2014. On 12 December 2017, six of them were disqualified. Tatiana Burina and Anna Shukina were also disqualified ten days later. On 24 November 2017, the IOC imposed life bans on bobsledder Alexandr Zubkov and speed skater Olga Fatkulina, who won a combined three medals (two golds and one silver). All their results were disqualified, meaning that Russia lost its first place in the medal standings. Bobsledders Aleksei Negodaylo and Dmitry Trunenkov were disqualified three days later. Three other Russian athletes who did not win medals were banned on 29 November 2017. Biathlete Olga Zaitseva and two other Russian athletes were banned on 1 December 2017. Bobsledder Alexey Voyevoda, who had already been stripped of his gold medals due to anti-doping violations committed by his teammates, was sanctioned on 18 December 2017. Speed skaters Ivan Skobrev and Artyom Kuznetsov, lugers Albert Demchenko and Tatiana Ivanova, and bobsledders Liudmila Udobkina and Maxim Belugin were disqualified on 22 December 2017, bringing the total to 43. Demchenko and Ivanova were also stripped of their silver medals. On 15 February 2020, the International Biathlon Union announced that because of a doping violation, Evgeny Ustyugov and the Russian men's 4 × 7.5 km relay team had been disqualified from the 2014 Olympics. The IOC affirmed the decision, but the medals have not yet been reallocated.
2018 Pyeongchang After the Russian Olympic Committee was barred from competing at the 2018 Winter Olympics, Russian athletes deemed to be clean were allowed to compete as Olympic Athletes from Russia. 2022 Beijing By the end of the Beijing Olympics, a total of five athletes were reported for doping violations: Spanish figure skater Laura Barquero, Russian figure skater Kamila Valieva, Iranian alpine skier Hossein Saveh Shemshaki, and two Ukrainians: cross-country skier Valentyna Kaminska and bobsledder Lidiia Hunko. Controversy surrounding the ROC The medal ceremony for the team event in figure skating, where the Russian Olympic Committee (ROC) won gold, originally scheduled for 8 February, was delayed over what International Olympic Committee (IOC) spokesperson Mark Adams described as a situation that required "legal consultation" with the International Skating Union. Several media outlets reported on 9 February that the issue was over a positive test for trimetazidine by the ROC's Kamila Valieva, which was officially confirmed on 11 February. Valieva's sample in question was taken by the Russian Anti-Doping Agency (RUSADA) at the 2022 Russian Figure Skating Championships on 25 December, but the sample was not analyzed at the World Anti-Doping Agency (WADA) laboratory where it was sent for testing until 8 February, one day after the team event concluded. Valieva was assessed a provisional suspension after her positive result, but upon appeal, she was cleared by RUSADA's independent Disciplinary Anti-Doping Committee (DAC) on 9 February, just a day after receiving the provisional suspension. Following formal appeals lodged by the IOC, the International Skating Union (ISU), and WADA to review the RUSADA DAC's decision, the Court of Arbitration for Sport (CAS) heard the case on 13 February, and removal of her provisional suspension was upheld on 14 February, ahead of her scheduled appearance in the women's singles event beginning 15 February. Due to Valieva being a minor at the time, as well as being classified as a "protected person" under WADA guidelines, RUSADA and the IOC announced on 12 February that they would broaden the scope of their respective investigations to include members of her entourage (e.g. coaches, team doctors, etc.). On 14 February, the CAS declined to reinstate Valieva's provisional suspension issued the previous Monday and ruled that she would be allowed provisionally to compete in the women's singles event. The CAS decided that preventing her from competing "would cause her irreparable harm in the circumstances", while noting that any medals won by Valieva at the Beijing Olympics would be withheld pending the results of the continuing investigation into her doping violation. The temporary provisional decision from the court was made on three grounds: (1) due to her age, she is a "Protected Person" as per the WADA Code, subject to different rules than adult athletes; (2) the athlete "did not test positive during the Olympic Games in Beijing"; and (3) "there were serious issues of untimely notification of the results, ... which impinged upon the Athlete's ability to establish certain legal requirements for her benefit". The IOC announced that the team event medal ceremony, as well as the women's singles flower ceremony and medal ceremony if Valieva were to medal, would not take place until the investigation was over and a concrete decision had been made on whether to strip Valieva and the ROC of their medals.
To allow for the possibility that Valieva's results might be disqualified, the IOC asked the ISU to expand the qualifying field for the women's singles free skating by one, to 25. On 29 January 2024, the Court of Arbitration for Sport (CAS) ruled in Valieva's doping case involving the Russian Anti-Doping Agency (RUSADA), the International Skating Union (ISU) and the World Anti-Doping Agency (WADA), imposing a four-year ban on Valieva backdated to 25 December 2021 and disqualifying all of her competitive results from that date, including her first-place finishes at the 2022 European Figure Skating Championships and the 2022 Olympic team event. Disqualified after the Games Did not start Athletes who were selected for the Games, but provisionally suspended before competing. See also Doping at the Asian Games List of doping cases in athletics List of doping cases in sport List of sporting scandals List of stripped Olympic medals World Anti-Doping Agency Technology doping References External links Olympic Movement Anti-doping Code (PDF) Olympic Games controversies Lists of doping cases Olympic Games Olympics-related lists Doping in Russia Lists of Olympic competitors
Doping at the Olympic Games
[ "Chemistry" ]
11,408
[ "Drug-related lists" ]
4,172,895
https://en.wikipedia.org/wiki/E18%20error
The E18 error is an error message on Canon digital cameras. The E18 error occurs when anything prevents the zoom lens from properly extending or retracting. The error has become notorious in the Canon user community, as it can completely disable the camera, requiring expensive repairs. ConsumerAffairs.com reports that the "lens has a feature called bellows claw, which is a gear that physically extends and retracts the lens. A piece that holds the lens, the barrier plate, is not large enough and can sometimes cause the bellows claw to malfunction, resulting in a stuck lens". The result is a black screen that only contains the error message, E18. Another problem mentioned on the site blames a sticky iris in the lens, caused by grease entering inside from the microphones built into the lens. The buildup prevents the iris from opening properly. Although the use of the E18 error coding made this problem seem to be the particular domain of Canon cameras, the problem is actually quite common throughout all cameras with telescopic lens barrels. As a result, Canon has since dropped the use of this error code in its newer cameras. In its place it has adopted the more common term "lens error" that other manufacturers use. As such, its newer cameras report this term when the problem occurs. Causes According to Canon, one may get an E18 due to any of the following: Camera activating and lens opening while in a confined space or being blocked Extracted lens getting jarred Low battery condition as the unit is turned on or off Dropping of the camera Foreign substances, such as dust, sand or dirt entering the camera body. General jarring of the camera "General camera malfunction" One major contributor to E18 lens errors is the improper use of camera cases, or the carrying of cameras in pockets. An inadvertent activation of the camera while in the case or pocket may cause the lens to extend while its movement is restricted, causing the error. Another cause is that sand, dust, and dirt will accumulate in the bottom of the case if it is not cleaned regularly (or lint, in the case of pockets). These materials readily cling to the camera by electrostatic build-up from the camera rubbing against the side of the case, especially for those cases with soft fibrous interiors. The lens error will occur once these materials work their way into the lens mechanism. Another major contributor is sand in general. Extra care should be taken when bringing a camera to the beach. Sand can cling to the lens barrel, again by electrostatic attraction. This may jam the lens mechanism when it tries to close. When at the beach, always inspect the lens barrel prior to closing to ensure that no sand particles are clinging to it (a single grain can jam the camera). Repairing the E18 error Two different types of problems are reported: The camera can take a couple of shots (clear and in focus), then stops working. Removing and replacing the batteries may produce two or three extra shots. Canon's instructions (by phone) are to 'remove the batteries, rotate the on button and hold for 5 seconds, and then replace the battery'. The few lucky pictures are clear and in focus. A better solution (not provided by Canon) is to connect the camera to a TV or a computer. This may completely solve the problem. If not, it may at least allow an extra 10–15 shots. Several online forums mention that connecting the camera to a TV completely resolved the E18 error. The camera lens is out of focus.
Some users have been able to manipulate the lens back into place (see the links below). To fix the problem, it is often necessary to disassemble, realign and reassemble the camera and optical assembly. A non-warranty repair at an authorized service center reportedly costs between US$79 and $250. There are a number of online guides to repairing E18 errors oneself, from simple guides on tapping the lens back into place to complete disassembly/realignment/reassembly instructions. Many of these fixes are presented in the "External links" listed below, and may also be found through search engines using the more common term "lens error". Class action A Chicago law firm, Horwitz, Horwitz & Associates, filed a class action lawsuit in 2005. The law firm Girard Gibbs & De Bartolomeo LLP is investigating this camera error and may file a class-action lawsuit against Canon. A lawsuit was filed and dismissed in 2006, but the plaintiffs planned to appeal. References Digital Camera Disasters: Will Yours Get Fixed? One widespread camera problem gets out-of-warranty repairs, another gets a lawsuit. (Grace Aquino, PC World, Tuesday 21 February 2006) Repair guide IXUS 40 aka SD300 Repair Guide E18 quick fix (CNet Digital cameras forum) Investigation by Girard Gibbs & De Bartolomeo LLP (currently accepting users to help with the investigation) External links E18 error experiences log – list of cameras affected by this error along with anecdotes Photographed solution for the E18 problem E18 lens error repair techniques that do not involve opening of the camera. Extensive reader database of additional tips for this problem. The only fix that worked for me An entire website dedicated to the E18 error (Canon lens error) An eBay seller that fixes Canon S2, S3, S5 and other SX models with lens error Fix Errors, Problems, Mistakes Chicago law firms Canon PowerShot cameras Canon digital cameras Computer errors
E18 error
[ "Technology" ]
1,157
[ "Computer errors" ]
4,173,095
https://en.wikipedia.org/wiki/Solar%20transition%20region
The solar transition region is a region of the Sun's atmosphere between the upper chromosphere and corona. It is important because it is the site of several unrelated but significant transitions in the physics of the solar atmosphere: Below, gravity tends to dominate the shape of most features, so that the Sun may often be described in terms of layers and horizontal features (like sunspots); above, dynamic forces dominate the shape of most features, so that the transition region itself is not a well-defined layer at a particular altitude. Below, most of the helium is not fully ionized, so that it radiates energy very effectively; above, it becomes fully ionized. This has a profound effect on the equilibrium temperature (see below). Below, the material is opaque to the particular colors associated with spectral lines, so that most spectral lines formed below the transition region are absorption lines in infrared, visible light, and near ultraviolet, while most lines formed at or above the transition region are emission lines in the far ultraviolet (FUV) and X-rays. This makes radiative transfer of energy within the transition region very complicated. Below, gas pressure and fluid dynamics usually dominate the motion and shape of structures; above, magnetic forces dominate the motion and shape of structures, giving rise to different simplifications of magnetohydrodynamics. The transition region itself is not well studied, in part because of the computational cost, uniqueness issues, and complexity of the Navier–Stokes equations combined with electrodynamics. Helium ionization is important because it is a critical part of the formation of the corona: when solar material is cool enough that the helium within it is only partially ionized (i.e. retains one of its two electrons), the material cools by radiation very effectively via both black-body radiation and direct coupling to the helium Lyman continuum. This condition holds at the top of the chromosphere, where the equilibrium temperature is a few tens of thousands of kelvins. Applying slightly more heat causes the helium to ionize fully, at which point it ceases to couple well to the Lyman continuum and does not radiate nearly as effectively. The temperature jumps rapidly to nearly one million kelvin, the temperature of the solar corona. This phenomenon is called the temperature catastrophe and is a phase transition analogous to boiling water to make steam; in fact, solar physicists refer to the process as evaporation by analogy to the more familiar process with water. Likewise, if the amount of heat being applied to coronal material is slightly reduced, the material very rapidly cools down past the temperature catastrophe to around one hundred thousand kelvin, and is said to have condensed. The transition region consists of material at or around this temperature catastrophe. See also Moreton wave Coronal hole Solar spicule References External links Animated explanation of the Transition Region (and Chromosphere) (University of South Wales). Animated explanation of the temperature of the Transition Region (and Chromosphere) (University of South Wales). Space plasmas Transition region Light sources
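A schematic way to see why no stable equilibrium exists between chromospheric and coronal temperatures (an illustrative sketch, not taken from the article): for an optically thin plasma the radiative losses per unit volume can be written

    L_{\mathrm{rad}} = n_e \, n_{\mathrm{H}} \, \Lambda(T)

where n_e and n_H are the electron and hydrogen number densities and Λ(T) is the radiative loss function. A steady state requires the heating rate H to balance L_rad, and that balance is stable only where Λ(T) rises with T. Once helium is fully ionized (above roughly 10^5 K), Λ(T) falls with rising temperature, so a slight excess of heating raises T, which lowers the losses further, and the temperature runs away toward coronal values near 10^6 K; this is the temperature catastrophe described above.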
Solar transition region
[ "Physics" ]
620
[ "Space plasmas", "Astrophysics" ]
4,173,098
https://en.wikipedia.org/wiki/Richard%20Dunthorne
Richard Dunthorne (1711 – 3 March 1775) was an English astronomer and surveyor, who worked in Cambridge as astronomical and scientific assistant to Roger Long (master of Pembroke Hall and Lowndean Professor of Astronomy and Geometry), and also concurrently for many years as surveyor to the Bedford Level Corporation. Life and work There are short biographical notes of Dunthorne, one in the Philosophical Transactions (Abridgement Series, published 1809) (unsigned), another in the 'Dictionary of National Biography' (vol. 16), and a third by W T Lynn. Dunthorne was born in humble circumstances in Ramsey, Cambridgeshire, where he attended the free grammar school. There he attracted the notice of Roger Long (later Master of Pembroke Hall, Cambridge), whose protégé Dunthorne became. Dunthorne moved to Cambridge where Long first appointed him as a "footboy", and where he received some further education (though this does not seem to have been regular university education). Dunthorne then "managed" a preparatory school in Coggeshall, Essex, and later returned to Cambridge where Long obtained for him an appointment as a "butler" at Pembroke Hall, an office that Dunthorne retained for the rest of his life. Here, Dunthorne's main activity seems to have been in assisting Long in astronomical and scientific work. Dunthorne also held an appointment for some years, concurrently with his work with Long, as superintendent of works of the Bedford Level Corporation, responsible for water management in the Fens; he began this work several years before 1761, continuing into the 1770s. In this role, Dunthorne was involved in a survey of the fens in Cambridgeshire, and he also supervised construction of locks near Chesterton on the River Cam. Dunthorne's association with Long remained lifelong, and in the end Dunthorne acted as executor of Long's will. Lunar tables Dunthorne published a book of astronomical tables in 1739 entitled Practical Astronomy of the Moon: or, new Tables... Exactly constructed from Sir Isaac Newton's Theory, as published by Dr Gregory in his Astronomy, London & Oxford, 1739. These tables were modelled on Isaac Newton's lunar theory of 1702, and were intended to facilitate the testing of that theory. In a 1746 letter to the keeper of Cambridge's Woodwardian Museum, Dunthorne wrote: "After I had compared a good Number of modern Observations made in different Situations of the Moon and of her Orbit in respect of the Sun, with the Newtonian Theory . . . I proceeded to examine the mean Motion of the Moon, of her Apogee, and Nodes, to see whether they were well represented by the Tables for any considerable Number of Years . . . " On the basis of his observations, Dunthorne proposed some adjustments of the numerical terms of the theory. Acceleration of the Moon Dunthorne is particularly remembered for his study of the phenomenon of the changing apparent speed of the Moon in its orbit. Edmond Halley, in about 1695, had already suggested, on the basis of a comparison between contemporary observations and ancient records of the timing of eclipses, that the Moon was very gradually accelerating in its orbit. (It was not yet known in Halley's or in Dunthorne's time that what is actually happening is a slowing-down of the Earth's rate of rotation – see Ephemeris time.) 
Dunthorne's computations, based in part on ancient accounts of eclipses, confirmed the apparent acceleration, and he was the first to quantify the effect, which he put at +10" (arcseconds/century^2) in terms of the difference of lunar longitude. Dunthorne's estimate is close to those assessed later, e.g. in 1786 by de Lalande, and not far from the values of about 10" to nearly 13" derived about a century later. Astronomical publications and observations Dunthorne published papers in the Philosophical Transactions, including "On the motion of the Moon" (1746), "On the acceleration of the Moon" (1749), and the letter "Concerning comets" in 1751. He observed the transits of Venus in 1761 and 1769, and also published tables on the motion of Jupiter's satellites in 1762. Work for the Nautical Almanac On 18 July 1765, the Board of Longitude (effectively led by Nevil Maskelyne) appointed Dunthorne as the first "Comparer of the Ephemeris and Corrector of the Proofs" for the (then still future) Nautical Almanac and Astronomical Ephemeris. The first issue appeared with data for the year 1767, breaking new ground in providing computational tools to enable mariners to use lunar observations to find their longitude at sea. Dunthorne worked as sole comparer for the first three issues, with data for 1767–69, and afterward continued as one of several comparers until the issue for 1776. Dunthorne also contributed a method for clearing nautical lunar observations of the effects of refraction and parallax, for the purpose of finding the longitude at sea, and Maskelyne included this in his 'Tables requisite to be used with the Nautical Ephemeris', an accessory volume published to accompany the Nautical Almanac. It is also reported that Dunthorne in 1772 received from the Board of Longitude a reward of £50 for this contribution towards shortening the tedious calculations involved in "clearing the lunar distance" (at the same time as a similar reward was given to the contributor of an alternative method for the same purpose, Israel Lyons, 1739–1775). Improvements were added and "Dunthorne's improved method" was included in an edition of 1802. In this area of celestial navigation, Dunthorne has been credited as the first to apply trigonometrical formulae for the general spherical triangle to the reduction of lunar distances and to give auxiliary tables for that purpose. Benefactions in Cambridge Dunthorne planned and funded the construction of an observatory in 1765. The observatory was situated on the Shrewsbury Gate of St. John's College. Dunthorne also gave astronomical instruments to the college. The observatory remained in place until its closure in 1859. A contemporary, Rev. William Ludlam (in charge of the St John's College observatory from 1767), described Dunthorne as one "who without the benefit of an Academical education is arrived at such a perfection in many branches of learning, and particularly in Astronomy, as would do honour to the proudest Professor in any University . . . he joined to a consummate excellence in his profession a generosity without limit in the exercise of it." Dunthorne died at Cambridge. The crater Dunthorne on the Moon is named after him. Dunthorne's publications Richard Dunthorne (1739), Practical Astronomy of the Moon: or, new Tables... Exactly constructed from Sir Isaac Newton's Theory, as published by Dr Gregory in his Astronomy, London & Oxford, 1739. Richard Dunthorne (1746), "A Letter from Mr. Richard Dunthorne, to the Rev. Mr. 
Charles Mason, F. R. S. and Woodwardian Professor of Nat. Hist. at Cambridge, concerning the Moon's Motion", Philosophical Transactions, Volume 44 (1746), pp. 412–420. Richard Dunthorne (1749), "A Letter from the Rev. Mr. Richard Dunthorne to the Reverend Mr. Richard Mason F. R. S. and Keeper of the Wood-Wardian Museum at Cambridge, concerning the Acceleration of the Moon", Philosophical Transactions, Vol. 46 (1749–1750) No. 492, pp. 162–172; also given in Philosophical Transactions (abridgements) (1809), vol. 9 (for 1744–49), pp. 669–675 as "On the Acceleration of the Moon, by the Rev. Richard Dunthorne". Richard Dunthorne (1751), "A Letter from Mr. Rich. Dunthorne to the Rev. Dr. Long, F. R. S. Master of Pembroke-Hall in Cambridge, and Lowndes's Professor of Astronomy and Geometry in That University, concerning Comets", Philosophical Transactions, Volume 47 (1751), pp. 281–288. Richard Dunthorne (1761), "Elements of New Tables of the Motions of Jupiter's Satellites: In a Letter to the Reverend Charles Mason, D. D. Woodwardian Professor in the University of Cambridge, and F. R. S. from Mr. Richard Dunthorne", Philosophical Transactions, Volume 52 (1761), pp. 105–107. Notes and references Other sources Mary Croarken (2002), "Providing Longitude for All", Journal of Maritime Research (National Maritime Museum, Greenwich), September 2002. Library of St John's College, Cambridge, (online article) mentioning Dunthorne in connection with his astronomically-related gifts to the college 1764–5, including a regulator clock by John Shelton. W T Lynn (1905), "Richard Dunthorne", The Observatory, vol. 28 (1905), pp. 215–216. Philosophical Transactions (Abridgement Series) (1809), vol. 9 (for 1744–49), pages 669–70, (unsigned) biographical note about Richard Dunthorne. Frédéric Marguet (Capitaine de Vaisseau) (1931), "Histoire générale de la navigation du XVe au XXe siècle", Paris 1931, chapter 7, at page 242. Christof A. Plicht, "R. Dunthorne," Red Hill Observatory. Curious About Astronomy, "Right" Answers. Maskelyne, N. (1767), Nautical Almanac and Astronomical Ephemeris, editions for 1767 and 1768 (see especially Maskelyne's Preface, acknowledging Dunthorne). 18th-century English astronomers Amateur astronomers 1711 births 1775 deaths People from Ramsey, Cambridgeshire
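As a rough illustration of what a coefficient like Dunthorne's +10" means in practice (an illustrative sketch, not drawn from the sources above; it assumes the conventional form in which the quoted coefficient multiplies the square of the elapsed time in Julian centuries):

    # Illustrative sketch: accumulated offset in lunar mean longitude implied by
    # a secular-acceleration coefficient c (arcseconds per century squared),
    # assuming the conventional form delta_L = c * T**2 with T in centuries.
    def accumulated_offset_arcsec(c, centuries):
        return c * centuries ** 2

    # With Dunthorne's +10", an eclipse observed 20 centuries before his epoch
    # would be displaced by about 10 * 20**2 = 4000 arcseconds (about 1.1
    # degrees) in longitude relative to tables that ignore the acceleration.
    print(accumulated_offset_arcsec(10, 20))  # 4000

This order of magnitude is why records of ancient eclipses, rather than contemporary observations alone, made the effect detectable.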
Richard Dunthorne
[ "Astronomy" ]
2,070
[ "Astronomers", "Amateur astronomers" ]
4,173,255
https://en.wikipedia.org/wiki/Brain%20mapping
Brain mapping is a set of neuroscience techniques predicated on the mapping of (biological) quantities or properties onto spatial representations of the (human or non-human) brain, resulting in maps. According to the definition established in 2013 by the Society for Brain Mapping and Therapeutics (SBMT), brain mapping is specifically defined, in summary, as the study of the anatomy and function of the brain and spinal cord through the use of imaging, immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering, neurophysiology and nanotechnology. In 2024, a team of 287 researchers completed a full brain map of an adult animal (a Drosophila melanogaster, or fruit fly) and published their results in Nature. Overview All neuroimaging is considered part of brain mapping. Brain mapping can be conceived as a higher form of neuroimaging, producing brain images supplemented by the result of additional (imaging or non-imaging) data processing or analysis, such as maps projecting (measures of) behavior onto brain regions (see fMRI). One such map, called a connectogram, depicts cortical regions around a circle, organized by lobes. Concentric circles within the ring represent various common neurological measurements, such as cortical thickness or curvature. In the center of the circles, lines representing white matter fibers illustrate the connections between cortical regions, weighted by fractional anisotropy and strength of connection. At higher resolutions brain maps are called connectomes. These maps incorporate individual neural connections in the brain and are often presented as wiring diagrams. Brain mapping techniques are constantly evolving, and rely on the development and refinement of image acquisition, representation, analysis, visualization and interpretation techniques. Functional and structural neuroimaging are at the core of the mapping aspect of brain mapping. Some scientists have criticized the brain-image-based claims made in scientific journals and the popular press, like the discovery of "the part of the brain responsible" for things like love, musical ability, or a specific memory. Many mapping techniques have a relatively low resolution, with hundreds of thousands of neurons in a single voxel. Many functions also involve multiple parts of the brain, meaning that this type of claim is probably both unverifiable with the equipment used and generally based on an incorrect assumption about how brain functions are divided. It may be that most brain functions will only be described correctly after much finer-grained measurements that look not at large regions but instead at a very large number of tiny individual brain circuits. Many of these studies also have technical problems like small sample size or poor equipment calibration, which means they cannot be reproduced; such considerations are sometimes ignored in order to produce a sensational journal article or news headline. In some cases brain mapping techniques are used for commercial purposes, lie detection, or medical diagnosis in ways which have not been scientifically validated. History In the late 1980s in the United States, the Institute of Medicine of the National Academy of Sciences was commissioned to establish a panel to investigate the value of integrating neuroscientific information across a variety of techniques. 
Of specific interest is the use of structural and functional magnetic resonance imaging (fMRI), diffusion MRI (dMRI), magnetoencephalography (MEG), electroencephalography (EEG), positron emission tomography (PET), near-infrared spectroscopy (NIRS) and other non-invasive scanning techniques to map the anatomy, physiology, perfusion, function and phenotypes of the human brain. Both healthy and diseased brains may be mapped to study memory, learning, aging, and drug effects in various populations such as people with schizophrenia, autism, and clinical depression. This led to the establishment of the Human Brain Project. It may also be crucial to understanding traumatic brain injuries (as in the case of Phineas Gage) and improving brain injury treatment. Following a series of meetings, the International Consortium for Brain Mapping (ICBM) evolved. The ultimate goal is to develop flexible computational brain atlases. Achievements The interactive citizen-science website Eyewire, launched in 2012, maps the retinal cells of mice. In 2021, the most comprehensive 3D map of the human brain was published by researchers at Google. It shows neurons and their connections, along with blood vessels and other components, of a millionth of a brain. For the map, the 1 mm³ fragment was sliced into about 5,300 sections of about 30 nanometer thickness, which were then each scanned with an electron microscope. The interactive map required 1.4 petabytes of storage space. About two months later, scientists reported that they had created the first complete neuron-level-resolution 3D map of a monkey brain, which they scanned via a new method within 100 hours. They made only a fraction of the 3D map publicly available, as the entire map takes more than 1 petabyte of storage space even when compressed. In October 2021, the BRAIN Initiative Cell Census Network concluded the first phase of a long-term project to generate an atlas of the entire mouse (mammalian) brain with 17 studies, including an atlas and census of cell types in the primary motor cortex. In 2024, FlyWire, a team of 287 researchers spanning 76 institutions, completed a brain map, or connectome, of an adult animal (a Drosophila melanogaster, or fruit fly) and published their results in Nature. Prior to this, the only adult animal to have its brain entirely reconstructed was the nematode Caenorhabditis elegans, but the fruit fly brain map is the first "complete map of any complex brain", according to Murthy, one of the researchers involved. Primary mapping data was collected through electron microscopy, assisted by artificial intelligence and by citizen scientists, who corrected errors that the artificial intelligence made. The resulting model had more than 140,000 neurons with over 50 million synapses. From the model, researchers expect to identify how the brain creates new connections for functions such as vision, creating digital-twin equivalents to track how segments of the neural connection map respond to external signals, including those from the nervous system. Brain development In 2021, the first connectome that shows how an animal's brain changes throughout its lifetime was reported. Scientists mapped and compared the whole brains of eight isogenic C. elegans worms, each at a different stage of development. Later that year, scientists combined electron microscopy and brainbow imaging to show for the first time the development of a mammalian neural circuit. They reported the complete wiring diagrams between the CNS and muscles of ten individual mice. 
Vision In August 2021, scientists of the MICrONS program, launched in 2016, published a functional connectomics dataset that "contains calcium imaging of an estimated 75,000 neurons from primary visual cortex (VISp) and three higher visual areas (VISrl, VISal and VISlm), that were recorded while a mouse viewed natural movies and parametric stimuli". Based on this data they also published "interactive visualizations of anatomical and functional data that span all 6 layers of mouse primary visual cortex and 3 higher visual areas (LM, AL, RL) within a cubic millimeter volume" – the MICrONS Explorer. Brain regeneration In 2022, a first spatiotemporal cellular atlas of axolotl brain development and regeneration, the interactive Axolotl Regenerative Telencephalon Interpretation via Spatiotemporal Transcriptomic Atlas, revealed key insights about axolotl brain regeneration. Current atlas tools Talairach Atlas, 1988 Harvard Whole Brain Atlas, 1995 MNI Template, 1998 (the standard template of SPM and the International Consortium for Brain Mapping) Atlas of the Developing Human Brain, 2012 Infant Brain Atlas, 2023 See also Outline of brain mapping Outline of the human brain Brain Mapping Foundation BrainMaps Project Center for Computational Biology Connectogram FreeSurfer Human Connectome Project IEEE P1906.1 List of neuroscience databases Brain atlas Map projection Neuroimaging software Whole brain emulation Topographic map (neuroanatomy) Society for Brain Mapping and Therapeutics Computational anatomy References Further reading Rita Carter (1998). Mapping the Mind. F.J. Chen (2006). Brain Mapping And Language. F.J. Chen (2006). Focus on Brain Mapping Research. F.J. Chen (2006). Trends in Brain Mapping Research. F.J. Chen (2006). Progress in Brain Mapping Research. Koichi Hirata (2002). Recent Advances in Human Brain Mapping: Proceedings of the 12th World Congress of the International Society for Brain Electromagnetic Topography (ISBET 2001). Konrad Maurer and Thomas Dierks (1991). Atlas of Brain Mapping: Topographic Mapping of EEG and Evoked Potentials. Konrad Maurer (1989). Topographic Brain Mapping of EEG and Evoked Potentials. Arthur W. Toga and John C. Mazziotta (2002). Brain Mapping: The Methods. Tatsuhiko Yuasa, James Prichard and S. Ogawa (1998). Current Progress in Functional Brain Mapping: Science and Applications. Neurophysiology Neuroimaging Neurosurgery Bioinformatics
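As a minimal illustration of the kind of data structure the wiring diagrams described above reduce to (a sketch, not from the article; the region names and weights below are invented for the example):

    # Minimal sketch: a connectome-style wiring diagram as a weighted graph.
    # Nodes are brain regions (or neurons); each edge carries a connection
    # strength, e.g. a fiber count or a fractional-anisotropy-weighted value.
    connectome = {
        ("V1", "V2"): 0.81,
        ("V1", "MT"): 0.42,
        ("V2", "MT"): 0.57,
    }

    def strongest_connections(graph, region):
        """Return the connections involving a region, strongest first."""
        edges = [(pair, w) for pair, w in graph.items() if region in pair]
        return sorted(edges, key=lambda e: e[1], reverse=True)

    print(strongest_connections(connectome, "V1"))
    # [(('V1', 'V2'), 0.81), (('V1', 'MT'), 0.42)]

Real connectomes differ mainly in scale: the fruit-fly map described above is a graph of this general shape with roughly 140,000 nodes and over 50 million edges.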
Brain mapping
[ "Engineering", "Biology" ]
1,880
[ "Bioinformatics", "Biological engineering" ]
4,173,350
https://en.wikipedia.org/wiki/Polynomial%20expansion
In mathematics, an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition. Expansion of a polynomial expression can be obtained by repeatedly replacing subexpressions that multiply two other subexpressions, at least one of which is an addition, by the equivalent sum of products, continuing until the expression becomes a sum of (repeated) products. During the expansion, simplifications such as grouping of like terms or cancellation of terms may also be applied. Instead of multiplications, the expansion steps could also involve replacing powers of a sum of terms by the equivalent expression obtained from the binomial formula; this is a shortened form of what would happen if the power were treated as a repeated multiplication, and expanded repeatedly. It is customary to reintroduce powers in the final result when terms involve products of identical symbols. Simple examples of polynomial expansions are the well known rules (x + y)^2 = x^2 + 2xy + y^2 and (x + y)(x − y) = x^2 − y^2, when used from left to right. A more general single-step expansion will introduce all products of a term of one of the sums being multiplied with a term of the other: (a + b)(c + d + e) = ac + ad + ae + bc + bd + be. An expansion which involves multiple nested rewrite steps is that of working out a Horner scheme to the (expanded) polynomial it defines, for instance ((ax + b)x + c)x + d = ax^3 + bx^2 + cx + d. The opposite process of trying to write an expanded polynomial as a product is called polynomial factorization. Expansion of a polynomial written in factored form To multiply two factors, each term of the first factor must be multiplied by each term of the other factor. If both factors are binomials, the FOIL rule can be used, which stands for "First Outer Inner Last," referring to the terms that are multiplied together. For example, expanding (x + 2)(2x − 5) yields 2x^2 − 5x + 4x − 10 = 2x^2 − x − 10. Expansion of (x + y)^n When expanding (x + y)^n, a special relationship exists between the coefficients of the terms when written in order of descending powers of x and ascending powers of y. The coefficients will be the numbers in the (n + 1)th row of Pascal's triangle (since Pascal's triangle starts with row and column number of 0). For example, when expanding (x + y)^4, the following is obtained: x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4. See also Polynomial factorization Factorization Multinomial theorem External links Discussion Review of Algebra: Expansion, University of Akron Online tools Expand page, quickmath.com Online Calculator with Symbolic Calculations, livephysics.com Polynomials
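The repeated-distribution procedure described above is straightforward to mechanize. The following is a minimal sketch (not part of the article) that represents a polynomial in one variable as a list of coefficients, constant term first, and expands a product by multiplying every term of one factor by every term of the other:

    # Expand the product of two polynomials given as coefficient lists
    # (constant term first): the coefficient of x^(i+j) collects a_i * b_j.
    def multiply(p, q):
        result = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                result[i + j] += a * b
        return result

    # (x + 2)(2x - 5): [2, 1] encodes x + 2, [-5, 2] encodes 2x - 5.
    print(multiply([2, 1], [-5, 2]))  # [-10, -1, 2], i.e. 2x^2 - x - 10

    # (x + y)^4, treating y as the constant term: repeated multiplication by
    # [1, 1] reproduces row 4 of Pascal's triangle.
    p = [1, 1]
    for _ in range(3):
        p = multiply(p, [1, 1])
    print(p)  # [1, 4, 6, 4, 1]

Grouping of like terms happens automatically here, since coefficients of equal degree accumulate in the same list position.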
Polynomial expansion
[ "Mathematics" ]
488
[ "Polynomials", "Algebra" ]
4,173,609
https://en.wikipedia.org/wiki/Tiller%20%28botany%29
A tiller is a shoot that arises from the base of a grass plant. The term refers to all shoots that grow after the initial parent shoot grows from a seed. Tillers are segmented, each segment possessing its own two-part leaf. They are involved in vegetative propagation and, in some cases, also seed production. "Tillering" refers to the production of side shoots and is a property possessed by many species in the grass family. This enables them to produce multiple stems (tillers) starting from the initial single seedling. This ensures the formation of dense tufts and multiple seed heads. Tillering rates are heavily influenced by soil water quantity. When soil moisture is low, grasses tend to develop more sparse and deep root systems (as opposed to dense, lateral systems). Thus, in dry soils, tillering is inhibited: the lateral nature of tillering is not supported by lateral root growth. See also Crown (botany) References Grasses Biology terminology Plant morphology
Tiller (botany)
[ "Biology" ]
200
[ "Plant morphology", "nan", "Plants" ]
4,173,711
https://en.wikipedia.org/wiki/History%20of%20molecular%20biology
The history of molecular biology begins in the 1930s with the convergence of various, previously distinct biological and physical disciplines: biochemistry, genetics, microbiology, virology and physics. With the hope of understanding life at its most fundamental level, numerous physicists and chemists also took an interest in what would become molecular biology. In its modern sense, molecular biology attempts to explain the phenomena of life starting from the macromolecular properties that generate them. Two categories of macromolecules in particular are the focus of the molecular biologist: 1) nucleic acids, among which the most famous is deoxyribonucleic acid (or DNA), the constituent of genes, and 2) proteins, which are the active agents of living organisms. One definition of the scope of molecular biology, therefore, is the characterization of the structure, function and relationships of these two types of macromolecules. This relatively limited definition allows for the estimation of a date for the so-called "molecular revolution", or at least the establishment of a chronology of its most fundamental developments. General overview In its earliest manifestations, molecular biology—the name was coined by Warren Weaver of the Rockefeller Foundation in 1938—was an idea of physical and chemical explanations of life, rather than a coherent discipline. Following the advent of the Mendelian-chromosome theory of heredity in the 1910s and the maturation of atomic theory and quantum mechanics in the 1920s, such explanations seemed within reach. Weaver and others encouraged (and funded) research at the intersection of biology, chemistry and physics, while prominent physicists such as Niels Bohr and Erwin Schrödinger turned their attention to biological speculation. However, in the 1930s and 1940s it was by no means clear which—if any—cross-disciplinary research would bear fruit; work in colloid chemistry, biophysics and radiation biology, crystallography, and other emerging fields all seemed promising. In 1940, George Beadle and Edward Tatum demonstrated the existence of a precise relationship between genes and proteins. In the course of their experiments connecting genetics with biochemistry, they switched from the genetics mainstay Drosophila to a more appropriate model organism, the fungus Neurospora; the construction and exploitation of new model organisms would become a recurring theme in the development of molecular biology. In 1944, Oswald Avery, working at the Rockefeller Institute of New York, demonstrated that genes are made up of DNA (see Avery–MacLeod–McCarty experiment). In 1952, Alfred Hershey and Martha Chase confirmed that the genetic material of the bacteriophage, the virus which infects bacteria, is made up of DNA (see Hershey–Chase experiment). In 1953, James Watson and Francis Crick discovered the double helical structure of the DNA molecule based on the discoveries made by Rosalind Franklin. In 1961, François Jacob and Jacques Monod demonstrated that the products of certain genes regulated the expression of other genes by acting upon specific sites at the edge of those genes. They also hypothesized the existence of an intermediary between DNA and its protein products, which they called messenger RNA. Between 1961 and 1965, the relationship between the information contained in DNA and the structure of proteins was determined: there is a code, the genetic code, which creates a correspondence between the succession of nucleotides in the DNA sequence and a series of amino acids in proteins. 
In April 2023, based on new evidence, scientists concluded that Rosalind Franklin was a contributor and "equal player" in the process of discovering the structure of DNA, rather than the lesser figure portrayed in some later accounts. The chief discoveries of molecular biology took place in a period of only about twenty-five years. Another fifteen years were required before new and more sophisticated technologies, united today under the name of genetic engineering, would permit the isolation and characterization of genes, in particular those of highly complex organisms. The exploration of the molecular dominion If we evaluate the molecular revolution within the context of biological history, it is easy to note that it is the culmination of a long process which began with the first observations through a microscope. The aim of these early researchers was to understand the functioning of living organisms by describing their organization at the microscopic level. From the end of the 18th century, the characterization of the chemical molecules which make up living beings gained increasing attention, with the birth of physiological chemistry in the 19th century, developed by the German chemist Justus von Liebig, and of biochemistry at the beginning of the 20th, thanks to another German chemist, Eduard Buchner. Between the molecules studied by chemists and the tiny structures visible under the optical microscope, such as the cellular nucleus or the chromosomes, there was an obscure zone, "the world of the ignored dimensions," as it was called by the colloid chemist Wolfgang Ostwald. This world is populated by colloids, chemical compounds whose structure and properties were not well defined. The successes of molecular biology derived from the exploration of that unknown world by means of the new technologies developed by chemists and physicists: X-ray diffraction, electron microscopy, ultracentrifugation, and electrophoresis. These studies revealed the structure and function of the macromolecules. A milestone in that process was the work of Linus Pauling in 1949, which for the first time linked the specific genetic mutation in patients with sickle cell disease to a demonstrated change in an individual protein, the hemoglobin in the erythrocytes of heterozygous or homozygous individuals. The encounter between biochemistry and genetics The development of molecular biology is also the encounter of two disciplines which made considerable progress in the course of the first thirty years of the twentieth century: biochemistry and genetics. The first studies the structure and function of the molecules which make up living things. Between 1900 and 1940, the central processes of metabolism were described: the process of digestion and the absorption of the nutritive elements derived from food, such as sugars. Every one of these processes is catalyzed by a particular enzyme. Enzymes are proteins, like the antibodies present in blood or the proteins responsible for muscular contraction. As a consequence, the study of proteins, of their structure and synthesis, became one of the principal objectives of biochemists. The second discipline of biology which developed at the beginning of the 20th century is genetics. 
After the rediscovery of the laws of Mendel through the studies of Hugo de Vries, Carl Correns and Erich von Tschermak in 1900, this science began to take shape thanks to the adoption by Thomas Hunt Morgan, in 1910, of a model organism for genetic studies, the famous fruit fly (Drosophila melanogaster). Shortly afterwards, Morgan showed that genes are localized on chromosomes. Following this discovery, he continued working with Drosophila and, along with numerous other research groups, confirmed the importance of the gene in the life and development of organisms. Nevertheless, the chemical nature of genes and their mechanisms of action remained a mystery. Molecular biologists committed themselves to determining the structure of genes and proteins and to describing the complex relations between them. The development of molecular biology was not just the fruit of some sort of intrinsic "necessity" in the history of ideas, but was a characteristically historical phenomenon, with all of its unknowns, imponderables and contingencies: the remarkable developments in physics at the beginning of the 20th century highlighted the relative lateness in development in biology, which became the "new frontier" in the search for knowledge about the empirical world. Moreover, the developments of the theory of information and cybernetics in the 1940s, in response to military exigencies, brought to the new biology a significant number of fertile ideas and, especially, metaphors. The choice of bacteria and of their viruses, the bacteriophages, as models for the study of the fundamental mechanisms of life was almost natural—they are the smallest living organisms known to exist—and at the same time the fruit of individual choices. This model owes its success, above all, to the fame and the sense of organization of Max Delbrück, a German physicist, who was able to create a dynamic research group, based in the United States, whose exclusive scope was the study of the bacteriophage: the phage group. The phage group was an informal network of biologists that carried out basic research mainly on bacteriophage T4 and made numerous seminal contributions to microbial genetics and the origins of molecular biology in the mid-20th century. In 1961, Sydney Brenner, an early member of the phage group, collaborated with Francis Crick, Leslie Barnett and Richard Watts-Tobin at the Cavendish Laboratory in Cambridge to perform genetic experiments that demonstrated the basic nature of the genetic code for proteins. These experiments, carried out with mutants of the rIIB gene of bacteriophage T4, showed that, for a gene that encodes a protein, three sequential bases of the gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. They also found that the codons do not overlap with each other in the DNA sequence encoding a protein, and that such a sequence is read from a fixed starting point. During 1962–1964, work by phage T4 researchers provided an opportunity to study the function of virtually all of the genes that are essential for growth of the bacteriophage under laboratory conditions. These studies were facilitated by the discovery of two classes of conditional lethal mutants. One class of such mutants is known as amber mutants. Another class of conditional lethal mutants is referred to as temperature-sensitive mutants. 
Studies of these two classes of mutants led to considerable insight into numerous fundamental biological problems: the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair and DNA recombination; the processes by which viruses are assembled from protein and nucleic acid components (molecular morphogenesis); and the role of chain-terminating codons. One noteworthy study used amber mutants defective in the gene encoding the major head protein of bacteriophage T4. This experiment provided strong evidence for the widely held, but prior to 1964 still unproven, "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein. Thus, this study demonstrated the co-linearity of the gene with its encoded protein. The geographic panorama of the developments of the new biology was conditioned above all by preceding work. The US, where genetics had developed the most rapidly, and the UK, where genetics and highly advanced biochemical research coexisted, were in the avant-garde. Germany, the cradle of the revolutions in physics, with the best minds and the most advanced laboratories of genetics in the world, should have had a primary role in the development of molecular biology. But history decided differently: the arrival of the Nazis in 1933—and, to a less extreme degree, the rigidification of totalitarian measures in fascist Italy—caused the emigration of a large number of Jewish and non-Jewish scientists. The majority of them fled to the US or the UK, providing an extra impulse to the scientific dynamism of those nations. These movements ultimately made molecular biology a truly international science from its very beginnings. History of DNA biochemistry The study of DNA is a central part of molecular biology. First isolation of DNA Working in the 19th century, biochemists initially isolated DNA and RNA (mixed together) from cell nuclei. They were relatively quick to appreciate the polymeric nature of their "nucleic acid" isolates, but realized only later that nucleotides were of two types—one containing ribose and the other deoxyribose. It was this subsequent discovery that led to the identification and naming of DNA as a substance distinct from RNA. Friedrich Miescher (1844–1895) discovered a substance he called "nuclein" in 1869. Somewhat later, he isolated a pure sample of the material now known as DNA from the sperm of salmon, and in 1889 his pupil, Richard Altmann, named it "nucleic acid". This substance was found to exist only in the chromosomes. In 1919 Phoebus Levene at the Rockefeller Institute identified the components (the four bases, the sugar and the phosphate chain) and showed that the components of DNA were linked in the order phosphate-sugar-base. He called each of these units a nucleotide and suggested that the DNA molecule consisted of a string of nucleotide units linked together through the phosphate groups, which form the 'backbone' of the molecule. However, Levene thought the chain was short and that the bases repeated in the same fixed order. Torbjörn Caspersson and Einar Hammersten showed that DNA was a polymer. 
Chromosomes and inherited traits In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" which would be made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". Max Delbrück, Nikolay Timofeev-Ressovsky, and Karl G. Zimmer published results in 1935 suggesting that chromosomes are very large molecules the structure of which can be changed by treatment with X-rays, and that by so changing their structure it was possible to change the heritable characteristics governed by those chromosomes. In 1937 William Astbury produced the first X-ray diffraction patterns from DNA. He was not able to propose the correct structure, but the patterns showed that DNA had a regular structure and that it might therefore be possible to deduce what this structure was. In 1944, Oswald Theodore Avery and a team of scientists discovered that traits proper to the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria merely by making the killed "smooth" (S) form available to the live "rough" (R) form. Quite unexpectedly, the living R Pneumococcus bacteria were transformed into a new strain of the S form, and the transferred S characteristics turned out to be heritable. Avery called the medium of transfer of traits the transforming principle; he identified DNA as the transforming principle, and not protein as previously thought. He essentially redid Frederick Griffith's experiment. In 1952, Alfred Hershey and Martha Chase did an experiment (Hershey–Chase experiment) that showed, in T2 phage, that DNA is the genetic material (Hershey later shared the Nobel Prize with Luria and Delbrück). Discovery of the structure of DNA In the 1950s, three groups made it their goal to determine the structure of DNA. The first group to start was at King's College London and was led by Maurice Wilkins and was later joined by Rosalind Franklin. Another group consisting of Francis Crick and James Watson was at Cambridge. A third group was at Caltech and was led by Linus Pauling. Crick and Watson built physical models using metal rods and balls, in which they incorporated the known chemical structures of the nucleotides, as well as the known position of the linkages joining one nucleotide to the next along the polymer. At King's College Maurice Wilkins and Rosalind Franklin examined X-ray diffraction patterns of DNA fibers. Of the three groups, only the London group was able to produce good quality diffraction patterns and thus obtain sufficient quantitative data about the structure. Helix structure In 1948, Pauling discovered that many proteins included helical shapes (see alpha helix). Pauling had deduced this structure from X-ray patterns and from attempts to physically model the structures. (Pauling was also later to suggest an incorrect three-chain helical DNA structure based on Astbury's data.) Even in the initial diffraction data from DNA by Maurice Wilkins, it was evident that the structure involved helices. But this insight was only a beginning. There remained the questions of how many strands came together, whether this number was the same for every helix, whether the bases pointed toward the helical axis or away, and ultimately what were the explicit angles and coordinates of all the bonds and atoms. Such questions motivated the modeling efforts of Watson and Crick. 
Complementary nucleotides In their modeling, Watson and Crick restricted themselves to what they saw as chemically and biologically reasonable. Still, the breadth of possibilities was very wide. A breakthrough occurred in 1952, when Erwin Chargaff visited Cambridge and inspired Crick with a description of experiments Chargaff had published in 1947. Chargaff had observed that the proportions of the four nucleotides vary between one DNA sample and the next, but that for particular pairs of nucleotides—adenine and thymine, guanine and cytosine—the two nucleotides are always present in equal proportions. Using X-ray diffraction, as well as other data from Rosalind Franklin and her information that the bases were paired, James Watson and Francis Crick arrived at the first accurate model of DNA's molecular structure in 1953, a model that Rosalind Franklin accepted upon inspection. The discovery was announced on February 28, 1953; the first Watson/Crick paper appeared in Nature on April 25, 1953. Sir Lawrence Bragg, the director of the Cavendish Laboratory, where Watson and Crick worked, gave a talk at Guy's Hospital Medical School in London on Thursday, May 14, 1953, which resulted in an article by Ritchie Calder in the News Chronicle of London, on Friday, May 15, 1953, entitled "Why You Are You. Nearer Secret of Life." The news reached readers of The New York Times the next day; Victor K. McElheny, in researching his biography, "Watson and DNA: Making a Scientific Revolution", found a clipping of a six-paragraph New York Times article written from London and dated May 16, 1953, with the headline "Form of 'Life Unit' in Cell Is Scanned." The article ran in an early edition and was then pulled to make space for news deemed more important. (The New York Times subsequently ran a longer article on June 12, 1953). The Cambridge University undergraduate newspaper also ran its own short article on the discovery on Saturday, May 30, 1953. Bragg's original announcement at a Solvay Conference on proteins in Belgium on April 8, 1953, went unreported by the press. In 1962 Watson, Crick, and Maurice Wilkins jointly received the Nobel Prize in Physiology or Medicine for their determination of the structure of DNA. "Central Dogma" Watson and Crick's model attracted great interest immediately upon its presentation. Arriving at their conclusion on February 21, 1953, Watson and Crick made their first announcement on February 28. In an influential presentation in 1957, Crick laid out the "central dogma of molecular biology", which foretold the relationship between DNA, RNA, and proteins, and articulated the "sequence hypothesis." A critical confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 in the form of the Meselson–Stahl experiment. Messenger RNA (mRNA) was identified as an intermediate between DNA sequences and protein synthesis by Brenner, Meselson, and Jacob in 1961. Then, work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, and Har Gobind Khorana and others deciphered the genetic code not long afterward (1966). These findings represent the birth of molecular biology. History of RNA tertiary structure Pre-history: the helical structure of RNA The earliest work in RNA structural biology coincided, more or less, with the work being done on DNA in the early 1950s. 
In their seminal 1953 paper, Watson and Crick suggested that van der Waals crowding by the 2'-OH group of ribose would preclude RNA from adopting a double helical structure identical to the model they proposed—what we now know as B-form DNA. This provoked questions about the three-dimensional structure of RNA: could this molecule form some type of helical structure, and if so, how? As with DNA, early structural work on RNA centered around isolation of native RNA polymers for fiber diffraction analysis. In part because of heterogeneity of the samples tested, early fiber diffraction patterns were usually ambiguous and not readily interpretable. In 1955, Marianne Grunberg-Manago and colleagues published a paper describing the enzyme polynucleotide phosphorylase, which cleaved a phosphate group from nucleotide diphosphates to catalyze their polymerization. This discovery allowed researchers to synthesize homogeneous nucleotide polymers, which they then combined to produce double stranded molecules. These samples yielded the most readily interpretable fiber diffraction patterns yet obtained, suggesting an ordered, helical structure for cognate, double stranded RNA that differed from that observed in DNA. These results paved the way for a series of investigations into the various properties and propensities of RNA. Through the late 1950s and early 1960s, numerous papers were published on various topics in RNA structure, including RNA-DNA hybridization, triple stranded RNA, and even small-scale crystallography of RNA di-nucleotides—G-C, and A-U—in primitive helix-like arrangements. For a more in-depth review of the early work in RNA structural biology, see the article The Era of RNA Awakening: Structural biology of RNA in the early years by Alexander Rich. The beginning: crystal structure of tRNAPHE In the mid-1960s, the role of tRNA in protein synthesis was being intensively studied. At this point, ribosomes had been implicated in protein synthesis, and it had been shown that an mRNA strand was necessary for the formation of these structures. In a 1964 publication, Warner and Rich showed that ribosomes active in protein synthesis contained tRNA molecules bound at the A and P sites, and discussed the notion that these molecules aided in the peptidyl transferase reaction. However, despite considerable biochemical characterization, the structural basis of tRNA function remained a mystery. In 1965, Holley et al. purified and sequenced the first tRNA molecule, initially proposing that it adopted a cloverleaf structure, based largely on the ability of certain regions of the molecule to form stem loop structures. The isolation of tRNA proved to be the first major windfall in RNA structural biology. Following Robert W. Holley's publication, numerous investigators began work on isolating tRNA for crystallographic study, developing improved methods for isolating the molecule as they worked. By 1968 several groups had produced tRNA crystals, but these proved to be of limited quality and did not yield data at the resolutions necessary to determine structure. In 1971, Kim et al. achieved another breakthrough, producing crystals of yeast tRNAPHE that diffracted to 2–3 Ångström resolution by using spermine, a naturally occurring polyamine, which bound to and stabilized the tRNA. 
Despite having suitable crystals, however, the structure of tRNAPHE was not immediately solved at high resolution; rather it took pioneering work in the use of heavy metal derivatives and a good deal more time to produce a high-quality density map of the entire molecule. In 1973, Kim et al. produced a 4 Ångström map of the tRNA molecule in which they could unambiguously trace the entire backbone. This solution would be followed by many more, as various investigators worked to refine the structure and thereby more thoroughly elucidate the details of base pairing and stacking interactions, and validate the published architecture of the molecule. The tRNAPHE structure is notable in the field of nucleic acid structure in general, as it represented the first solution of a long-chain nucleic acid structure of any kind—RNA or DNA—preceding Richard E. Dickerson's solution of a B-form dodecamer by nearly a decade. Also, tRNAPHE demonstrated many of the tertiary interactions observed in RNA architecture which would not be categorized and more thoroughly understood for years to come, providing a foundation for all future RNA structural research. The renaissance: the hammerhead ribozyme and the group I intron: P4-P6 For a considerable time following the first tRNA structures, the field of RNA structure did not dramatically advance. The ability to study an RNA structure depended upon the potential to isolate the RNA target. This proved limiting to the field for many years, in part because other known targets—i.e., the ribosome—were significantly more difficult to isolate and crystallize. Further, because other interesting RNA targets had simply not been identified, or were not sufficiently understood to be deemed interesting, there was a lack of things to study structurally. As such, for some twenty years following the original publication of the tRNAPHE structure, the structures of only a handful of other RNA targets were solved, with almost all of these belonging to the transfer RNA family. This unfortunate lack of scope would eventually be overcome largely because of two major advancements in nucleic acid research: the identification of ribozymes, and the ability to produce them via in vitro transcription. Subsequent to Tom Cech's publication implicating the Tetrahymena group I intron as an autocatalytic ribozyme, and Sidney Altman's report of catalysis by ribonuclease P RNA, several other catalytic RNAs were identified in the late 1980s, including the hammerhead ribozyme. In 1994, McKay et al. published the structure of a 'hammerhead RNA-DNA ribozyme-inhibitor complex' at 2.6 Ångström resolution, in which the autocatalytic activity of the ribozyme was disrupted via binding to a DNA substrate. The conformation of the ribozyme published in this paper was eventually shown to be one of several possible states, and although this particular sample was catalytically inactive, subsequent structures have revealed its active-state architecture. This structure was followed by Jennifer Doudna's publication of the structure of the P4-P6 domains of the Tetrahymena group I intron, a fragment of the ribozyme originally made famous by Cech. The second clause in the title of this publication—Principles of RNA Packing—concisely evinces the value of these two structures: for the first time, comparisons could be made between well described tRNA structures and those of globular RNAs outside the transfer family. This allowed a framework of categorization to be built for RNA tertiary structure. 
It was now possible to propose the conservation of motifs, folds, and various local stabilizing interactions. For an early review of these structures and their implications, see RNA FOLDS: Insights from recent crystal structures, by Doudna and Ferre-D'Amare. In addition to the advances being made in global structure determination via crystallography, the early 1990s also saw the implementation of NMR as a powerful technique in RNA structural biology. Coincident with the large-scale ribozyme structures being solved crystallographically, a number of structures of small RNAs and RNAs complexed with drugs and peptides were solved using NMR. In addition, NMR was now being used to investigate and supplement crystal structures, as exemplified by the determination of an isolated tetraloop-receptor motif structure published in 1997. Investigations such as this enabled a more precise characterization of the base pairing and base stacking interactions which stabilized the global folds of large RNA molecules. The importance of understanding RNA tertiary structural motifs was prophetically well described by Michel and Costa in their publication identifying the tetraloop motif: "...it should not come as a surprise if self-folding RNA molecules were to make intensive use of only a relatively small set of tertiary motifs. Identifying these motifs would greatly aid modeling enterprises, which will remain essential as long as the crystallization of large RNAs remains a difficult task". The modern era: the age of RNA structural biology The resurgence of RNA structural biology in the mid-1990s has caused a veritable explosion in the field of nucleic acid structural research. Since the publication of the hammerhead and P4-P6 structures, numerous major contributions to the field have been made. Some of the most noteworthy examples include the structures of the Group I and Group II introns, and the ribosome, solved by Nenad Ban and colleagues in the laboratory of Thomas Steitz. The first three structures were produced using in vitro transcription, and NMR has played a role in investigating partial components of all four structures—testaments to the indispensability of both techniques for RNA research. Most recently, the 2009 Nobel Prize in Chemistry was awarded to Ada Yonath, Venkatraman Ramakrishnan and Thomas Steitz for their structural work on the ribosome, demonstrating the prominent role RNA structural biology has taken in modern molecular biology. History of protein biochemistry First isolation and classification Proteins were recognized as a distinct class of biological molecules in the eighteenth century by Antoine Fourcroy and others. Members of this class (called the "albuminoids", Eiweisskörper, or matières albuminoides) were recognized by their ability to coagulate or flocculate under various treatments such as heat or acid; well-known examples at the start of the nineteenth century included albumen from egg whites, blood serum albumin, fibrin, and wheat gluten. The similarity between the cooking of egg whites and the curdling of milk was recognized even in ancient times; for example, the name albumen for the egg-white protein was coined by Pliny the Elder from the Latin albus ovi (egg white). With the advice of Jöns Jakob Berzelius, the Dutch chemist Gerhardus Johannes Mulder carried out elemental analyses of common animal and plant proteins. To everyone's surprise, all proteins had nearly the same empirical formula, roughly C400H620N100O120 with individual sulfur and phosphorus atoms. 
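A quick calculation (an illustrative sketch, not from the article) shows where the "roughly 9 kDa" minimum molecular weight quoted in the next section comes from:

    # Approximate molar mass of Mulder's empirical formula C400 H620 N100 O120,
    # using standard atomic weights in g/mol (sulfur and phosphorus omitted).
    atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
    formula = {"C": 400, "H": 620, "N": 100, "O": 120}
    mass = sum(atomic_weight[el] * n for el, n in formula.items())
    print(round(mass))  # about 8750 g/mol, i.e. roughly 9 kDa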
Mulder published his findings in two papers (1837, 1838) and hypothesized that there was one basic substance (Grundstoff) of proteins, and that it was synthesized by plants and absorbed from them by animals in digestion. Berzelius was an early proponent of this theory and proposed the name "protein" for this substance in a letter dated 10 July 1838: "The name protein that I propose for the organic oxide of fibrin and albumin, I wanted to derive from [the Greek word] πρωτειος, because it appears to be the primitive or principal substance of animal nutrition." Mulder went on to identify the products of protein degradation, such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da. Purifications and measurements of mass The minimum molecular weight suggested by Mulder's analyses was roughly 9 kDa, hundreds of times larger than other molecules being studied. Hence, the chemical structure of proteins (their primary structure) was an active area of research until 1949, when Fred Sanger sequenced insulin. The (correct) theory that proteins were linear polymers of amino acids linked by peptide bonds was proposed independently by Franz Hofmeister and Emil Fischer at the same conference in 1902. However, some scientists were sceptical that such long macromolecules could be stable in solution. Consequently, numerous alternative theories of protein primary structure were proposed, e.g., the colloidal hypothesis that proteins were assemblies of small molecules, the cyclol hypothesis of Dorothy Wrinch, the diketopiperazine hypothesis of Emil Abderhalden, and the pyrrol/piperidine hypothesis of Troensgard (1942). Most of these theories had difficulties in accounting for the fact that the digestion of proteins yielded peptides and amino acids. Proteins were finally shown to be macromolecules of well-defined composition (and not colloidal mixtures) by Theodor Svedberg using analytical ultracentrifugation. The possibility that some proteins are non-covalent associations of such macromolecules was shown by Gilbert Smithson Adair (by measuring the osmotic pressure of hemoglobin) and, later, by Frederic M. Richards in his studies of ribonuclease S. The mass spectrometry of proteins has long been a useful technique for identifying posttranslational modifications and, more recently, for probing protein structure. Most proteins are difficult to purify in more than milligram quantities, even using the most modern methods. Hence, early studies focused on proteins that could be purified in large quantities, e.g., those of blood, egg white, various toxins, and digestive/metabolic enzymes obtained from slaughterhouses. Many techniques of protein purification were developed during World War II in a project led by Edwin Joseph Cohn to purify blood proteins to help keep soldiers alive. In the late 1950s, the Armour Hot Dog Co. purified 1 kg (= one million milligrams) of pure bovine pancreatic ribonuclease A and made it available at low cost to scientists around the world. This generous act made RNase A the main protein for basic research for the next few decades, resulting in several Nobel Prizes. Protein folding and first structural models The study of protein folding began in 1910 with a famous paper by Harriette Chick and C. J. 
Martin, in which they showed that the flocculation of a protein was composed of two distinct processes: the precipitation of a protein from solution was preceded by another process called denaturation, in which the protein became much less soluble, lost its enzymatic activity and became more chemically reactive. In the mid-1920s, Tim Anson and Alfred Mirsky proposed that denaturation was a reversible process, a correct hypothesis that was initially lampooned by some scientists as "unboiling the egg". Anson also suggested that denaturation was a two-state ("all-or-none") process, in which one fundamental molecular transition resulted in the drastic changes in solubility, enzymatic activity and chemical reactivity; he further noted that the free energy changes upon denaturation were much smaller than those typically involved in chemical reactions. In 1929, Hsien Wu hypothesized that denaturation was protein unfolding, a purely conformational change that resulted in the exposure of amino acid side chains to the solvent. According to this (correct) hypothesis, exposure of aliphatic and reactive side chains to solvent rendered the protein less soluble and more reactive, whereas the loss of a specific conformation caused the loss of enzymatic activity. Although considered plausible, Wu's hypothesis was not immediately accepted, since so little was known of protein structure and enzymology and other factors could account for the changes in solubility, enzymatic activity and chemical reactivity. In the early 1960s, Chris Anfinsen showed that the folding of ribonuclease A was fully reversible with no external cofactors needed, verifying the "thermodynamic hypothesis" of protein folding: that the folded state represents the global minimum of free energy for the protein. The hypothesis of protein folding was followed by research into the physical interactions that stabilize folded protein structures. The crucial role of hydrophobic interactions was hypothesized by Dorothy Wrinch and Irving Langmuir, as a mechanism that might stabilize Wrinch's cyclol structures. Although supported by J. D. Bernal and others, this (correct) hypothesis was rejected along with the cyclol hypothesis, which was disproven in the 1930s by Linus Pauling (among others). Instead, Pauling championed the idea that protein structure was stabilized mainly by hydrogen bonds, an idea advanced initially by William Astbury (1933). Remarkably, Pauling's incorrect theory about H-bonds resulted in his correct models for the secondary structure elements of proteins, the alpha helix and the beta sheet. The hydrophobic interaction was restored to its correct prominence by a famous article in 1959 by Walter Kauzmann on denaturation, based partly on work by Kaj Linderstrøm-Lang. The ionic nature of proteins was demonstrated by Bjerrum, Weber and Arne Tiselius, but Linderstrøm-Lang showed that the charges were generally accessible to solvent and not bound to each other (1949). The secondary and low-resolution tertiary structure of globular proteins was investigated initially by hydrodynamic methods, such as analytical ultracentrifugation and flow birefringence. Spectroscopic methods to probe protein structure (such as circular dichroism, fluorescence, near-ultraviolet and infrared absorbance) were developed in the 1950s. The first atomic-resolution structures of proteins were solved by X-ray crystallography in the 1960s and by NMR in the 1980s. Today, the Protein Data Bank has over 150,000 atomic-resolution structures of proteins. 
In more recent times, cryo-electron microscopy of large macromolecular assemblies has achieved atomic resolution, and computational protein structure prediction of small protein domains is approaching atomic resolution. See also History of biology History of biotechnology History of genetics References Fruton, Joseph. Proteins, Genes, Enzymes: The Interplay of Chemistry and Biology. New Haven: Yale University Press. 1999. Lily E. Kay, The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology, Oxford University Press, Reprint 1996 Morange, Michel. A History of Molecular Biology. Cambridge, MA: Harvard University Press. 1998. Fry, Michael. Landmark Experiments in Molecular Biology. Amsterdam: Elsevier/Academic Press. 2016. History of molecular biology History of chemistry
History of molecular biology
[ "Chemistry" ]
7,855
[ "History of molecular biology", "Molecular biology" ]
4,174,309
https://en.wikipedia.org/wiki/Annular%20dark-field%20imaging
Annular dark-field imaging is a method of mapping samples in a scanning transmission electron microscope (STEM). These images are formed by collecting scattered electrons with an annular dark-field detector. Conventional TEM dark-field imaging uses an objective aperture to collect only those scattered electrons that pass through it. In contrast, STEM dark-field imaging does not use an aperture to differentiate the scattered electrons from the main beam, but uses an annular detector to collect only the scattered electrons. Consequently, the contrast mechanisms differ between conventional dark-field imaging and STEM dark-field imaging. An annular dark-field detector collects electrons from an annulus around the beam, sampling far more scattered electrons than can pass through an objective aperture. This gives an advantage in terms of signal collection efficiency and allows the main beam to pass to an electron energy loss spectroscopy (EELS) detector, allowing both types of measurement to be performed simultaneously. Annular dark-field imaging is also commonly performed in parallel with energy-dispersive X-ray spectroscopy acquisition and can also be performed in parallel with bright-field STEM imaging. HAADF High-angle annular dark-field imaging (HAADF) is a STEM technique which produces an annular dark-field image formed by very high angle, incoherently scattered electrons (Rutherford scattered from the nucleus of the atoms) — as opposed to Bragg-scattered electrons. This technique is highly sensitive to variations in the atomic number of atoms in the sample (Z-contrast images). For elements with a higher Z, more electrons are scattered at higher angles due to greater electrostatic interaction between the nucleus and the electron beam. Because of this, the HAADF detector senses a greater signal from atoms with a higher Z, causing them to appear brighter in the resulting image. This strong dependence on Z (with contrast approximately proportional to Z²) makes HAADF a useful way to identify small areas of a high-Z element in a matrix of lower-Z material. With this in mind, a common application for HAADF is in heterogeneous catalysis research, where determining the size of metal particles and their distribution is extremely important. Resolution Image resolution in HAADF STEM is very high and predominantly determined by the size of the electron probe, which in turn depends on the ability to correct the aberrations of the objective lens, in particular the spherical aberration. The high resolution gives it an advantage over the detection of backscattered electrons (BSE), which can also be used to detect materials with a high Z in a matrix of lower-Z material. Microscope Specifications HAADF imaging typically uses electrons scattered at an angle of >5° (Rutherford-scattered electrons). Optimum HAADF imaging is provided by TEM/STEM systems with a large maximum diffraction angle and a small minimum camera length; both factors improve the separation between Bragg- and Rutherford-scattered electrons. A large maximum diffraction angle is necessary because many crystalline materials show Bragg scattering even at high angles, so the microscope's maximum diffraction angle should be as large as possible for use with HAADF. 
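As a rough numerical illustration of the approximately Z²-proportional contrast described above, the sketch below compares relative HAADF signals for a few elements. The Z² exponent is the idealized Rutherford limit; measured exponents are somewhat lower and depend on detector geometry, so the numbers are order-of-magnitude guides only.

```python
# Rough illustration of HAADF Z-contrast: signal scales roughly as Z^2
# (the idealized Rutherford limit; real exponents are somewhat lower).
elements = {"C": 6, "Si": 14, "Fe": 26, "Pt": 78}
reference = "C"

for symbol, z in elements.items():
    relative = (z / elements[reference]) ** 2
    print(f"{symbol} (Z={z:2d}): ~{relative:6.0f} x the HAADF signal of {reference}")

# Pt (Z=78) scatters roughly (78/6)^2 ≈ 169 times more strongly into the
# high-angle annulus than C, which is why heavy catalyst particles stand
# out so clearly against a light support.
```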
A small camera length is needed for the Rutherford-scattered electrons to hit the detector while avoiding the detection of Bragg-scattered electrons. A small camera length causes most of the Bragg-scattered electrons to fall on the bright-field detector along with the transmitted electrons, leaving only the high-angle scattered electrons to fall on the dark-field detector. See also Transmission electron microscopy Scanning transmission electron microscopy Dark field microscopy References Electron microscopy
Annular dark-field imaging
[ "Chemistry" ]
760
[ "Electron", "Electron microscopy", "Microscopy" ]
4,174,336
https://en.wikipedia.org/wiki/Resources%20of%20a%20Resource
Resources of a Resource (ROR) is an XML format for describing the content of an internet resource or website in a generic fashion so this content can be better understood by search engines, spiders, web applications, etc. The ROR format provides several pre-defined terms for describing objects like sitemaps, products, events, reviews, jobs, classifieds, etc. The format can be extended with custom terms. RORweb.com is the official website of ROR; the ROR format was created by AddMe.com as a way to help search engines better understand content and meaning. Similar concepts, like Google Sitemaps and Google Base, have also been developed since the introduction of the ROR format. ROR objects are placed in an ROR feed called ror.xml. This file is typically located in the root directory of the resource or website it describes. When a search engine like Google or Yahoo crawls the web to determine how to categorize content, the ROR feed allows the search engine's spider to quickly identify all the content and attributes of the website. This has three main benefits: It allows the spider to correctly categorize the website's content in its index. It allows the spider to extract very detailed information about the objects on a website (sitemaps, products, events, reviews, jobs, classifieds, etc.). It allows the website owner to optimize the site for inclusion of its content in search engines. External links RORweb.com XML
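To make the feed described above concrete, the following sketch generates a minimal ror.xml with Python's standard library. The element names used here (ror, resource, type, title, url, description) are illustrative assumptions for this sketch rather than the official ROR schema; the actual terms should be taken from the ROR documentation.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names, chosen for illustration only; the real ROR
# vocabulary should be taken from the official documentation.
root = ET.Element("ror")
item = ET.SubElement(root, "resource")
ET.SubElement(item, "type").text = "Product"
ET.SubElement(item, "title").text = "Example Widget"
ET.SubElement(item, "url").text = "https://www.example.com/widget"
ET.SubElement(item, "description").text = "A sample product entry for a spider to index."

# Write the feed to the conventional location: the site's root directory.
ET.ElementTree(root).write("ror.xml", encoding="utf-8", xml_declaration=True)
```

A crawler that understood the format could then fetch example.com/ror.xml and read off the site's objects directly, without scraping its HTML.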
Resources of a Resource
[ "Technology" ]
313
[ "Computing stubs" ]
4,174,517
https://en.wikipedia.org/wiki/Algaculture
Algaculture is a form of aquaculture involving the farming of species of algae. The majority of algae that are intentionally cultivated fall into the category of microalgae (also referred to as phytoplankton, microphytes, or planktonic algae). Macroalgae, commonly known as seaweed, also have many commercial and industrial uses, but due to their size and the specific requirements of the environment in which they need to grow, they do not lend themselves as readily to cultivation (this may change, however, with the advent of newer seaweed cultivators, which are basically algae scrubbers using upflowing air bubbles in small containers, known as tumble culture). Commercial and industrial algae cultivation has numerous uses, including production of nutraceuticals such as omega-3 fatty acids (as algal oil) or natural food colorants and dyes, food, fertilizers, bioplastics, chemical feedstock (raw material), protein-rich animal/aquaculture feed, pharmaceuticals, and algal fuel, and can also be used as a means of pollution control and natural carbon sequestration. Global production of farmed aquatic plants, overwhelmingly dominated by seaweeds, grew in output volume from 13.5 million tonnes in 1995 to just over 30 million tonnes in 2016 and 37.8 million tonnes in 2022. This increase was the result of production expansions led by China, followed by Malaysia, the Philippines, the United Republic of Tanzania, and the Russian Federation. Cultured microalgae already contribute to a wide range of sectors in the emerging bioeconomy. Research suggests large potential benefits of algaculture for the development of a future healthy and sustainable food system. Uses of algae Food Several species of algae are raised for food. While algae have qualities of a sustainable food source, "producing highly digestible proteins, lipids, and carbohydrates, and are rich in essential fatty acids, vitamins, and minerals", and offer, for example, a high protein productivity per acre, there are several challenges "between current biomass production and large-scale economic algae production for the food market". Micro-algae can be used to create microbial protein used as a powder or in a variety of products. Purple laver (Porphyra) is perhaps the most widely domesticated marine alga. In Asia it is used in nori (Japan) and gim (Korea). In Wales, it is used in laverbread, a traditional food, and in Ireland it is collected and made into a jelly by stewing or boiling. Preparation also can involve frying or heating the fronds with a little water and beating with a fork to produce a pinkish jelly. Harvesting also occurs along the west coast of North America, and in Hawaii and New Zealand. Algae oil is used as a dietary supplement, as the algae also produce Omega-3 (and Omega-6) fatty acids, which are commonly also found in fish oils, and which have been shown to have positive health benefits, including for cognition and against brain aging. Dulse (Palmaria palmata) is a red species sold in Ireland and Atlantic Canada. It is eaten raw, fresh, dried, or cooked like spinach. Spirulina (Arthrospira platensis) is a blue-green microalga with a long history as a food source in East Africa and pre-colonial Mexico. Spirulina is high in protein and other nutrients, finding use as a food supplement and for malnutrition. Spirulina thrives in open systems and commercial growers have found it well-suited to cultivation. One of the largest production sites is Lake Texcoco in central Mexico. 
The algae produce a variety of nutrients and high amounts of protein. Spirulina is often used commercially as a nutritional supplement. Chlorella, another popular microalga, has similar nutrition to spirulina. Chlorella is very popular in Japan. It is also used as a nutritional supplement with possible effects on metabolic rate. Irish moss (Chondrus crispus), often confused with Mastocarpus stellatus, is the source of carrageenan, which is used as a stiffening agent in instant puddings, sauces, and dairy products such as ice cream. Irish moss is also used by beer brewers as a fining agent. Sea lettuce (Ulva lactuca) is used in Scotland, where it is added to soups and salads. Dabberlocks or badderlocks (Alaria esculenta) is eaten either fresh or cooked in Greenland, Iceland, Scotland and Ireland. Aphanizomenon flos-aquae is a cyanobacterium similar to spirulina, which is used as a nutritional supplement. Extracts and oils from algae are also used as additives in various food products. Sargassum species are an important group of seaweeds. These algae contain many phlorotannins. Cochayuyo (Durvillaea antarctica) is eaten in salads and ceviche in Peru and Chile. Both microalgae and macroalgae are used to make agar (see below), which is used as a gelling agent in foods. Lab manipulation Australian scientists at Flinders University in Adelaide have been experimenting with using marine microalgae to produce proteins for human consumption, creating products like "caviar", vegan burgers, fake meat, jams and other food spreads. By manipulating microalgae in a laboratory, the protein and other nutrient contents could be increased, and flavours changed to make them more palatable. These foods leave a much lighter carbon footprint than other forms of protein, as the microalgae absorb rather than produce carbon dioxide, a greenhouse gas. Fertilizer and agar For centuries seaweed has been used as fertilizer. It is also an excellent source of potassium for the manufacture of potash and potassium nitrate. Some types of microalgae can be used this way as well. Both microalgae and macroalgae are used to make agar. Pollution control With concern over global warming, new methods for the thorough and efficient capture of CO2 are being sought. The carbon dioxide that a carbon-fuel-burning plant produces can be fed into open or closed algae systems, fixing the CO2 and accelerating algae growth. Untreated sewage can supply additional nutrients, thus turning two pollutants into valuable commodities. Waste high-purity CO2, as well as CO2 sequestered from the atmosphere, can be used, with potentially significant benefits for climate change mitigation. Algae cultivation is under study for uranium/plutonium sequestration and for purifying fertilizer runoff. Energy production Business, academia and governments are exploring the possibility of using algae to make gasoline, biodiesel, biogas and other fuels. Algae itself may be used as a biofuel and can additionally be used to create hydrogen. Microalgae are also being researched for hydrogen production – e.g. micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors capable of producing oxygen as well as hydrogen via photosynthesis in daylight under air. Microgeneration Carbon sequestration Other uses Chlorella, particularly a transgenic strain which carries an extra mercury reductase gene, has been studied as an agent for environmental remediation due to its ability to reduce mercury ions to the less toxic elemental mercury. 
Cultured strains of common coral microalgal endosymbionts are being researched as a potential way to increase corals' thermal tolerance for climate resilience and bleaching tolerance. Cultured microalgae are used in research and development for potential medical applications, in particular for microbots such as biohybrid microswimmers for targeted drug delivery. Cultivated algae serve many other purposes, including cosmetics, animal feed, bioplastic production, dye and colorant production, chemical feedstock production, and pharmaceutical ingredients. Growing, harvesting, and processing algae Monoculture Most growers prefer monocultural production and go to considerable lengths to maintain the purity of their cultures; however, microbiological contaminants of such cultures are still under investigation. With mixed cultures, one species comes to dominate over time, and if a non-dominant species is believed to have particular value, it is necessary to obtain pure cultures in order to cultivate this species. Individual species cultures are also much needed for research purposes. A common method of obtaining pure cultures is serial dilution. Cultivators dilute either a wild sample or a lab sample containing the desired algae with filtered water and introduce small aliquots (measures of this solution) into a large number of small growing containers. Dilution follows a microscopic examination of the source culture that predicts that a few of the growing containers will contain a single cell of the desired species. Following a suitable period on a light table, cultivators again use the microscope to identify containers from which to start larger cultures. Another approach is to use a special medium which excludes other organisms, including invasive algae. For example, Dunaliella is a commonly grown genus of microalgae which flourishes in extremely salty water that few other organisms can tolerate. Alternatively, mixed algae cultures can work well for larval mollusks. First, the cultivator filters the sea water to remove algae which are too large for the larvae to eat. Next, the cultivator adds nutrients and possibly aerates the result. After one or two days in a greenhouse or outdoors, the resulting thin soup of mixed algae is ready for the larvae. An advantage of this method is low maintenance. Growing algae Water, carbon dioxide, minerals, and light are all important factors in cultivation, and different algae have different requirements. The basic reaction for algae growth in water is carbon dioxide + light energy + water = glucose + oxygen + water (in balanced form, 6 CO2 + 12 H2O + light → C6H12O6 + 6 O2 + 6 H2O). This is called autotrophic growth. It is also possible to grow certain types of algae without light; these types of algae consume sugars (such as glucose). This is known as heterotrophic growth. Temperature The water must be in a temperature range that will support the specific algal species being grown, mostly between 15 °C and 35 °C. Light and mixing In a typical algal-cultivation system, such as an open pond, light penetrates only the top of the water, though the depth reached depends on the algae density. As the algae grow and multiply, the culture becomes so dense that it blocks light from reaching deeper into the water. Direct sunlight is too strong for most algae, which can use only a fraction of the light they receive from direct sunlight; however, exposing an algae culture to direct sunlight (rather than shading it) is often the best course for strong growth, as the algae underneath the surface are able to utilize more of the less intense light created by the shade of the algae above. 
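To see why dense cultures self-shade, consider a simple Beer–Lambert-style model in which irradiance decays exponentially with depth. This is a minimal sketch with assumed, illustrative values; real attenuation coefficients vary strongly with species and cell density.

```python
import math

I0 = 2000.0  # surface irradiance in umol photons/m^2/s (illustrative value)
k = 40.0     # attenuation coefficient per metre for a dense culture (assumed)

for depth_cm in (0, 2, 5, 10, 20):
    irradiance = I0 * math.exp(-k * depth_cm / 100.0)  # Beer-Lambert decay
    print(f"{depth_cm:>3} cm: {irradiance:8.1f} umol/m^2/s")

# With these assumed numbers, light at 10 cm depth is already down to about
# 2% of the surface value, which is why ponds are kept shallow and mixed.
```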
To use deeper ponds, growers agitate the water, circulating the algae so that it does not remain on the surface. Paddle wheels can stir the water, and compressed air coming from the bottom lifts algae from the lower regions. Agitation also helps prevent over-exposure to the sun. Another means of supplying light is to place the light source in the system itself. Glow plates made from sheets of plastic or glass and placed within the tank offer precise control over light intensity, and distribute it more evenly. They are seldom used, however, due to high cost. Odor and oxygen The odor associated with bogs, swamps, and other stagnant waters can be due to oxygen depletion caused by the decay of dead algal blooms. Under anoxic conditions, the bacteria inhabiting algae cultures break down the organic material and produce hydrogen sulfide and ammonia, which cause the odor. This hypoxia often results in the death of aquatic animals. In a system where algae are intentionally cultivated, maintained, and harvested, neither eutrophication nor hypoxia is likely to occur. Some living algae and bacteria also produce odorous chemicals, particularly certain cyanobacteria (previously classed as blue-green algae) such as Anabaena. The best known of these odor-causing chemicals are MIB (2-methylisoborneol) and geosmin. They give a musty or earthy odor that can be quite strong. Eventual death of the cyanobacteria releases additional gas that is trapped in the cells. These chemicals are detectable at very low levels – in the parts per billion range – and are responsible for many "taste and odor" issues in drinking water treatment and distribution. Cyanobacteria can also produce chemical toxins that have been a problem in drinking water. Nutrients Nutrients such as nitrogen (N), phosphorus (P), and potassium (K) serve as fertilizer for algae, and are generally necessary for growth. Silica and iron, as well as several trace elements, may also be considered important marine nutrients, as the lack of one can limit the growth of, or productivity in, a given area. Carbon dioxide is also essential; usually an input of CO2 is required for fast-paced algal growth. These elements must be dissolved into the water, in bio-available forms, for algae to grow. Methods Farming of macroalgae Open system cultivation An open system of algae cultivation involves growing algae in shallow waters, which may be natural or artificially prepared. In this system, algae can be cultivated in natural water bodies such as lakes, rivers, and oceans, as well as in artificial ponds made of concrete, plastic, pond liners, or a variety of other materials. The open system of algae cultivation is simple and cost-effective, making it an attractive option for commercial production of algae-based products. Open ponds are highly vulnerable to contamination by other microorganisms, such as other algal species or bacteria; thus cultivators usually choose closed systems for monocultures. Open systems also do not offer control over temperature and lighting. The growing season is largely dependent on location and, aside from tropical areas, is limited to the warmer months. Open pond systems are cheaper to construct, at the minimum requiring only a trench or pond. Large ponds have the largest production capacities relative to other systems of comparable cost. Also, open pond cultivation can exploit unusual conditions that suit only specific algae. 
For instance, Dunaliella salina grows in extremely salty water; these unusual media exclude other types of organisms, allowing the growth of pure cultures in open ponds. Open culture can also work if there is a system for harvesting only the desired algae, or if the ponds are frequently re-inoculated before invasive organisms can multiply significantly. The latter approach is frequently employed by Chlorella farmers, as the growth conditions for Chlorella do not exclude competing algae. The former approach can be employed in the case of some chain diatoms, since they can be filtered from a stream of water flowing through an outflow pipe. A "pillow case" of fine mesh cloth is tied over the outflow pipe, allowing other algae to escape. The chain diatoms held in the bag are used to feed shrimp larvae (in Eastern hatcheries) and to inoculate new tanks or ponds. Enclosing a pond with a transparent or translucent barrier effectively turns it into a greenhouse. This solves many of the problems associated with an open system. It allows more species to be grown, it allows the species being grown to stay dominant, and it extends the growing season; if heated, the pond can produce year round. Open raceway ponds have been used for the removal of lead using live Spirulina (Arthrospira) sp. Water lagoons A lagoon is a type of aquatic ecosystem characterized by a shallow body of water separated from the open ocean by natural barriers such as sandbars, barrier islands, or coral reefs. The Australian company Cognis Australia specializes in producing β-carotene from Dunaliella salina harvested from extensive hypersaline ponds located in Hutt Lagoon and Whyalla. These ponds are primarily used for wastewater treatment, and the production of D. salina is a secondary benefit. Open sea Open sea cultivation is a method of cultivating seaweed in the open ocean, as well as along coastlines in shallow water. The seaweed farming industry serves commercial needs for various products such as food, feed, pharmaceutical chemicals, cosmetics, biofuels, and bio-stimulants. Seaweed extracts act as bio-stimulants, reducing biotic stress and increasing crop production. Additionally, seaweed farming presents opportunities for creating animal and human nutrition products that can improve immunity and productivity. Open ocean seaweed cultivation is an eco-friendly technology that doesn't require land, fresh water, or chemicals. It also helps mitigate the effects of climate change by sequestering CO2. The open sea method involves the use of rafts or ropes anchored in the ocean, to which the seaweed grows attached. This method is widely used for commercial seaweed farming, as it allows for large-scale production and harvesting. The process of open sea cultivation of seaweed involves several steps. First, a suitable site in the ocean is identified, based on factors such as water depth, temperature, salinity, and nutrient availability. Once a site is chosen, ropes or rafts are anchored in the water, and the seed pieces of seaweed are attached to them using specialized equipment. The seaweed is then left to grow for several months, during which it absorbs nutrients from the water and captures sunlight through photosynthesis. Raceway ponds Raceway-type ponds and lakes are open to the elements. They are one of the most common and economical methods of large-scale algae cultivation, and offer several advantages over other cultivation methods. 
An open raceway pond is a shallow, rectangular pond used for the cultivation of algae. It is designed to circulate water in a continuous loop or raceway, allowing algae to grow in a controlled environment. The open system is a low-cost method of algae cultivation, and it is relatively easy to construct and maintain. The pond is typically lined with a synthetic material, such as high-density polyethylene (HDPE) or polyvinyl chloride, to prevent the loss of water and nutrients. The pond is also equipped with paddlewheels or other types of mechanical devices to provide mixing and aeration. HRAPs High-Rate Algal Ponds (HRAPs) are a type of open algae cultivation system that has gained popularity in recent years due to its efficiency and low cost of operation. HRAPs are shallow ponds, typically 0.1 to 0.4 meters deep, that are used for the cultivation of algae. The ponds are equipped with a paddlewheel or other type of mechanical agitation system that provides mixing and aeration, which promotes algae growth. The HRAP system is also recommended for wastewater treatment using algae. Photobioreactors Algae can also be grown in a photobioreactor (PBR). A PBR is a bioreactor which incorporates a light source. Virtually any translucent container could be called a PBR; however, the term is more commonly used to define a closed system, as opposed to an open tank or pond. Because PBR systems are closed, the cultivator must provide all nutrients, including carbon dioxide. A PBR can operate in "batch mode", which involves restocking the reactor after each harvest, but it is also possible to grow and harvest continuously. Continuous operation requires precise control of all elements to prevent immediate collapse. The grower provides sterilized water, nutrients, air, and carbon dioxide at the correct rates. This allows the reactor to operate for long periods. An advantage is that algae grown in the "log phase" are generally of higher nutrient content than old "senescent" algae. Algal culture is the culturing of algae in ponds or other resources. Maximum productivity occurs when the "exchange rate" (time to exchange one volume of liquid) is equal to the "doubling time" (in mass or volume) of the algae. PBRs can hold the culture in suspension, or they can provide a substrate on which the culture can form a biofilm. Biofilm-based PBRs have the advantage that they can produce far higher yields for a given water volume, but they can suffer from problems with cells separating from the substrate due to the water flow required to transport gases and nutrients to the culture. Flat panel PBRs Flat panel PBRs consist of a series of flat, transparent panels that are stacked on top of each other, creating a thin layer of liquid between them. Algae are grown in this thin layer of liquid, which is continuously circulated to promote mixing and prevent stagnation. The panels are typically made of glass or plastic and can be arranged in various configurations to optimize light exposure. Flat panel PBRs are generally used for low-to-medium density cultivation and are well-suited for species that require lower light intensity and maximum surface area for optimum light exposure. Temperature control in a flat panel PBR system is carried out by cooling the culture in a reservoir chamber using a chilled-water jacket, as well as by sprinkling cold water on the flat panel surface. Tubular PBRs Tubular PBRs consist of long, transparent tubes that are either vertically or horizontally oriented. 
Algae are grown inside the tubes, which are typically made of glass or plastic. The tubes are arranged in a helical or serpentine pattern to increase the surface area for light exposure. The tubing can be either continuously or intermittently circulated to promote mixing and prevent stagnation. Tubular PBRs are generally used for high-density cultivation and are well-suited for species that require high light intensity. Temperature control in tubular PBRs is a difficult task; it is generally achieved by externally sprinkling deionized water, which cools the tubes and subsequently reduces the temperature of the culture circulating inside them. Biofilm PBRs Biofilm PBRs include packed bed and porous substrate PBRs. Packed bed PBRs can take different shapes, including flat plate or tubular. In Porous Substrate Bioreactors (PSBRs), the biofilm is exposed directly to the air and receives its water and nutrients by capillary action through the substrate itself. This avoids problems with cells becoming suspended, because there is no water flow across the biofilm surface. The culture could become contaminated by airborne organisms, but defending against other organisms is one of the functions of a biofilm. Plastic bag PBRs V-shaped plastic bags are commonly used in closed systems of algae cultivation for several reasons. These bags are made from high-density polyethylene (HDPE) and are designed to hold algae cultures in a closed environment, providing an ideal environment for algae growth. V-shaped plastic bags are effective for growing a variety of algae species, including Chlorella, Spirulina, and Nannochloropsis. The growth rate and biomass yield of Chlorella vulgaris in V-shaped plastic bags were found to be higher than in any other shape of plastic bag. Different plastic-bag-based PBR designs have been developed by sealing the bags in different places, yielding flat-bottomed hanging bags, V-shaped hanging bags, and horizontally laid bags that serve as a kind of flat PBR system. Many plastic-bag-based designs have been proposed, but few are utilized on a commercial scale because of their low productivities. Operating plastic bags is tedious, as they need to be replaced after every use to maintain sterility, which is a laborious task for a large-scale facility. Harvesting Algae can be harvested using microscreens, by centrifugation, by flocculation, and by froth flotation. Interrupting the carbon dioxide supply can cause algae to flocculate on their own, which is called "autoflocculation". Chitosan, a commercial flocculant more commonly used for water purification, is far more expensive. The powdered shells of crustaceans are processed to acquire chitin, a polysaccharide found in the shells, from which chitosan is derived via deacetylation. Water that is more brackish or saline requires larger amounts of flocculant. Flocculation is often too expensive for large operations. Alum and ferric chloride are used as chemical flocculants. In froth flotation, the cultivator aerates the water into a froth and then skims the algae from the top. Ultrasound and other harvesting methods are currently under development. Oil extraction Algae oils have a variety of commercial and industrial uses, and are extracted through a variety of methods. Estimates of the cost to extract oil from microalgae vary, but are likely to be around three times higher than that of extracting palm oil. 
Physical extraction In the first step of extraction, the oil must be separated from the rest of the algae. The simplest method is mechanical crushing. When algae are dried they retain their oil content, which can then be "pressed" out with an oil press. Different strains of algae warrant different methods of oil pressing, including the use of screw, expeller, and piston presses. Many commercial manufacturers of vegetable oil use a combination of mechanical pressing and chemical solvents in extracting oil. This approach is often also adopted for algal oil extraction. Osmotic shock is a sudden reduction in osmotic pressure; it can cause cells in a solution to rupture. Osmotic shock is sometimes used to release cellular components, such as oil. Ultrasonic extraction, a branch of sonochemistry, can greatly accelerate extraction processes. Using an ultrasonic reactor, ultrasonic waves are used to create cavitation bubbles in a solvent material. When these bubbles collapse near the cell walls, the resulting shock waves and liquid jets cause the cell walls to break and release their contents into the solvent. Ultrasonication can enhance basic enzymatic extraction. Chemical extraction Chemical solvents are often used in the extraction of the oils. The downside to using solvents for oil extraction is the danger involved in working with the chemicals. Care must be taken to avoid exposure to vapors and skin contact, either of which can cause serious health damage. Chemical solvents also present an explosion hazard. A common choice of chemical solvent is hexane, which is widely used in the food industry and is relatively inexpensive. Benzene and ether can also separate oil; benzene, however, is classified as a carcinogen. Another method of chemical solvent extraction is Soxhlet extraction. In this method, oils from the algae are extracted through repeated washing, or percolation, with an organic solvent such as hexane or petroleum ether, under reflux in special glassware. The value of this technique is that the solvent is reused for each cycle. Enzymatic extraction uses enzymes to degrade the cell walls, with water acting as the solvent. This makes fractionation of the oil much easier. The costs of this extraction process are estimated to be much greater than those of hexane extraction. Supercritical CO2 can also be used as a solvent. In this method, CO2 is liquefied under pressure and heated to the point that it becomes supercritical (having properties of both a liquid and a gas), allowing it to act as a solvent. Other methods are still being developed, including ones to extract specific types of oils, such as those with a high production of long-chain highly unsaturated fatty acids. Algal culture collections Specific algal strains can be acquired from algal culture collections, with over 500 culture collections registered with the World Federation for Culture Collections. See also Sources References External links www.sas.org How to Rear a Plankton Menagerie (home grow micro algae in soda bottles) io.uwinnipeg.ca breeding algae in batch and continuous flow systems on small scale Making Algae Grow www.unu.edu Indian experience with algal ponds Blog Posts | gerd-kloeck-141049 | Renewable Energy World List of companies involved in microalgae production. Photobioreactors: Scale-up and optimisation PhD thesis Wageningen UR. Research on algae within Wageningen UR Photobioreactor using polyethylene and chicken wire. Instructables.com – Simple Home Algae Culture and Breeding Microphyt – Microalgae Production and Photobioreactor Design
Algaculture
[ "Biology" ]
5,868
[ "Algaculture", "Algae" ]
4,174,874
https://en.wikipedia.org/wiki/Coronal%20hole
Coronal holes are regions of the Sun's corona that emit low levels of ultraviolet and X-ray radiation compared to their surroundings. They are composed of relatively cool and tenuous plasma permeated by magnetic fields that are open to interplanetary space. Compared to the corona's usual closed magnetic field that arches between regions of opposite magnetic polarity, the open magnetic field of a coronal hole allows solar wind to escape into space at a much quicker rate. This results in decreased temperature and density of the plasma at the site of a coronal hole, as well as an increased speed in the average solar wind measured in interplanetary space. Streams of fast solar wind originating from coronal holes can interact with slow solar wind streams to produce corotating interaction regions. These regions can interact with Earth's magnetosphere to produce geomagnetic storms of minor to moderate intensity. During solar minima, CIRs are the main cause of geomagnetic storms. History Coronal holes were first observed during total solar eclipses. They appeared as dark regions surrounded by much brighter helmet streamers above the Sun's limb. In the 1960s, coronal holes appeared in X-ray images taken by sounding rockets and in observations at radio wavelengths by the Sydney Chris Cross radio telescope. At the time, what they were was unclear. Their true nature was recognized in the 1970s, when X-ray telescopes in the Skylab mission were flown above the Earth's atmosphere to reveal the structure of the corona. Solar cycle Coronal hole size and population correspond with the solar cycle. As the Sun heads toward solar maximum, the coronal holes move closer and closer to the Sun's poles. During solar maxima, the number of coronal holes decreases until the magnetic fields on the Sun reverse. Afterwards, new coronal holes appear near the new poles. The coronal holes then increase in size and number, extending farther from the poles as the Sun moves toward a solar minimum again. Solar wind The solar wind exists primarily in two alternating states referred to as the slow solar wind and the fast solar wind. The latter originates in coronal holes and has radial flow speeds of 450–800 km/s compared to speeds of 250–450 km/s for the slow solar wind. Interactions between fast and slow solar wind streams produce stream interaction regions which, if present after a solar rotation, are referred to as co-rotating interaction regions (CIRs). CIRs can interact with Earth's magnetosphere, producing minor- to moderate-intensity geomagnetic storms. The majority of moderate-intensity geomagnetic storms originate from CIRs. Typically, geomagnetic storms originating from CIRs have a gradual commencement (over hours) and are not as severe as storms caused by coronal mass ejections (CMEs), which usually have a sudden onset. Because coronal holes and associated CIRs can last for several solar rotations (i.e., several months), predicting the recurrence of this type of disturbance is often possible significantly farther in advance than for CME-related disturbances. See also – includes coronal dimmings, sometimes referred to as transient coronal holes Sunspot – dark spots on the Sun's photosphere List of solar storms References Further reading Jiang, Y., Chen, H., Shen, Y., Yang, L., & Li, K. (2007, January). Hα dimming associated with the eruption of a coronal sigmoid in the quiet Sun. Solar Physics, 240(1), 77–87. External links Solar phenomena
Coronal hole
[ "Physics" ]
739
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
8,758,323
https://en.wikipedia.org/wiki/AG%20Carinae
AG Carinae (AG Car) is a star in the constellation of Carina. It is classified as a luminous blue variable (LBV) and is one of the most luminous stars in the Milky Way. The great distance (20,000 light-years) and intervening dust mean that the star is not usually visible to the naked eye; its apparent brightness varies erratically between magnitude 5.7 and 9.0. In 1914, Harry Edwin Wood announced his discovery that this star, then called CPD –59° 2860, is a variable star, based on photographic plates taken in 1911 and 1914. It was given its variable star designation, AG Carinae, in 1921. Description The star is surrounded by a nebula of ejected material at 0.4–1.2 pc from the star. The nebula contains around , all lost from the star around 10,000 years ago. There is an 8.8-parsec-wide empty cavity in the interstellar medium around the star, presumably cleared by fast winds earlier in the star's life. AG Carinae is apparently in a transitional phase between a massive class O blue supergiant and a Wolf–Rayet star, where it is highly unstable and suffers from erratic pulsations, occasional larger outbursts, and rare massive eruptions. The spectral type varies between WN11 at visual minimum and an early A hypergiant at maximum. At visual minimum the star is about and 20,000–24,000 K, while at maximum it is and 8,000 K. The temperature varies at different minima. One study calculated that the bolometric luminosity of AG Carinae decreases during its S Doradus-type outbursts, unlike most LBVs which remain at approximately constant luminosity. The luminosity drops from around at visual minimum to around at visual maximum, possibly due to the energy required to expand a considerable fraction of the star. Evolutionary models of the star suggest that it had a low rotation rate for much of its life, but current observations show fairly rapid rotation. Models of LBV progenitors of type IIb supernovae list AG Carinae as matching the final stellar spectrum prior to core collapse, although the models are for stars with 20 to 25 times the mass of the Sun while AG Carinae is thought to be considerably more massive. The initial mass of the star would have been around and is now thought to be . Distance controversy Parallaxes from data release 1 (DR1) of the Gaia mission suggest a much closer distance to AG Carinae and its neighbour Hen 3-519 than previously accepted, around 2,000 parsecs. Then both stars would be less luminous than LBVs and it is argued that they would be former red supergiants whose unusual characteristics are the result of binary evolution. The earlier Hipparcos parallax for AG Carinae had a margin of error larger than the parallax itself and so gave little information about its distance. The distance of 6,000 parsecs is based on assumptions about the properties of LBVs, models of interstellar extinction, and kinematical measurements. The Gaia DR1 parallax, derived from the combination of the first year of Gaia measurements with Tycho astrometry, is . The Gaia team recommend that a further 0.3 mas systematic error is allowed for (i.e. added to the formal margin of error). A 2017 study argues that the 0.3 mas systematic margin of error can be ignored and that the implied distance to AG Carinae is . In Gaia Data Release 2, the parallax is , suggesting a distance around . A 2019 observation yields a most likely distance of . Gaia Early Data Release 3 gives a parallax of , although with a non-trivial level of excess astrometric noise where there was none in Gaia DR2. 
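All of the competing estimates above rest on the same elementary relation between parallax p and distance d, namely d [pc] = 1000 / p [mas]. The sketch below uses purely illustrative parallax values (not the published measurements, which are elided in the text) to show how the ~0.3 mas systematic allowance dominates the result at these small parallaxes.

```python
def parallax_to_distance_pc(p_mas: float) -> float:
    """Convert a parallax in milliarcseconds to a distance in parsecs."""
    return 1000.0 / p_mas

p_mas = 0.5    # hypothetical parallax, roughly the 2,000 pc regime discussed above
sys_err = 0.3  # systematic allowance recommended by the Gaia team, in mas

for p in (p_mas - sys_err, p_mas, p_mas + sys_err):
    print(f"p = {p:.1f} mas  ->  d = {parallax_to_distance_pc(p):,.0f} pc")

# The output spans roughly 5,000 pc down to 1,250 pc: a 0.3 mas systematic
# on a ~0.5 mas parallax changes the inferred distance by a factor of about
# four, which is why the kinematic estimate of ~6,000 pc remained defensible.
```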
Light curve Notes References External links 2MASS Atlas Image Gallery: Miscellaneous Objects includes an infrared image of AG Carinae Hubble Space Telescope zooms to AG Carinae YouTube Carina (constellation) Luminous blue variables Carinae, AG 094910 053461 CD-59 03430 Wolf–Rayet stars A-type hypergiants
AG Carinae
[ "Astronomy" ]
861
[ "Carina (constellation)", "Constellations" ]
8,758,924
https://en.wikipedia.org/wiki/W%20Mensae
W Mensae (W Men) is an unusual yellow supergiant star in the Large Magellanic Cloud, in the southern constellation Mensa. It is an R Coronae Borealis variable and periodically decreases in brightness by several magnitudes. W Men is very distant, lying in the neighboring Large Magellanic Cloud galaxy, on its metal-deficient southern edge. Despite its high luminosity, the star has a maximum apparent brightness of +13.8m, too dim to be visible in a small telescope. Its radius has been calculated to be 61 times that of the Sun. The variability of W Men was discovered in 1927 by W. J. Luyten. It belongs to the very rare R Coronae Borealis class of variables, which are often called "inverse novae" since they experience occasional very large drops in brightness. At minimum brightness, W Men has a photographic (blue) magnitude fainter than +18.3, making it undetectable on the photographic plates of the time. The drop in brightness is less pronounced at longer wavelengths, and the overall luminosity of the star is thought to be largely unchanged. The variations are caused by condensation of dust which temporarily obscures the star. Short wavelengths of light are absorbed and re-emitted in the infra-red. Many R CrB variables show small-amplitude pulsations, and W Mensae has a pulsation period of approximately 67 days. References Stars in the Large Magellanic Cloud Large Magellanic Cloud Mensa (constellation) R Coronae Borealis variables F-type supergiants Extragalactic stars Mensae, W J05262451-7111117
W Mensae
[ "Astronomy" ]
346
[ "Mensa (constellation)", "Constellations" ]
8,759,049
https://en.wikipedia.org/wiki/Chiral%20color
In particle physics phenomenology, chiral color is a speculative model which extends quantum chromodynamics (QCD), the generally accepted theory for the strong interactions of quarks. QCD is a gauge field theory based on a gauge group known as color SU(3)C, with an octet of colored gluons acting as the force carriers between a triplet of colored quarks. In chiral color, QCD is extended to the gauge group SU(3)L × SU(3)R, which leads to a second octet of force carriers. SU(3)C is identified with a diagonal subgroup of these two factors. The gluons correspond to the unbroken gauge bosons, while the color-octet axigluons – which couple strongly to the quarks – are massive; hence the name chiral color. Although chiral color presently has no experimental support, it has the "aesthetic" advantage of rendering the Standard Model more similar in its treatment of the two short-range forces, the strong and weak interactions. Unlike gluons, the axigluons are predicted to be massive. Extensive searches for axigluons at CERN and Fermilab have placed a lower bound on the axigluon mass of about . Axigluons may be discovered when collisions are studied at higher energy at the Large Hadron Collider. References Physics beyond the Standard Model
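To make the "diagonal subgroup" statement above concrete, here is a minimal sketch of the group theory, assuming equal gauge couplings for the two SU(3) factors (the simplest version of the model): the unbroken color generators are the sums of the left and right generators, and the two gauge octets regroup into a massless vector octet and a massive axial octet.

```latex
% Diagonal embedding of color (sketch, assuming equal couplings g_L = g_R):
T^a_C = T^a_L + T^a_R , \qquad a = 1, \dots, 8 .
% The gauge fields regroup into mass eigenstates:
G^a_\mu = \tfrac{1}{\sqrt{2}} \bigl( A^a_{L,\mu} + A^a_{R,\mu} \bigr) \quad \text{(massless gluon octet)}
\tilde{G}^a_\mu = \tfrac{1}{\sqrt{2}} \bigl( A^a_{L,\mu} - A^a_{R,\mu} \bigr) \quad \text{(massive axigluon octet)}
```

In this equal-coupling sketch the axigluon couples to quarks through a purely axial-vector current, which is the origin of the name.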
Chiral color
[ "Physics" ]
294
[ "Particle physics stubs", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
8,759,405
https://en.wikipedia.org/wiki/Oceana%20%28non-profit%20group%29
Oceana, Inc. is a 501(c)(3) nonprofit ocean conservation organization focused on influencing specific policy decisions on the national level to preserve and restore the world's oceans. It is headquartered in Washington, D.C., with offices in Juneau, Monterey, Fort Lauderdale, New York, Portland, Toronto, Mexico City, Madrid, Brussels, Copenhagen, Geneva, London, Manila, Belmopan, Brasília, Santiago, and Lima, and it is the largest international advocacy group dedicated entirely to ocean conservation. As of 2017, Oceana had a staff of about 200, some 6,000 volunteers, and almost 50 million dollars in revenue. Oceana takes a multi-faceted approach to ocean conservation; it conducts its own scientific research in addition to making policy recommendations, lobbying for specific legislation, and filing and litigating lawsuits. History Oceana was established in 2001 by an international group of leading foundations including the Rockefeller Brothers Fund, Sandler Foundation, and The Pew Charitable Trusts. This followed a 1999 study they commissioned, which found that less than 0.5% of all resources spent by U.S. environmental nonprofit groups were used for ocean conservation. In 2001, Oceana absorbed The Ocean Law Project, which was also created by The Pew Charitable Trusts, to form Oceana's legal branch. In 2002, American Oceans Campaign, founded by actor and environmentalist Ted Danson, merged with Oceana to further their common goals of ocean conservation. On April 19, 2024, Oceana, Inc. announced the appointment of James Simon as the new chief executive officer. Simon, previously the president of Oceana, succeeded Andrew Sharpless following an eight-month international search. Oceana Canada In 2015, Oceana Canada was established as a legally distinct non-profit organization. It works in collaboration with Oceana, Inc. and is considered part of the larger charity. Except under very specific circumstances, Canadian charity law does not grant either legal charity status or the ability to issue tax-exempt receipts to Canadian offices of non-Canadian nonprofits, making it beneficial to create an independent Canadian charity. Current campaigns Responsible fishing Concerned about declining fish catches since 1980, Oceana is committed to combating overfishing and restoring the world's fisheries. It mainly focuses on legislation for science-based catch limits, which have led to dramatic recoveries of depleted fisheries in the recent past. It also opposes fishing subsidies, which it argues are (in their current form) contributing to overfishing. Oceana also focuses on reducing bycatch, especially of protected or endangered species. Oceana's main focus with sustainable fishing is providing clean, plentiful food. It often cites the fact that wild fish require no land or fresh water and generate few emissions, advantages it argues will be essential for feeding the world's growing population. This campaign is called "Save the Oceans, Feed the World". Plastics Oceana focuses on curbing or eliminating the use of plastics, especially single-use plastics, due to their harmful impact on marine ecosystems and on human consumers. The organization generally opposes focusing on recycling or cleanup, citing the inefficiency of recycling the large amounts of plastics already in the ocean. Seafood fraud Oceana has led the way in exposing and advocating against seafood fraud. 
Its opposition stems from the widespread nature of the problem, the negative health impact mislabeled fish can have (especially for people with certain seafood allergies), and the way mislabeling obscures the true extent of overfishing. Climate and energy Oceana is dedicated to combating the numerous threats that climate change poses to the world's oceans. Its main focus has been the acidification of the ocean, which threatens marine life, especially the shellfish and coral that are necessary to many marine ecosystems and, consequently, sources of seafood. It also focuses on promoting offshore wind farms and combating the use of offshore drilling and seismic airgun blasting. Expeditions Oceana launches expeditions to gather scientific data, which is used by Oceana, other nonprofit groups, local communities, and governmental agencies to create or influence policy. Recent examples of these expeditions' success can be seen in Malta, where an expedition led to the Maltese government expanding marine protected areas, and in the Philippines, where an expedition led to the government creating a new marine protected area in the Benham Bank. Victories Oceana focuses on influencing specific legislation, lawsuits, or other policies which fit under its broader goals. It calls these efforts "victories" when successful. Recent victories have included protecting dusky sharks, banning industrial activity in Canada's marine protected areas, increasing transparency through digital tracking in Chile's fishing industry, and creating the second-largest marine national park on Spain's Mediterranean coast. Over the course of its existence, Oceana has protected 4.5 million square miles of ocean by influencing legislation and policy related to banning bottom trawling, restricting fishing, and establishing marine protected areas. Oceana considers an area "protected" once it has achieved a policy victory related to protecting it. Books The Perfect Protein Andy Sharpless, the CEO of Oceana, and author Suzannah Evans wrote The Perfect Protein in 2013. While it mentions some of Oceana's achievements, it focuses on its main goal: to make fishing a sustainable and abundant food supply. The main recommendations and goals of the book are science-based catch limits, eating fish lower on the food chain (like sardines), focusing less on more glamorous sea creatures (like whales and dolphins), protecting habitats, and reducing bycatch. Oceana: Our Endangered Oceans and What We Can Do to Save Them Actor and Oceana vice chair Ted Danson, along with Michael D'Orso, wrote the book Oceana: Our Endangered Oceans and What We Can Do to Save Them in 2011. It describes Danson's early involvement with the environmental movement while also explaining the problems that face our oceans today, such as offshore drilling, pollution, ocean acidification, and overfishing. The book is scientifically grounded and was called engaging by the Los Angeles Times because it is filled with asides, charts, and photographs. Criticism Responsible fishing The California Wetfish Producers Association (CWPA), a small nonprofit organization dedicated to preserving California's wetfish industry, has repeatedly criticized Oceana's attempts to temporarily halt the Pacific sardine fishery. CWPA criticized Oceana's citation of a National Oceanic and Atmospheric Administration (NOAA) study that reported 95% of the sardine stock had been depleted since 2006 (and criticized the study itself). 
CWPA claims that these numbers are inflated and that the actual (smaller) decline in fish stock has been caused not by overfishing but by environmental factors; it has specifically called Oceana's claims about overfishing "fake news." Although NOAA has not fully responded to the CWPA's calls for a new study, it has not declared sardines overfished, yet it has nonetheless banned commercial fishing of sardines. In 2021, the Netflix documentary Seaspiracy criticized Oceana for appearing unable to provide a definition of "sustainable fishing". Oceana responded that it had been misrepresented in the film and argued that abstaining from eating fish, as the film recommends, is not a realistic choice for people who depend on coastal fisheries. Seafood fraud Various environmental news outlets have published op-eds criticizing Oceana's reports on seafood fraud, and similar criticism appeared in a New York Times article. The criticism focuses on Oceana's assumption that all mislabeled seafood is intentionally fraudulent, even for species that are easily confused or carry different names in different countries. The methodology of Oceana's studies has also been questioned, mainly over its selection of historically mislabeled fish for testing instead of a more representative sample. Critics have additionally called the policy recommendations in Oceana's reports infeasible and bureaucratic. See also Biodiversity Conservation movement Earth science Ecology Green politics Marine conservation Natural environment Nature Sustainability References External links Oceana at Charity Navigator Fisheries conservation organizations Environmental organizations established in 2001 Environmental organizations based in Washington, D.C. Marine conservation organizations 2001 establishments in Washington, D.C. 501(c)(3) organizations
Oceana (non-profit group)
[ "Environmental_science" ]
1,703
[ "Environmental research" ]
8,759,421
https://en.wikipedia.org/wiki/Hexamethylenediamine
Hexamethylenediamine, or hexane-1,6-diamine, is the organic compound with the formula H2N(CH2)6NH2. The molecule is a diamine, consisting of a hexamethylene hydrocarbon chain terminated with amine functional groups. The colorless solid (yellowish in some commercial samples) has a strong amine odor. About 1 billion kilograms are produced annually. Synthesis Hexamethylenediamine was first reported by Theodor Curtius. It is produced by the hydrogenation of adiponitrile: NC(CH2)4CN + 4 H2 → H2N(CH2)6NH2 The hydrogenation is conducted on molten adiponitrile diluted with ammonia, with typical catalysts based on cobalt and iron. The yield is good, but commercially significant side products are generated owing to the reactivity of partially hydrogenated intermediates. These other products include 1,2-diaminocyclohexane, hexamethyleneimine, and the triamine bis(hexamethylene)triamine. An alternative process uses Raney nickel as the catalyst and adiponitrile diluted with hexamethylenediamine itself (as the solvent). This process operates without ammonia and at lower pressure and temperature. Applications Hexamethylenediamine is used almost exclusively for the production of polymers, an application that takes advantage of its structure: it is difunctional in terms of the amine groups and tetrafunctional with respect to the amine hydrogens. The great majority of the diamine is consumed in the production of nylon 66 via condensation with adipic acid. Hexamethylene diisocyanate (HDI), a monomer feedstock in the production of polyurethane, is also generated from this diamine by phosgenation. The diamine also serves as a cross-linking agent in epoxy resins. Safety Hexamethylenediamine is moderately toxic, with an LD50 of 792–1127 mg/kg. Nonetheless, like other basic amines, it can cause serious burns and severe irritation. Such injuries were observed in the accident at the BASF site in Seal Sands, near Billingham (UK), on 4 January 2007, in which 37 people were injured, one of them seriously. See also 1,2-Diaminocyclohexane 2-Methylpentamethylenediamine References Monomers Diamines
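As a rough illustration of the stoichiometry of the hydrogenation above (NC(CH2)4CN + 4 H2 → H2N(CH2)6NH2), the following Python sketch estimates the hydrogen demand and the theoretical diamine yield per kilogram of adiponitrile. The molar masses are standard values; the 1 kg basis is an arbitrary assumption for illustration, not a description of industrial practice.

# Stoichiometry of NC(CH2)4CN + 4 H2 -> H2N(CH2)6NH2
M_ADN = 108.14    # g/mol, adiponitrile (C6H8N2)
M_H2 = 2.016      # g/mol, hydrogen
M_HMDA = 116.21   # g/mol, hexamethylenediamine (C6H16N2)

kg_adn = 1.0                           # basis: 1 kg adiponitrile (assumed)
mol_adn = kg_adn * 1000.0 / M_ADN      # moles of adiponitrile
h2_kg = mol_adn * 4 * M_H2 / 1000.0    # 4 mol H2 consumed per mol dinitrile
hmda_kg = mol_adn * M_HMDA / 1000.0    # theoretical 100% yield of the diamine

print(f"H2 required: {h2_kg:.3f} kg; theoretical HMDA yield: {hmda_kg:.3f} kg")
# about 0.075 kg of H2 and 1.075 kg of diamine, consistent with the mass balance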
Hexamethylenediamine
[ "Chemistry", "Materials_science" ]
525
[ "Monomers", "Polymer chemistry" ]
8,759,469
https://en.wikipedia.org/wiki/ESSA-8
ESSA-8 was a weather satellite launched by the National Aeronautics and Space Administration (NASA) on December 15, 1968, from Vandenberg Air Force Base, California. Its name was derived from that of its oversight agency, the Environmental Science Services Administration (ESSA). ESSA-8 was an 18-sided polygon. It measured in diameter by in height, with a mass of . It was made of aluminum alloy and stainless steel covered with 10,020 solar cells. The cells charged 63 nickel–cadmium batteries, which served as the power source. The satellite could take 8 to 10 pictures every 24 hours. Each photo covered a area at a resolution of per pixel. ESSA-8's mission was to replace ESSA-6 and provide detailed cloud-pattern photography to ground stations worldwide. Partners in the project included NASA, ESSA, RCA, the National Weather Service, and the National Centers for Environmental Prediction (NMC). ESSA-8 operated for 2,644 days until it was deactivated on March 12, 1976. References External links http://www.earth.nasa.gov/history/essa/essa8.html https://web.archive.org/web/20060902131201/http://www.met.fsu.edu/explores/Guide/Essa_Html/essa8.html Spacecraft launched in 1968 Meteorological instrumentation and equipment Weather satellites of the United States Television Infrared Observation Satellites
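The stated 2,644-day lifetime follows directly from the launch and deactivation dates given above; a quick check in Python, using only the dates from the text:

from datetime import date

launch = date(1968, 12, 15)         # launch from Vandenberg Air Force Base
deactivated = date(1976, 3, 12)     # deactivation date
print((deactivated - launch).days)  # 2644, matching the figure in the text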
ESSA-8
[ "Technology", "Engineering" ]
311
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
8,759,532
https://en.wikipedia.org/wiki/Science%20and%20technology%20in%20Germany
Science and technology in Germany has a long and illustrious history, and research and development efforts form an integral part of the country's economy. Germany has been the home of some of the most prominent researchers in various scientific disciplines, notably physics, mathematics, chemistry and engineering. Before World War II, Germany had produced more Nobel laureates in scientific fields than any other nation, and was the preeminent country in the natural sciences. Germany currently has the third-most Nobel Prize winners of any nation, with 115. The German language, along with English and French, was one of the leading languages of science from the late 19th century until the end of World War II. After the war, the careers of so many researchers and teachers had been ended, whether by Nazi Germany itself, which started a brain drain, by the denazification process, by the American Operation Paperclip and the Soviet Operation Osoaviakhim, which exacerbated the brain drain in post-war Germany, or simply by the lost war, that "Germany, German science, and German as the language of science had all lost their leading position in the scientific community." Today, scientific research in the country is supported by industry, the network of German universities and scientific state institutions such as the Max Planck Society and the Deutsche Forschungsgemeinschaft. The raw output of scientific research from Germany consistently ranks among the world's highest. Germany was declared the most innovative country in the world in the 2020 Bloomberg Innovation Index and was ranked 9th in the Global Innovation Index in 2024. Institutions The Deutsches Museum of Masterpieces of Science and Technology in Munich is one of the largest science and technology museums in the world in terms of exhibition space, with about 28,000 exhibited objects from 50 fields of science and technology. The Federal Ministry of Education and Research (BMBF) is a supreme authority of the Federal Republic of Germany for science and technology. The ministry's headquarters are in Bonn, with a second office in Berlin. It was founded in 1972 as the Federal Ministry of Research and Technology (BMFT) to promote basic research, applied research and technological development. Federal Ministry for Economic Affairs and Climate Action (BMWK, previously BMWi) Foundations Alexander von Humboldt Foundation Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) German Academic Exchange Service (DAAD), which promotes the international exchange of scientists and students The Fritz Thyssen Foundation supports young scientists and research projects; it was founded in 1959 and is located in Cologne. The purpose of the foundation, which has an endowment capital of €542.4 million, is to promote science at universities and research institutes, primarily in Germany, with particular consideration for young scientists.
National science libraries German National Library of Economics (ZBW), Kiel & Hamburg German National Library of Medicine (ZB MED), Cologne & Bonn German National Library of Science and Technology (TIB), Hannover Research organizations Helmholtz Association of German Research Centres (complex systems and large-scale research), Bonn & Berlin Fraunhofer Society (applied and mission-oriented research), Munich Leibniz Association (fundamental and applied research), Berlin Max Planck Society (fundamental research), Munich Gesellschaft für Angewandte Mathematik und Mechanik ("Society of Applied Mathematics and Mechanics"), Dresden The Hasso Plattner Institute (HPI), officially the Hasso Plattner Institute for Digital Engineering gGmbH, is a privately financed IT institute and, together with the University of Potsdam, forms the Digital Engineering Faculty. It is located in Potsdam-Babelsberg and researches practical and applied topics in digital technologies. Its founder and namesake is SAP founder Hasso Plattner. Prize committees The Gottfried Wilhelm Leibniz Prize is granted to ten scientists and academics every year. With a maximum of €2.5 million per award, it is one of the highest-endowed research prizes in the world. The prize, like the Leibniz Association mentioned above, is named after the German polymath and philosopher Gottfried Wilhelm Leibniz (1646–1716), a contemporary and competitor of Isaac Newton (1642–1727). The Bunsen–Kirchhoff Award is a prize for "outstanding achievements" in the field of analytical spectroscopy, named in honor of the chemist Robert Bunsen and the physicist Gustav Kirchhoff (→ Physics). The Helmholtz Prize, endowed with €20,000, is awarded every two to three years to European scientists for scientific and technological research in metrology. Scientific fields The global spread of the printing press with movable type and oil-based ink was a process that began around 1440 with the invention of the printing press by Johannes Gutenberg and continued until printing based on this procedure had been introduced in all parts of the world in the 19th century, thus creating the conditions for the dissemination of generally accessible scientific publications and paving the way for the Scientific Revolution. Scientific Revolution Johannes Kepler (1571–1630) was one of the originators of the Scientific Revolution of the 16th and 17th centuries. He was an astronomer, physicist, mathematician and natural philosopher. He advocated the idea of a heliocentric model of the Solar System, which can be traced back to the theories of the ancient Greek astronomers Aristarchus of Samos and Seleucus of Seleucia, as well as to the 16th-century astronomer Nicolaus Copernicus (1473–1543), whose main work on the heliocentric model was first published by Johannes Petreius (–1550), likely with the polymath Johannes Schöner (1477–1547), in the Free Imperial City of Nuremberg in 1543. In March 1600, Kepler became assistant to the astronomer Tycho Brahe (1546–1601) at the court of Emperor Rudolf II in Prague, Kingdom of Bohemia. After Brahe's death in October of the next year, Kepler succeeded him as imperial mathematician and court astronomer (until 1627). Johannes Kepler discovered the laws according to which the planets move around the Sun, which were later called Kepler's laws after him. With his introduction to calculating with logarithms, Kepler contributed to the spread of this type of calculation. In mathematics, a numerical method for calculating the volume of wine barrels with integrals was later named Kepler's barrel rule after him; a sketch of the rule follows below.
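Kepler's barrel rule estimates a definite integral from the integrand's values at the two endpoints and the midpoint of the interval; it is the method known today as Simpson's rule. A minimal Python sketch, in which the barrel profile and dimensions are invented purely for illustration:

import math

def kepler_barrel_rule(f, a, b):
    # (b-a)/6 * (f(a) + 4*f(midpoint) + f(b)); exact for cubic polynomials
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

# Volume of a barrel of height h: V = pi * integral of r(x)^2 dx.
h, r_end, r_mid = 1.0, 0.30, 0.40  # metres; hypothetical barrel dimensions
radius = lambda x: r_end + (r_mid - r_end) * (1.0 - (2.0 * x / h - 1.0) ** 2)
volume = math.pi * kepler_barrel_rule(lambda x: radius(x) ** 2, 0.0, h)
print(f"approximate barrel volume: {volume:.3f} m^3")  # about 0.43 m^3

Only three cross-section measurements (bottom, middle, top) are needed, which is what made the rule practical for gauging wine barrels.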
He made optics a subject of scientific investigation and confirmed the discoveries made with the telescope by his Italian contemporary Galileo Galilei (1564–1642). He worked on the theory of the telescope and invented the refracting astronomical or Keplerian telescope, a considerable improvement over the Galilean telescope. Kepler also invented the valveless gear pump, because a mine owner needed a device to pump water out of his mine. Physics Otto von Guericke (1602–1686) was a scientist, inventor, mathematician and physicist from Magdeburg. He is best known for his experiments on air pressure using the Magdeburg hemispheres. With the invention of the vacuum pump he laid the foundation of vacuum technology. Daniel Gabriel Fahrenheit (1686–1736) was a physicist and inventor of measuring instruments from Danzig. The temperature unit degrees Fahrenheit (°F) was named after him. Gustav Kirchhoff (1824–1887) was a physicist from Königsberg who made particular contributions to the study of electricity. Today Kirchhoff is best known for Kirchhoff's circuit rules, the fundamental laws of electrical engineering, although these had already been discovered in 1833 by Carl Friedrich Gauss (1777–1855) during his experiments on electricity. Kirchhoff also described the emission of black-body radiation by heated objects, which eventually contributed to the emergence of quantum mechanics. With Robert Bunsen (1811–1899) he developed flame spectroscopy in 1859, which can be used to detect chemical elements with high specificity. Bunsen, a chemist from Göttingen, discovered the elements caesium and rubidium together with Kirchhoff in 1861. He perfected the Bunsen burner, which is named after him, and invented the Bunsen cell and a grease-spot photometer. The work of Albert Einstein (1879–1955), best known for developing the theory of relativity, and Max Planck (1858–1947), known for the Planck constant, was crucial to the foundation of modern physics, which Werner Heisenberg (1901–1976) and Erwin Schrödinger (1887–1961) developed further. They were preceded by such key physicists as Joseph von Fraunhofer (1787–1826), who discovered the Fraunhofer lines in spectroscopy, and Hermann von Helmholtz (1821–1894), among others. Wilhelm Conrad Röntgen (1845–1923) discovered X-rays in 1895, an accomplishment that made him the first winner of the Nobel Prize in Physics in 1901 and eventually earned him an element name, roentgenium. Heinrich Rudolf Hertz's (1857–1894) work in the domain of electromagnetic radiation was pivotal to the development of modern telecommunication; the unit of frequency, the hertz, was named in his honor. Mathematical aerodynamics was developed in Germany, especially by Ludwig Prandtl. Karl Schwarzschild (1873–1916) was an astrophysicist from Frankfurt am Main. He was professor and director of the Göttingen Observatory from 1901 to 1909. There he was able to work together with scientists such as David Hilbert (1862–1943) and Hermann Minkowski (1864–1909). Schwarzschild's work on relativity provided the first exact solutions to the field equations of Albert Einstein's general relativity: one for an uncharged, non-rotating spherically symmetric body and one for a static isotropic void around a solid body. He also did fundamental work on classical black holes, which is why several of their properties bear his name, namely the Schwarzschild metric and the Schwarzschild radius.
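From his exterior solution follows the well-known formula for the Schwarzschild radius, r_s = 2GM/c². A short Python check using standard physical constants, with the Sun's mass as the example body:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2.0 * G * M_sun / c ** 2  # Schwarzschild radius in metres
print(f"Schwarzschild radius of the Sun: {r_s / 1000.0:.2f} km")  # about 2.95 km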
The center of a non-rotating, uncharged black hole is called the Schwarzschild singularity. Paul Forman argued in 1971 that the remarkable scientific achievements in quantum physics were the cross-product of the hostile intellectual atmosphere (in which many scientists rejected Weimar Germany and Jewish scientists), of revolts against causality, determinism and materialism, and of the creation of the revolutionary new theory of quantum mechanics. The scientists adjusted to the intellectual environment by dropping Newtonian causality from quantum mechanics, thereby opening up an entirely new and highly successful approach to physics. The "Forman Thesis" has generated an intense debate among historians of science. Deutsche Physik The so-called Deutsche Physik was a movement that some German physicists supported during the Nazi period, which mixed physics with racist views. They rejected new discoveries in physics as being too theoretical and advocated a stronger emphasis on empirical evidence. This physics was influenced by anti-Semitic ideas that were widespread in the polarized political climate of the Weimar Republic. In addition, some leading theoretical physicists at that time were of Jewish descent. Leading representatives of this ideology were the Bavarian physicist Johannes Stark (1874–1957, Nobel Prize in Physics in 1919) and the German-Hungarian physicist Philipp Lenard (1862–1947, Nobel Prize winner of 1905). Notably, the latter labeled Albert Einstein's contributions to science as "Jewish physics". Chemistry Georgius Agricola gave chemistry its modern name. He is generally referred to as the father of mineralogy and as the founder of geology as a scientific discipline. Justus von Liebig (1803–1873) made major contributions to agricultural and biological chemistry, and is one of the principal founders of organic chemistry. At the start of the 20th century, Germany garnered fourteen of the first thirty-one Nobel Prizes in Chemistry, starting with Hermann Emil Fischer (1852–1919) in 1902 and continuing to Carl Bosch (1874–1940) and Friedrich Bergius (1884–1949) in 1931. Otto Hahn (1879–1968) was a pioneer of radioactivity and radiochemistry; his discovery of nuclear fission together with the Austrian scientist Lise Meitner (1878–1968) and Fritz Strassmann (1902–1980) in 1938 provided the scientific and technological basis for the utilization of atomic energy. The biochemist Adolf Butenandt (1903–1995) independently worked out the molecular structure of the primary male sex hormone, testosterone, and was the first to successfully synthesize it from cholesterol, in 1935. Engineering Germany has been the home of many famous inventors and engineers, such as Johannes Gutenberg, who is credited with the invention of the movable-type printing press in Europe; Hans Geiger, the creator of the Geiger counter; and Konrad Zuse, who built the first electronic computer. German inventors, engineers and industrialists such as Zeppelin, Siemens, Daimler, Otto, Wankel, von Braun and Benz helped shape modern automotive and air transportation technology, including the beginnings of space travel. The engineer Otto Lilienthal laid some of the fundamentals for the science of aviation.
The physicist and optician Ernst Abbe (1840–1905), together with the entrepreneurs Carl Zeiss (1816–1888) and Otto Schott (1851–1935), laid the foundations of modern optical engineering in the 19th century and developed many optical instruments such as microscopes and telescopes. From 1899 he was the sole owner of the Carl Zeiss AG and played a decisive role in setting up the enterprise Jenaer Glaswerk Schott & Gen. (today Schott AG). Both enterprises remain very successful worldwide into the 21st century. The engineer Rudolf Diesel (1858–1913) was the inventor of an internal combustion engine, the Diesel engine. He first published his idea of an engine with a particularly high level of efficiency in an 1893 treatise. After 1893, he succeeded in building such an engine in a laboratory at the Augsburg Machine Factory (now MAN). Through his patents registered in many countries and his public relations work, he gave his name to the engine and the associated diesel fuel. In the 1930s the electrical engineers Ernst Ruska (1906–1988) and Max Knoll (1897–1969) developed the first electron microscope at the Technische Hochschule zu Berlin. Manfred von Ardenne (1907–1997) was a scientist and engineer, active as a researcher primarily in applied physics, and the originator of around 600 inventions and patents in radio and television technology, electron microscopy, and nuclear, plasma and medical technology. Biological and earth sciences Martin Waldseemüller (–1520) and Matthias Ringmann (1482–1511) were cartographers of the Renaissance. In 1507 they created the first world map on which the land masses west of the Atlantic Ocean were named "America", after Amerigo Vespucci. The Waldseemüller map of 1507 has been part of the UNESCO World Documentary Heritage since 2005. Emil Behring, Ferdinand Cohn, Paul Ehrlich, Robert Koch, Friedrich Loeffler and Rudolph Virchow, six key figures in microbiology, were from Germany. Alexander von Humboldt's (1769–1859) work as a natural scientist and explorer was foundational to biogeography; he was one of the outstanding scientists of his time and a shining example for Charles Darwin. Wladimir Köppen (1846–1940) was an eclectic Russian-born botanist and climatologist who synthesized global relationships between climate, vegetation and soil types into a classification system that is used, with some modifications, to this day. The Frankfurt surgeon, botanist, microbiologist and mycologist Anton de Bary (1831–1888) laid some of the foundations of plant pathology and was one of the discoverers of symbiosis among organisms. Ernst Haeckel (1834–1919) discovered, described and named thousands of new species, mapped a tree of life relating all life forms, and coined many terms in biology, for example ecology and phylum. His published artwork of different lifeforms includes over 100 detailed, multi-colour illustrations of animals and sea creatures, collected in his Kunstformen der Natur (Art Forms in Nature), an international bestseller and a book that would go on to influence Art Nouveau (Jugendstil). But Haeckel was also a promoter of scientific racism and embraced the idea of Social Darwinism. Alfred Wegener (1880–1930), a similarly interdisciplinary scientist, was one of the first people to hypothesize the theory of continental drift, which was later developed into the overarching geological theory of plate tectonics.
Psychology Wilhelm Wundt is credited with establishing psychology as an independent empirical science through his construction of the first psychological laboratory at the University of Leipzig in 1879. In the beginning of the 20th century, the Kaiser Wilhelm Institute founded by Oskar and Cécile Vogt was among the world's leading institutions in the field of brain research. They collaborated with Korbinian Brodmann to map areas of the cerebral cortex. After the National Socialist laws banning Jewish doctors in 1933, the fields of neurology and psychiatry lost 65% of their professors and teachers. Research shifted to a 'Nazi neurology', with subjects such as eugenics or euthanasia. Humanities Besides the natural sciences, German researchers have added much to the development of the humanities. Albertus Magnus (–1280) was a polymath, philosopher, lawyer, natural scientist, theologian, Dominican and Bishop of Regensburg. His great, diverse knowledge earned him the name Magnus ("the Great"), the title of Doctor of the Church and the honorary title of doctor universalis. Johann Joachim Winckelmann (1717–1768) was a German art historian and archaeologist, "the prophet and founding hero of modern archaeology". Heinrich Schliemann (1822–1890) was a wealthy businessman, a devotee of the historicity of places mentioned in the works of Homer, and an archaeological excavator of Hisarlik (from 1871), now presumed to be the site of Troy, along with the Mycenaean sites Mycenae and Tiryns. Theodor Mommsen (1817–1903) is widely counted among the greatest classicists of the 19th century; his work on Roman history is still of fundamental importance for contemporary research. Max Weber (1864–1920) was, together with Karl Marx (1818–1883), among the most important theorists of the development of modern Western society and is regarded as one of the founders of sociology. Immanuel Kant (1724–1804) was a philosopher of the Enlightenment and professor of logic and metaphysics in Königsberg. Kant is one of the most important representatives of Western philosophy. His work Critique of Pure Reason marks a turning point in the history of philosophy and the beginning of modern philosophy. Kant is best known for the categorical imperative, the fundamental principle of moral action from his Groundwork of the Metaphysics of Morals: "Act only according to that maxim whereby you can at the same time will that it should become a universal law." While Kant was one of the first philosophers of German idealism, Georg Wilhelm Friedrich Hegel (1770–1831) is its last and one of its most influential representatives. His philosophy seeks to interpret the whole of reality in its variety of manifestations, including historical development, in a coherent, systematic and definitive manner. It is divided into "logic", "natural philosophy" and the "phenomenology of spirit" (Geist), which also includes a philosophy of history. His thinking became the starting point for numerous other movements in the theory of science, sociology, history, theology, politics, jurisprudence and art theory, and it also influenced other areas of culture and intellectual life. Contemporary examples are the philosopher Jürgen Habermas, the Egyptologist Jan Assmann, the sociologist Niklas Luhmann, the historian Reinhart Koselleck and the legal historian Michael Stolleis. In order to promote the international visibility of research in these fields, a new translation prize was established in 2008; it funds the translation of humanities studies into English.
Warfare Carl von Clausewitz (1780–1831) was a Prussian general, army reformer, military scientist and ethicist. Clausewitz became known through his unfinished major work Vom Kriege (On War), which deals with the problem of the theory of war. His theories on strategy, tactics and philosophy had a major influence on military theory in all Western countries and are still taught at military academies today. They are also used in business management and marketing. The most quoted statement from his masterpiece is: "War is the continuation of policy with other means." Oswald Boelcke was the progenitor of air-to-air combat tactics, fighter squadron organization, early-warning systems, and the German air force; he has been dubbed "the father of air combat". From his first victories, the news of his success instructed and motivated both his fellow fliers and the German public. It was at his instigation that the Imperial German Air Service founded its fighter school to teach his aerial tactics. The promulgation of his Dicta Boelcke set tactics for the German fighter force. The concentration of fighter airplanes into squadrons gained Germany air supremacy on the Western Front, and was the basis for its wartime successes. Personalities See also German inventors and discoverers German inventions and discoveries Operation Paperclip Technology during World War II Körber European Science Prize Notes References Competing Modernities: Science and Education, Kathryn Olesko and Christoph Strupp. (A comparative analysis of the history of science and education in Germany and the United States) English section of the Federal Ministry of Education and Research's website Germany's science and research landscape Articles and dossiers about Research and Technology in Germany, Goethe-Institut Audretsch, D. B., Lehmann, E. E., & Schenkenhofer, J. (2018). Internationalization strategies of hidden champions: lessons from Germany. Multinational Business Review. External links Federal Ministry of Education and Research Deutsche Forschungsgemeinschaft Research-in-germany.org History of science de:Deutschland#Wissenschaft
Science and technology in Germany
[ "Technology" ]
4,502
[ "History of science", "History of science and technology" ]