Dataset columns: id (int64, 39 to 79M); url (string, length 31 to 227); text (string, length 6 to 334k); source (string, length 1 to 150); categories (list, length 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, length 0 to 30)
3,011,741
https://en.wikipedia.org/wiki/Software%20token
A software token (a.k.a. soft token) is a piece of a two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. (Contrast hardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated — absent physical invasion of the device) Because software tokens are something one does not physically possess, they are exposed to unique threats based on duplication of the underlying cryptographic material - for example, computer viruses and software attacks. Both hardware and software tokens are vulnerable to bot-based man-in-the-middle attacks, or to simple phishing attacks in which the one-time password provided by the token is solicited, and then supplied to the genuine website in a timely manner. Software tokens do have benefits: there is no physical token to carry, they do not contain batteries that will run out, and they are cheaper than hardware tokens. Security architecture There are two primary architectures for software tokens: shared secret and public-key cryptography. For a shared secret, an administrator will typically generate a configuration file for each end-user. The file will contain a username, a personal identification number, and the secret. This configuration file is given to the user. The shared secret architecture is potentially vulnerable in a number of areas. The configuration file can be compromised if it is stolen and the token is copied. With time-based software tokens, it is possible to borrow an individual's PDA or laptop, set the clock forward, and generate codes that will be valid in the future. Any software token that uses shared secrets and stores the PIN alongside the shared secret in a software client can be stolen and subjected to offline attacks. Shared secret tokens can be difficult to distribute, since each token is essentially a different piece of software. Each user must receive a copy of the secret, which can create time constraints. Some newer software tokens rely on public-key cryptography, or asymmetric cryptography. This architecture eliminates some of the traditional weaknesses of software tokens, but does not affect their primary weakness (ability to duplicate). A PIN can be stored on a remote authentication server instead of with the token client, making a stolen software token no good unless the PIN is known as well. However, in the case of a virus infection, the cryptographic material can be duplicated and then the PIN can be captured (via keylogging or similar) the next time the user authenticates. If there are attempts made to guess the PIN, it can be detected and logged on the authentication server, which can disable the token. Using asymmetric cryptography also simplifies implementation, since the token client can generate its own key pair and exchange public keys with the server. See also Authentication Electronic authentication Google Authenticator Multi-factor authentication Security token References External links Microsoft to abandon passwords Banks to Use 2-factor Authentication by End of 2006 Cryptography Computer access control fr:Authentification forte
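The shared-secret, time-based architecture described above can be illustrated with a minimal sketch. The article does not name a specific algorithm; the sketch below assumes a TOTP-style scheme (HMAC-SHA1, 30-second steps, 6 digits, following RFC 6238 conventions) and uses only the Python standard library. The secret value is hypothetical.

```python
# Minimal sketch of a time-based, shared-secret software token (TOTP-style).
# The HMAC-SHA1 digest, 30-second step, and 6-digit output follow RFC 6238
# conventions; the article does not name a specific scheme.
import hashlib
import hmac
import struct
import time

def totp(shared_secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Derive a one-time password from a shared secret and a point in time."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step                      # time-based counter
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"        # hypothetical; both client and server hold it
print(totp(secret))                      # code for the current 30-second window
print(totp(secret, time.time() + 3600))  # a future code, cf. the clock-forward attack above
```

Because the client and the server hold the same secret, anyone who copies the configuration file (or sets the device clock forward) can generate the same codes, which is the duplication weakness discussed above.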
Software token
[ "Mathematics", "Engineering" ]
651
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering", "Computer access control" ]
3,011,773
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem%20for%20smooth%20manifolds
In mathematics, a Riemann–Roch theorem for smooth manifolds is a version of results such as the Hirzebruch–Riemann–Roch theorem or Grothendieck–Riemann–Roch theorem (GRR) without a hypothesis making the smooth manifolds involved carry a complex structure. Results of this kind were obtained by Michael Atiyah and Friedrich Hirzebruch in 1959, reducing the requirements to something like a spin structure. Formulation Let X and Y be oriented smooth closed manifolds, and f: X → Y a continuous map. Let vf=f*(TY) − TX in the K-group K(X). If dim(X) ≡ dim(Y) mod 2, then where ch is the Chern character, d(vf) an element of the integral cohomology group H2(Y, Z) satisfying d(vf) ≡ f* w2(TY)-w2(TX) mod 2, fK* the Gysin homomorphism for K-theory, and fH* the Gysin homomorphism for cohomology . This theorem was first proven by Atiyah and Hirzebruch. The theorem is proven by considering several special cases. If Y is the Thom space of a vector bundle V over X, then the Gysin maps are just the Thom isomorphism. Then, using the splitting principle, it suffices to check the theorem via explicit computation for line bundles. If f: X → Y is an embedding, then the Thom space of the normal bundle of X in Y can be viewed as a tubular neighborhood of X in Y, and excision gives a map and . The Gysin map for K-theory/cohomology is defined to be the composition of the Thom isomorphism with these maps. Since the theorem holds for the map from X to the Thom space of N, and since the Chern character commutes with u and v, the theorem is also true for embeddings. f: X → Y. Finally, we can factor a general map f: X → Y into an embedding and the projection The theorem is true for the embedding. The Gysin map for the projection is the Bott-periodicity isomorphism, which commutes with the Chern character, so the theorem holds in this general case also. Corollaries Atiyah and Hirzebruch then specialised and refined in the case X = a point, where the condition becomes the existence of a spin structure on Y. Corollaries are on Pontryagin classes and the J-homomorphism. Notes Theorems in differential geometry Algebraic surfaces Bernhard Riemann
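The displayed formula after "If dim(X) ≡ dim(Y) mod 2, then" did not survive extraction. A hedged reconstruction in LaTeX, following the usual statement of the Atiyah–Hirzebruch differentiable Riemann–Roch theorem, is given below; the Â-class does not appear in the surviving text and is assumed here, so this should be checked against the original article.

```latex
% Hedged reconstruction of the missing display; not recovered verbatim from this page.
% For x in K(X), with ch, d(v_f), f_{K*}, f_{H*} as defined in the text and
% \hat{A} the (assumed) A-hat class:
f_{H*}\!\left( \operatorname{ch}(x)\, e^{d(v_f)/2}\, \hat{A}(X) \right)
  \;=\; \operatorname{ch}\!\left( f_{K*}(x) \right)\, \hat{A}(Y)
```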
Riemann–Roch theorem for smooth manifolds
[ "Mathematics" ]
562
[ "Theorems in differential geometry", "Theorems in geometry" ]
3,011,783
https://en.wikipedia.org/wiki/IFA%20Berlin
The IFA or Internationale Funkausstellung Berlin (International radio exhibition Berlin, a.k.a. 'Berlin Radio Show') is one of the oldest industrial exhibitions in Germany. Between 1924 and 1939 it was an annual event, but from 1950 it was held every other year until 2005. Since then it has become an annual event again, held in September. Today it is one of the world's leading trade shows for consumer electronics and home appliances. It offers exhibitors the opportunity to present their latest products and developments to the general public. As a result of daily reporting in almost all the German media, the radio exhibition and the showcased technology receive a large amount of attention around the globe. In the course of its history, many world innovations were first seen at the exhibition. IFA is "Europe's biggest tech show". 245,000 visitors and 1,645 exhibitors attended IFA 2015. History German physicist and inventor Manfred von Ardenne gave the world's first public demonstration of a fully electronic television system using a cathode ray tube for both transmission (using flying-spot image scans, not a camera) and reception, at the 1931 show. In 1933 the Volksempfänger (VE 301 W), a Nazi-sponsored radio receiver design, was introduced. Ordered by Joseph Goebbels, designed by Otto Griessing, and sold by Gustav Seibt, it was presented at the tenth Berliner Funkausstellung on 18 August 1933, its price fixed at 76 Reichsmark (RM). 100,000 units were sold during the exhibition. In 1938 the DKE 38 (Deutscher Kleinempfänger 38, i.e. German miniature receiver 1938) followed, the price fixed at 35 RM. AEG, founded in 1883 by Emil Rathenau, showed the first practical audio tape recorder, the Magnetophon K1, at the August 1935 show. In 1939 the exhibition was called Grosse Deutsche Funk- und Fernseh-Ausstellung (Great German Radio and Television Exhibition). The Einheits-Fernseh-Empfänger E1, a TV set designed to be affordable for everybody, was introduced. Plans for large-scale manufacture were thwarted by the outbreak of World War II. Color TV was also introduced (a prototype), based on an invention by Werner Flechsig (cf. shadow mask). Multinational Dutch electronics corporation Philips introduced the compact audio cassette medium for audio storage and the first cassette recorder (the Philips EL3300), developed by ir. Lou Ottens and his team at the Philips factory in Hasselt, at the 1963 show, on Friday 30 August. Due to the global pandemic, IFA Berlin was closed in 2020 and 2021. It opened again on September 2, 2022. See also Technics Digital Link interface introduced at 2014 IFA Notes External links Highlights of past exhibitions from 1926 to 2005 Trade fairs in Germany Economy of Berlin Consumer electronics Computer-related trade shows Recurring events established in 1924 1924 establishments in Germany
IFA Berlin
[ "Technology" ]
629
[ "Computer industry", "Computer-related trade shows" ]
3,011,979
https://en.wikipedia.org/wiki/Flying%20serpent%20%28asterism%29
Flying Serpent (Tengshe 螣蛇) is an asterism (name for a group of stars) in the constellation "Encampment" (Shixiu 室宿) in the Chinese constellation system. It is named after the mythological serpent, tengshe. The Tengshe asterism was a group of "22 stars, occurring in the northern [part] of the "Encampment" constellation, [representing; or comprising the figure of] the Heavenly Snake, chief of the water reptiles", according to the treatise on astronomy in the Book of Jin (Jin Shu). The Tengshe coincides with the lizard constellation Lacerta, and the northern parts of Lacerta occupy the center of Tengshe. References Chinese constellations
Flying serpent (asterism)
[ "Astronomy" ]
158
[ "Stellar astronomy stubs", "Chinese constellations", "Astronomy stubs", "Constellations" ]
3,012,047
https://en.wikipedia.org/wiki/Combined%20sewer
A combined sewer is a type of gravity sewer with a system of pipes, tunnels, pump stations etc. to transport sewage and urban runoff together to a sewage treatment plant or disposal site. This means that during rain events, the sewage gets diluted, resulting in higher flowrates at the treatment site. Uncontaminated stormwater simply dilutes sewage, but runoff may dissolve or suspend virtually anything it contacts on roofs, streets, and storage yards. As rainfall travels over roofs and the ground, it may pick up various contaminants including soil particles and other sediment, heavy metals, organic compounds, animal waste, and oil and grease. Combined sewers may also receive dry weather drainage from landscape irrigation, construction dewatering, and washing buildings and sidewalks. Combined sewers can cause serious water pollution problems during combined sewer overflow (CSO) events when combined sewage and surface runoff flows exceed the capacity of the sewage treatment plant, or of the maximum flow rate of the system which transmits the combined sources. In instances where exceptionally high surface runoff occurs (such as large rainstorms), the load on individual tributary branches of the sewer system may cause a back-up to a point where raw sewage flows out of input sources such as toilets, causing inhabited buildings to be flooded with a toxic sewage-runoff mixture, incurring massive financial burdens for cleanup and repair. When combined sewer systems experience these higher than normal throughputs, relief systems cause discharges containing human and industrial waste to flow into rivers, streams, or other bodies of water. Such events frequently cause both negative environmental and lifestyle consequences, including beach closures, contaminated shellfish unsafe for consumption, and contamination of drinking water sources, rendering them temporarily unsafe for drinking and requiring boiling before uses such as bathing or washing dishes. Mitigation of combined sewer overflows include sewer separation, CSO storage, expanding sewage treatment capacity, retention basins, screening and disinfection facilities, reducing stormwater flows, green infrastructure and real-time decision support systems. This type of gravity sewer design is less often used nowadays when constructing new sewer systems. Modern-day sewer designs exclude surface runoff by building sanitary sewers instead, but many older cities and towns continue to operate previously constructed combined sewer systems. Development The earliest sewers were designed to carry street runoff away from inhabited areas and into surface waterways without treatment. Before the 19th century, it was commonplace to empty human waste receptacles, e.g., chamber pots, into town and city streets and slaughter animals in open street "shambles". The use of draft animals such as horses and herding of livestock through city streets meant that most contained large amounts of excrement. Before the development of macadam as a paving material in the 19th century, paving systems were mostly porous, so that precipitation could soak away and not run off, and urban rooftop rainwater was often saved in rainwater tanks. Open sewers, consisting of gutters and urban streambeds, were common worldwide before the 20th century. 
In the majority of developed countries, large efforts were made during the late 19th and early 20th centuries to cover the formerly open sewers, converting them to closed systems with cast iron, steel, or concrete pipes, masonry, and concrete arches, while streets and footpaths were increasingly covered with impermeable paving systems. Most sewage collection systems of the 19th and early to mid-20th century used single-pipe systems that collect both sewage and urban runoff from streets and roofs (to the extent that relatively clean rooftop rainwater was not saved in butts and cisterns for drinking and washing.) This type of collection system is referred to as a "combined sewer system". The rationale for combining the two was that it would be cheaper to build just a single system. Most cities at that time did not have sewage treatment plants, so there was no perceived public health advantage in constructing a separate "surface water sewerage" (UK terminology) or "storm sewer" (US terminology) system. Moreover, before the automobile era, runoff was likely to be typically highly contaminated with animal waste. Further, until the mid-late 19th century the frequent use of shambles contributed more waste. The widespread replacement of horses with automotive propulsion, paving of city streets and surfaces, construction of municipal slaughterhouses, and provision of mains water in the 20th century changed the nature and volume of urban runoff to be initially cleaner, include water that formerly soaked away and previously saved rooftop rainwater after combined sewers were already widely adopted. When constructed, combined sewer systems were typically sized to carry three to 160 times the average dry weather sewage flows. It is generally infeasible to treat the volume of mixed sewage and surface runoff flowing in a combined sewer during peak runoff events caused by snowmelt or convective precipitation. As cities built sewage treatment plants, those plants were typically built to treat only the volume of sewage flowing during dry weather. Relief structures were installed in the collection system to bypass untreated sewage mixed with surface runoff during wet weather, protecting sewage treatment plants from damage caused if peak flows reached the headworks. Combined sewer overflows (CSOs) These relief structures, called "storm-water regulators" (in American English - or "combined sewer overflows" in British English) are constructed in combined sewer systems to divert flows in excess of the peak design flow of the sewage treatment plant. Combined sewers are built with control sections establishing stage-discharge or pressure differential-discharge relationships which may be either predicted or calibrated to divert flows in excess of sewage treatment plant capacity. A leaping weir may be used as a regulating device allowing typical dry-weather sewage flow rates to fall into an interceptor sewer to the sewage treatment plant, but causing a major portion of higher flow rates to leap over the interceptor into the diversion outfall. Alternatively, an orifice may be sized to accept the sewage treatment plant design capacity and cause excess flow to accumulate above the orifice until it overtops a side-overflow weir to the diversion outfall. CSO statistics may be confusing because the term may describe either the number of events or the number of relief structure locations at which such events may occur. 
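The behaviour of the relief structures described above (leaping weirs and sized orifices) amounts to a simple flow split: the interceptor carries flow up to the treatment-plant design capacity, and the excess spills to the diversion outfall. A minimal sketch with hypothetical figures, not design values:

```python
# Minimal sketch of the flow split at a combined-sewer regulator: the
# interceptor carries flow up to the treatment-plant design capacity, and
# anything above that spills to the diversion outfall as a CSO.
# All figures are hypothetical illustrations, not design values.

def regulator_split(combined_flow_m3s: float, plant_capacity_m3s: float):
    """Return (flow to interceptor, flow to diversion outfall)."""
    to_interceptor = min(combined_flow_m3s, plant_capacity_m3s)
    to_outfall = max(0.0, combined_flow_m3s - plant_capacity_m3s)
    return to_interceptor, to_outfall

# Dry weather: everything reaches the plant.
print(regulator_split(0.8, 2.0))   # (0.8, 0.0)
# Peak storm flow many times the dry-weather flow: the excess overflows untreated.
print(regulator_split(25.0, 2.0))  # (2.0, 23.0)
```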
A CSO event, as the term is used in American English, occurs when mixed sewage and stormwater are bypassed from a combined sewer system control section into a river, stream, lake, or ocean through a designed diversion outfall, but without treatment. Overflow frequency and duration varies both from system to system, and from outfall to outfall, within a single combined sewer system. Some CSO outfalls discharge infrequently, while others activate every time it rains. The storm water component contributes pollutants to CSO; but a major faction of pollution is the first foul flush of accumulated biofilm and sanitary solids scoured from the dry weather wetted perimeter of combined sewers during peak flow turbulence. Each storm is different in the quantity and type of pollutants it contributes. For example, storms that occur in late summer, when it has not rained for a while, have the most pollutants. Pollutants like oil, grease, fecal coliform from pet and wildlife waste, and pesticides get flushed into the sewer system. In cold weather areas, pollutants from cars, people and animals also accumulate on hard surfaces and grass during the winter and then are flushed into the sewer systems during heavy spring rains. Health impacts CSO discharges during heavy storms can cause serious water pollution problems. The discharges contain human and industrial waste, and can cause beach closings, restrictions on shellfish consumption and contamination of drinking water sources. Comparison to sanitary sewer overflows CSOs differ from sanitary sewer overflows in that the latter are caused by sewer system obstructions, damage, or flows in excess of sewer capacity (rather than treatment plant capacity.) Sanitary sewer overflows may occur at any low spot in the sewer system rather than at the CSO relief structures. Absence of a diversion outfall often causes sanitary sewer overflows to flood residential structures and/or flow over traveled road surfaces before reaching natural drainage channels. Sanitary sewer overflows may cause greater health risks and environmental damage than CSOs if they occur during dry weather when there is no precipitation runoff to dilute and flush away sewage pollutants. CSOs in the United States About 860 communities in the US have combined sewer systems, serving about 40million people. Pollutants from CSO discharges can include bacteria and other pathogens, toxic chemicals, and debris. These pollutants have also been linked with antimicrobial resistance, posing serious public health concerns. The U.S. Environmental Protection Agency (EPA) issued a policy in 1994 requiring municipalities to make improvements to reduce or eliminate CSO-related pollution problems. The policy is implemented through the National Pollutant Discharge Elimination System (NPDES) permit program. The policy defined water quality parameters for the safety of an ecosystem; it allowed for action that are site specific to control CSOs in most practical way for community; it made sure the CSO control is not beyond a community's budget; and allowed water quality parameters to be flexible, based upon the site specific conditions. The CSO Control Policy required all publicly owned treatment works to have "nine minimum controls" in place by January 1, 1997, in order to decrease the effects of sewage overflow by making small improvements in existing processes. In 2000 Congress amended the Clean Water Act to require the municipalities to comply with the EPA policy. 
Mitigation of CSOs Mitigation of combined sewer overflows include sewer separation, CSO storage, expanding sewage treatment capacity, retention basins, screening and disinfection facilities, reducing stormwater flows, green infrastructure and real-time decision support systems. For example, cities with combined sewer overflows employ one or more engineering approaches to reduce discharges of untreated sewage, including: utilizing a green infrastructure approach to improve storm water management capacity throughout the system, and reduce the hydraulic overloading of the treatment plant repair and replacement of leaking and malfunctioning equipment increasing overall hydraulic capacity of the sewage collection system (often a very expensive option). The United Kingdom Environment Agency identified unsatisfactory intermittent discharges and issued an Urban Wastewater Treatment Directive requiring action to limit pollution from combined sewer overflows. In 2009, the Canadian Council of Ministers of the Environment adopted a Canada-wide Strategy for the Management of Municipal Wastewater Effluent including national standards to (1) remove floating material from combined sewer overflows, (2) prevent combined sewer overflows during dry weather, and (3) prevent development or redevelopment from increasing the frequency of combined sewer overflows. Rehabilitation of combined sewer systems to mitigate CSOs require extensive monitoring networks which are becoming more prevalent with decreasing sensor and communication costs. These monitoring networks can identify bottlenecks causing the main CSO problem, or aid in the calibration of hydrodynamic or hydrological models to enable cost effective CSO mitigation. Municipalities in the US have been undertaking projects to mitigate CSO since the 1990s. For example, prior to 1990, the quantity of untreated combined sewage discharged annually to lakes, rivers, and streams in southeast Michigan was estimated at more than per year. In 2005, with nearly $1 billion of a planned $2.4 billion CSO investment put into operation, untreated discharges have been reduced by more than per year. This investment that has yielded an 85 percent reduction in CSO has included numerous sewer separation, CSO storage and treatment facilities, and wastewater treatment plant improvements constructed by local and regional governments. Many other areas in the US are undertaking similar projects (see, for example, in the Puget Sound of Washington). Cities like Pittsburgh, Seattle, Philadelphia, and New York are focusing on these projects partly because they are under federal consent decrees to solve their CSO issues. Both up-front penalties and stipulated penalties are utilized by EPA and state agencies to enforce CSO-mitigating initiatives and the efficiency of their schedules. Municipalities' sewage departments, engineering and design firms, and environmental organizations offer different approaches to potential solutions. Sewer separation Some US cities have undertaken sewer separation projects—building a second piping system for all or part of the community. In many of these projects, cities have been able to separate only portions of their combined systems. High costs or physical limitations may preclude building a completely separate system. In 2011, Washington, D.C., separated its sewers in four small neighborhoods at a cost of $11 million. (The project cost also included improvements to the drinking water piping system.) 
CSO storage Another solution is to build a CSO storage facility, such as a tunnel that can store flow from many sewer connections. Because a tunnel can share capacity among several outfalls, it can reduce the total volume of storage that must be provided for a specific number of outfalls. Storage tunnels store combined sewage but do not treat it. When the storm is over, the flows are pumped out of the tunnel and sent to a wastewater treatment plant. One of the main concerns with CSO storage is the length of time it is stored before it is released. Without careful management of this storage period, the water in the CSO storage facility runs the risk of going septic. Washington, D.C., is building underground storage capacity as its primary strategy to address CSOs. In 2011, the city began construction on a system of four deep storage tunnels, adjacent to the Anacostia River, that will reduce overflows to the river by 98 percent, and 96 percent system-wide. The system will comprise over of tunnels with a storage capacity of . The first segment of the tunnel system, in length, went online in 2018. The remaining segments of the storage system are scheduled for completion in 2023. (The city's overall "Clean Rivers" project, projected to cost $2.6 billion, includes other components, such as reducing stormwater flows.) The South Boston CSO Storage Tunnel is a similar project, completed in 2011. Indianapolis, Indiana, is building underground storage capacity in the form of a diameter deep rock tunnel system which will connect the two existing wastewater treatment plants, and provide collection of discharge water from the various CSO sites located along the White River, Eagle Creek, Fall Creek, Pogue's Run, and Pleasant Run. Citizens Energy Group is managing the efforts to construct the first phases of the work, which includes a deep Deep Rock Tunnel Connector between the Belmont Wastewater Treatment Plant and the Southport Wastewater Treatment Plant. Additional tunnels will branch under the existing watercourses located in Indianapolis. The planned cost for the project will total $1.9 billion. Fort Wayne, Indiana, is constructing a , diameter, $180M tunnel under the 3RPORT (Three Rivers Protection and Overflow Reduction Tunnel) to address the myriad CSOs which outfall into the St. Mary's, St. Joseph, and Maumee Rivers. The 3RPORT is approximately below grade, and is anticipated to enter service in 2023. Expanding sewage treatment capacity Some cities have expanded their basic sewage treatment capacity to handle some or all of the CSO volume. In 2002 litigation forced the city of Toledo, Ohio, to double its treatment capacity and build a storage basin in order to eliminate most overflows. The city also agreed to study ways to reduce stormwater flows into the sewer system. (See Reducing stormwater flows.) Retention basins Retention treatment basins or large concrete tanks that store and treat combined sewage are another solution. These underground structures can range in storage and treatment capacity from to of combined sewage. While each facility is unique, a typical facility operation is as follows. Flows from the overloaded sewers are pumped into a basin that is divided into compartments. The first flush compartment captures and stores flows with the highest level of pollutants from the first part of a storm. These pollutants include motor oil, sediment, road salt, and lawn chemicals (pesticides and fertilizers) that are picked up by the stormwater as it runs off roads and lawns. 
The flows from this compartment are stored and sent to the wastewater treatment plant when there is capacity in the interceptor sewer after the storm. The second compartment is a treatment or flow-through compartment. The flows are disinfected by injecting sodium hypochlorite, or bleach, as they enter this compartment. It then takes about 20‑30 minutes for the flows to move to the end of the compartment. During this time, bacteria are killed and large solid materials settle out. At the end of the compartment, any remaining sanitary trash is skimmed off the top and the treated flows are discharged into the river or lake. The City of Detroit, Michigan, utilizes a system of nine CSO retention basins and screening/disinfection facilities that are owned and operated by the Great Lakes Water Authority. These basins are located at original combined sewer outfalls located along the Detroit River and Rouge River within metropolitan Detroit. These facilities are generally designed to contain two inches of stormwater runoff, with the ability to disinfect overflows during extreme wet-weather rainfall events. Screening and disinfection facilities Screening and disinfection facilities treat CSO without ever storing it. Called "flow-through" facilities, they use fine screens to remove solids and sanitary trash from the combined sewage. Flows are injected with sodium hypochlorite for disinfection and mixed as they travel through a series of fine screens to remove debris. The fine screens have openings that range in size from 4 to 6 mm, or a little less than a quarter inch. The flow is sent through the facility at a rate that provides enough time for the sodium hypochlorite to kill bacteria. All of the materials removed by the screens are then sent to the sewage treatment plant through the interceptor sewer. Reducing stormwater flows Communities may implement low impact development techniques to reduce flows of stormwater into the collection system. This includes: constructing new and renovated streets, parking lots and sidewalks with interlocking stones, permeable paving and pervious concrete installing green roofs on buildings installing bioretention systems, also called rain gardens, in landscaped areas installing rainwater harvesting equipment to collect runoff from building roofs during wet weather for irrigating landscapes and gardens during dry weather implementing graywater collection and use on site to reduce sewage discharges at all times Green infrastructure CSO mitigating initiatives that are solely composed of sewer system reconstruction are referred to as gray infrastructure, while techniques like permeable pavement and rainwater harvesting are referred to as green infrastructure. Conflict often occurs between a municipality's sewage authority and its environmentally active organizations between gray and green infrastructural plans. The 2004 EPA Report to Congress on CSO's provides a review of available technologies to mitigate CSO impacts. Real-time decision support systems Recent technological advances in sensing and control have enabled the implementation of real-time decision support systems (RT-DSS) for CSO mitigation. Through the use of internet of things technology and cloud computing, CSO events can now be mitigated by dynamically adjusting setpoints for movable gates, pump stations, and other actuated assets in sewers and storm water management systems. Similar technology, called adaptive traffic control is used to control the flow of vehicles through traffic lights. 
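A heuristic real-time control rule of the kind the preceding paragraph describes can be sketched briefly. The sensor names, thresholds, and gate interface below are hypothetical; real RT-DSS deployments coordinate many assets and often use model-based optimisation instead of fixed rules.

```python
# Sketch of a heuristic real-time control rule for an actuated gate on a
# storage tunnel: when sensors report spare capacity, the gate is throttled
# to hold combined sewage back instead of letting it overflow.
# Sensor names, thresholds, and the gate interface are hypothetical.

def gate_setpoint(tunnel_level_pct: float, interceptor_flow_pct: float) -> float:
    """Return a gate opening between 0.0 (closed) and 1.0 (fully open)."""
    if tunnel_level_pct > 90.0:
        return 1.0            # tunnel nearly full: pass flow downstream
    if interceptor_flow_pct > 85.0:
        return 0.2            # interceptor near capacity: hold flow in storage
    return 1.0                # normal operation: no throttling

# Example: a storm loads the interceptor while the tunnel still has room,
# so the rule holds water back and an overflow is avoided.
print(gate_setpoint(tunnel_level_pct=40.0, interceptor_flow_pct=92.0))  # 0.2
```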
RT-DSS systems take advantage of storm temporal and spatial variability as well as varying concentration times due to diverse land uses across the sewershed to coordinate and optimize control assets. By maximizing storage and conveyance RT-DSS are able to minimize overflows using existing infrastructure. Successful implementations of RT-DSS have been carried out throughout the United States and Europe. Real-time control (RTC) can be either heuristic or model based. Model-based control is theoretically more optimal, but due to the ease of implementation, heuristic control is more commonly applied. Generating sufficient evidence that RTC is a suitable option for CSO mitigation remains problematic, although new performance methods might make this possible. Regulations United Kingdom There is in the UK a legal difference between a storm sewer and a surface water sewer. There is no right of connection to a storm-water overflow sewer under section 106 of the Water Industry Act. These are normally the pipe line that discharges to a watercourse, downstream of a combined sewer overflow. It takes the excess flow from a combined sewer. A surface water sewer conveys rainwater; legally there is a right of connection for rainwater to this public sewer. A public storm water sewer can discharge to a public surface water, but not the other way around, without a legal change in sewer status by the water company. History Combined sewer systems were common when urban sewerage systems were first developed, in the late 19th and early 20th centuries. Society and culture The image of the sewer recurs in European culture as they were often used as hiding places or routes of escape by the scorned or the hunted, including partisans and resistance fighters in World War II. Fighting erupted in the sewers during the Battle of Stalingrad. The only survivors from the Warsaw Uprising and Warsaw Ghetto made their final escape through city sewers. Some have commented that the engravings of imaginary prisons by Piranesi were inspired by the Cloaca Maxima, one of the world's earliest sewers. In fiction The theme of traveling through, hiding, or even residing in combined sewers is a common plot device in media. Famous examples of sewer dwelling are the Teenage Mutant Ninja Turtles, Stephen King's It, Les Misérables, The Third Man, Ladyhawke, Mimic, The Phantom of the Opera, Beauty and the Beast, and Jet Set Radio Future. The Todd Strasser novel Y2K-9: the Dog Who Saved the World is centered on a dog thwarting terroristic threats to electronically sabotage American sewage treatment plants. Sewer alligators A well-known urban legend, the sewer alligator, is that of giant alligators or crocodiles residing in combined sewers, especially of major metropolitan areas. Two public sculptures in New York depict an alligator dragging a hapless victim into a manhole. Alligators have been known to get into combined storm sewers in the southeastern United States. Closed-circuit television by a sewer repair company captured an alligator in a combined storm sewer on tape. See also Fatberg (sewer obstruction) Sanitary sewer overflow Thames Tideway Scheme Storm drain References External links U.S. EPA – Combined Sewer Overflows Environmental engineering Hydraulic engineering Sewerage infrastructure Water pollution
Combined sewer
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
4,635
[ "Hydrology", "Water treatment", "Chemical engineering", "Sewerage infrastructure", "Water pollution", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
3,012,322
https://en.wikipedia.org/wiki/Glucose%20test
Many types of glucose tests exist and they can be used to estimate blood sugar levels at a given time or, over a longer period of time, to obtain average levels or to see how fast the body is able to normalize changed glucose levels. Eating food for example leads to elevated blood sugar levels. In healthy people, these levels quickly return to normal via increased cellular glucose uptake which is primarily mediated by increase in blood insulin levels. Glucose tests can reveal temporary/long-term hyperglycemia or hypoglycemia. These conditions may not have obvious symptoms and can damage organs in the long-term. Abnormally high/low levels, slow return to normal levels from either of these conditions and/or inability to normalize blood sugar levels means that the person being tested probably has some kind of medical condition like type 2 diabetes which is caused by cellular insensitivity to insulin. Glucose tests are thus often used to diagnose such conditions. Testing methods Tests that can be performed at home are used in blood glucose monitoring for illnesses that have already been diagnosed medically so that these illnesses can be maintained via medication and meal timing. Some of the home testing methods include fingerprick type of glucose meter - need to prick self finger 8-12 times a day. continuous glucose monitor - the CGM monitors the glucose levels every 5 minutes approximately. Laboratory tests are often used to diagnose illnesses and such methods include fasting blood sugar (FBS), fasting plasma glucose (FPG): 10–16 hours after eating glucose tolerance test: continuous testing postprandial glucose test (PC): 2 hours after eating random glucose test Some laboratory tests don't measure glucose levels directly from body fluids or tissues but still indicate elevated blood sugar levels. Such tests measure the levels of glycated hemoglobin, other glycated proteins, 1,5-anhydroglucitol etc. from blood. Use in medical diagnosis Glucose testing can be used to diagnose or indicate certain medical conditions. High blood sugar may indicate gestational diabetes. This temporary form of diabetes appears during pregnancy, and with glucose-controlling medication or insulin symptoms can be improved. type 1 and type 2 diabetes or prediabetes. If diagnosed with diabetes, regular glucose tests can help manage or maintain conditions. Type 1, is commonly seen in children or teenagers whose bodies are not producing enough insulin. Type 2 diabetes, is typically seen in adults who are overweight. The insulin in their bodies are either not working normally, or there is not being enough produced. Low blood sugar may indicate insulin overuse starvation underactive thyroid Addison's disease insulinoma kidney disease Preparing for testing Fasting prior to glucose testing may be required with some test types. Fasting blood sugar test, for example, requires 10–16 hour-long period of not eating before the test. Blood sugar levels can be affected by some drugs and prior to some glucose tests these medications should be temporarily given up or their dosages should be decreased. Such drugs may include salicylates (Aspirin), birth control pills, corticosteroids, tricyclic antidepressants, lithium, diuretics and phenytoin. Some foods contain caffeine (coffee, tea, colas, energy drinks etc.). Blood sugar levels of healthy people are generally not significantly changed by caffeine, but in diabetics caffeine intake may elevate these levels via its ability to stimulate the adrenergic nervous system. 
Reference ranges Fasting blood sugar A level below 5.6 mmol/L (100 mg/dL) 10–16 hours without eating is normal. 5.6–6 mmol/L (100–109 mg/dL) may indicate prediabetes and oral glucose tolerance test (OGTT) should be offered to high-risk individuals (old people, those with high blood pressure etc.). 6.1–6.9 mmol/L (110–125 mg/dL) means OGTT should be offered even if other indicators of diabetes are not present. 7 mmol/L (126 mg/dL) and above indicates diabetes and the fasting test should be repeated. Glucose tolerance test Postprandial glucose test Random glucose test See also Hyperglycemia Hypoglycemia References Blood tests Diagnostic endocrinology
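The fasting blood sugar cut-offs listed above translate directly into a small classification routine. The mmol/L thresholds follow the article; the 1 mmol/L = 18 mg/dL conversion is the standard factor for glucose and is not taken from this article.

```python
# Classification of a fasting blood glucose result using the cut-offs listed
# above (mmol/L). The 1 mmol/L = 18 mg/dL conversion is the standard factor
# for glucose and is not taken from this article.

def classify_fasting_glucose(mmol_per_l: float) -> str:
    if mmol_per_l < 5.6:
        return "normal"
    if mmol_per_l <= 6.0:
        return "possible prediabetes; offer OGTT to high-risk individuals"
    if mmol_per_l <= 6.9:
        return "offer OGTT even without other indicators of diabetes"
    return "indicates diabetes; repeat the fasting test"

def mg_dl_to_mmol_l(mg_per_dl: float) -> float:
    return mg_per_dl / 18.0

print(classify_fasting_glucose(mg_dl_to_mmol_l(105)))  # ~5.8 mmol/L, prediabetes range
print(classify_fasting_glucose(7.2))                   # diabetes range
```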
Glucose test
[ "Chemistry" ]
899
[ "Blood tests", "Chemical pathology" ]
3,012,448
https://en.wikipedia.org/wiki/Nitrogen%20balance
In human physiology, nitrogen balance is the net difference between bodily nitrogen intake (ingestion) and loss (excretion). It can be represented as the following: Nitrogen is a fundamental chemical component of amino acids, the molecular building blocks of protein. As such, nitrogen balance may be used as an index of protein metabolism. When more nitrogen is gained than lost by an individual, they are considered to have a positive nitrogen balance and be in a state of overall protein anabolism. In contrast, a negative nitrogen balance, in which more nitrogen is lost than gained, indicates a state of overall protein catabolism. The body obtains nitrogen from dietary protein, sources of which include meat, fish, eggs, dairy products, nuts, legumes, cereals, and grains. Nitrogen loss occurs largely through urine in the form of urea, as well as through faeces, sweat, and growth of hair and skin. Blood urea nitrogen and urine urea nitrogen tests can be used to estimate nitrogen balance. Physiological and Clinical Implications Positive nitrogen balance is associated with periods of growth, hypothyroidism, tissue repair, and pregnancy. Negative nitrogen balance is associated with burns, serious tissue injuries, fever, hyperthyroidism, wasting diseases, and periods of fasting. A negative nitrogen balance can be used as part of a clinical evaluation of malnutrition. Nitrogen balance is a method traditionally used to measure dietary protein requirements. This approach necessitates the meticulous collection of all nitrogen inputs and outputs to ensure comprehensive accounting of nitrogen exchanges. Nitrogen balance studies typically involve controlled dietary conditions, requiring participants to consume specific diets to determine total nitrogen intake precisely. Furthermore, participants often must remain at the study location for the duration of the study to facilitate the collection of all nitrogen losses. Physical exercise is also known to influence nitrogen excretion, adding another variable that requires control during these studies. Due to the stringent conditions required for accurate results, the nitrogen balance method may pose challenges when studying dietary protein requirements across different demographics, such as children. See also Protein (nutrient) Biological value Net protein utilization Protein efficiency ratio Protein digestibility Protein Digestibility Corrected Amino Acid Score References External links (with clinical information & interpretation related to nitrogen balance and its clinical testing) Nitrogen Proteins
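The equation referenced by "It can be represented as the following:" did not survive extraction. The basic relation is simply intake minus loss; the clinical estimate sketched below uses the conventional 6.25 g protein per g nitrogen factor and an assumed allowance of roughly 4 g/day for faecal and insensible losses, both common practice rather than figures from this article.

```python
# The basic relation: nitrogen balance = nitrogen intake - nitrogen loss.
# The clinical estimate uses the conventional 6.25 g protein per g nitrogen
# factor and an assumed ~4 g/day allowance for faecal/insensible losses
# (common practice, not taken from this article).

def nitrogen_balance(n_intake_g: float, n_loss_g: float) -> float:
    """Nitrogen balance (g/day)."""
    return n_intake_g - n_loss_g

def estimated_balance(protein_intake_g: float, urine_urea_nitrogen_g: float,
                      other_losses_g: float = 4.0) -> float:
    n_in = protein_intake_g / 6.25          # protein is roughly 16% nitrogen
    n_out = urine_urea_nitrogen_g + other_losses_g
    return nitrogen_balance(n_in, n_out)

print(estimated_balance(protein_intake_g=90, urine_urea_nitrogen_g=9))   # ~ +1.4 g/day
print(estimated_balance(protein_intake_g=40, urine_urea_nitrogen_g=12))  # ~ -9.6 g/day, catabolic
```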
Nitrogen balance
[ "Chemistry" ]
467
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
3,012,612
https://en.wikipedia.org/wiki/Delay-locked%20loop
In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line. A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit. The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal. Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block. A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements. The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required. A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself. In the Control Systems jargon, the DLL is a loop one step lower in order and in type with respect to the PLL, because it lacks the 1/s factor in the controlled block: the delay line has a transfer function phase-out/phase-in that is just a constant, the VCO transfer function is instead GVCO/s. In the comparison made in the previous sentences (that correspond to the figure where the integrator, and not the flat gain, is used), the DLL is a loop of 1st order and type 1 and the PLL of 2nd order and type 2. Without the integration of the error signal, the DLL would be 0th order and type 0, and the PLL 1st order and type 1. The number of elements in the delay chain must be even, or else the duty cycle of the clock at the intermediate nodes of the chain might become irregular. If 2N +1 was the -odd- number of stages, a 50% duty-cycle would become at times N/(2N+1), at times (N+1)/(2N+1), following the jittering of the error signal around the value corresponding to perfect lock. Calling 2N the number of stages of the DLL chain, it is easy to see that the figure above would change from a DLL to a PLL, locked to the same phase and frequency, if the following modifications were made: dividing by two the number of stages making one of the stages an inverting one connecting the input of the chain of stages to its output instead of to the reference clock. The resulting chain becomes a ring oscillator with a period equal to the delay of the previous chain, and the loop locks to the same reference clock with the same level of error signal. 
The loop order and type are both incremented by one. It may be further remarked that, in the case where the integrator instead of the flat gain is chosen, the PLL that can be obtained is unstable. The phase shift can be specified either in absolute terms (in delay chain gate units), or as a proportion of the clock period, or both. See also Phase-locked loop (PLL) Digital Clock Manager (DCM) Clock signal References The Delay Lock Loop has been derived by J.J. Spilker, JR. and D.T. Magill, "The delay-lock discriminator--an optimum tracking device," Proc. IRE, vol.49, pp. 1403–1416, September 1961. Electronic oscillators Gate arrays Integrated circuits Digital electronics Electronic design
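The first-order, type-1 behaviour described above can be illustrated with a toy discrete-time model: the phase detector's error is integrated into a control value that sets the delay line, driving the phase error to zero while the control settles at the value needed for lock. The gain and target phase below are hypothetical.

```python
# Toy discrete-time model of the first-order, type-1 loop described above.
# The phase error is integrated into a control value that sets the delay line;
# the delay line's phase is proportional to the control (no 1/s term), so a
# single integration is enough to drive the error to zero.

def simulate_dll(target_phase: float, gain: float = 0.2, steps: int = 40):
    control = 0.0          # integrated error -> delay-line setting
    delay_phase = 0.0      # phase shift produced by the delay line
    for _ in range(steps):
        error = target_phase - delay_phase   # phase detector
        control += gain * error              # integrator (loop filter)
        delay_phase = control                # delay line: phase proportional to control
    return delay_phase, target_phase - delay_phase

phase, residual_error = simulate_dll(target_phase=0.25)
print(round(phase, 4), round(residual_error, 6))   # converges toward 0.25 with near-zero error
```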
Delay-locked loop
[ "Technology", "Engineering" ]
985
[ "Computer engineering", "Digital electronics", "Electronic design", "Gate arrays", "Electronic engineering", "Design", "Integrated circuits" ]
3,012,919
https://en.wikipedia.org/wiki/Q%20meter
A Q meter is a piece of equipment used in the testing of radio frequency circuits. It has been largely replaced in professional laboratories by other types of impedance measuring devices, though it is still in use among radio amateurs. It was developed at Boonton Radio Corporation in Boonton, New Jersey in 1934 by William D. Loughlin. Description A Q meter measures the quality factor of a circuit, Q, which expresses how much energy is dissipated per cycle in a non-ideal reactive circuit: This expression applies to an RF and microwave filter, bandpass LC filter, or any resonator. It also can be applied to an inductor or capacitor at a chosen frequency. For inductors Where is the reactance of the inductor, is the inductance, is the angular frequency and is the resistance of the inductor. The resistance represents the loss in the inductor, mainly due to the resistance of the wire. A Q meter works on the principle of series resonance. For LC band pass circuits and filters: Where is the resonant frequency (center frequency) and is the filter bandwidth. In a band pass filter using an LC resonant circuit, when the loss (resistance) of the inductor increases, its Q factor is reduced, and so the bandwidth of the filter is increased. In a coaxial cavity filter, there are no inductors and capacitors, but the cavity has an equivalent LC model with losses (resistance) and the Q factor can be applied as well. Operation Internally, a minimal Q meter consists of a tuneable RF generator with a very low (pass) impedance output and a detector with a very high impedance input. There is usually provision to add a calibrated amount of high Q capacitance across the component under test to allow inductors to be measured in isolation. The generator is effectively placed in series with the tuned circuit formed by the components under test, and having negligible output resistance, does not materially affect the Q factor, while the detector measures the voltage developed across one element (usually the capacitor) and being high impedance in shunt does not affect the Q factor significantly either. The ratio of the developed RF voltage to the applied RF current, coupled with knowledge of the reactive impedance from the resonant frequency, and the source impedance, allows the Q factor to be directly read by scaling the detected voltage. See also LCR meter ESR meter References Further reading "An experimental 'Q' meter" — article by Lloyd Butler (originally published in Amateur Radio, November 1988; revised April 2004) Electronic test equipment Radio electronics Measuring instruments
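The two relations given above, Q = ωL/R for an inductor at a chosen frequency and Q = f0/BW for a resonant band-pass response, can be checked numerically. The component and filter values below are hypothetical examples.

```python
# Numeric illustration of the two relations given above:
#   Q = omega * L / R   for an inductor at a chosen frequency
#   Q = f0 / BW         for a band-pass (resonant) response
# Component values are hypothetical.
import math

def inductor_q(frequency_hz: float, inductance_h: float, resistance_ohm: float) -> float:
    return 2 * math.pi * frequency_hz * inductance_h / resistance_ohm

def bandpass_q(center_freq_hz: float, bandwidth_hz: float) -> float:
    return center_freq_hz / bandwidth_hz

# A 10 uH coil with 0.5 ohm of series loss, measured at 7 MHz:
print(round(inductor_q(7e6, 10e-6, 0.5)))      # ~ 880
# A filter centred on 10.7 MHz with a 3 dB bandwidth of 150 kHz:
print(round(bandpass_q(10.7e6, 150e3), 1))     # ~ 71.3
```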
Q meter
[ "Technology", "Engineering" ]
546
[ "Radio electronics", "Electronic test equipment", "Measuring instruments" ]
3,012,929
https://en.wikipedia.org/wiki/C16H13ClN2O
The molecular formula C16H13ClN2O (molar mass: 284.74 g/mol, exact mass: 284.0716 u) may refer to: Diazepam Mazindol Molecular formulas
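A quick worked check of the quoted molar mass, using standard atomic weights (assumed, not taken from this page):

```python
# Worked check of the molar mass quoted above for C16H13ClN2O,
# using standard atomic weights (assumed, not taken from this page).
atomic_weight = {"C": 12.011, "H": 1.008, "Cl": 35.453, "N": 14.007, "O": 15.999}
formula = {"C": 16, "H": 13, "Cl": 1, "N": 2, "O": 1}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(molar_mass, 1))   # ~284.7, consistent with the quoted 284.74 g/mol
```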
C16H13ClN2O
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
3,012,938
https://en.wikipedia.org/wiki/Big%20Numbers%20%28comics%29
Big Numbers is an unfinished graphic novel by writer Alan Moore and artist Bill Sienkiewicz. In 1990 Moore's short-lived imprint Mad Love published two of the planned twelve issues. The series was picked up by Kevin Eastman's Tundra Publishing, but the completed third issue did not print, and the remaining issues, whose artwork was to be handled by Sienkiewicz's assistant Al Columbia, were never finished. The work marks a move, on Moore's part, away from genre fiction, in the wake of the success of Watchmen. Moore weaves mathematics (in particular the work of mathematician Benoit Mandelbrot on fractal geometry and chaos theory) into a narrative of socioeconomic changes wrought by an American corporation's building of a shopping mall in a small, traditional English town, and the effects of the economic policies of the Margaret Thatcher administration in the 1980s. Publication history The planned 500-page graphic novel was to be serialised one chapter at a time over twelve issues. The series was printed on high-quality paper in an unusual square format. The first two issues were produced by Alan Moore's self-publishing company Mad Love, with writing by Moore and artwork by Bill Sienkiewicz, but the workload for the comic was intense, and Sienkiewicz stalled. By the time he backed out of the series, the third issue was still incomplete and rising overhead crippled the production. Kevin Eastman, creator of Teenage Mutant Ninja Turtles, stepped in and attempted to have his company Tundra Publishing publish Big Numbers. Moore and Eastman asked Sienkiewicz' assistant, Al Columbia, to become the series' sole artist and Roxanne Starr to be its letterer. Columbia worked on the fourth issue but, for reasons which remain unclear, destroyed his own artwork and abandoned the project as well. Big Numbers #3 and #4 were never published, and the series remains unfinished. In 1999, ten pages of Sienkiewicz's art for Big Numbers #3 were published in the first (and only) issue of the magazine Submedia. In 2009, a photocopy of the complete lettered art for Big Numbers #3 surfaced on eBay. The purchaser contacted Moore, and with his permission published scans of the art on LiveJournal. History Moore announced the series as his popularity was at a peak. The success of Watchmen had made him a star writer. Moore wanted to move away from genre fiction; Big Numbers was to have no genre, and deal with themes of shopping and mathematics. Mad Love ran into a number of difficulties: the proceeds from AARGH! were donated to defending homosexual rights; the production costs of Big Numbers were high; and Moore's polyamorous relationship with wife Phyllis and their lover Debbie Delano fell apart. Kevin Eastman's Tundra Publishing agreed to publish the rest of the series. Sienkiewicz's detailed artwork was time-consuming to produce. He hired the 19-year-old Al Columbia as an assistant, but the pressures of the project combined with personal issues led him to quit Big Numbers after the second issue's publication. He drew the entire third issue which never saw print. Sienkiewicz drew every page and figures for all three issues and also did several painted/multimedia covers for upcoming issues that never saw print. Jon J Muth and Dave McKean were among the names rumoured as replacements. Ultimately the job fell to the inexperienced Columbia. Tundra tried to promote Columbia by publishing his first stand-alone comic book, Doghead, in 1992, and put out a Columbia-drawn poster for Big Numbers. 
The pressure turned out to be too much for the young artist, who is said to have destroyed the artwork for the fourth issue in 1992, and was not heard from again until the publication of The Biologic Show in 1994. Plot Set in the fictional English town of Hampton, the book explores the socioeconomic changes brought about by globalisation on an insular community, represented by the building of a shopping mall by a large American corporation. Meanwhile, the community also experiences pressure from prime minister Margaret Thatcher's economic policies, including cuts to health care and welfare. Style and analysis Each page is laid out in a rigid twelve-panel grid. The all-white speech balloons are circular, rather than the more common shape. Adaptations In a 2001 interview Moore indicated that he did not believe Big Numbers could ever be completed as comics. However, he spoke of the possibility of the comic being adapted as a television series by Picture Palace Productions, as he had the whole story mapped out on a sheet of A1 paper, and five episodes written. An account of the unravelling of the Big Numbers project is included in Eddie Campbell's 2001 graphic novel Alec: How to Be an Artist. Reception The first issue sold 65,000 copies, the second 40,000. References Works cited External links Big Numbers #3 1990 comics debuts Comics by Alan Moore Mathematics fiction books Unfinished comics
Big Numbers (comics)
[ "Mathematics" ]
1,007
[ "Recreational mathematics", "Mathematics fiction books" ]
3,012,976
https://en.wikipedia.org/wiki/Postface
A postface is the opposite of a preface: a brief article or explanatory text placed at the end of a book, written as a supplement or conclusion, usually to give a comment, an explanation, or a warning. It can be written by the author of the book or by another person. Postfaces are often used so that information not essential to the main work appears at the end rather than interrupting the reader; the postface is separated from the main body of the book and placed with the appendices. It presents information that is not essential to the book as a whole but is considered relevant. See also Afterword References Book design Book terminology
Postface
[ "Engineering" ]
165
[ "Book design", "Design" ]
3,013,012
https://en.wikipedia.org/wiki/2%20Lacertae
2 Lacertae is a binary star in the constellation of Lacerta. With an apparent magnitude of about 4.5, it is faintly visible to the naked eye. Its parallax, measured by the Hipparcos spacecraft, is 5.88 milliarcseconds, corresponding to a distance of about 550 light years (170 parsecs). It is projected against the Lacertae OB1 stellar association to the northeast of the main concentration of stars, but it is likely to be a foreground object. 2 Lacertae is a double-lined spectroscopic binary. Its components are too close to be resolved, however periodic Doppler shifts in its spectrum reveal that there are two stars orbiting each other. Both stars are B-type main-sequence stars, orbiting each other every 2.616 days and with an eccentricity of about 0.04. The primary is estimated to be about one magnitude brighter than the secondary. The primary component is close to moving off the main sequence, and has nearly exhausted its core hydrogen (possibly also its companion). It is estimated to have completed over 90% of its time on the main sequence. 2 Lacertae is a rotating ellipsoidal variable, a binary system in which the stars are close enough to each other for one or both stars to be significantly distorted by tidal forces. The stars' orbital plane is not aligned closely enough to our line of sight for the stars to eclipse each other, but the stars' orbital motion does cause us to view different portions of the non-spherical stars' surfaces, leading to brightness changes. 2 Lacertae varies by about 0.03 magnitudes as the stars orbit each other. References Lacerta Rotating ellipsoidal variables B-type main-sequence stars Lacertae, 02 Suspected variables 8523 212120 110351 Durchmusterung objects Spectroscopic binaries B-type subgiants
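The distance quoted above follows directly from the Hipparcos parallax via d[pc] = 1/parallax[arcsec]; the 3.2616 light-years-per-parsec factor is a standard constant and is not taken from the article.

```python
# Worked check of the distance quoted above from the Hipparcos parallax:
# d [pc] = 1 / parallax [arcsec]; 1 pc = 3.2616 ly is a standard constant.
parallax_mas = 5.88
distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * 3.2616

print(round(distance_pc), round(distance_ly))   # ~170 pc, ~555 ly ("about 550 light years")
```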
2 Lacertae
[ "Astronomy" ]
395
[ "Lacerta", "Constellations" ]
3,013,208
https://en.wikipedia.org/wiki/Barium%20chloride%20%28data%20page%29
This page provides supplementary chemical data on barium chloride. Material Safety Data Sheet SIRI Science Stuff (Dihydrate) Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Barium chloride (data page)
[ "Chemistry" ]
46
[ "Chemical data pages", "nan" ]
3,013,387
https://en.wikipedia.org/wiki/Barium%20hydroxide%20%28data%20page%29
This page provides supplementary chemical data on barium hydroxide. Material Safety Data Sheet SIRI Science Stuff Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Barium hydroxide (data page)
[ "Chemistry" ]
41
[ "Chemical data pages", "nan" ]
3,013,390
https://en.wikipedia.org/wiki/Folk%20theorem%20%28game%20theory%29
In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games . The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria. The Folk Theorem suggests that if the players are patient enough and far-sighted (i.e. if the discount factor ), then repeated interaction can result in virtually any average payoff in an SPE equilibrium. "Virtually any" is here technically defined as "feasible" and "individually rational". Setup and definitions We start with a basic game, also known as the stage game, which is an n-player game. In this game, each player has finitely many actions to choose from, and they make their choices simultaneously and without knowledge of the other player's choices. The collective choices of the players leads to a payoff profile, i.e. to a payoff for each of the players. The mapping from collective choices to payoff profiles is known to the players, and each player aims to maximize their payoff. If the collective choice is denoted by x, the payoff that player i receives, also known as player i's utility, will be denoted by . We then consider a repetition of this stage game, finitely or infinitely many times. In each repetition, each player chooses one of their stage game options, and when making that choice, they may take into account the choices of the other players in the prior iterations. In this repeated game, a strategy for one of the players is a deterministic rule that specifies the player's choice in each iteration of the stage game, based on all other player's choices in the prior iterations. A choice of strategy for each of the players is a strategy profile, and it leads to a payout profile for the repeated game. There are a number of different ways such a strategy profile can be translated into a payout profile, outlined below. Any Nash equilibrium payoff profile of a repeated game must satisfy two properties: Individual rationality: the payoff must weakly dominate the minmax payoff profile of the constituent stage game. That is, the equilibrium payoff of each player must be at least as large as the minmax payoff of that player. This is because a player achieving less than their minmax payoff always has incentive to deviate by simply playing their minmax strategy at every history. Feasibility: the payoff must be a convex combination of possible payoff profiles of the stage game. This is because the payoff in a repeated game is just a weighted average of payoffs in the basic games. Folk theorems are partially converse claims: they say that, under certain conditions (which are different in each folk theorem), every payoff profile that is both individually rational and feasible can be realized as a Nash equilibrium payoff profile of the repeated game. There are various folk theorems; some relate to finitely-repeated games while others relate to infinitely-repeated games. Infinitely-repeated games without discounting In the undiscounted model, the players are patient. They do not differentiate between utilities in different time periods. 
Hence, their utility in the repeated game is represented by the sum of utilities in the basic games. When the game is infinite, a common model for the utility in the infinitely-repeated game is the limit inferior of mean utility: If the game results in a path of outcomes , where denotes the collective choices of the players at iteration t (t=0,1,2,...), player i utility is defined as where is the basic-game utility function of player i. An infinitely-repeated game without discounting is often called a "supergame". The folk theorem in this case is very simple and contains no pre-conditions: every individually rational and feasible payoff profile in the basic game is a Nash equilibrium payoff profile in the repeated game. The proof employs what is called a grim or grim trigger strategy. All players start by playing the prescribed action and continue to do so until someone deviates. If player i deviates, all other players switch to picking the action which minmaxes player i forever after. The one-stage gain from deviation contributes 0 to the total utility of player i. The utility of a deviating player cannot be higher than his minmax payoff. Hence all players stay on the intended path and this is indeed a Nash equilibrium. Subgame perfection The above Nash equilibrium is not always subgame perfect. If punishment is costly for the punishers, the threat of punishment is not credible. A subgame perfect equilibrium requires a slightly more complicated strategy. The punishment should not last forever; it should last only a finite time which is sufficient to wipe out the gains from deviation. After that, the other players should return to the equilibrium path. The limit-of-means criterion ensures that any finite-time punishment has no effect on the final outcome. Hence, limited-time punishment is a subgame-perfect equilibrium. Coalition subgame-perfect equilibria: An equilibrium is called a coalition Nash equilibrium if no coalition can gain from deviating. It is called a coalition subgame-perfect equilibrium if no coalition can gain from deviating after any history. With the limit-of-means criterion, a payoff profile is attainable in coalition-Nash-equilibrium or in coalition-subgame-perfect-equilibrium, if-and-only-if it is Pareto efficient and weakly-coalition-individually-rational. Overtaking Some authors claim that the limit-of-means criterion is unrealistic, because it implies that utilities in any finite time-span contribute 0 to the total utility. However, if the utilities in any finite time-span contribute a positive value, and the value is undiscounted, then it is impossible to attribute a finite numeric utility to an infinite outcome sequence. A possible solution to this problem is that, instead of defining a numeric utility for each infinite outcome sequence, we just define the preference relation between two infinite sequences. We say that agent (strictly) prefers the sequence of outcomes over the sequence , if: For example, consider the sequences and . According to the limit-of-means criterion, they provide the same utility to player i, but according to the overtaking criterion, is better than for player i. See overtaking criterion for more information. The folk theorems with the overtaking criterion are slightly weaker than with the limit-of-means criterion. Only outcomes that are strictly individually rational, can be attained in Nash equilibrium. 
This is because, if an agent deviates, he gains in the short run, and this gain can be wiped out only if the punishment gives the deviator strictly less utility than the agreement path. The following folk theorems are known for the overtaking criterion: Strict stationary equilibria: A Nash equilibrium is called strict if each player strictly prefers the infinite sequence of outcomes attained in equilibrium, over any other sequence he can deviate to. A Nash equilibrium is called stationary if the outcome is the same in each time-period. An outcome is attainable in strict-stationary-equilibrium if-and-only-if for every player the outcome is strictly better than the player's minimax outcome. Strict stationary subgame-perfect equilibria: An outcome is attainable in strict-stationary-subgame-perfect-equilibrium, if for every player the outcome is strictly better than the player's minimax outcome (note that this is not an "if-and-only-if" result). To achieve subgame-perfect equilibrium with the overtaking criterion, it is required to punish not only the player that deviates from the agreement path, but also every player that does not cooperate in punishing the deviant. The "stationary equilibrium" concept can be generalized to a "periodic equilibrium", in which a finite number of outcomes is repeated periodically, and the payoff in a period is the arithmetic mean of the payoffs in the outcomes. That mean payoff should be strictly above the minimax payoff. Strict stationary coalition equilibria: With the overtaking criterion, if an outcome is attainable in coalition-Nash-equilibrium, then it is Pareto efficient and weakly-coalition-individually-rational. On the other hand, if it is Pareto efficient and strongly-coalition-individually-rational it can be attained in strict-stationary-coalition-equilibrium. Infinitely-repeated games with discounting Assume that the payoff of a player in an infinitely repeated game is given by the average discounted criterion with discount factor 0 < δ < 1: The discount factor indicates how patient the players are. The factor is introduced so that the payoff remain bounded when . The folk theorem in this case requires that the payoff profile in the repeated game strictly dominates the minmax payoff profile (i.e., each player receives strictly more than the minmax payoff). Let a be a strategy profile of the stage game with payoff profile u which strictly dominates the minmax payoff profile. One can define a Nash equilibrium of the game with u as resulting payoff profile as follows: 1. All players start by playing a and continue to play a if no deviation occurs. 2. If any one player, say player i, deviated, play the strategy profile m which minmaxes i forever after. 3. Ignore multilateral deviations. If player i gets ε more than his minmax payoff each stage by following 1, then the potential loss from punishment is If δ is close to 1, this outweighs any finite one-stage gain, making the strategy a Nash equilibrium. An alternative statement of this folk theorem allows the equilibrium payoff profile u to be any individually rational feasible payoff profile; it only requires there exist an individually rational feasible payoff profile that strictly dominates the minmax payoff profile. Then, the folk theorem guarantees that it is possible to approach u in equilibrium to any desired precision (for every ε there exists a Nash equilibrium where the payoff profile is a distance ε away from u). 
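The role of the discount factor in the construction above can be illustrated numerically. The sketch below uses a standard Prisoner's Dilemma as a stand-in stage game (the payoff values 3, 5 and 1 are illustrative assumptions, not taken from the text) and checks whether a one-shot deviation from the cooperative path, punished by minmaxing forever after, is profitable under the average discounted criterion:

# Hypothetical Prisoner's Dilemma stage payoffs for the row player (illustrative
# assumptions): mutual cooperation = 3, defecting against a cooperator = 5,
# mutual defection (which is also the minmax payoff here) = 1.
REWARD, TEMPTATION, MINMAX = 3.0, 5.0, 1.0

def discounted_average(payoffs, delta):
    """(1 - delta) * sum_t delta^t * u_t over a prefix long enough that the tail is negligible."""
    return (1.0 - delta) * sum((delta ** t) * u for t, u in enumerate(payoffs))

def deviation_profitable(delta, horizon=10_000):
    cooperate_path = [REWARD] * horizon
    # Deviate once, then be minmaxed forever after (the punishment described in the text).
    deviate_path = [TEMPTATION] + [MINMAX] * (horizon - 1)
    return discounted_average(deviate_path, delta) > discounted_average(cooperate_path, delta)

for delta in (0.3, 0.5, 0.7, 0.9):
    print(delta, "deviation profitable?", deviation_profitable(delta))
# For this particular stage game the deviation stops paying once delta >= 0.5,
# illustrating the "delta close enough to 1" condition of the folk theorem.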
Subgame perfection Attaining a subgame perfect equilibrium in discounted games is more difficult than in undiscounted games. The cost of punishment does not vanish (as with the limit-of-means criterion). It is not always possible to punish the non-punishers endlessly (as with the overtaking criterion) since the discount factor makes punishments far away in the future irrelevant for the present. Hence, a different approach is needed: the punishers should be rewarded. This requires an additional assumption, that the set of feasible payoff profiles is full dimensional and the min-max profile lies in its interior. The strategy is as follows. 1. All players start by playing a and continue to play a if no deviation occurs. 2. If any one player, say player i, deviated, play the strategy profile m which minmaxes i for N periods. (Choose N and δ large enough so that no player has incentive to deviate from phase 1.) 3. If no players deviated from phase 2, all player j ≠ i gets rewarded ε above j min-max forever after, while player i continues receiving his min-max. (Full-dimensionality and the interior assumption is needed here.) 4. If player j deviated from phase 2, all players restart phase 2 with j as target. 5. Ignore multilateral deviations. Player j ≠ i now has no incentive to deviate from the punishment phase 2. This proves the subgame perfect folk theorem. Finitely-repeated games without discount Assume that the payoff of player i in a game that is repeated T times is given by a simple arithmetic mean: A folk theorem for this case has the following additional requirement: In the basic game, for every player i, there is a Nash-equilibrium that is strictly better, for i, than his minmax payoff. This requirement is stronger than the requirement for discounted infinite games, which is in turn stronger than the requirement for undiscounted infinite games. This requirement is needed because of the last step. In the last step, the only stable outcome is a Nash-equilibrium in the basic game. Suppose a player i gains nothing from the Nash equilibrium (since it gives him only his minmax payoff). Then, there is no way to punish that player. On the other hand, if for every player there is a basic equilibrium which is strictly better than minmax, a repeated-game equilibrium can be constructed in two phases: In the first phase, the players alternate strategies in the required frequencies to approximate the desired payoff profile. In the last phase, the players play the preferred equilibrium of each of the players in turn. In the last phase, no player deviates since the actions are already a basic-game equilibrium. If an agent deviates in the first phase, he can be punished by minmaxing him in the last phase. If the game is sufficiently long, the effect of the last phase is negligible, so the equilibrium payoff approaches the desired profile. Applications Folk theorems can be applied to a diverse number of fields. For example: Anthropology: in a community where all behavior is well known, and where members of the community know that they will continue to have to deal with each other, then any pattern of behavior (traditions, taboos, etc.) may be sustained by social norms so long as the individuals of the community are better off remaining in the community than they would be leaving the community (the minimax condition). International politics: agreements between countries cannot be effectively enforced. 
They are kept, however, because relations between countries are long-term and countries can use "minimax strategies" against each other. This possibility often depends on the discount factor of the relevant countries. If a country is very impatient (pays little attention to future outcomes), then it may be difficult to punish it (or punish it in a credible way). On the other hand, MIT economist Franklin Fisher has noted that the folk theorem is not a positive theory. In considering, for instance, oligopoly behavior, the folk theorem does not tell the economist what firms will do, but rather that cost and demand functions are not sufficient for a general theory of oligopoly, and the economists must include the context within which oligopolies operate in their theory. In 2007, Borgs et al. proved that, despite the folk theorem, in the general case computing the Nash equilibria for repeated games is not easier than computing the Nash equilibria for one-shot finite games, a problem which lies in the PPAD complexity class. The practical consequence of this is that no efficient (polynomial-time) algorithm is known that computes the strategies required by folk theorems in the general case. Summary of folk theorems The following table compares various folk theorems in several aspects: Horizon – whether the stage game is repeated finitely or infinitely many times. Utilities – how the utility of a player in the repeated game is determined from the player's utilities in the stage game iterations. Conditions on G (the stage game) – whether there are any technical conditions that should hold in the one-shot game in order for the theorem to work. Conditions on x (the target payoff vector of the repeated game) – whether the theorem works for any individually rational and feasible payoff vector, or only on a subset of these vectors. Equilibrium type – if all conditions are met, what kind of equilibrium is guaranteed by the theorem – Nash or Subgame-perfect? Punishment type – what kind of punishment strategy is used to deter players from deviating? Folk theorems in other settings In allusion to the folk theorems for repeated games, some authors have used the term "folk theorem" to refer to results on the set of possible equilibria or equilibrium payoffs in other settings, especially if the results are similar in what equilibrium payoffs they allow. For instance, Tennenholtz proves a "folk theorem" for program equilibrium. Many other folk theorems have been proved in settings with commitment. Notes References A set of introductory notes to the Folk Theorem. Game theory equilibrium concepts Theorems
Folk theorem (game theory)
[ "Mathematics" ]
3,413
[ "Game theory", "Game theory equilibrium concepts" ]
3,013,420
https://en.wikipedia.org/wiki/Barium%20oxide%20%28data%20page%29
This page provides supplementary chemical data on barium oxide. Material Safety Data Sheet SDS from Millipore Sigma Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Barium oxide (data page)
[ "Chemistry" ]
43
[ "Chemical data pages", "nan" ]
3,013,586
https://en.wikipedia.org/wiki/Helge%20Tverberg
Helge Arnulf Tverberg (March 6, 1935December 28, 2020) was a Norwegian mathematician. He was a professor in the Mathematics Department at the University of Bergen, his speciality being combinatorics; he retired at the mandatory age of seventy. He was born in Bergen. He took the cand.real. degree at the University of Bergen in 1958, and the dr.philos. degree in 1968. He was a lecturer from 1958 to 1971 and professor from 1971 to his retirement in 2005. He was a visiting scholar at the University of Reading in 1966 and at the Australian National University, in Canberra, from 1980 to 1981, 1987 to 1988 and in 2004. He was a member of the Norwegian Academy of Science and Letters. Tverberg, in 1965, proved a result on intersection patterns of partitions of point configurations that has come to be known as Tverberg's partition theorem. It inaugurated a new branch of combinatorial geometry, with many variations and applications. An account by Günter M. Ziegler of Tverberg's work in this direction appeared in the issue of the Notices of the American Mathematical Society for April, 2011. See also Geometric separator References 1935 births 2020 deaths 20th-century Norwegian mathematicians Combinatorialists Academic staff of the University of Bergen University of Bergen alumni Members of the Norwegian Academy of Science and Letters Scientists from Bergen
Helge Tverberg
[ "Mathematics" ]
287
[ "Combinatorialists", "Combinatorics" ]
3,013,764
https://en.wikipedia.org/wiki/Anopla
Anopla (for changes in taxonomy, see reference from 2019) has long been used as name for a class of marine worms of the phylum Nemertea, characterized by the absence of stylets on the proboscis, the mouth being below or behind the brain, and by having separate openings for the mouth and proboscis. The other long used class of Nemertea are the Enopla (for changes in taxonomy, see reference from 2019). Although Anopla is a paraphyletic grouping, it is used in almost all scientific classifications. Anopla is divided into two orders: Palaeonemertea and Heteronemertea. Palaeonemertea may be para- or polyphyletic, consisting of 3-5 groupings and totalling about 100 species. These worms have several apparently simple features and, as their name suggests, they are often considered to be the most primitive nemerteans. The primary body-wall musculature consists of an outer circular stratum overlying a longitudinal stratum. The group includes genera such as Cephalothrix in which the nerve cords are inside the body-wall longitudinal muscle, and Tubulanus, in which the nerve cords are between the outer circular muscle and the epidermis. Tubulanids are commonly encountered in rocky areas of intertidal zones in the northern hemisphere. They are often bright orange or have very distinctive banding and or stripes and can be many meters long, although only a few mm thick. Heteronemertea is a monophyletic grouping of about 500 species, containing genera such as Lineus and Cerebratulus and including the largest and most muscular nemerteans. Almost all heteronemerteans have three primary body-wall muscle strata, an outer longitudinal, middle circular, and inner longitudinal. The lateral nerve cords are outside the circular muscle, as in palaeonemerteans, but separated from the epidermis by the usually well-developed outer longitudinal muscle. A third subclass of Anopla called the Archinemertea was determined to be paraphyletic and is no longer used by most authors. A trace fossil genus called Archisymplectes from the Pennsylvanian found in central Illinois was formerly placed in this subclass, but is now considered a Palaeonemertea, if indeed it is an Anopla. References Moore, Janet (2001) An introduction to the invertebrates (Studies in Biology) Cambridge University Press, Cambridge, UK, ; Thoney, Dennis A. and Schlager, Neil (eds.) (2004) "Anopla (Anoplans)" Grzimek's Animal Life Encyclopedia: Volume 1 - Lower Metazoans and Lesser Deuterostomes (2nd ed.) Thomson-Gale, Detroit, pp. 245–251 ; Gibson, Ray (2002) The Invertebrate Fauna of New Zealand: Nemertea (Ribbon Worms) (NIWA Biodiversity Memoir No. 118) National Institute of Water and Atmospheric Research, Wellington, New Zealand, ; Sundberg, Per; Turbeville, J. McClintock and Lindh, Susanne (2001) "Phylogenetic Relationships among Higher Nemertean (Nemertea) Taxa Inferred from 18S rDNA Sequences" Molecular Phylogenetics and Evolution 20(3): pp. 327–334; Strand, Malin et al. (2019) "Nemertean taxonomy-Implementing changes in the higher ranks, dismissing Anopla and Enopla" Zoologica Scripta Vol. 48, nr 1, s. 118-119 DOI: 10.1111/zsc.12317 Nemerteans Paraphyletic groups
Anopla
[ "Biology" ]
777
[ "Phylogenetics", "Paraphyletic groups" ]
3,013,792
https://en.wikipedia.org/wiki/4C%20Entity
The 4C Entity is a digital rights management (DRM) consortium formed by IBM, Intel, Panasonic and Toshiba that has established and licensed interoperable cryptographic protection mechanisms for removable media technologies. 4C Entity was founded in 1999, when Warner Music approached the companies to develop stronger DRM technologies for the then-novel DVD-Audio format after the CSS DRM technology used on DVD-Video had been hacked. The group developed and currently licenses the Content Protection for Recordable Media (CPRM) and the Content Protection for Prerecorded Media (CPPM) schemes, which use Media Key Block technology and the Cryptomeria cipher along with audio watermarks. 4C Entity has also written the Content Protection System Architecture (CPSA), which describes how content protection solutions work together and the role of each current technology. CPPM and CPRM are implemented in SD cards, DVD-Audio, flash media, and other digital media formats. Like many DRM technologies, 4C Entity and its products have been criticized, with the Associated Press writing that CPRM “spark[ed] privacy concerns.” References External links The 4C Entity Consortia in the United States Digital rights management
4C Entity
[ "Technology" ]
243
[ "Computing stubs" ]
3,014,017
https://en.wikipedia.org/wiki/Zeta%20function%20regularization
In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory. Definition There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a_1 + a_2 + ···. One method is to define its zeta regularized sum to be ζ_A(−1) if this is defined, where the zeta function is defined for large Re(s) by ζ_A(s) = a_1^(−s) + a_2^(−s) + ··· if this sum converges, and by analytic continuation elsewhere. In the case when a_n = n, the zeta function is the ordinary Riemann zeta function. This method was used by Ramanujan to "sum" the series 1 + 2 + 3 + 4 + ... to ζ(−1) = −1/12. It was shown that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at the temperature T = β^(−1). The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by putting τ = it which are zero on the walls of the box and which are periodic in τ with period β. In this situation, from the partition function one computes the energy, entropy and pressure of the radiation of the field φ. In the case of flat spaces the eigenvalues appearing in the physical quantities are generally known, while in the case of curved space they are not known: in this case asymptotic methods are needed. Another method defines the possibly divergent infinite product a_1·a_2·... to be exp(−ζ′_A(0)). This has been used to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in the original application) with eigenvalues a_1, a_2, ..., and in this case the zeta function is formally the trace of A^(−s). It was shown that if A is the Laplacian of a compact Riemannian manifold then the Minakshisundaram–Pleijel zeta function converges and has an analytic continuation as a meromorphic function to all complex numbers, and this result was extended to elliptic pseudo-differential operators A on compact Riemannian manifolds. So for such operators one can define the determinant using zeta function regularization; see "analytic torsion". This idea was later applied to evaluate path integrals in curved spacetimes, using zeta function regularization to calculate the partition functions for thermal graviton and matter quanta in curved backgrounds such as the horizon of black holes and the de Sitter background, via the relation given by the inverse Mellin transformation to the trace of the kernel of heat equations. Example The first example in which zeta function regularization is available appears in the Casimir effect, which is in a flat space with the bulk contributions of the quantum field in three space dimensions. In this case we must calculate the value of the Riemann zeta function at −3, since the corresponding sum diverges explicitly. However, it can be analytically continued to s = −3, where there is no pole, thus giving a finite value to the expression; a small numerical sketch follows.
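A minimal numerical sketch of the values just mentioned, assuming the mpmath library is available for the analytic continuation (the library choice is an illustrative assumption, not part of the article):

# mpmath's zeta() implements the analytic continuation of the Riemann zeta
# function to the whole complex plane (except s = 1), so divergent sums such as
# 1 + 2 + 3 + ... can be assigned the finite value zeta(-1).
from mpmath import zeta

print(zeta(-1))   # -0.0833333... = -1/12, the regularized value of 1 + 2 + 3 + 4 + ...
print(zeta(-3))   #  0.0083333... =  1/120, the value entering the three-dimensional Casimir sum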
A detailed example of this regularization at work is given in the article on the detail example of the Casimir effect, where the resulting sum is very explicitly the Riemann zeta-function (and where the seemingly legerdemain analytic continuation removes an additive infinity, leaving a physically significant finite number). An example of zeta-function regularization is the calculation of the vacuum expectation value of the energy of a particle field in quantum field theory. More generally, the zeta-function approach can be used to regularize the whole energy–momentum tensor both in flat and in curved spacetime. The unregulated value of the energy is given by a summation over the zero-point energy of all of the excitation modes of the vacuum: Here, is the zeroth component of the energy–momentum tensor and the sum (which may be an integral) is understood to extend over all (positive and negative) energy modes ; the absolute value reminding us that the energy is taken to be positive. This sum, as written, is usually infinite ( is typically linear in n). The sum may be regularized by writing it as where s is some parameter, taken to be a complex number. For large, real s greater than 4 (for three-dimensional space), the sum is manifestly finite, and thus may often be evaluated theoretically. The zeta-regularization is useful as it can often be used in a way such that the various symmetries of the physical system are preserved. Zeta-function regularization is used in conformal field theory, renormalization and in fixing the critical spacetime dimension of string theory. Relation to other regularizations Zeta function regularization is equivalent to dimensional regularization, see. However, the main advantage of the zeta regularization is that it can be used whenever the dimensional regularization fails, for example if there are matrices or tensors inside the calculations Relation to Dirichlet series Zeta-function regularization gives an analytic structure to any sums over an arithmetic function f(n). Such sums are known as Dirichlet series. The regularized form converts divergences of the sum into simple poles on the complex s-plane. In numerical calculations, the zeta-function regularization is inappropriate, as it is extremely slow to converge. For numerical purposes, a more rapidly converging sum is the exponential regularization, given by This is sometimes called the Z-transform of f, where z = exp(−t). The analytic structure of the exponential and zeta-regularizations are related. By expanding the exponential sum as a Laurent series one finds that the zeta-series has the structure The structure of the exponential and zeta-regulators are related by means of the Mellin transform. The one may be converted to the other by making use of the integral representation of the Gamma function: which leads to the identity relating the exponential and zeta-regulators, and converting poles in the s-plane to divergent terms in the Laurent series. Heat kernel regularization The sum is sometimes called a heat kernel or a heat-kernel regularized sum; this name stems from the idea that the can sometimes be understood as eigenvalues of the heat kernel. In mathematics, such a sum is known as a generalized Dirichlet series; its use for averaging is known as an Abelian mean. It is closely related to the Laplace–Stieltjes transform, in that where is a step function, with steps of at . A number of theorems for the convergence of such a series exist. 
For example, by the Hardy-Littlewood Tauberian theorem, if then the series for converges in the half-plane and is uniformly convergent on every compact subset of the half-plane . In almost all applications to physics, one has History Much of the early work establishing the convergence and equivalence of series regularized with the heat kernel and zeta function regularization methods was done by G. H. Hardy and J. E. Littlewood in 1916 and is based on the application of the Cahen–Mellin integral. The effort was made in order to obtain values for various ill-defined, conditionally convergent sums appearing in number theory. In terms of application as the regulator in physical problems, before , J. Stuart Dowker and Raymond Critchley in 1976 proposed a zeta-function regularization method for quantum physical problems. Emilio Elizalde and others have also proposed a method based on the zeta regularization for the integrals , here is a regulator and the divergent integral depends on the numbers in the limit see renormalization. Also unlike other regularizations such as dimensional regularization and analytic regularization, zeta regularization has no counterterms and gives only finite results. See also References Tom M. Apostol, "Modular Functions and Dirichlet Series in Number Theory", "Springer-Verlag New York. (See Chapter 8.)" A. Bytsenko, G. Cognola, E. Elizalde, V. Moretti and S. Zerbini, "Analytic Aspects of Quantum Fields", World Scientific Publishing, 2003, G.H. Hardy and J.E. Littlewood, "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes", Acta Mathematica, 41(1916) pp. 119–196. (See, for example, theorem 2.12) V. Moretti, "Direct z-function approach and renormalization of one-loop stress tensor in curved spacetimes, Phys. Rev.D 56, 7797 ''(1997). D. Fermi, L. Pizzocchero, "Local zeta regularization and the scalar Casimir effect. A general approach based on integral kernels", World Scientific Publishing, (hardcover), (ebook). (2017). Quantum field theory String theory Mathematical analysis Zeta and L-functions Summability methods
Zeta function regularization
[ "Physics", "Astronomy", "Mathematics" ]
1,936
[ "Sequences and series", "Quantum field theory", "Astronomical hypotheses", "Mathematical structures", "Mathematical analysis", "Summability methods", "Quantum mechanics", "String theory" ]
3,014,061
https://en.wikipedia.org/wiki/NPH%20insulin
Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. The words refer to neutral pH (pH = 7), protamine (a protein), and Hans Christian Hagedorn, the insulin researcher who invented this formulation. It is designed to improve the delivery of insulin, and is one of the earliest examples of engineered drug delivery. It is used by injection under the skin once or twice a day. Onset of effects is typically within 90 minutes, and the effects last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin. A common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral pH is maintained and crystals form. There are versions based on human insulin and on pig insulin. Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours. It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long-acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin with those of insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with Type 1 diabetes. Insulin detemir appeared to provide a lower risk of severe hypoglycemia compared to NPH insulin; however, this finding was inconsistent across included studies. In the same review no other clinically significant differences were found between the different insulins in either adults or children. History Hans Christian Hagedorn (1888–1971) and August Krogh (1874–1949) obtained the rights for insulin from Frederick Banting and Charles Best in Toronto, Canada. In 1923 they formed Nordisk Insulin laboratorium, and in 1926, with August Kongsted, he obtained a Danish royal charter as a non-profit foundation. In 1936, Hagedorn and B. Norman Jensen discovered that the effects of injected insulin could be prolonged by the addition of protamine obtained from the "milt" or semen of river trout. The insulin would be added to the protamine, but the solution would have to be brought to pH 7 for injection. The University of Toronto later licensed protamine zinc insulin (PZI) to several manufacturers. This mixture only needs to be shaken before injection. The effects of PZI lasted for 24–36 h. In 1946, Nordisk was able to form crystals of protamine and insulin and marketed it in 1950 as neutral protamine Hagedorn (NPH) insulin. NPH insulin has the advantage that it can be mixed with an insulin that has a faster onset to complement its longer-lasting action.
Eventually all animal insulins made by Novo Nordisk were replaced by synthetic, recombinant "human" insulin. Synthetic "human" insulin is also complexed with protamine to form NPH. Timeline The timeline is as follows: 1926 Nordisk receives Danish charter to produce insulin 1936 Hagedorn discovers that adding protamine to insulin prolongs the effect of insulin 1936 Canadians D.M. Scott and A.M. Fisher formulate zinc insulin mixture and license to Novo 1946 Nordisk crystallizes a protamine and insulin mixture 1950 Nordisk markets NPH insulin 1953 Nordisk markets "Lente" zinc insulin mixtures. Society and culture Names Brand names include Humulin N, Novolin N, Novolin NPH, Gensulin N, SciLin N, Insulatard, and NPH Iletin II. See also Insulin analogue References Insulin receptor agonists Human proteins Recombinant proteins Peptide hormones Peptide therapeutics Drugs developed by Eli Lilly and Company World Health Organization essential medicines Wikipedia medicine articles ready to translate Medical mnemonics
NPH insulin
[ "Biology" ]
971
[ "Recombinant proteins", "Biotechnology products" ]
3,014,066
https://en.wikipedia.org/wiki/The%20SyncML%20Initiative
The SyncML Initiative, Ltd. was a non-profit corporation formed by a group of companies who co-operated to produce an open standard for data synchronization and device management. Prior to SyncML, data synchronization and device management had been based on a set of different, proprietary protocols, each functioning only with a limited number of devices, systems and data types. The SyncML Initiative, Ltd. consolidated into the Open Mobile Alliance (OMA) in 2002, contributing their technical work to the OMA technical Working Groups: Device Management Working Group and Data Synchronization Working Group. The SyncML legacy specifications were converted to the OMA format with the 1.2.2 versions of OMA SyncML, OMA Data Synchronization and OMA Device Management specifications. References Recent documents OMA SyncML Section (old site) Companies disestablished in 2002 SyncML
The SyncML Initiative
[ "Technology" ]
185
[ "Computer standards", "SyncML" ]
3,014,542
https://en.wikipedia.org/wiki/Motion%20detector
A motion detector is an electrical device that utilizes a sensor to detect nearby motion (motion detection). Such a device is often integrated as a component of a system that automatically performs a task or alerts a user of motion in an area. They form a vital component of security, automated lighting control, home control, energy efficiency, and other useful systems. It can be achieved by either mechanical or electronic methods. When it is done by natural organisms, it is called motion perception. Overview An active electronic motion detector contains an optical, microwave, or acoustic sensor, as well as a transmitter. However, a passive contains only a sensor and only senses a signature from the moving object via emission or reflection. Changes in the optical, microwave or acoustic field in the device's proximity are interpreted by the electronics based on one of several technologies. Most low-cost motion detectors can detect motion at distances of about . Specialized systems are more expensive but have either increased sensitivity or much longer ranges. Tomographic motion detection systems can cover much larger areas because the radio waves it senses are at frequencies which penetrate most walls and obstructions, and are detected in multiple locations. Motion detectors have found wide use in commercial applications. One common application is activating automatic door openers in businesses and public buildings. Motion sensors are also widely used in lieu of a true occupancy sensor in activating street lights or indoor lights in walkways, such as lobbies and staircases. In such smart lighting systems, energy is conserved by only powering the lights for the duration of a timer, after which the person has presumably left the area. A motion detector may be among the sensors of a burglar alarm that is used to alert the home owner or security service when it detects the motion of a possible intruder. Such a detector may also trigger a security camera to record the possible intrusion. Motion controllers are also used for video game consoles as game controllers. A camera can also allow the body's movements to be used for control, such as in the Kinect system. Sensor technology Motion can be detected by monitoring changes in: Infrared light (passive and active sensors) Visible light (video and camera systems) Radio frequency energy (radar, microwave and tomographic motion detection) Sound (microphones, other acoustic sensors) Kinetic energy (triboelectric, seismic, and inertia-switch sensors) Magnetism (magnetic sensors, magnetometers) Wi-Fi Signals (WiFi Sensing) Several types of motion detection are in wide use: Passive infrared (PIR) Passive infrared (PIR) sensors are sensitive to a person's skin temperature through emitted black-body radiation at mid-infrared wavelengths, in contrast to background objects at room temperature. No energy is emitted from the sensor, thus the name passive infrared. This distinguishes it from the electric eye for instance (not usually considered a motion detector), in which the crossing of a person or vehicle interrupts a visible or infrared beam. These devices can detect objects, people, or animals by picking up one's infrared radiation. Mechanical The most basic forms of mechanical motion detection utilize a switch or trigger. For example, the keys of a typewriter use a mechanical method of detecting motion, where each key is a switch that is either off or on, and each letter that appears is a result of the key's motion. 
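A minimal sketch of the detect-and-hold behaviour described above for lighting control; the threshold and hold time are illustrative assumptions rather than values from the article, and any sensor that reports a scalar disturbance level could feed the update method:

import time

class MotionLight:
    """Detect-and-hold logic: any reading above the threshold restarts a hold timer
    that keeps the output (e.g. a stairwell light) on for hold_seconds."""
    def __init__(self, threshold=0.5, hold_seconds=60.0):
        self.threshold = threshold
        self.hold_seconds = hold_seconds
        self._last_trigger = None

    def update(self, sensor_level, now=None):
        """Feed one sensor reading; return True while the light should stay on."""
        if now is None:
            now = time.monotonic()
        if sensor_level >= self.threshold:
            self._last_trigger = now   # motion detected: restart the hold timer
        if self._last_trigger is None:
            return False
        return (now - self._last_trigger) <= self.hold_seconds

light = MotionLight(threshold=0.5, hold_seconds=60.0)
print(light.update(0.8, now=0.0))    # True  - motion detected, light switches on
print(light.update(0.1, now=30.0))   # True  - no new motion, but still within the hold time
print(light.update(0.1, now=100.0))  # False - hold time expired, light switches off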
Microwave These detect motion through the principle of Doppler radar, and are similar to a radar speed gun. A continuous wave of microwave radiation is emitted, and phase shifts in the reflected microwaves due to motion of an object toward (or away from) the receiver result in a heterodyne signal at a low audio frequency. Ultrasonic An ultrasonic transducer emits an ultrasonic wave (sound at a frequency higher than a human ear can hear) and receives reflections from nearby objects. Exactly as in Doppler radar, heterodyne detection of the received field indicates motion. The detected doppler shift is also at low audio frequencies (for walking speeds) since the ultrasonic wavelength of around a centimeter is similar to the wavelengths used in microwave motion detectors. One potential drawback of ultrasonic sensors is that the sensor can be sensitive to motion in areas where coverage is undesired, for instance, due to reflections of sound waves around corners. Such extended coverage may be desirable for lighting control, where the goal is the detection of any occupancy in an area, but for opening an automatic door, for example, a sensor selective to traffic in the path toward the door is superior. Tomographic motion detector These systems sense disturbances to radio waves as they pass from node to node of a mesh network. They have the ability to detect over large areas completely because they can sense through walls and other obstructions. RF tomographic motion detection systems may use dedicated hardware, other wireless-capable devices or a combination of the two. Other wireless capable devices can act as nodes on the mesh after receiving a software update. Video camera software With the proliferation of low-cost digital cameras able to shoot video, it is possible to use the output of such a camera to detect motion in its field of view using software. This solution is particularly attractive when the intent is to record video triggered by motion detection, as no hardware beyond the camera and computer is needed. Since the observed field may be normally illuminated, this may be considered another passive technology. However, it can also be used together with near-infrared illumination to detect motion in the dark, that is, with the illumination at a wavelength undetectable by a human eye. More complex algorithms are necessary to detect motion when the camera itself is panning, or when a specific object's motion must be detected in a field containing other, irrelevant movement—for example, a painting surrounded by visitors in an art gallery. With a panning camera, models based on optical flow are used to distinguish between apparent background motion caused by the camera's movement and that of independently moving objects. Gesture detector Photodetectors and infrared lighting elements can support digital screens to detect hand motions and gestures with the aid of machine learning algorithms. Dual-technology motion detectors Many modern motion detectors use combinations of different technologies. While combining multiple sensing technologies into one detector can help reduce false triggering, it does so at the expense of reduced detection probabilities and increased vulnerability. For example, many dual-tech sensors combine both a PIR sensor and a microwave sensor into one unit. For motion to be detected, both sensors must trip together. 
This lowers the probability of a false alarm since heat and light changes may trip the (passive infrared) PIR but not the microwave, or moving tree branches may trigger the microwave but not the PIR. If an intruder is able to fool either the PIR or microwave, however, the sensor will not detect it. Often, PIR technology is paired with another model to maximize accuracy and reduce energy use. PIR draws less energy than emissive microwave detection, and so many sensors are calibrated so that when the PIR sensor is tripped, it activates a microwave sensor. If the latter also picks up an intruder, then the alarm is sounded. See also Twilight switch Heat detector Motion capture Motion controller for video game consoles Pickup (music technology) Proximity sensor Remote camera Smoke detector References External links Relational Motion Detection www.cs.rochester.edu/~nelson/research Motion Detection Algorithms In Image Processing Motion Detection and Recognition Research Presence and Absence detection explained Motion detection sample algorithm realization video Security technology Home automation Sensors Motion (physics)
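A minimal sketch of the dual-technology rule described above, in which an alarm is raised only when the PIR and microwave channels trip within a short coincidence window (the two-second window and the event times are illustrative assumptions):

COINCIDENCE_WINDOW = 2.0  # seconds; illustrative value, not from the article

def dual_tech_alarm(pir_events, microwave_events, window=COINCIDENCE_WINDOW):
    """pir_events / microwave_events: lists of trigger timestamps in seconds.
    Returns the timestamps at which both channels agree, i.e. the alarm fires."""
    alarms = []
    for t_pir in pir_events:
        if any(abs(t_pir - t_mw) <= window for t_mw in microwave_events):
            alarms.append(t_pir)
    return alarms

# Heat from a radiator trips only the PIR; a swaying branch trips only the
# microwave; a walking person trips both, so only the last event raises an alarm.
print(dual_tech_alarm(pir_events=[10.0, 50.0], microwave_events=[30.0, 50.5]))  # [50.0]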
Motion detector
[ "Physics", "Technology", "Engineering" ]
1,531
[ "Home automation", "Physical phenomena", "Measuring instruments", "Motion (physics)", "Space", "Mechanics", "Spacetime", "Sensors" ]
3,014,568
https://en.wikipedia.org/wiki/Winter%20Hexagon
The Winter Hexagon or Winter Circle/Oval is an asterism appearing to be in the form of a hexagon with vertices at Rigel, Aldebaran, Capella, Pollux, Procyon, and Sirius. It lies mostly on the Northern Hemisphere's celestial sphere. From most locations on Earth (except the South Island of New Zealand, the south of Chile and Argentina, and locations further south), the asterism is visible at the equator in the evening sky from approximately December to June and in the morning sky from July to the end of November; at northern latitudes it is visible in the evening for fewer of the months between December and June, and in the southern hemisphere for fewer of the months between July and November. In the tropics and southern hemisphere, this asterism (then called the "summer hexagon") can be extended with the bright star Canopus in the south. Smaller and more regularly shaped is the Winter Triangle, an approximately equilateral triangle that shares two vertices (Sirius and Procyon) with the larger asterism. The third vertex is Betelgeuse, which lies near the center of the hexagon. These three stars are three of the ten brightest objects, as viewed from Earth, outside the Solar System. Betelgeuse is also particularly easy to locate, being a shoulder of Orion, which assists stargazers in finding the triangle. Once the triangle is located, the larger hexagon, provided none of its stars have set or have yet to rise (or are on the cusp of those daily events), may quite easily be found in cloudless skies. Several of the stars in the hexagon can also be found independently by following various lines traced through the stars in Orion. The stars in the hexagon are parts of six constellations. Counter-clockwise around the hexagon, starting with Rigel, these are Orion, Taurus, Auriga, Gemini, Canis Minor, and Canis Major. See also Spring Triangle Summer Triangle Northern Cross References External links The Great Winter Hexagon APOD 2011 January 3 Winter Hexagon Over Stagecoach Colorado Asterisms (astronomy)
Winter Hexagon
[ "Astronomy" ]
438
[ "Constellations", "Sky regions", "Asterisms (astronomy)" ]
3,014,576
https://en.wikipedia.org/wiki/Glass%20break%20detector
A glass break detector is a sensor that detects if a pane of glass has been shattered or broken. These sensors are commonly used near glass doors or glass storefront windows. They are widely used in electronic burglar-alarm systems. The detection process begins with a microphone that picks up noises and vibrations coming from the glass. If the vibrations exceed a certain threshold (which is sometimes user selectable), then they are analyzed by detector circuitry. Simpler detectors merely use narrowband microphones tuned to frequencies typical of glass shattering. These are merely designed to react to sound magnitudes above a certain threshold, whereas more complex designs analytically compare the sound to one or more glass-break profiles using signal transforms similar to DCT and FFT. These digitally sophisticated detectors only react if both the amplitude threshold and statistically expressed similarity threshold are breached. Advances in technology have also led to the use of wireless glass-break detectors. See also Chubb Locks Yale (company) Burglar alarm References Perimeter security Glass engineering and science Detectors
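A minimal sketch of the two-stage detection described above: an amplitude threshold followed by a crude spectral check. The sample rate, frequency band and thresholds are illustrative assumptions, not values taken from the article or from any real detector profile:

import numpy as np

SAMPLE_RATE = 16_000          # Hz; illustrative assumption
AMPLITUDE_THRESHOLD = 0.2     # normalised microphone amplitude; illustrative assumption
BAND = (3_000, 7_000)         # Hz; stand-in for a "glass-break" frequency band
BAND_RATIO_THRESHOLD = 0.4    # fraction of energy that must fall inside the band

def looks_like_glass_break(samples):
    if np.max(np.abs(samples)) < AMPLITUDE_THRESHOLD:
        return False                              # too quiet: ignore
    spectrum = np.abs(np.fft.rfft(samples)) ** 2  # power spectrum of the buffer
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    band_ratio = spectrum[in_band].sum() / spectrum.sum()
    return band_ratio >= BAND_RATIO_THRESHOLD

# A burst of high-frequency noise passes; a low rumble of the same loudness does not.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
burst = 0.5 * np.sin(2 * np.pi * 5_000 * t)
rumble = 0.5 * np.sin(2 * np.pi * 100 * t)
print(looks_like_glass_break(burst), looks_like_glass_break(rumble))  # True False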
Glass break detector
[ "Materials_science", "Engineering" ]
210
[ "Glass engineering and science", "Materials science" ]
3,014,842
https://en.wikipedia.org/wiki/Infinity%20pool
An infinity pool is a reflecting pool or swimming pool where the water flows over one or more edges, producing a visual effect of water with no boundary. Such pools are often designed so that the edge appears to merge with a larger body of water such as the ocean, or with the sky, and may overlook locations such as natural landscapes and cityscapes. They are often seen at hotels, resorts, estates, and in other luxurious places. History It has been claimed that the infinity pool concept originated in France, and that one of the first vanishing-edge designs was the Stag Fountain at the Palace of Versailles, built in the late 17th century. In the US, architect John Lautner has been credited as one of the first to come up with an infinity pool design in the early 1960s. He included infinity pools in various residential projects, and also created the vanishing-edge pool in the 1971 James Bond movie Diamonds Are Forever. It was introduced to Australia by the architect Douglas Snelling. Structure Infinity pools are expensive and require extensive structural, mechanical, and architectural detailing. Since they are often built in precarious locations, sound structural engineering is paramount. The high cost of these pools often arises from the elaborate foundation systems that anchor them to hillsides. The "infinite" edge of the pool terminates at a weir that is lower than the required pool water level. A trough or catch basin is constructed below the weir. The water spills into the catch basin, from where it is pumped back into the pool. See also Spa Notes References Further reading External links Landscape architecture Luxury Swimming pools
Infinity pool
[ "Engineering" ]
320
[ "Landscape architecture", "Architecture" ]
3,015,029
https://en.wikipedia.org/wiki/Tribometer
A tribometer is an instrument that measures tribological quantities, such as coefficient of friction, friction force, and wear volume, between two surfaces in contact. It was invented by the 18th-century Dutch scientist Musschenbroek. A tribotester is the general name given to a machine or device used to perform tests and simulations of wear, friction and lubrication which are the subject of the study of tribology. Often tribotesters are extremely specific in their function and are fabricated by manufacturers who desire to test and analyze the long-term performance of their products. An example is that of orthopedic implant manufacturers who have spent considerable sums of money to develop tribotesters that accurately reproduce the motions and forces that occur in human hip joints so that they can perform accelerated wear tests of their products. Theory A simple tribometer is described by a hanging mass and a mass resting on a horizontal surface, connected to each other via a string and pulley. The coefficient of friction, μ, when the system is stationary, is determined by increasing the hanging mass until the moment that the resting mass begins to slide. Then the general equation for friction force, F = μN, applies, where N, the normal force, is equal to the weight (mass × gravity) of the sitting mass (mT), and F, the loading force, is equal to the weight (mass × gravity) of the hanging mass (mH). To determine the kinetic coefficient of friction the hanging mass is increased or decreased until the mass system moves at a constant speed. In both cases, the coefficient of friction is simplified to the ratio of the two masses: μ = mH / mT. In most test applications using tribometers, wear is measured by comparing the mass or surfaces of test specimens before and after testing. Equipment and methods used to examine the worn surfaces include optical microscopes, scanning electron microscopes, optical interferometry and mechanical roughness testers. Types Tribometers are often referred to by the specific contact arrangement they simulate or by the original equipment developer. Several arrangements are: Four ball Pin on disc Ball on disc Ring on ring Ball on three plates Reciprocating pin (usually referred to as SRV or HFRR) Block on ring Bouncing ball Fretting test machine Twin disc Bouncing ball A bouncing ball tribometer consists of a ball which is impacted at an angle against a surface. During a typical test, a ball is slid at an angle along a track until it impacts a surface and then bounces off the surface. The friction produced in the contact between the ball and the surface results in a horizontal force on the surface and a rotational force on the ball. Frictional force is determined by finding the rotational speed of the ball using high-speed photography or by measuring the force on the horizontal surface. Pressure in the contact is very high due to the large instantaneous force caused by the impact with the ball. Bouncing ball tribometers have been used to determine the shear characteristics of lubricants under high pressures such as is found in ball bearings or gears. Pin on disc A pin on disc tribometer consists of a stationary pin that is normally loaded against a rotating disc. The pin can have any shape to simulate a specific contact, but cylindrical tips are often used to simplify the contact geometry. The coefficient of friction is determined by the ratio of the frictional force to the loading force on the pin.
The pin on disc test has proved useful in providing a simple wear and friction test for low friction coatings such as diamond-like carbon coatings on valve train components in internal combustion engines. See also Abrasion Twist compression tester Tribology References Tribology Measuring instruments Materials science
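A minimal sketch of the string-and-pulley arrangement from the Theory section above; because gravity cancels from the ratio, the coefficient of friction reduces to the ratio of the hanging and resting masses (the example masses are illustrative assumptions, not measured data):

def coefficient_of_friction(hanging_mass_kg, resting_mass_kg):
    """mu = mH / mT; the gravitational acceleration cancels out of the ratio."""
    return hanging_mass_kg / resting_mass_kg

# Static case: a resting 2.0 kg block starts to slide once 0.9 kg hangs on the string.
print(coefficient_of_friction(0.9, 2.0))   # 0.45 -> static coefficient of friction

# Kinetic case: the system moves at constant speed with 0.7 kg hanging.
print(coefficient_of_friction(0.7, 2.0))   # 0.35 -> kinetic coefficient of friction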
Tribometer
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
751
[ "Tribology", "Applied and interdisciplinary physics", "Materials science", "Surface science", "Measuring instruments", "nan", "Mechanical engineering" ]
3,015,059
https://en.wikipedia.org/wiki/Oracle%20metadata
Oracle Database provides information about all of the tables, views, columns, and procedures in a database. This information about information is known as metadata. It is stored in two locations: data dictionary tables (accessed via built-in views) and a metadata registry. Other relational database management systems support an ANSI-standard equivalent called information schema. Views for metadata The total number of these views depends on the Oracle version, but is in a 1000 range. The main built-in views accessing Oracle RDBMS data dictionary tables are few, and are as follows: ALL_OBJECTS – list of all objects in the current database that are accessible to the current user; ALL_TABLES – list of all tables in the current database that are accessible to the current user; ALL_VIEWS – list of all views in the current database that are accessible to the current user; ALL_TAB_COLUMNS – list of all columns in the database that are accessible to the current user; ALL_ARGUMENTS – lists the arguments of functions and procedures that are accessible to the current user; ALL_ERRORS – lists descriptions of errors on all stored objects (views, procedures, functions, packages, and package bodies) that are accessible to the current user; ALL_OBJECT_SIZE – included for backward compatibility with Oracle version 5; ALL_PROCEDURES – (from Oracle 9 onwards) lists all functions and procedures (along with associated properties) that are accessible to the current user; ALL_SOURCE – describes the text (i.e. PL/SQL) source of the stored objects accessible to the current user; ALL_TRIGGERS – list all the triggers accessible to the current user. In addition there are equivalent views prefixed "USER_" which show only the objects owned by the current user (i.e. a more restricted view of metadata) and prefixed "DBA_" which show all objects in the database (i.e. an unrestricted global view of metadata for the database instance). Naturally the access to "DBA_" metadata views requires specific privileges. Example 1: finding tables Find all Tables that have PATTERN in the table name SELECT Owner AS Schema_Name, Table_Name FROM All_Tables WHERE Table_Name LIKE '%PATTERN%' ORDER BY Owner, Table_Name; Example 2: finding columns Find all tables that have at least one column that matches a specific PATTERN in the column name SELECT Owner AS Schema_Name, Table_Name, Column_Name FROM All_Tab_Columns WHERE Column_Name LIKE '%PATTERN%' ORDER BY 1,2,3; Example 3: counting rows of columns Estimate a total number of rows in all tables containing a column name that matches PATTERN (this is SQL*Plus specific script) COLUMN DUMMY NOPRINT COMPUTE SUM OF NUM_ROWS ON DUMMY BREAK ON DUMMY SELECT NULL DUMMY, T.TABLE_NAME, C.COLUMN_NAME, T.NUM_ROWS FROM ALL_TABLES T, ALL_TAB_COLUMNS C WHERE T.TABLE_NAME = C.TABLE_NAME AND C.COLUMN_NAME LIKE '%PATTERN%' AND T.OWNER = C.OWNER ORDER BY T.TABLE_NAME; Note that NUM_ROWS records the number of rows which were in a table when (and if) it was last analyzed. This will most likely deviate from the actual number of rows currently in the table. 
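Queries such as Examples 1–3 can also be issued from client code; a minimal sketch assuming the python-oracledb driver, with placeholder connection details and an illustrative search pattern:

# Runs the column search of Example 2 through the python-oracledb driver.
# The user, password and DSN are placeholders, not real credentials.
import oracledb

SQL = """
    SELECT owner AS schema_name, table_name, column_name
    FROM   all_tab_columns
    WHERE  column_name LIKE :pattern
    ORDER  BY owner, table_name, column_name
"""

with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1") as conn:
    with conn.cursor() as cursor:
        cursor.execute(SQL, pattern="%PATTERN%")
        for schema_name, table_name, column_name in cursor:
            print(schema_name, table_name, column_name)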
Example 4: finding view columns Find view columns SELECT TABLE_NAME, column_name, decode(c.DATA_TYPE, 'VARCHAR2', c.DATA_TYPE || '(' || c.DATA_LENGTH || ')', 'NUMBER', DECODE(c.data_precision, NULL, c.DATA_TYPE, 0, c.DATA_TYPE, c.DATA_TYPE || '(' || c.data_precision || DECODE(c.data_scale, NULL, ')', 0, ')' , ', ' || c.data_scale || ')')), c.DATA_TYPE) data_type FROM cols c, obj o WHERE c.TABLE_NAME = o.object_name AND o.object_type = 'VIEW' AND c.table_name LIKE '%PATTERN%' ORDER BY c.table_name, c.column_id; Warning: This is incomplete with respect to multiple datatypes including char, varchar and timestamp and uses extremely old, deprecated dictionary views, back to oracle 5. Use of underscore in table and column names The underscore is a special SQL pattern match to a single character and should be escaped if you are in fact looking for an underscore character in the LIKE clause of a query. Just add the following after a LIKE statement: ESCAPE '_' And then each literal underscore should be a double underscore: __ Example LIKE '%__G' ESCAPE '_' Oracle Metadata Registry The Oracle product Oracle Enterprise Metadata Manager (EMM) is an ISO/IEC 11179 compatible metadata registry. It stores administered metadata in a consistent format that can be used for metadata publishing. In January 2006, EMM was available only through Oracle consulting services. See also Information schema Metadata References External links article on Oracle Metadata Metadata Metadata Articles with example SQL code
Oracle metadata
[ "Technology" ]
1,104
[ "Metadata", "Data" ]
3,015,195
https://en.wikipedia.org/wiki/Newton%E2%80%93Euler%20equations
In classical mechanics, the Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body. Traditionally, the Newton–Euler equations are the grouping together of Euler's two laws of motion for a rigid body into a single equation with 6 components, using column vectors and matrices. These laws relate the motion of the center of gravity of a rigid body to the sum of forces and torques (or synonymously moments) acting on the rigid body. Center of mass frame With respect to a coordinate frame whose origin coincides with the body's center of mass for τ (torque) and an inertial frame of reference for F (force), they can be expressed in matrix form as: where F = total force acting on the center of mass m = mass of the body I3 = the 3×3 identity matrix acm = acceleration of the center of mass vcm = velocity of the center of mass τ = total torque acting about the center of mass Icm = moment of inertia about the center of mass ω = angular velocity of the body α = angular acceleration of the body Any reference frame With respect to a coordinate frame located at point P that is fixed in the body and not coincident with the center of mass, the equations assume the more complex form: where c is the vector from P to the center of mass of the body expressed in the body-fixed frame, and [c]× and [ω]× denote skew-symmetric cross product matrices. The left hand side of the equation—which includes the sum of external forces, and the sum of external moments about P—describes a spatial wrench; see screw theory. The inertial terms are contained in the spatial inertia matrix, while the fictitious forces are contained in the remaining velocity-dependent term. When the center of mass is not coincident with the coordinate frame (that is, when c is nonzero), the translational and angular accelerations (a and α) are coupled, so that each is associated with force and torque components. Applications The Newton–Euler equations are used as the basis for more complicated "multi-body" formulations (screw theory) that describe the dynamics of systems of rigid bodies connected by joints and other constraints. Multi-body problems can be solved by a variety of numerical algorithms. See also Euler's laws of motion for a rigid body Euler angles Inverse dynamics Centrifugal force Principal axes Spatial acceleration Screw theory of rigid body motion References Rigid bodies Equations
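The two matrix equations referred to above ("expressed in matrix form as" and "assume the more complex form") did not survive extraction. A standard reconstruction, consistent with the symbol definitions given in the article and offered here as a sketch rather than a quotation of the original, is, for the center-of-mass frame:

\begin{pmatrix} \mathbf{F} \\ \boldsymbol{\tau} \end{pmatrix} = \begin{pmatrix} m\,\mathbf{I}_3 & 0 \\ 0 & \mathbf{I}_{\mathrm{cm}} \end{pmatrix} \begin{pmatrix} \mathbf{a}_{\mathrm{cm}} \\ \boldsymbol{\alpha} \end{pmatrix} + \begin{pmatrix} 0 \\ \boldsymbol{\omega} \times \mathbf{I}_{\mathrm{cm}}\,\boldsymbol{\omega} \end{pmatrix}

and, for a body-fixed point P with c the vector from P to the center of mass:

\begin{pmatrix} \mathbf{F} \\ \boldsymbol{\tau}_{\mathrm{p}} \end{pmatrix} = \begin{pmatrix} m\,\mathbf{I}_3 & -m\,[\mathbf{c}]^{\times} \\ m\,[\mathbf{c}]^{\times} & \mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times} \end{pmatrix} \begin{pmatrix} \mathbf{a}_{\mathrm{p}} \\ \boldsymbol{\alpha} \end{pmatrix} + \begin{pmatrix} m\,[\boldsymbol{\omega}]^{\times}[\boldsymbol{\omega}]^{\times}\,\mathbf{c} \\ [\boldsymbol{\omega}]^{\times}\left(\mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times}\right)\boldsymbol{\omega} \end{pmatrix}

where [c]× and [ω]× are the skew-symmetric cross-product matrices mentioned in the text.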
Newton–Euler equations
[ "Mathematics" ]
503
[ "Mathematical objects", "Equations" ]
3,015,586
https://en.wikipedia.org/wiki/Business%20Process%20Model%20and%20Notation
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model. Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2.0 of BPMN was released in January 2011, at which point the name was amended to Business Process Model and Notation to reflect the introduction of execution semantics, which were introduced alongside the existing notational and diagramming elements. Though it is an OMG specification, BPMN is also ratified as ISO 19510. The latest version is BPMN 2.0.2, published in January 2014. Overview Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML). The objective of BPMN is to support business process management, for both technical users and business users, by providing a notation that is intuitive to business users, yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly Business Process Execution Language (BPEL). BPMN has been designed to provide a standard notation readily understandable by all business stakeholders, typically including business analysts, technical developers and business managers. BPMN can therefore be used to support the generally desirable aim of all stakeholders on a project adopting a common language to describe processes, helping to avoid communication gaps that can arise between business process design and implementation. BPMN is one of a number of business process modeling language standards used by modeling tools and processes. While the current variety of languages may suit different modeling environments, there are those who advocate for the development or emergence of a single, comprehensive standard, combining the strengths of different existing languages. It is suggested that in time, this could help to unify the expression of basic business process concepts (e.g., public and private processes, choreographies), as well as advanced process concepts (e.g., exception handling, transaction compensation). Two new standards, using a similar approach to BPMN have been developed, addressing case management modeling (Case Management Model and Notation) and decision modeling (Decision Model and Notation). Topics Scope BPMN is constrained to support only the concepts of modeling applicable to business processes. Other types of modeling done by organizations for non-process purposes are out of scope for BPMN. Examples of modeling excluded from BPMN are: Organizational structures Functional breakdowns Data models In addition, while BPMN shows the flow of data (messages), and the association of data artifacts to activities, it is not a data flow diagram. Elements BPMN models are expressed by simple diagrams constructed from a limited set of graphical elements. For both business users and developers, they simplify understanding of business activities' flow and process. 
BPMN's four basic element categories are: Flow objects Events, activities, gateways Connecting objects Sequence flow, message flow, association Swim lanes Pool, lane Artifacts Data object, group, annotation These four categories enable creation of simple business process diagrams (BPDs). BPDs also permit making new types of flow object or artifact, to make the diagram more understandable. Flow objects and connecting objects Flow objects are the main describing elements within BPMN, and consist of three core elements: events, activities, and gateways. Event An Event is represented with a circle and denotes something that happens (compared with an activity, which is something that is done). Icons within the circle denote the type of event (e.g., an envelope representing a message, or a clock representing time). Events are also classified as Catching (for example, if catching an incoming message starts a process) or Throwing (such as throwing a completion message when a process ends). Start event Acts as a process trigger; indicated by a single narrow border, and can only be Catch, so is shown with an open (outline) icon. Intermediate event Represents something that happens between the start and end events; is indicated by a double border, and can Throw or Catch (using solid or open icons as appropriate). For example, a task could flow to an event that throws a message across to another pool, where a subsequent event waits to catch the response before continuing. End event Represents the result of a process; indicated by a single thick or bold border, and can only Throw, so is shown with a solid icon. Activity An activity is represented with a rounded-corner rectangle and describes the kind of work which must be done. An activity is a generic term for work that a company performs. It can be atomic or compound. Task A task represents a single unit of work that is not or cannot be broken down to a further level of business process detail. It is referred to as an atomic activity. A task is the lowest level activity illustrated on a process diagram. A set of tasks may represent a high-level procedure. Sub-process Used to hide or reveal additional levels of business process detail. When collapsed, a sub-process is indicated by a plus sign against the bottom line of the rectangle; when expanded, the rounded rectangle expands to show all flow objects, connecting objects, and artifacts. A sub-process is referred to as a compound activity. Has its own self-contained start and end events; sequence flows from the parent process must not cross the boundary. Transaction A form of sub-process in which all contained activities must be treated as a whole; i.e., they must all be completed to meet an objective, and if any one of them fails, they must all be compensated (undone). Transactions are differentiated from expanded sub-processes by being surrounded by a double border. Call Activity A point in the process where a global process or a global Task is reused. A call activity is differentiated from other activity types by a bolded border around the activity area. Gateway A gateway is represented with a diamond shape and determines forking and merging of paths, depending on the conditions expressed. Exclusive Used to create alternative flows in a process. Because only one of the paths can be taken, it is called exclusive. Event Based The condition determining the path of a process is based on an evaluated event. Parallel Used to create parallel paths without evaluating any conditions. 
Inclusive Used to create alternative flows where all paths are evaluated. Exclusive Event Based An event is being evaluated to determine which of mutually exclusive paths will be taken. Complex Used to model complex synchronization behavior. Parallel Event Based Two parallel processes are started based on an event, but there is no evaluation of the event. Connections Flow objects are connected to each other using Connecting objects, which are of three types: sequences, messages, and associations. Sequence Flow A Sequence Flow is represented with a solid line and arrowhead, and shows in which order the activities are performed. The sequence flow may also have a symbol at its start, a small diamond indicates one of a number of conditional flows from an activity, while a diagonal slash indicates the default flow from a decision or activity with conditional flows. Message Flow A Message Flow is represented with a dashed line, an open circle at the start, and an open arrowhead at the end. It tells us what messages flow across organizational boundaries (i.e., between pools). A message flow can never be used to connect activities or events within the same pool. Association An Association is represented with a dotted line. It is used to associate an Artifact or text to a Flow Object, and can indicate some directionality using an open arrowhead (toward the artifact to represent a result, from the artifact to represent an input, and both to indicate it is read and updated). No directionality is used when the Artifact or text is associated with a sequence or message flow (as that flow already shows the direction). Pools, Lanes, and artifacts Swim lanes are a visual mechanism of organising and categorising activities, based on cross functional flowcharting, and in BPMN consist of two types: Pool Represents major participants in a process, typically separating different organisations. A pool contains one or more lanes (like a real swimming pool). A pool can be open (i.e., showing internal detail) when it is depicted as a large rectangle showing one or more lanes, or collapsed (i.e., hiding internal detail) when it is depicted as an empty rectangle stretching the width or height of the diagram. Lane Used to organise and categorise activities within a pool according to function or role, and depicted as a rectangle stretching the width or height of the pool. A lane contains the flow objects, connecting objects and artifacts. Artifacts allow developers to bring some more information into the model/diagram. In this way the model/diagram becomes more readable. There are three pre-defined Artifacts, and they are: Data objects: Data objects show the reader which data is required or produced in an activity. Group: A Group is represented with a rounded-corner rectangle and dashed lines. The group is used to group different activities but does not affect the flow in the diagram. Annotation: An annotation is used to give the reader of the model/diagram an understandable impression. Examples of business process diagrams BPMN 2.0.2 The vision of BPMN 2.0.2 is to have one single specification for a new Business Process Model and Notation that defines the notation, metamodel and interchange format but with a modified name that still preserves the "BPMN" brand. The features include: Formalizes the execution semantics for all BPMN elements. Defines an extensibility mechanism for both Process model extensions and graphical extensions. Refines Event composition and correlation. 
Extends the definition of human interactions. Defines a Choreography model. The current version of the specification was released in January 2014. Comparison of BPMN versions Types of BPMN sub-model Business process modeling is used to communicate a wide variety of information to a wide variety of audiences. BPMN is designed to cover this wide range of usage and allows modeling of end-to-end business processes to allow the viewer of the Diagram to be able to easily differentiate between sections of a BPMN Diagram. There are three basic types of sub-models within an end-to-end BPMN model: Private (internal) business processes, Abstract (public) processes, and Collaboration (global) processes: Private (internal) business processes Private business processes are those internal to a specific organization and are the type of processes that have been generally called workflow or BPM processes. If swim lanes are used then a private business process will be contained within a single Pool. The Sequence Flow of the Process is therefore contained within the Pool and cannot cross the boundaries of the Pool. Message Flow can cross the Pool boundary to show the interactions that exist between separate private business processes. Abstract (public) processes This represents the interactions between a private business process and another process or participant. Only those activities that communicate outside the private business process are included in the abstract process. All other “internal” activities of the private business process are not shown in the abstract process. Thus, the abstract process shows to the outside world the sequence of messages that are required to interact with that business process. Abstract processes are contained within a Pool and can be modeled separately or within a larger BPMN Diagram to show the Message Flow between the abstract process activities and other entities. If the abstract process is in the same Diagram as its corresponding private business process, then the activities that are common to both processes can be associated. Collaboration (global) processes A collaboration process depicts the interactions between two or more business entities. These interactions are defined as a sequence of activities that represent the message exchange patterns between the entities involved. Collaboration processes may be contained within a Pool and the different participant business interactions are shown as Lanes within the Pool. In this situation, each Lane would represent two participants and a direction of travel between them. They may also be shown as two or more Abstract Processes interacting through Message Flow (as described in the previous section). These processes can be modeled separately or within a larger BPMN Diagram to show the Associations between the collaboration process activities and other entities. If the collaboration process is in the same Diagram as one of its corresponding private business process, then the activities that are common to both processes can be associated. Within and between these three BPMN sub-models, many types of Diagrams can be created. 
The following are the types of business processes that can be modeled with BPMN (those with asterisks may not map to an executable language): High-level private process activities (not functional breakdown)* Detailed private business process As-is or old business process* To-be or new business process Detailed private business process with interactions to one or more external entities (or “Black Box” processes) Two or more detailed private business processes interacting Detailed private business process relationship to Abstract Process Detailed private business process relationship to Collaboration Process Two or more Abstract Processes* Abstract Process relationship to Collaboration Process* Collaboration Process only (e.g., ebXML BPSS or RosettaNet)* Two or more detailed private business processes interacting through their Abstract Processes and/or a Collaboration Process BPMN is designed to allow all the above types of Diagrams. However, it should be cautioned that if too many types of sub-models are combined, such as three or more private processes with message flow between each of them, then the Diagram may become difficult to understand. Thus, the OMG recommends that the modeler pick a focused purpose for the BPD, such as a private or collaboration process. Comparison with other process modeling notations Event-driven process chains (EPC) and BPMN are two notations with similar expressivity when process modeling is concerned. A BPMN model can be transformed into an EPC model. Conversely, an EPC model can be transformed into a BPMN model with only a slight loss of information. A study showed that for the same process, the BPMN model may need around 40% fewer elements than the corresponding EPC model, but with a slightly larger set of symbols. The BPMN model would therefore be easier to read. The conversion between the two notations can be automated. UML activity diagrams and BPMN are two notations that can be used to model the same processes: a subset of the activity diagram elements have a similar semantic than BPMN elements, despite the smaller and less expressive set of symbols. A study showed that both types of process models appear to have the same level of readability for inexperienced users, despite the higher formal constraints of an activity diagram. BPM Certifications The Business Process Management (BPM) world acknowledges the critical importance of modeling standards for optimizing and standardizing business processes. The Business Process Model and Notation (BPMN) version 2 has brought significant improvements in event and subprocess modeling, significantly enriching the capabilities for documenting, analyzing, and optimizing business processes. Elemate positions itself as a guide in exploring the various BPM certifications and dedicated training paths, thereby facilitating the mastery of BPMN and continuous improvement of processes within companies. OMG OCEB certification The Object Management Group (OMG), the international consortium behind the BPMN standard, offers the OCEB certification (OMG Certified Expert in BPM). This certification specifically targets business process modeling with particular emphasis on BPMN 2. The OCEB certification is structured into five levels: Fundamental, Business Intermediate (BUS INT), Technical Intermediate (TECH INT), Business Advanced (BUS ADV), and Technical Advanced (TECH ADV), thus providing a comprehensive pathway for BPM professionals. 
Other BPM certifications Beyond the OCEB, there are other recognized certifications in the BPM field: CBPA (Certified Business Process Associate): Offered by the ABPMP (Association of Business Process Management Professionals), this certification is aimed at professionals starting in BPM. CBPP (Certified Business Process Professional): Also awarded by the ABPMP, the CBPP certification targets experienced professionals, offering validation of their global expertise in BPM. The interest of a BPMN certification While BPMN 2 has established itself as an essential standard in business process modeling, a specific certification for BPMN could provide an additional guarantee regarding the quality and compliance of the models used. This becomes particularly relevant when companies employ external providers for the modeling of their business processes. BPM certifying training with BPMN 2 Although OMG does not offer a certification exclusively dedicated to BPMN 2, various organizations provide certifying training that encompasses this standard. These trainings cover not just BPMN but also the principles of management, automation, and digitization of business processes. They enable learners to master process mapping and modeling using BPMN 2, essential for optimizing business operations. See also DRAKON Business process management Business process modeling Comparison of Business Process Model and Notation modeling tools CMMN (Case Management Model and Notation) Process Driven Messaging Service Function model Functional software architecture Workflow patterns Service Component Architecture XPDL YAWL References Further reading Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009) Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited. Volume 15 Issue 5. ISSN 1463-7154. PDF External links OMG BPMN Specification BPMN Tool Matrix BPMN Information Home Page OMG information page for BPMN. Diagrams Business process modelling ISO standards Specification languages Modeling languages Notation
Business Process Model and Notation
[ "Mathematics", "Engineering" ]
3,672
[ "Software engineering", "Specification languages", "Symbols", "Notation" ]
3,015,678
https://en.wikipedia.org/wiki/Kent%20%28cigarette%29
Kent is an American brand of cigarettes, currently owned and manufactured by R.J. Reynolds Tobacco Company in the United States and British American Tobacco elsewhere. The brand is named after Herbert Kent, a former executive at Lorillard Tobacco Company. History Widely recognized by many as the first popular filtered cigarette, Kent was introduced by the Lorillard Tobacco Company in 1952 around the same time a series of articles entitled "cancer by the carton", published by Reader's Digest, scared American consumers into seeking out a filter brand at a time when most brands were filterless. (Viceroy cigarettes had been the first to introduce filters, in 1936.) Kent widely touted its "famous micronite filter" and promised consumers the "greatest health protection in history". Sales of Kent skyrocketed, and it has been estimated that in Kent's first four years on the market, Lorillard sold some 13 billion Kent cigarettes. From March 1952 until at least May 1956, however, the Micronite filter in Kent cigarettes contained compressed blue asbestos within the crimped crepe paper, which is the most carcinogenic type of asbestos. It has been suspected that many cases of mesothelioma have been caused specifically by smoking the original Kent cigarettes, and various lawsuits followed over the years because of it. Lorillard quietly changed the filter material from asbestos to the more common cellulose acetate in mid-1956. Kent continued to grow until the late 1960s, then began a long, steady decline as more filtered cigarette brands promising even lower tar (and appealing to smokers' desires for a "safer" smoke) were introduced. Kent Cigarettes sponsored The Dick Van Dyke Show during its second season, and actor Dick Van Dyke filmed many spots smoking them, along with Rose Marie and Morey Amsterdam. The cigarettes were touted as being packaged in a "crush proof box". However, Kent continued to stay in the top ten cigarette brand list until 1979. While continuing domestic sale and production, Lorillard sold the overseas rights of Kent and all of its other brands in 1977, and today Kents manufactured outside the U.S. are property of British American Tobacco. It eventually became one of their most popular brands, along with Dunhill, Lucky Strike, Pall Mall, and Rothmans. On June 15, 2014, Reynolds American offered to buy the Lorillard tobacco company for $27.4 billion and effective June 12, 2015, the Kent brand became the property of R. J. Reynolds Tobacco Company. Various advertising posters were made for Kent cigarettes, ranging from 1955 until 1986. One particular series of ads implied that smoking and eating were synonymous — in both pleasure and necessity. Kent in Romania Between 1970 and 1990 Kent was the most sought after cigarette in Romania and in some parts of the domestic market used as payment or bribe. In the latter part of this era, Kent was no longer available in regular retail, being sold officially only in hard currency shops. Obviously, the black market was thriving at the time, as most Kents were being smuggled in by those relatively few Romanians who were allowed to travel abroad (sea and air crew, diplomatic staff, etc.) The 2004 debut short film () by Cristi Puiu is titled after the bribes discussed in the film. 
Markets Kent is or was sold in the following countries: Jordan, Belgium, Brazil, Republic of Ireland, United Kingdom, Norway, Sweden, Finland, Estonia, Luxembourg, Netherlands, Greece, Switzerland, Austria, Spain, Italy, Poland, Romania, Israel, Moldova, Czech Republic, Croatia, Iraq, Albania, Latvia, Lithuania, Belarus, Ukraine, Russia, Azerbaijan, Georgia, Kazakhstan, Uzbekistan, Egypt, South Africa, Syria, Iran, United States, Kosovo, Mexico, El Salvador, Chile, Turkey, Peru, Paraguay, Argentina, Vietnam, Australia, Singapore, Mongolia, China, Saudi Arabia, Hong Kong, Lebanon, Japan and South Korea. See also Tobacco smoking References 1952 establishments in the United States Asbestos disasters Asbestos British American Tobacco brands Products introduced in 1952 R. J. Reynolds Tobacco Company brands Socialist Republic of Romania
Kent (cigarette)
[ "Environmental_science" ]
842
[ "Toxicology", "Asbestos" ]
3,015,704
https://en.wikipedia.org/wiki/Agent%20Pink
Agent Pink is the code name for a powerful herbicide and defoliant used by the U.S. military in its herbicidal warfare program during the Vietnam War. The name comes from the pink stripe painted on the barrels to identify the contents. Largely inspired by the British use of herbicides and defoliants during the Malayan Emergency, it was one of the rainbow herbicides that included the more infamous Agent Orange. Agent Pink was only used during the early "testing" stages of the spraying program before 1964. Agent Pink's only active ingredient was 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), one of the common phenoxy herbicides of the era. Agent Pink contained this active substance as an approximately 60–40 mixture of two of its ester forms. Even prior to Operation Ranch Hand (1962–1971), it was known that a dioxin, 2,3,7,8-tetrachlorodibenzo-para-dioxin (TCDD), is produced as a byproduct of the manufacture of 2,4,5-T, and was present in any of the herbicides that used it, but in greater proportion in the earlier Agents, such as Pink. A 2003 Nature paper by Stellman et al., which reappraised the average TCDD content of Agent Orange from the 3 ppm that USAF had reported to a level of 13 ppm, also estimated that Agent Pink may have had 65.5 ppm of TCDD on average. Although the quantities of Agent Pink and Agent Purple were comparatively small (some spraying of Agent Pink is documented, and additional quantities appear on procurement records), they probably deposited a larger percentage of the total dioxin. References Auxinic herbicides Defoliants Military equipment of the Vietnam War
Agent Pink
[ "Chemistry" ]
379
[ "Defoliants", "Chemical weapons" ]
3,015,758
https://en.wikipedia.org/wiki/Maximum%20entropy%20thermodynamics
In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review. Maximum Shannon entropy Central to the MaxEnt thesis is the principle of maximum entropy. It takes as given some partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy. This is known as the Gibbs algorithm, which was introduced by J. Willard Gibbs in 1878 to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function). A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables. Here kB, the Boltzmann constant, has no fundamental physical significance, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant). However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising the information entropy over trajectories. This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time. For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, the maximum entropy approach yields the Onsager reciprocal relations and the Green–Kubo relations directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, a general definition of entropy for microscopic statistical mechanical accounts is lacking, just as it is for macroscopic descriptions. Technical note: For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions. Instead, the appropriate quantity to maximize is the "relative information entropy" Hc, the negative of the Kullback–Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s).
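The displayed formulas referred to in this section did not survive extraction; the standard expressions they point to, given here as a reconstruction (a sketch, not a quotation of the original article), are:

S_I = -\sum_i p_i \ln p_i \quad \text{(the Shannon information entropy to be maximized)}

S_{\mathrm{Th}} = k_B \,(S_I)_{\max} \quad \text{(the connection to equilibrium thermodynamic entropy at the constrained maximum)}

H_c = -\int p(x)\,\ln\frac{p(x)}{m(x)}\,dx \quad \text{(the relative information entropy for continuous variables)}

For trajectories Γ, the same Shannon form is used, with the sum running over trajectories rather than microstates.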
The relative entropy Hc is always less than zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy Hc has the advantage of remaining finite and well-defined for continuous x, and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions, if one can make the assumption that m(xi) is uniform – i.e. the principle of equal a-priori probability, which underlies statistical thermodynamics. Philosophical implications Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below. The nature of the probabilities in statistical mechanics Jaynes (1985, 2003, et passim) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and stands reliable today. Jaynes also used the word 'subjective' in this context because others have used it in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists". The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality. 
The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed a priori. For this reason, MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account. Is entropy "real"? The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt STh is as "real" as the entropy in classical thermodynamics. Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description. Is ergodic theory relevant? The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in. However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' the different properties of the system are then becomes of considerable interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time. If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model. The second law According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information and then follow each of those microstates forward in time, so that SI(2) = SI(1). However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities.
Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables—i.e., that none of the history of the system matters, so that it can all be ignored. The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy STh(1), should reproduce the expectation values of the observed macroscopic variables at time t2. However, it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy STh(2) assuredly will measure the maximum entropy distribution, by construction. Therefore, we expect STh(2) ≥ STh(1). At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6N-dimensional probability distribution, this result represents coarse graining—i.e., information loss by smoothing out very fine-scale detail. Caveats with the argument Some caveats should be considered with the above. 1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a prediction. It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system. It is also sometimes suggested that quantum measurement, especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy). 2. The argument so far has glossed over the question of fluctuations. It has also implicitly assumed that the uncertainty predicted at time t1 for the variables at time t2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new SI(2) which is less than SI(1). (Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t1 is now also reduced from SI(1) to SI(2)). We know that STh(2) > SI(2); but we can now no longer be certain that it is greater than STh(1) = SI(1). This then leaves open the possibility for fluctuations in STh. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy Fluctuation Theorem, which can be established as a consequence of the time-dependent MaxEnt picture. 3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t2, we should expect it too to become less useful. The two procedures are time-symmetric.
But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past. The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. It means that there is clear evidence that some important physical information has been missed in the specification of the problem. If it is correct that the dynamics "are" time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time). Criticisms Maximum entropy thermodynamics has attracted significant opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far from equilibrium. The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt School and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result". Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until a clear physical definition of entropy is found. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold, so that neither system has a well-defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, a definition of entropy for non-equilibrium systems in general will need at least to involve specification of the process, including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely.
In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate.". The physically defined second entropy can also be considered from an informational viewpoint. See also Edwin Thompson Jaynes First law of thermodynamics Second law of thermodynamics Principle of maximum entropy Principle of Minimum Discrimination Information Kullback–Leibler divergence Quantum relative entropy Information theory and measure theory Entropy power inequality References Bibliography of cited references Guttmann, Y.M. (1999). The Concept of Probability in Statistical Physics, Cambridge University Press, Cambridge UK, . Further reading Shows invalidity of Dewar's derivations (a) of maximum entropy production (MaxEP) from fluctuation theorem for far-from-equilibrium systems, and (b) of a claimed link between MaxEP and self-organized criticality. Grandy, W. T., 1987. Foundations of Statistical Mechanics. Vol 1: Equilibrium Theory; Vol. 2: Nonequilibrium Phenomena. Dordrecht: D. Reidel. Vol. 1: . Vol. 2: . Extensive archive of further papers by E.T. Jaynes on probability and physics. Many are collected in Statistical mechanics Philosophy of thermal and statistical physics Non-equilibrium thermodynamics Information theory Thermodynamics Thermodynamic entropy
Maximum entropy thermodynamics
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
3,833
[ "Telecommunications engineering", "Philosophy of thermal and statistical physics", "Physical quantities", "Applied mathematics", "Non-equilibrium thermodynamics", "Thermodynamic entropy", "Computer science", "Entropy", "Information theory", "Thermodynamics", "Statistical mechanics", "Dynamical...
3,015,816
https://en.wikipedia.org/wiki/Hot%20water%20bottle
A hot-water bottle is a bottle filled with hot water and sealed with a stopper, used to provide warmth, typically while in bed, but also for the application of heat to a specific part of the body. Early history Containers for warmth in bed were in use as early as the 16th century. The earliest versions contained hot coals from the dying embers of the fire, and these bed warmers were used to warm the bed before getting into it. Containers using hot water were soon also used, with the advantages that they could remain in the bed with the sleeper and were not so hot as to be a fire risk. Prior to the invention of rubber that could withstand sufficient heat, these early hot-water bottles were made of a variety of materials, such as zinc, copper, brass, glass, earthenware or wood. To prevent burning, the metal hot water flasks were wrapped in a soft cloth bag. Rubber bottles "India rubber" hot-water bottles were in use in Britain at least by 1875. Modern conventional hot-water bottles were patented in 1903 and are manufactured in natural rubber or PVC, to a design patented by the Croatian inventor Slavoljub Eduard Penkala. They are now commonly covered in fabric, sometimes with a novelty design. Some newer products function like the older bottles, but use a polymer gel or wax in a heating pad. The pads can be heated in a microwave oven, and they are marketed as safer than liquid-filled bottles or electrically heated devices. Some newer bottles now use a silicone-based material instead of rubber, which resists very hot water better, and does not deteriorate as much as rubber. Although the stopper size in Ireland and the United Kingdom has been largely standard for many decades, some newer bottles use a wider mouth which is easier to fill (and a larger stopper to fit it). While generally used for keeping warm, conventional hot-water bottles can be used to some effect for the local application of heat as a medical treatment, for example for period pain relief, but newer items such as purpose-designed heating pads are often used now. Regulation The United Kingdom defined British Standards for hot-water bottles to regulate their manufacture and sale as well as to ensure their compliance with all safety standards. The British Standards BS 1970 and BS 1970:2012 (updated version) define, for instance, the bottles’ filling characteristics, safety instructions, allowed materials and components as well as testing methods such as tensile tests for PVC bottles. Most regulations applied to a country are generally harmonized in order to be applied and applicable in a larger area, such as a trade zone. Electric hot water bottle An electric hot water bottle is a heat therapy device that uses electrical energy to warm water inside a sealed bag. It typically features a heating element and insulation to retain heat, providing warmth for relieving muscle pain. It was first invented by Chen Juncheng and Liu Rongren in China. Problems There have been problems with premature failure of rubber hot-water bottles due to faulty manufacture. The rubber may fail strength or fitness tests, or become brittle if manufacturing is not controlled closely. Natural rubber filled with calcium carbonate is the most common material used, but is susceptible to oxidation and polymer degradation at the high temperatures used in shaping the product. 
Even though the brittle cracks may not be visible externally, the bottle can fracture suddenly after filling with hot water, and can scald the user—sometimes requiring hospitalization for severe burn cases. Boiling water is not recommended for use in hot-water bottles. This is due to risks of the rubber being degraded from high-temperature water, and the risk of injury in case of breakage. Hot water bottle rash (Erythema ab igne) is a skin condition caused by long-term exposure to heat (infrared radiation) or excessive use of a hot water bottle. See also Hot water bottle blowing Bed warmer - or warming pan, a common household item in countries with cold winters which utilises hot ashes. Electric blanket - a blanket that contains integrated electrical heating. References External links Bottles Medical equipment Medical treatments Heating Croatian inventions
Hot water bottle
[ "Biology" ]
837
[ "Medical equipment", "Medical technology" ]
3,015,926
https://en.wikipedia.org/wiki/Meta%20Data%20Services
Meta Data Services was an object-oriented repository technology that could be integrated with enterprise information systems or with applications that process metadata. Meta Data Services was originally named the Microsoft Repository and was delivered as part of Visual Basic 5 in 1997. The original intent was to provide an extensible programmatic interface via Microsoft's OLE automation to metadata describing software artifacts and to facilitate metadata interchange between software tools from multiple vendors. The Repository became part of SQL Server 7 and a number of SQL Server tools took dependencies on the Repository, especially the OLAP features. In 1998, Microsoft joined the Meta Data Coalition and transferred management of the underlying Open Information Model (OIM) of the Repository to the standards body. The Repository was renamed Meta Data Services with the release of SQL Server 2000. Support for Meta Data Services was withdrawn from support with the release of SQL Server 2005. A number of Microsoft technologies used Meta Data Services as a native store for object definitions or as a platform for deploying metadata. One of the ways in which Microsoft SQL Server 2000 used Meta Data Services was to store versioned DTS Packages. In Microsoft Visual Studio Meta Data Services supported the exchange of model data with other development tools. Users could use Meta Data Services for their own purposes: as a component of an integrated information system, as a native store for custom applications that process metadata, or as a storage and management service for sharing reusable models. Users could also extend Meta Data Services to provide support for new tools for resale or customize it to satisfy internal tool requirements References External links Download page of Microsoft Meta Data Services SDK Metadata Microsoft server technology
Meta Data Services
[ "Technology" ]
326
[ "Metadata", "Data" ]
3,016,310
https://en.wikipedia.org/wiki/Fission%20track%20dating
Fission track dating is a radiometric dating technique based on analyses of the damage trails, or tracks, left by fission fragments in certain uranium-bearing minerals and glasses. Fission-track dating is a relatively simple method of radiometric dating that has made a significant impact on understanding the thermal history of continental crust, the timing of volcanic events, and the source and age of different archeological artifacts. The method involves using the number of fission events produced from the spontaneous decay of uranium-238 in common accessory minerals to date the time of rock cooling below closure temperature. Fission tracks are sensitive to heat, and therefore the technique is useful at unraveling the thermal evolution of rocks and minerals. Most current research using fission tracks is aimed at: a) understanding the evolution of mountain belts; b) determining the source or provenance of sediments; c) studying the thermal evolution of basins; d) determining the age of poorly dated strata; and e) dating and provenance determination of archeological artifacts. In the 1930s it was discovered that uranium (specifically U-235) would undergo fission when struck by neutrons. This caused damage tracks in solids which could be revealed by chemical etching. Method Unlike other isotopic dating methods, the "daughter" in fission track dating is an effect in the crystal rather than a daughter isotope. Uranium-238 undergoes spontaneous fission decay at a known rate, and it is the only isotope with a decay rate that is relevant to the significant production of natural fission tracks; other isotopes have fission decay rates too slow to be of consequence. The fragments emitted by this fission process leave trails of damage (fossil tracks or ion tracks) in the crystal structure of the mineral that contains the uranium. The process of track production is essentially the same by which swift heavy ions produce ion tracks. Chemical etching of polished internal surfaces of these minerals reveals spontaneous fission tracks, and the track density can be determined. Because etched tracks are relatively large (in the range 1 to 15 micrometres), counting can be done by optical microscopy, although other imaging techniques are used. The density of fossil tracks correlates with the cooling age of the sample and with uranium content, which needs to be determined independently. To determine the uranium content, several methods have been used. One method is by neutron irradiation, where the sample is irradiated with thermal neutrons in a nuclear reactor, with an external detector, such as mica, affixed to the grain surface. The neutron irradiation induces fission of uranium-235 in the sample, and the resulting induced tracks are used to determine the uranium content of the sample because the 235U:238U ratio is well known and assumed constant in nature. However, it is not always constant. To determine the number of induced fission events that occurred during neutron irradiation an external detector is attached to the sample and both sample and detector are simultaneously irradiated by thermal neutrons. The external detector is typically a low-uranium mica flake, but plastics such as CR-39 have also been used. The resulting induced fission of the uranium-235 in the sample creates induced tracks in the overlying external detector, which are later revealed by chemical etching. The ratio of spontaneous to induced tracks is proportional to the age. 
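The proportionality between the spontaneous-to-induced track ratio and age mentioned above is usually made explicit through an age equation. One common form, for the external detector method with zeta calibration, is given here as a sketch (the symbols below are supplied for illustration and are not defined in the original text):

t = \frac{1}{\lambda_D}\,\ln\!\left(1 + \lambda_D\,\zeta\, g\,\rho_d\,\frac{\rho_s}{\rho_i}\right)

where λ_D is the total decay constant of uranium-238, ζ is the zeta calibration factor determined from age standards, g is a geometry factor (0.5 for the external detector method), ρ_d is the induced track density in a dosimeter glass irradiated alongside the sample, and ρ_s and ρ_i are the spontaneous and induced track densities. For geologically young samples the logarithm is nearly linear in its argument, so the age is approximately proportional to the ratio ρ_s/ρ_i, as stated above.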
Another method of determining uranium concentration is through LA-ICPMS, a technique where the crystal is hit with a laser beam and ablated, and then the material is passed through a mass spectrometer. Applications Unlike many other dating techniques, fission-track dating is uniquely suited for determining low-temperature thermal events using common accessory minerals over a very wide geological range (typically 0.1 Ma to 2000 Ma). Apatite, sphene, zircon, micas and volcanic glass typically contain enough uranium to be useful in dating samples of relatively young age (Mesozoic and Cenozoic) and are the materials most useful for this technique. Additionally low-uranium epidotes and garnets may be used for very old samples (Paleozoic to Precambrian). The fission-track dating technique is widely used in understanding the thermal evolution of the upper crust, especially in mountain belts. Fission tracks are preserved in a crystal when the ambient temperature of the rock falls below the annealing temperature. This annealing temperature varies from mineral to mineral and is the basis for determining low-temperature vs. time histories. While the details of closure temperatures are complicated, they are approximately 70 to 110 °C for typical apatite, c. 230 to 250 °C for zircon, and c. 300 °C for titanite. Because heating of a sample above the annealing temperature causes the fission damage to heal or anneal, the technique is useful for dating the most recent cooling event in the history of the sample. This resetting of the clock can be used to investigate the thermal history of basin sediments, kilometer-scale exhumation caused by tectonism and erosion, low temperature metamorphic events, and geothermal vein formation. The fission track method has also been used to date archaeological sites and artifacts. It was used to confirm the potassium-argon dates for the deposits at Olduvai Gorge. Provenance analysis of detrital grains A number of datable minerals occur as common detrital grains in sandstones, and if the strata have not been buried too deeply, these minerals grains retain information about the source rock. Fission track analysis of these minerals provides information about the thermal evolution of the source rocks and therefore can be used to understand provenance and the evolution of mountain belts that shed the sediment. This technique of detrital analysis is most commonly applied to zircon because it is very common and robust in the sedimentary system, and in addition it has a relatively high annealing temperature so that in many sedimentary basins the crystals are not reset by later heating. Fission-track dating of detrital zircon is a widely applied analytical tool used to understand the tectonic evolution of source terrains that have left a long and continuous erosional record in adjacent basin strata. Early studies focused on using the cooling ages in detrital zircon from stratigraphic sequences to document the timing and rate of erosion of rocks in adjacent orogenic belts (mountain ranges). A number of recent studies have combined U/Pb and/or Helium dating (U+Th/He) on single crystals to document the specific history of individual crystals. This double-dating approach is an extremely powerful provenance tool because a nearly complete crystal history can be obtained, and therefore researchers can pinpoint specific source areas with distinct geologic histories with relative certainty. Fission-track ages on detrital zircon can be as young as 1 Ma to as old as 2000 Ma. 
See also Radiometric dating Thermochronology References Further reading Naeser, C. W., Fission-Track Dating and Geologic Annealing of Fission Tracks, in: Jäger, E. and J. C. Hunziker, Lectures in Isotope Geology, Springer-Verlag, 1979, Garver, J.I., 2008, Fission-track dating. In Encyclopedia of Paleoclimatology and Ancient Environments, V. Gornitz, (Ed.), Encyclopedia of Earth Science Series, Kluwer Academic Press, p. 247-249. Wagner, G. A., and Van den Haute, P., 1992, Fission-Track Dating; Kluwer Academic Publishers, 285 pp. Enkelmann, E., Garver, J.I., and Pavlis, T.L., 2008, Rapid exhumation of ice-covered rocks of the Chugach-St. Elias Orogen, Southeast Alaska. Geology, v. 36, n.12, p. 915-918. Garver, J.I. and Montario, M.J., 2008. Detrital fission-track ages from the Upper Cambrian Potsdam Formation, New York: implications for the low-temperature thermal history of the Grenville terrane. In: Garver, J.I., and Montario, M.J. (eds.) Proceedings from the 11th International Conference on thermochronometry, Anchorage Alaska, Sept. 2008, p. 87-89. Bernet, M., and Garver, J.I., 2005, Chapter 8: Fission-track analysis of Detrital zircon, In P.W. Reiners, and T. A. Ehlers, (eds.), Low-Temperature thermochronology: Techniques, Interpretations, and Applications, Reviews in Mineralogy and Geochemistry Series, v. 58, p. 205-237. Radiometric dating Nuclear fission Uranium
Fission track dating
[ "Physics", "Chemistry" ]
1,792
[ "Nuclear fission", "Radiometric dating", "Radioactivity", "Nuclear physics" ]
16,017,163
https://en.wikipedia.org/wiki/SIGCHI
The Special Interest Group on Computer–Human Interaction (SIGCHI) is one of the Association for Computing Machinery's special interest groups which is focused on human–computer interactions (HCI). It hosts the flagship annual international HCI conference, CHI, with over 3,000 attendees, and publishes ACM Interactions and ACM Transactions on Computer-Human Interaction (TOCHI). It also sponsors over 20 specialized conferences and provides in-cooperation support to over 30 conferences. SIGCHI has two membership publications, the ACM TechNews - SIGCHI Edition and ACM Interactions. Until 2000, the SIGCHI Bulletin was also published as a membership publication. History SIGCHI was formed in 1982 by renaming and refocusing the Special Interest Group on Social and Behavioral Computing (SIGSOC). Lorraine Borman, previously editor of the SIGSOC Bulletin, was its first chair. The formation of the ACM SIGCHI was first publicly announced in 1982 during the Human Factors in Computer Systems conference in Gaithersburg, Maryland, US, organized by Bill Curtis and Ben Shneiderman. The inaugural CHI conference was hosted the year after, in 1983. In 1988, the UIST and CSCW conferences were added. Publications Apart from conference proceedings, SIGCHI publishes a number of periodicals. SIGCHI Bulletin interactions ACM Transactions on Computer-Human Interaction Awards Each year SIGCHI inducts around 7 or 8 people into the CHI Academy, honouring them for their significant contribution to the field of human–computer interaction. It also gives out a CHI Lifetime Achievement Award for research and practice, the CHI Lifetime Service Award, and the CHI Social Impact Award. Since 2018, SIGCHI also awards the Outstanding Dissertation Award to recognize excellent thesis by Ph.D. recipients in HCI. SIGCHI Lifetime Achievement Award 1998 - Douglas C. Engelbart (award called the SIGCHI Special Recognition Award in 1998) 2000 - Stuart K. Card 2001 - Ben Shneiderman 2002 - Donald A. Norman 2003 - John M. Carroll 2004 - Tom Moran 2005 - Tom Landauer 2006 - Judith S. Olson and Gary M. Olson 2007 - James D. Foley 2008 - Bill Buxton 2009 - Sara Kiesler 2010 Practice: Karen Holtzblatt Research: Lucy Suchman 2011 Practice: Larry Tesler Research: Terry Winograd 2012 Practice: Joy Mountford Research: Dan R. Olsen, Jr. 2013 Practice: Jakob Nielsen Research: George G. Robertson 2014 Practice: Gillian Crampton Smith Research: Steve Whittaker Special Recognition: Ted Nelson 2015 Practice: Susan M. Dray Research: James D. Hollan 2016 Practice: Jeff A. Johnson Research: Robert E. Kraut 2017 Practice: Ernest Edmonds Research: Brad A. Myers 2018 Practice: Arnold M. Lund Research: Steven K. Feiner 2019 Practice: Daniel Rosenberg Research: Hiroshi Ishii 2020 Practice: David Canfield Smith Research: Susan T. Dumais 2021 Practice: John T. Richards Research: Scott Hudson 2022 Practice: Steven Pemberton Research: Yvonne Rogers 2023 Practice: Deborah J. Mayhew Research: Gregory Abowd SIGCHI Lifetime Service Award 2001 - Austin Henderson 2002 - Dan R. Olsen, Jr. 2003 - Lorraine Borman 2004 - Robin Jeffries and Gene Lynch 2005 - Gary Perlman, Marilyn Mantei Tremaine, Sara Bly, Don J. Patterson, and John Morris 2006 - Susan M. Dray 2007 - Richard I. Anderson 2008 - John Karat and Marian Williams 2009 - Clare-Marie Karat and Steven Pemberton 2010 - Mary Czerwinski 2011 - Arnie Lund and Jim Miller 2012 - Michael Atwood and Kevin Schofield 2013 - Joseph A. 
Konstan 2014 - Wendy Mackay and Tom Hewett 2015 - Michel Beaudouin-Lafon and Jean Scholtz 2016 - Gary M. Olson and Gerrit van der Veer 2017 - Scott E. Hudson and Zhengjie Liu 2018 - Maria Francesca Constabile and John C. Thomas 2019 - Bill Hefley 2020 - Gilbert Cockton and Catherine Plaisant 2021 - Wendy Kellogg and Philippe Palanque 2022 - Geraldine Fitzpatrick 2023 - Elizabeth Churchill and Loren Terveen SIGCHI Social Impact Award 2005 - Gregg Vanderheiden 2006 - Ted Henter 2007 - Gregory Abowd and Gary Marsden 2008 - Vicki Hanson 2009 - Helen Petrie 2010 - Ben Bederson and Allison Druin 2011 - Allen Newell and Clayton Lewis 2012 - Batya Friedman 2013 - Sara J. Czaja 2014 - Richard E. Ladner 2015 - Leysia Palen 2016 - Jonathan Lazar 2017 - Jacob O. Wobbrock and Indrani Medhi Thies 2018 - Lorrie Faith Cranor 2019 - Gillian R. Hayes 2020 - Ronald M. Baecker and Bonnie Nardi 2021 - Maria Cecília Calani Baranauskas, Andy Dearden and Juan Gilbert 2022 - Liz Gerber, Jennifer Mankoff and Aaditeshwar Seth 2023 - Shaowen Bardzell, Munmun de Choudhury and Nicola Dell SIGCHI Outstanding Dissertation Award 2018 - Stefanie Mueller and Blase Ur 2019 - Chris Elsden, Anna Maria Feit and Robert Xiao 2020 - Katta Spiel and Paul Strohmeier 2021 - Josh Andres, Arunesh Mathur and Qian Yang 2022 - Aakash Gautam, Fred Hohman and Anna Lisa Martin-Niedecken 2023 - Megan Hofmann, Dhruv Jain, Kai Lukoff SIGCHI Executive Committee SIGCHI is governed by a set of by-laws and SIGCHI’s Elected Officers are the President, the Executive Vice-President, the Vice-President for Membership and Communications, the Vice-President for Finance, and two Vice-Presidents at large. The Executive Committee (EC) also includes editors of membership publications and appointed officers including the Vice-President for Publications, the Vice-President for Conferences, the Vice-President for Chapters, the Vice-President for Operations, and the immediate past Chair. 2009–2012 President: Gerrit van der Veer 2012–2015 President: Gerrit van der Veer 2015–2018 From July 2015 to July 2018, the SIGCHI President was Loren Terveen of GroupLens Research at the University of Minnesota and the Vice President was Helena Mentis of University of Maryland Baltimore County 2018–2021 From July 2018 to July 2021, the SIGCHI President was Helena Mentis of University of Maryland Baltimore County with Vice President Cliff Lampe of University of Michigan. 2021–2024 The current SIGCHI President is Neha Kumar with Vice President Shaowen Bardzell. The President and VP run as a team and were elected to the positions for a three-year term. Sponsored Conferences Apart from CHI, SIGCHI sponsors or co-sponsors over 20 specialized conferences in topics related to HCI. 
ACM Conference on Supporting Groupwork (GROUP) International Conference on Tangible, Embedded and Embodied Interaction (TEI) International Conference on Intelligent User Interfaces (IUI) ACM/IEEE International Conference on Human Robot Interaction (HRI) Symposium on Eye Tracking Research and Applications (ETRA) ACM International Conference on Interactive Media Experiences (IMX) Collective Intelligence (CI) Interaction, Design and Children (IDC) ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS) Designing Interactive Systems Conference (DIS) International Conference on User Modeling, Adaptation, and Personalization (UMAP) ACM International Joint Conference on Pervasive and Ubiquitous Computing (Ubicomp) International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI) ACM Conference on Recommender Systems (RecSys) International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI) Computer-Supported Cooperative Work (CSCW) ACM Symposium on User Interface Software and Technology (UIST) International Conference on Multimodal Interaction (ICMI) Symposium on Spatial User Interaction (SUI) ACM Symposium and Virtual Reality Software and Technology Symposium on Computer-Human Interaction in Play (CHIPLAY) Interactive Surfaces and Spaces (ISS) Creativity and Cognition (C&C) Grants SIGCHI provides resources for the community to expand, grow and communicate in the form of Grants. SIGCHI Development Fund: Intended to support community-led initiatives to spur communication among local communities. SIGCHI Early Career Mentoring fund: to support early-career scholars to participate in a meeting for mentorship. SIGCHI Student Travel Grants: to provide students the opportunity to attend any of the SIGCHI Conferences. References External links ACM SIGCHI Organizations established in 1982 Association for Computing Machinery Special Interest Groups Human–computer interaction
SIGCHI
[ "Engineering" ]
1,799
[ "Human–computer interaction", "Human–machine interaction" ]
16,017,237
https://en.wikipedia.org/wiki/Style%20guide
A style guide is a set of standards for the writing, formatting, and design of documents. A book-length style guide is often called a style manual or a manual of style (MoS or MOS). A short style guide, typically ranging from several to several dozen pages, is often called a style sheet. The standards documented in a style guide are applicable for either general use, or prescribed use in an individual publication, particular organization, or specific field. A style guide establishes standard style requirements to improve communication by ensuring consistency within and across documents. It may require certain best practices in writing style, usage, language composition, visual composition, orthography, and typography by setting standards of usage in areas such as punctuation, capitalization, citing sources, formatting of numbers and dates, table appearance and other areas. For academic and technical documents, a guide may also enforce best practice in ethics (such as authorship, research ethics, and disclosure) and compliance (technical and regulatory). For translations, a style guide may even be used to enforce consistent grammar, tone, and localization decisions such as units of measure. Style guides may be categorized into three types: comprehensive style for general use; discipline style for specialized use, which is often specific to academic disciplines, medicine, journalism, law, government, business, and other industries; and house or corporate style, created and used by a particular publisher or organization. Varieties Style guides vary widely in scope and size. Writers working in large industries or professional sectors may reference a specific style guide, written for usage in specialized documents within their fields. For the most part, these guides are relevant and useful for peer-to-peer specialist documentation or to help writers working in specific industries or sectors communicate highly technical information in scholarly articles or industry white papers. Professional style guides of different countries can be referenced for authoritative advice on their respective language(s), such as the United Kingdom's New Oxford Style Manual from Oxford University Press, and the United States' The Chicago Manual of Style from the University of Chicago Press. Australia has a style guide, available online, created by its government. Sizes The variety in scope and length is enabled by the cascading of one style over another, analogous to the way more specific style rules cascade over more general ones in CSS on the web. A project such as a book, journal, or monograph series typically has a short style sheet that cascades over the larger style guide of an organization such as a publishing company, whose specific content is usually called house style. Most house styles, in turn, cascade over an industry-wide or profession-wide style manual that is even more comprehensive.
Examples of industry style guides include: The Associated Press Stylebook (AP Stylebook) and The Canadian Press Stylebook for journalism The Chicago Manual of Style (CMoS) and Oxford style for general academic writing and publishing Modern Humanities Research Association (MHRA) style and American Sociological Association (ASA) style for the arts and humanities Oxford Standard for Citation of Legal Authorities (OSCOLA) and Bluebook style for law US Government Publishing Office (USGPO) style and Australian Government Publishing Service (AGPS) style for government publications Finally, these reference works cascade over the orthographic norms of the language in use (for example, English orthography for English-language publications). This, of course, may be subject to national variety, such as British, American, Canadian, and Australian English. Topics Some style guides focus on specific topic areas such as graphic design, including typography. Website style guides cover a publication's visual and technical aspects as well as text. Guides in specific scientific and technical fields may cover nomenclature to specify names or classifying labels that are clear, standardized, and ontologically sound (e.g., taxonomy, chemical nomenclature, and gene nomenclature). Style guides that cover usage may suggest descriptive terms for people which avoid racism, sexism, homophobia, etc. Style guides increasingly incorporate accessibility conventions for audience members with visual, mobility, or other disabilities. Web style guides Since the rise of the digital age, websites have allowed for an expansion of style guide conventions that account for digital behavior such as screen reading. Screen reading requires web style guides to focus more intently on a user experience subjected to multichannel surfing. Though web style guides can also vary widely, they tend to prioritize similar values concerning brevity, terminology, syntax, tone, structure, typography, graphics, and errors. Updating Most style guides are revised periodically to accommodate changes in conventions and usage. The frequency of updating and the revision control are determined by the subject. For style manuals in reference-work format, new editions typically appear every 1 to 20 years. For example, the AP Stylebook is revised every other year (since 2020). The Chicago Manual of Style is in its 18th edition, while the APA and ASA styles are both in their 7th as of 2025. Many house styles and individual project styles change more frequently, especially for new projects. See also List of style guides Graphic charter Diction Documentation Disputed usage English writing style Prescription and description Sentence spacing in language and style guides Spelling Style sheet (disambiguation) References External links But the stylebook says ... – Blog post about stylebook abuse, by Bill Walsh of The Washington Post Handouts about writing style guides, from a conference of the American Copy Editors Society in 2007 Language Log » Searching 43 stylebooks Bibliography Communication design Design Graphic design Technical communication Linguistics books
Style guide
[ "Engineering" ]
1,129
[ "Design", "Communication design" ]
16,017,389
https://en.wikipedia.org/wiki/Renewable%20energy%20industry
The renewable-energy industry is the part of the energy industry focusing on new and appropriate renewable energy technologies. Investors worldwide are paying increasing attention to this emerging industry. In many cases, this has translated into rapid renewable energy commercialization and considerable industry expansion. The wind power, solar power and hydroelectric power industries provide good examples of this. In 2020, the global renewable energy market was valued at $881.7 billion and consumption grew by 2.9 EJ. China was the largest contributor to renewable growth, accounting for an increase of 1.0 EJ in consumption, followed by the US, Japan, the United Kingdom, India, and Germany. In Europe, renewable consumption increased by 0.7 EJ. Overview Net-zero and 100% renewable energy global goals create market opportunities for renewable industries such as solar and wind energy and lithium-ion batteries. By 2050, it is estimated that the renewable market will reach a value of one trillion dollars, the same size as the current oil market. In 2020, renewable sources were incorporated into energy consumption at the fastest rate in two decades. During 2006/2007, several renewable energy companies went through high-profile initial public offerings (IPOs), resulting in market capitalizations near or above $1 billion. These corporations included the solar PV companies First Solar (USA), Trina Solar (USA), Centrosolar (Germany), and Renesola (U.K.), wind power company Iberdrola (Spain), and U.S. biofuels producers VeraSun Energy, Aventine, and Pacific Ethanol. Renewable energy industries expanded during most of 2008, with large increases in manufacturing capacity, diversification of manufacturing locations, and shifts in leadership. By August 2008, there were at least 160 publicly traded renewable energy companies with a market capitalization greater than $100 million. The number of companies in this category had expanded from around 60 in 2005. Some $150 billion was invested in renewable energy globally in 2009, including new capacity (asset finance and projects) and biofuels refineries. This is more than double the 2006 investment figure of $63 billion. Almost all of the increase was due to greater investment in wind power, solar PV, and biofuels. In 2000, venture capital (VC) investment in renewable energy was about 1% of total VC investment. In 2007 that figure was closer to 10%, with solar power alone making up about 3% of the entire venture capital asset class of ~$33B. More than 60 start-ups were funded by VCs in the preceding three years. Venture capital and private equity investments in renewable energy companies increased by 167 percent in 2006, according to investment analysts at New Energy Finance Limited. New investment into the sector jumped to US$148 billion in 2007, up 60 per cent over 2006, noted a report by the Sustainable Energy Finance Initiative (SEFI). Wind energy attracted one-third of the new capital and solar one-fifth, but interest in solar is growing rapidly on the back of major technological advances which saw solar investment increase 254 per cent. The IEA predicts US$20 trillion will be invested in alternative energy projects over the next 22 years. Wind power In 2020, wind power accounted for more than six percent of global electricity, with 743 GW of global capacity. In the same year, 93 GW of capacity was installed. To reach 'net zero' emissions, the world needs to install at least 180 GW of new wind energy capacity per year.
Companies Vestas was the largest wind turbine manufacturer in the world, with a 16% market share in 2020. The company operates plants in Denmark, Germany, India, Italy, Britain, Spain, Sweden, Norway, Australia and China, and employs more than 20,000 people globally. After a sales slump in 2005, Vestas recovered and was voted Top Green Company of 2006. In 2020, Siemens Gamesa was the world's second largest wind turbine manufacturer, thanks to its position in India and in the offshore sector. The company led the offshore wind market. Other major wind power companies include GE Power, Suzlon, Sinovel and Goldwind. Wind potential Africa's onshore wind energy potential is calculated at almost 180,000 terawatt-hours (TWh) per annum, enough to satisfy the electricity demands of the continent 250 times over. In 2009, a technical study by the Wind Energy Technologies Office estimated that the onshore wind energy potential for the United States is 10,500 gigawatts (GW) of capacity at a height of 80 meters. Photovoltaics Trends Solar production has been increasing by an average of some 20 percent each year since 2002, making it the world's fastest-growing energy technology. At the end of 2009, cumulative global PV installations surpassed 21,000 megawatts. According to the China Greentech Report 2009, jointly issued by PricewaterhouseCoopers and the American Chamber of Commerce in Shanghai and released on 10 September in Dalian, China, the estimated size of China's green technology market could be between US$500 billion and US$1 trillion annually, or as much as 15 percent of China's forecasted GDP, in 2013. With the positive drivers from the Chinese government's policies to develop green technology solutions, China has already played a more important role in green technology market development. Following the announcements of the Chinese government in 2009 about the new subsidy scheme of "Golden Sun" to support solar industry development in China, some of the worldwide industry players have announced their development plans in this region, such as the agreement signed by LDK Solar regarding a solar project in Jiangsu province with a total capacity of 500 MW, manufacturing facilities of polysilicon ingots and wafers, PV cells and PV modules to be built by Yingli Green Energy in Hainan Province, and the new thin-film manufacturing plants of Tianwei Baoding and Anwell Technologies. In 2022, the solar power market was expected to reach a value of $422 billion. Companies In 2017, the main manufacturers of photovoltaic cells were based in Asia; nine out of twelve major companies were based in China. The manufacturer Jinko Solar was the leading company in the sector, with a 9.86% market share, followed by Trina Solar, JA Solar, Canadian Solar and Hanwha Q-Cells. Tengger Desert Solar Park is the largest solar park in the world, with a capacity of 1,547 MW. The park is located in Zhongwei, Ningxia, and is known as the Great Wall of Solar. Biofuels Brazil continued the ethanol expansion plans which began in the 1970s and now has the largest ethanol distribution network and the largest fleet of cars able to run on any mix of ethanol and gasoline. In the ethanol fuel industry, the United States dominated, with 130 operating ethanol plants in 2007 and a production capacity of 26 billion liters/year (6.87 billion gallons/year), a 60 percent increase over 2005. Another 84 plants were under construction or undergoing expansion, which was expected to double production capacity.
The biodiesel industry opened many new production facilities during 2006/2007 and continued expansion plans in several countries. New biodiesel capacity appeared throughout Europe, including in Belgium, the Czech Republic, France, Germany, Italy, Poland, Portugal, Spain, Sweden, and the United Kingdom. Commercial investment in second-generation biofuels began in 2006/2007, and much of this investment went beyond pilot-scale plants. The world's first commercial wood-to-ethanol plant began operation in Japan in 2007, with a capacity of 1.4 million liters/year. The first wood-to-ethanol plant in the United States was planned for 2008, with an initial output of 75 million liters/year. Employment Renewable energy use tends to be more labor-intensive than fossil fuel use, and so a transition toward renewables promises employment gains. In 2019, 11.5 million people worked either directly in renewables or indirectly in supplier industries. The wind power industry employed some 1.7 million people, the photovoltaic sector accounted for an estimated 3.7 million jobs, and the solar thermal industry accounted for about 820,000. More than 3.58 million jobs were located in the biomass and biofuels sector. See also Clean Energy Trends List of concentrating solar thermal power companies List of countries by electricity production from renewable sources List of the largest hydroelectric power stations List of large wind farms List of solar thermal power stations Symbiocity Renewable-energy economy Renewable energy policy Renewable energy development Sustainable industries The Clean Tech Revolution Clean Technology Fund Green-collar worker References Bibliography External links Energy development Energy economics Green politics Renewable energy commercialization Renewable energy economics Sustainable technologies Industries (economics)
Renewable energy industry
[ "Environmental_science" ]
1,785
[ "Energy economics", "Environmental social science" ]
16,018,002
https://en.wikipedia.org/wiki/Trans-Spliced%20Exon%20Coupled%20RNA%20End%20Determination
Trans-Spliced Exon Coupled RNA End Determination (TEC-RED) is a transcriptomic technique that, like SAGE, allows for the digital detection of messenger RNA sequences. Unlike SAGE, detection and purification of transcripts from the 5’ end of the messenger RNA require the presence of a trans-spliced leader sequence. Trans-splicing Background Spliced leader sequences are short sequences of non-coding RNA, not found within a gene itself, that are attached to the 5’ end of all, or a portion of, mRNAs transcribed in an organism. They have been found in several species to be responsible for separating polycistronic transcripts into single gene mRNAs, and in others to splice onto monocistronic transcripts. The major role of trans-splicing on monocistronic transcripts is largely unknown. It has been proposed that they may act as an independent promoter that aids in tissue-specific expression of independent protein isoforms. Spliced leaders have been seen in trypanosomatids, Euglena, flatworms, and Caenorhabditis. Some species contain only one spliced leader sequence found on all mRNAs. In C. elegans two are seen and are labeled SL1 and SL2. TEC-RED Methods Total RNA is purified from the specimen of interest. Poly(A) messenger RNA is then purified from total RNA and subsequently reverse transcribed into cDNA. The cDNA produced from the mRNA is labeled using primers homologous to the spliced leader sequences of the organism. In a nine-step PCR reaction the cDNAs are concurrently embedded with the BpmI restriction endonuclease site (though any class IIS restriction endonuclease may work) and a biotin label, both of which are present in the primers. These tagged cDNAs are then cleaved 14 bp downstream from the recognition site using BpmI restriction endonuclease and blunt-ended with T4 DNA polymerase. The fragments are further purified away from extraneous DNA material by using the biotin labels to bind them to a streptavidin matrix. They are then ligated, in six separate reactions, to adapter DNA containing six different restriction endonuclease recognition sites. These tags are then amplified by PCR with primers containing a mismatch changing the BpmI site to an XhoI site. The amplicons are concatenated and ligated into a plasmid vector. The clonal vectors are then sequenced and mapped to the genome. Concatenation Concatenation of the tags, as developed in 2004, is different from that seen in SAGE. The cleavage of the tags with XhoI and mixing of the different samples, followed by ligation, form the first concatenation step. The second step uses one of the restriction endonucleases whose recognition site is contained in the adapter molecule attached to the 3’ end. They are again ligated, and PCR is performed to purify samples for the next joining. The concatenation is continued with the second restriction endonuclease, followed by the third and finally the fourth. This results in a concatemer, formed by the six endonuclease ligations, containing 32 tags arranged 5’ to 5’ around the XhoI site. In SAGE, concatenation takes place after ditags are formed and amplified by PCR. The linkers on the outside of the ditags are cleaved with the enzyme that provided their binding, and these sticky-end ditags are concatenated randomly and placed into a cloning vector. Advantages The advantage of TEC-RED over SAGE is that no restriction endonuclease is needed for the initial linker binding. This prevents bias associated with restriction site sequences that will be missing from some genes, as is seen in SAGE.
The ability to have a snapshot of specific RNA isoforms allows the deduction of differential regulation of isoforms through alternative selection of promoters. This may also aid in the discernment of expression patterns unique to the SL1 or SL2 sequence. TEC-RED also allows characterization of the 5’ ends of RNA produced and therefore of isoforms that differ by amino-terminal splicing. The technology permits the determination and verification of all known and unknown genes that may be predicted, as well as the 5’ splice isoforms or 5’ RNA ends that may be produced. Using TEC-RED in conjunction with SAGE or a modified protocol will allow discernment of the 5’ and 3’ ends of transcripts, respectively. The identification of alternative splice variants containing a trans-spliced leader sequence, and possibly their relative quantities, is therefore possible. Variations Two alternate techniques have been described that allow for 5’ tag analysis in organisms that do not have trans-spliced leader sequences. The techniques presented by Toshiyuki et al. and Shin-ichi et al. are called CAGE and 5’ SAGE, respectively. CAGE utilizes biotinylated cap-trapper technology to maintain the mRNA signal long enough to create and select full-length cDNAs, which have adapter sequences ligated on the 5’ end. 5’ SAGE utilizes oligo-capping technology. Both use the ligated adapter sequence as a priming site after the cDNA is created. Both of these methods have disadvantages, though. CAGE has shown tags with an additional guanine at the first position, and oligo-capping may lead to sequence bias due to the use of RNA ligase. See also RNA-seq DNA microarray References External links CAGE Tags http://genome.gsc.riken.jp/absolute/ 5’ SAGE results https://archive.today/20040821030224/http://5sage.gi.k.u-tokyo.ac.jp/ TEC-RED Tags seen in WormBase https://web.archive.org/web/20080909025225/http://www.wormbase.org/db/searches/advanced/dumper RNA Molecular biology
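As a rough sketch of the informatics step in which sequenced concatemers are turned back into gene-level counts, the code below pulls the 14-bp tags flanking each XhoI recognition site (CTCGAG) out of a read and looks them up in a table of known 5’ tags. The lookup table, the example sequences, and the simple flanking-tag rule are illustrative assumptions only; a real analysis would also handle the 5’-to-5’ orientation (reverse complements), sequencing errors, and the full adapter structure described in the Methods section above.

```python
# Simplified, hypothetical sketch of TEC-RED tag extraction and lookup.
# Assumes tags sit immediately on either side of each XhoI site; real data
# would require reverse-complement handling and error tolerance.

XHOI_SITE = "CTCGAG"
TAG_LEN = 14

# Hypothetical lookup of known 14-bp 5' tags -> gene identifiers.
known_tags = {
    "ATGGCGTTACCAGT": "gene-A",
    "TTGACCATGGAAGC": "gene-B",
}

def extract_tags(read):
    """Yield candidate tags flanking each XhoI site in a sequenced read."""
    pos = read.find(XHOI_SITE)
    while pos != -1:
        left = read[max(0, pos - TAG_LEN):pos]
        right = read[pos + len(XHOI_SITE):pos + len(XHOI_SITE) + TAG_LEN]
        for tag in (left, right):
            if len(tag) == TAG_LEN:
                yield tag
        pos = read.find(XHOI_SITE, pos + 1)

# Toy two-tag concatemer: tag A, XhoI site, tag B.
read = "ATGGCGTTACCAGT" + XHOI_SITE + "TTGACCATGGAAGC"
for tag in extract_tags(read):
    print(tag, "->", known_tags.get(tag, "unmapped"))
```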
Trans-Spliced Exon Coupled RNA End Determination
[ "Chemistry", "Biology" ]
1,310
[ "Biochemistry", "Molecular biology" ]
16,018,591
https://en.wikipedia.org/wiki/Jan%20%C5%81opusza%C5%84ski%20%28physicist%29
Jan Łopuszański (21 October 1923 – 30 April 2008) was a Polish theoretical physicist and author of several textbooks about classical, statistical and quantum physics. In the field of quantum field theory, he is most famous as co-author of the Haag–Łopuszański–Sohnius theorem concerning the possibility of supersymmetry in renormalizable QFTs. Career Jan Łopuszański was born on 21 October 1923 in Lwów, Poland. During 1945–50 he studied physics at the University of Wrocław. In 1950, he received his M.A. in Wrocław and, in 1955, his Ph.D. at the Jagiellonian University in Cracow. From 1947, he was a regular staff member at the University of Wrocław. In 1968, he became a full professor. During the years 1957–59, he was the vice dean, and during the years 1962–65, dean of the Faculty of Mathematics, Physics and Chemistry. From 1970 to 1984, he was director of the Institute of Theoretical Physics (Instytut Fizyki Teoretycznej Uniwersytet Wroclawski). From 1960 onward, he held the chair for mathematical methods in physics until he retired in 1994. In 1976, he was elected a corresponding member and in 1986 a permanent member of the Polish Academy of Sciences. In 1996, he became a corresponding member of the Polish Academy of Arts and Sciences in Cracow. He held visiting professorships in Utrecht, at NYU, the IAS in Princeton, New Jersey, SUNY at Stony Brook, the University of Göttingen, Bielefeld, the Max Planck Institute in Munich, CERN in Geneva and the ICTP in Trieste. He was also a member of the editorial boards of Reports on Mathematical Physics and Fortschritte der Physik. He wrote about 80 original professional papers, 40 review articles and 5 major textbooks. Personal life J. Łopuszański had a son named Maciej with Halina Pidek, and after divorcing her, was married to Barbara Zasłonka. His hobbies were reported to be baroque music and gardening. On 30 April 2008 Jan Łopuszański died of a heart attack in his home in Wrocław. Sources R. Haag, J. T. Lopuszanski and M. Sohnius, "All Possible Generators of Supersymmetries of the S Matrix", Nucl. Phys. B 88 (1975) 257. Textbooks by J. Łopuszański: Łopuszański, J., Pawlikowski A.: Fizyka Statystyczna, PWN Warszawa (1969). Łopuszański, J.: An Introduction to the Conventional Quantum Field Theory, Wroclaw University (1976). Łopuszański, J.: Rachunek Spinorow, PWN Warszawa (1985). Łopuszański, J.: Introduction to Symmetry and Supersymmetry in Quantum Field Theory, World Scientific (1991). Łopuszański, J.: The Inverse Variational Problem in Classical Mechanics, World Scientific (1999). References 1923 births 2008 deaths Jagiellonian University alumni 20th-century Polish physicists University of Wrocław alumni Academic staff of the University of Wrocław Theoretical physicists People associated with CERN Recipients of the Medal of the 10th Anniversary of the People's Republic of Poland
Jan Łopuszański (physicist)
[ "Physics" ]
702
[ "Theoretical physics", "Theoretical physicists" ]
16,018,870
https://en.wikipedia.org/wiki/Centre%20stick
A centre stick (or center stick in the United States), or simply control stick, is an aircraft cockpit arrangement where the control column (or joystick) is located in the center of the cockpit, either between the pilot's legs or between the pilots' positions. Since the throttle controls are typically located to the left of the pilot, the right hand is used for the stick, although left-hand or both-hands operation is possible if required. The centre stick is a part of an aircraft's flight control system and is typically linked to its ailerons and elevators, or alternatively to its elevons, by control rods or control cables on basic aircraft. On heavier, faster, more advanced aircraft the centre stick may also control power-assist modules. Modern aircraft centre sticks are also usually equipped with a number of electrical control switches within easy finger reach, in order to reduce the pilot's workload. History The centre stick originated at the turn of the twentieth century. In 1900, Wilhelm Kress of Austria developed a control stick for aircraft, but did not apply for a patent. Instead, a patent was awarded to the French aviator Robert Esnault-Pelterie, who applied for it in 1907. Split stick A two-handed variation of the centre stick exists as the split stick, which is bifurcated, in an arrangement similar to a yoke, so that the pilot can operate it with both hands. This allows the pilot not only to fly the aircraft but also to operate radar controls. The F-8 Crusader is an example of an aircraft that used a split stick. Popularity The centre stick is used in many military fighter jets such as the Eurofighter Typhoon and the Mirage III, but also in light aircraft such as Piper Cubs and the Diamond Aircraft line of products such as the DA20, DA40 and DA42. This arrangement contrasts with the more recently developed "side-stick", which is used in such military fighter jets as the F-16, the F-35 Lightning II and the Rafale, and also on civil aircraft such as the Airbus A320. See also Index of aviation articles Aircraft flight control system Dual control (aviation) HOTAS Rudder pedals Side-stick Yoke (aeronautics) References Design Aircraft controls
Centre stick
[ "Engineering" ]
458
[ "Design" ]
16,019,630
https://en.wikipedia.org/wiki/Adil%20Shamoo
Adil E. Shamoo (born August 1, 1941) is an Iraqi biochemist with an interest in biomedical ethics and foreign policy. He is currently a professor at the Department of Biochemistry and Molecular Biology at the University of Maryland. Professional In 1998, he founded the journal Accountability in Research, and has served as its editor-in-chief since its inception. He is on the editorial boards of several other journals, including the Drug Information Journal. From 2000 to 2002, he served on the advisory committee for National Human Research Protections. Although he has an extensive list of publications in the fields of biochemistry and microbiology, much of his current work is as an analyst for Foreign Policy In Focus, a project of the Institute for Policy Studies, a think tank, to which he has been contributing since 2005. Shamoo has also authored and co-authored many op-eds on U.S. foreign policy that have been published in newspapers across the country. Shamoo is also currently occupied with his work in the field of ethics. Since 1991, he has taught a graduate course at the University of Maryland entitled "Responsible Conduct of Research". In 1995, he co-founded the human rights organization Citizens for Responsible Care and Research (CIRCARE). In 2003, he chaired a Special Issue GlaxoSmithKline Pharmaceuticals' Ethics Advisory Group. Shamoo was then appointed to the Armed Forces Epidemiological Board (AFEB) of the United States Department of Defense as an ethics consultant (2003–2004). Because he served as chairman of nine international conferences on ethics in research and human research protection, he was asked to testify before a congressional committee and the National Bioethics Advisory Commission. Since 2006, he has served on the Defense Health Board. From 2006 to 2007, Shamoo was a member of the new Maryland Governor's Higher Education Transition Working Group. He was an invited participant and presenter in the 2007 New Year Renaissance Weekend. Shamoo has held visiting professorships at the Institute for Political Studies in Paris, France, and at East Carolina University. Shamoo has frequently been cited in, and has appeared in, local and national media, both print and television. He has published numerous articles and books. Personal Shamoo currently resides in Columbia, MD, with his wife and occasional co-author, Bonnie Bricker; his daughter; and his stepdaughter. He has two sons and another stepdaughter, who all also reside in the Washington Metropolitan Area. Early life and education Shamoo was born and raised in Baghdad, Iraq. He is an ethnic Assyrian. He attended the University of Baghdad and graduated with a degree in physics in 1962. In 1966, he earned a Master of Science in physics from the University of Louisville. Four years later, in 1970, he finished his Ph.D. in the program in Biology at the City University of New York. References External links Shamoo's Faculty Page at University of Maryland CIRCARE Website US Department of Health and Human Services Bio for Shamoo American people of Iraqi-Assyrian descent Biochemists East Carolina University faculty Bioethicists Living people 1941 births Iraqi emigrants to the United States University of Baghdad alumni University of Louisville alumni CUNY Graduate Center alumni University of Maryland, College Park faculty
Adil Shamoo
[ "Chemistry", "Biology" ]
658
[ "Biochemistry", "Biochemists" ]
16,019,737
https://en.wikipedia.org/wiki/Tiling%20array
Tiling arrays are a subtype of microarray chips. Like traditional microarrays, they function by hybridizing labeled DNA or RNA target molecules to probes fixed onto a solid surface. Tiling arrays differ from traditional microarrays in the nature of the probes. Instead of probing for sequences of known or predicted genes that may be dispersed throughout the genome, tiling arrays probe intensively for sequences which are known to exist in a contiguous region. This is useful for characterizing regions that are sequenced, but whose local functions are largely unknown. Tiling arrays aid in transcriptome mapping as well as in discovering sites of DNA/protein interaction (ChIP-chip, DamID), of DNA methylation (MeDIP-chip) and of sensitivity to DNase (DNase Chip) and array CGH. In addition to detecting previously unidentified genes and regulatory sequences, improved quantification of transcription products is possible. Specific probes are present in millions of copies (as opposed to only several in traditional arrays) within an array unit called a feature, with anywhere from 10,000 to more than 6,000,000 different features per array. Variable mapping resolutions are obtainable by adjusting the amount of sequence overlap between probes, or the amount of known base pairs between probe sequences, as well as probe length. For smaller genomes such as Arabidopsis, whole genomes can be examined. Tiling arrays are a useful tool in genome-wide association studies. Synthesis and manufacturers The two main ways of synthesizing tiling arrays are photolithographic manufacturing and mechanical spotting or printing. The first method involves in situ synthesis where probes, approximately 25bp, are built on the surface of the chip. These arrays can hold up to 6 million discrete features, each of which contains millions of copies of one probe. The other way of synthesizing tiling array chips is via mechanically printing probes onto the chip. This is done by using automated machines with pins that place the previously synthesized probes onto the surface. Due to the size restriction of the pins, these chips can hold up to nearly 400,000 features. Three manufacturers of tiling arrays are Affymetrix, NimbleGen and Agilent. Their products vary in probe length and spacing. ArrayExplorer.com is a free web-server to compare tiling arrays. Applications and types ChIP-chip ChIP-chip is one of the most popular usages of tiling arrays. Chromatin immunoprecipitation allows binding sites of proteins to be identified. A genome-wide variation of this is known as ChIP-on-chip. Proteins that bind to chromatin are cross-linked in vivo, usually via fixation with formaldehyde. The chromatin is then fragmented and exposed to antibodies specific to the protein of interest. These complexes are then precipitated. The DNA is then isolated and purified. With traditional DNA microarrays, the immunoprecipitated DNA is hybridized to the chip, which contains probes that are designed to cover representative genome regions. Overlapping probes or probes in very close proximity can be used. This gives an unbiased analysis with high resolution. Besides these advantages, tiling arrays show high reproducibility and with overlapping probes spanning large segments of the genome, tiling arrays can interrogate protein binding sites, which harbor repeats. ChIP-chip experiments have been able to identify binding sites of transcription factors across the genome in yeast, drosophila and a few mammalian species. 
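As a rough illustration of how enriched regions are pulled out of ChIP-chip tiling data, the sketch below applies a naive sliding-window average to probe log-ratios (immunoprecipitated signal over input) along a chromosome and reports windows above a threshold. It is a conceptual example only: the probe positions, window size, and threshold are invented, and production tools such as the peak-seeking algorithms mentioned later in this article use far more careful statistics.

```python
# Conceptual sketch: naive enriched-region detection on ChIP-chip tiling data.
# `probes` is a list of (genomic_position, log2_ratio) pairs sorted by position;
# the window size and threshold are arbitrary illustrative values.

def find_enriched_regions(probes, window=1000, threshold=1.0):
    """Return (start, end, mean_log2_ratio) for windows whose mean exceeds the threshold."""
    regions = []
    i, n = 0, len(probes)
    while i < n:
        start_pos = probes[i][0]
        j = i
        # Collect all probes falling inside the current window.
        while j < n and probes[j][0] < start_pos + window:
            j += 1
        ratios = [ratio for _, ratio in probes[i:j]]
        mean_ratio = sum(ratios) / len(ratios)
        if mean_ratio > threshold:
            regions.append((start_pos, probes[j - 1][0], mean_ratio))
        i = j
    return regions

# Made-up probe data (position, log2 IP/input ratio) for one chromosome.
probes = [(100, 0.1), (135, 0.2), (1200, 1.6), (1235, 1.9), (1270, 1.4), (2500, 0.0)]
print(find_enriched_regions(probes))   # only the probes around 1200-1270 stand out
```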
Transcriptome mapping Another popular use of tiling arrays is in finding expressed genes. Traditional methods of gene prediction for annotation of genomic sequences have had problems when used to map the transcriptome, such as not producing an accurate structure of the genes and also missing transcripts entirely. The method of sequencing cDNA to find transcribed genes also runs into problems, such as failing to detect rare or very short RNA molecules, and so does not detect genes that are active only in response to signals or specific to a time frame. Tiling arrays can solve these issues. Due to the high resolution and sensitivity, even small and rare molecules can be detected. The overlapping nature of the probes also allows detection of non-polyadenylated RNA and can produce a more precise picture of gene structure. Earlier studies on chromosomes 21 and 22 showed the power of tiling arrays for identifying transcription units. The authors used 25-mer probes that were 35bp apart, spanning the entire chromosomes. Labeled targets were made from polyadenylated RNA. They found many more transcripts than predicted, and 90% were outside of annotated exons. Another study with Arabidopsis used high-density oligonucleotide arrays that covered the entire genome. More than 10 times more transcripts were found than predicted by ESTs and other prediction tools. Also found were novel transcripts in the centromeric regions, where it was thought that no genes are actively expressed. Many noncoding and natural antisense RNAs have been identified using tiling arrays. MeDIP-chip Methyl-DNA immunoprecipitation followed by tiling array allows DNA methylation mapping and measurement across the genome. DNA is methylated on cytosine in CG dinucleotides in many places in the genome. This modification is one of the best-understood inherited epigenetic changes and has been shown to affect gene expression. Mapping these sites can add to the knowledge of expressed genes and also epigenetic regulation on a genome-wide level. Tiling array studies have produced high-resolution methylation maps for the Arabidopsis genome, generating the first "methylome". DNase-chip DNase chip is an application of tiling arrays to identify hypersensitive sites, segments of open chromatin that are more readily cleaved by DNaseI. DNaseI cleaving produces larger fragments of around 1.2kb in size. These hypersensitive sites have been shown to accurately predict regulatory elements such as promoter regions, enhancers and silencers. Historically, the method used Southern blotting to find digested fragments. Tiling arrays have allowed researchers to apply the technique on a genome-wide scale. Comparative genomic hybridization (CGH) Array-based CGH is a technique often used in diagnostics to compare differences between types of DNA, such as normal cells vs. cancer cells. Two types of tiling arrays are commonly used for array CGH: whole-genome and fine-tiled. The whole-genome approach is useful in identifying copy number variations with high resolution. On the other hand, fine-tiled array CGH produces ultrahigh resolution to find other abnormalities such as breakpoints. Procedure Several different methods exist for tiling an array. One protocol for analyzing gene expression involves first isolating total RNA. This is then purified of rRNA molecules. The RNA is copied into double-stranded DNA, which is subsequently amplified and in vitro transcribed to cRNA.
The product is split into triplicates to produce dsDNA, which is then fragmented and labeled. Finally, the samples are hybridized to the tiling array chip. The signals from the chip are scanned and interpreted by computers. Various software and algorithms are available for data analysis and vary in benefits depending on the manufacturer of the chip. For Affymetrix chips, the model-based analysis of tiling array (MAT) or hypergeometric analysis of tiling-arrays (HAT) are effective peak-seeking algorithms. For NimbleGen chips, TAMAL is more suitable for locating binding sites. Alternative algorithms include MA2C and TileScope, which are less complicated to operate. The Joint binding deconvolution algorithm is commonly used for Agilent chips. If sequence analysis of binding site or annotation of the genome is required then programs like MEME, Gibbs Motif Sampler, Cis-regulatory element annotation system and Galaxy are used. Advantages and disadvantages Tiling arrays provide an unbiased tool to investigate protein binding, gene expression and gene structure on a genome-wide scope. They allow a new level of insight in studying the transcriptome and methylome. Drawbacks include the cost of tiling array kits. Although prices have fallen in the last several years, the price makes it impractical to use genome-wide tiling arrays for mammalian and other large genomes. Another issue is the "transcriptional noise" produced by its ultra-sensitive detection capability. Furthermore, the approach provides no clearly defined start or stop to regions of interest identified by the array. Finally, arrays usually give only chromosome and position numbers, often necessitating sequencing as a separate step (although some modern arrays do give sequence information.) References Microarrays Computational biology
Tiling array
[ "Chemistry", "Materials_science", "Biology" ]
1,799
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques", "Computational biology" ]
16,019,808
https://en.wikipedia.org/wiki/Network%20block%20device
On Linux, network block device (NBD) is a network protocol that can be used to forward a block device (typically a hard disk or partition) from one machine to a second machine. As an example, a local machine can access a hard disk drive that is attached to another computer. The protocol was originally developed for Linux 2.1.55 and released in 1997. In 2011 the protocol was revised, formally documented, and is now developed as a collaborative open standard. There are several interoperable clients and servers. There are Linux-compatible NBD implementations for FreeBSD and other operating systems. The term 'network block device' is sometimes also used generically. Technically, a network block device is realized by three components: the server part, the client part, and the network between them. On the client machine, which uses the exported device node, a kernel driver implements a virtual device. Whenever a program tries to access the device, the kernel driver forwards the request (if the client part is not fully implemented in the kernel it can be done with help of a userspace program) to the server machine, on which the data reside physically. On the server machine, requests from the client are handled by a userspace program. Network block device servers are typically implemented as a userspace program running on a general-purpose computer. All of the function specific to network block device servers can reside in a userspace process because the process communicates with the client via conventional sockets and accesses the storage via a conventional file system interface. The network block device client module is available on Unix-like operating systems, including Linux. Alternative protocols iSCSI: The "target-utils" iscsi package on many Linux distributions. NVMe-oF: an equivalent mechanism, exposing block devices as NVMe namespaces over TCP, Fibre Channel, RDMA, &c., native to most operating systems Loop device: a similar mechanism, but uses a local file instead of a remote one DRBD: Distributed Replicated Block Device is a distributed storage system for the Linux platform ATA over Ethernet: send ATA commands over Ethernet USB/IP: A protocol that provides network access to USB devices via IP. External links nbdkit is a plugin-based NBD server and libnbd is a high-performance C client qemu-nbd A nbd tool from qemu project BNBD is an alternative NBD server implementation xNBD was another NBD server program for Linux The Network Block Device, the Linux Journal References Computer networking Computer storage technologies Linux kernel features
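To make the division of labour between the in-kernel client and the userspace server concrete, here is a deliberately simplified Python sketch of a 'block server' and a matching read request. The framing used here (an 8-byte offset plus a 4-byte length) is invented purely for illustration; the real NBD protocol defines its own handshake, magic numbers, and request format, and in practice the client side is the kernel nbd driver rather than a script.

```python
# Toy illustration of the userspace-server idea behind NBD.
# NOT the real NBD wire protocol: the request framing is invented for clarity.
import socket
import struct
import threading

BACKING_FILE = "disk.img"   # hypothetical image file exported by the server
NBD_PORT = 10809            # the port registered for NBD, reused here for flavour

def serve(conn):
    with open(BACKING_FILE, "rb") as image:
        while True:
            header = conn.recv(12)                 # 8-byte offset + 4-byte length
            if len(header) < 12:
                break                              # client closed the connection
            offset, length = struct.unpack("!QI", header)
            image.seek(offset)                     # read the requested block range
            conn.sendall(image.read(length))

def server():
    sock = socket.socket()
    sock.bind(("0.0.0.0", NBD_PORT))
    sock.listen(1)
    while True:
        conn, _ = sock.accept()
        threading.Thread(target=serve, args=(conn,), daemon=True).start()

def client_read(host, offset, length):
    """Ask the remote server for `length` bytes starting at `offset`."""
    with socket.create_connection((host, NBD_PORT)) as conn:
        conn.sendall(struct.pack("!QI", offset, length))
        data = b""
        while len(data) < length:
            chunk = conn.recv(length - len(data))
            if not chunk:
                break
            data += chunk
        return data
```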
Network block device
[ "Technology", "Engineering" ]
539
[ "Computer networking", "Computer science", "Computer engineering" ]
16,020,608
https://en.wikipedia.org/wiki/Lean%20laboratory
A lean laboratory is one which is focused on processes, procedures, and infrastructure that deliver results in the most efficient way in terms of cost, speed, or both. Lean laboratory is a management and organization process derived from the concept of lean manufacturing and the Toyota Production System (TPS). The goal of a lean laboratory is to reduce resource usage and costs while improving productivity, staff morale, and laboratory-driven outcomes. Overview Manufacturing companies, including medical device and pharmaceutical manufacturers, operate in highly regulated environments which often necessitate a great deal of resources, time, and money being expended in the testing, release, and quality assurance of their products. Since the early 1990s, there has been a more widespread drive to adopt lean approaches both in the manufacturing and testing of products. The advances in lean thinking developed and refined in the automotive industry, initially by Toyota (TPS), are now being used as best practices across most manufacturing sectors. The idea of the lean laboratory shares its origins with lean manufacturing and uses the same tools to deliver the most efficient and least wasteful processes, tools such as Kaizen, Just In Time (JIT), Heijunka, Kanban, and Six Sigma. The principles of lean manufacturing have been difficult at times to migrate to laboratories because they are quite different from manufacturing environments. In the hospital laboratory, for example, difficulties arise with the "staunch adherence to traditional laboratory practices, complexity of workflow, and marked variability in sample numbers." In pharmaceutical and biopharmaceutical labs, "the limiting belief" that procedures are so different that lean won't work often slows down adoption. Compared to manufacturing environments, most analytical and microbiological laboratories have a relatively low volume of samples but a high degree of variability and complexity. Many standard lean tools are not a good fit; however, lean can still be applied to these types of labs. A generic approach is not suitable for laboratories, but careful adaptation of the techniques based on a thorough understanding of lab operations will deliver significant benefits in terms of cost, speed, or both. Conventional laboratories It is a common occurrence for testing laboratories to suffer from long and variable lead times. Some of the problems which can be attributed to conventional or "non-lean" laboratories include the following. Lack of focus Analysts and microbiologists are typically focused on test accuracy and individual test run efficiency. Very often, personnel are dedicated to specific tests and there is little or no control of the progress of individual samples through a sometimes highly variable test routing that can be dependent on product type and/or the intended market. Long and variable lead times In many test laboratories, it is normal to find queues in front of each test where individual samples wait until enough similar samples arrive to constitute an "efficient test run." This approach causes long and variable lead times and, contrary to popular belief, does not result in higher productivity. Ineffective "fast track" systems To deal with the long lead times, "fast track" systems are often developed in an effort to deal with urgent samples, but these often become unworkable. Frequently, the proportion of samples designated as priority becomes so large that fast tracking quickly becomes ineffective.
High levels of work in progress Laboratories often maintain high levels of work in process (WIP), which inevitably results in significant (non-value-adding) effort being expended in controlling, tracking, and prioritizing samples and in planning analyst work. Companies often respond to this situation by investing in a laboratory information management system (LIMS) or some other IT system. However, these systems do not in themselves improve performance. The underlying process by which work is organized and moves through the lab must first be re-engineered based on lean principles. Volatile incoming workload For many testing laboratories, the incoming workload is inherently volatile, with significant peaks and dips. This causes low productivity (during dips) and/or poor lead time performance (during peaks). Very often the capacity of the lab is not well understood, and there is no mechanism to level or smooth the workload. Implementing lean in the lab To address the above problems and issues, a lean laboratory uses lean principles to eliminate waste or muda. There are a number of principles that can be used, but the goal is always primarily focused on improving measurable performance and/or reducing costs. Specify value The first step in designing any lean laboratory is to specify value. Every activity in the laboratory is identified and categorized as "value added," "non-value added" (from the customer's perspective), and "incidental." Incidental work is not value-adding in itself but is essential to enable value-adding tasks to be carried out. A significant focus of any lean lab initiative will be to eliminate or reduce the non-value-added activities. Identify the value stream Another key lean step is to develop value stream maps of the overall release process. This should avoid the error of working on point solutions that only end up moving a bottleneck to another process and therefore do not deliver overall improvements. For example, there is no real value in reducing analytical laboratory lead times below the time of a release constraint test in a microbiology lab. You can, however, use increased velocity to help "level the load" or to maximize individual test run efficiency. Make value flow and create pull A lean laboratory will normally have a defined sequence of tests and associated analyst roles that make good use of people and equipment. A key principle is to flow work through the laboratory so that once testing begins on a sample, it is kept moving and not allowed to queue between tests. This creates a focus and drive to reduce throughput time, which can be converted into a lead-time reduction or used to allow samples to wait in an incoming queue to facilitate level loading and/or grouping for efficiency. "Pull" is interpreted as testing according to customer priority. If this is not inherent in the order in which samples arrive, then the samples are taken from an incoming queue according to customer demand and thereafter processed in FIFO order with no overtaking. Level the load and the mix At its simplest, leveling the load (overall workload) and the mix (the mix of sample types) is about putting the same amount of work into the lab on a daily basis. This is probably the most critical step and potentially the most beneficial for the majority of testing laboratories. Successfully leveling a volatile load and mix will significantly improve productivity and/or lead time. The productivity improvement can be used to provide additional capacity or converted into a cost reduction. 
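The leveling idea described above can be made concrete with a small calculation. The sketch below is only an illustration of smoothing a volatile incoming workload into a fixed daily release rate with FIFO processing; the arrival counts and the 30-samples-per-day capacity are hypothetical values, not figures from any lean-laboratory methodology.

```python
from collections import deque

def level_load(daily_arrivals, daily_capacity):
    """Release at most daily_capacity samples per day, oldest first (FIFO, no overtaking)."""
    backlog = deque()
    plan = []
    for day, arrived in enumerate(daily_arrivals):
        backlog.extend([day] * arrived)       # samples arriving today join the incoming queue
        released = min(daily_capacity, len(backlog))
        for _ in range(released):
            backlog.popleft()                 # take the oldest waiting samples first
        plan.append(released)
    return plan, len(backlog)

# Hypothetical volatile week, leveled to 30 samples per day.
arrivals = [55, 10, 40, 5, 60, 15, 25]
plan, remaining = level_load(arrivals, daily_capacity=30)
print(plan, remaining)   # [30, 30, 30, 20, 30, 30, 30] 10
```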
Eliminate waste (muda) Lean laboratories continuously look to develop solutions and re-engineer processes to eliminate or reduce the non-value added and incidental tasks identified when specifying value. Manage performance An essential part of lean in the laboratory is to manage and review a lab's performance daily, ensuring that Key Performance Indicators (KPI's) are good and that the overall laboratory process is in control. References External links Thinking Lean, by Tom Reynolds Articles and reports on Lean Laboratory. Selected case studies on lean laboratory implementation Power of Lean in the Laboratory: A Clinical Application, by Jennifer Blaha and MariJane White Laboratory types Lean manufacturing cs:Lean
Lean laboratory
[ "Chemistry", "Engineering" ]
1,457
[ "Laboratory types", "Lean manufacturing" ]
16,020,703
https://en.wikipedia.org/wiki/Single-molecule%20real-time%20sequencing
Single-molecule real-time (SMRT) sequencing is a parallelized single molecule DNA sequencing method. Single-molecule real-time sequencing utilizes a zero-mode waveguide (ZMW). A single DNA polymerase enzyme is affixed at the bottom of a ZMW with a single molecule of DNA as a template. The ZMW is a structure that creates an illuminated observation volume that is small enough to observe only a single nucleotide of DNA being incorporated by DNA polymerase. Each of the four DNA bases is attached to one of four different fluorescent dyes. When a nucleotide is incorporated by the DNA polymerase, the fluorescent tag is cleaved off and diffuses out of the observation area of the ZMW where its fluorescence is no longer observable. A detector detects the fluorescent signal of the nucleotide incorporation, and the base call is made according to the corresponding fluorescence of the dye. Technology The DNA sequencing is done on a chip that contains many ZMWs. Inside each ZMW, a single active DNA polymerase with a single molecule of single stranded DNA template is immobilized to the bottom through which light can penetrate and create a visualization chamber that allows monitoring of the activity of the DNA polymerase at a single molecule level. The signal from a phospho-linked nucleotide incorporated by the DNA polymerase is detected as the DNA synthesis proceeds which results in the DNA sequencing in real time. Template preparation To prepare the library, DNA fragments are put into a circular form using hairpin adapter ligations. Phospholinked nucleotide For each of the nucleotide bases, there is a corresponding fluorescent dye molecule that enables the detector to identify the base being incorporated by the DNA polymerase as it performs the DNA synthesis. The fluorescent dye molecule is attached to the phosphate chain of the nucleotide. When the nucleotide is incorporated by the DNA polymerase, the fluorescent dye is cleaved off with the phosphate chain as a part of a natural DNA synthesis process during which a phosphodiester bond is created to elongate the DNA chain. The cleaved fluorescent dye molecule then diffuses out of the detection volume so that the fluorescent signal is no longer detected. Zero-Mode Waveguide The zero-mode waveguide (ZMW) is a nanophotonic confinement structure that consists of a circular hole in an aluminum cladding film deposited on a clear silica substrate. The ZMW holes are ~70 nm in diameter and ~100 nm in depth. Due to the behavior of light when it travels through a small aperture, the optical field decays exponentially inside the chamber. The observation volume within an illuminated ZMW is ~20 zeptoliters (20 X 10−21 liters). Within this volume, the activity of DNA polymerase incorporating a single nucleotide can be readily detected. Sequencing Performance Sequencing performance can be measured in read length, accuracy, and total throughput per experiment. PacBio sequencing systems using ZMWs have the advantage of long read lengths, although error rates are on the order of 5-15% and sample throughput is lower than Illumina sequencing platforms. On 19 Sep 2018, Pacific Biosciences [PacBio] released the Sequel 6.0 chemistry, synchronizing the chemistry version with the software version. Performance is contrasted for large-insert libraries with high molecular weight DNA versus shorter-insert libraries below ~15,000 bases in length. For larger templates average read lengths are up to 30,000 bases. 
For shorter-insert libraries, average read length are up to 100,000 bases while reading the same molecule in a circle several times. The latter shorter-insert libraries then yield up to 50 billion bases from a single SMRT Cell. History Pacific Biosciences (PacBio) commercialized SMRT sequencing in 2011, after releasing a beta version of its RS instrument in late 2010. RS and RS II At commercialization, read length had a normal distribution with a mean of about 1100 bases. A new chemistry kit released in early 2012 increased the sequencer's read length; an early customer of the chemistry cited mean read lengths of 2500 to 2900 bases. The XL chemistry kit released in late 2012 increased average read length to more than 4300 bases. On August 21, 2013, PacBio released a new DNA polymerase Binding Kit P4. This P4 enzyme has average read lengths of more than 4,300 bases when paired with the C2 sequencing chemistry and more than 5,000 bases when paired with the XL chemistry. The enzyme’s accuracy is similar to C2, reaching QV50 between 30X and 40X coverage. The resulting P4 attributes provided higher-quality assemblies using fewer SMRT Cells and with improved variant calling. When coupled with input DNA size selection (using an electrophoresis instrument such as BluePippin) yields average read length over 7 kilobases. On October 3, 2013, PacBio released new reagent combination for PacBio RS II, the P5 DNA polymerase with C3 chemistry (P5-C3). Together, they extend sequencing read lengths to an average of approximately 8,500 bases, with the longest reads exceeding 30,000 bases. Throughput per SMRT cell is around 500 million bases demonstrated by sequencing results from the CHM1 cell line. On October 15, 2014, PacBio announced the release of new chemistry P6-C4 for the RS II system, which represents the company's 6th generation of polymerase and 4th generation chemistry--further extending the average read length to 10,000 - 15,000 bases, with the longest reads exceeding 40,000 bases. The throughput with the new chemistry was estimated between 500 million to 1 billion bases per SMRT Cell, depending on the sample being sequenced. This was the final version of chemistry released for the RS instrument. Throughput per experiment for the technology is both influenced by the read length of DNA molecules sequenced as well as total multiplex of a SMRT Cell. The prototype of the SMRT Cell contained about 3000 ZMW holes that allowed parallelized DNA sequencing. At commercialization, the SMRT Cells were each patterned with 150,000 ZMW holes that were read in two sets of 75,000. In April 2013, the company released a new version of the sequencer called the "PacBio RS II" that uses all 150,000 ZMW holes concurrently, doubling the throughput per experiment. The highest throughput mode in November 2013 used P5 binding, C3 chemistry, BluePippin size selection, and a PacBio RS II officially yielded 350 million bases per SMRT Cell though a human de novo data set released with the chemistry averaging 500 million bases per SMRT Cell. Throughput varies based on the type of sample being sequenced. With the introduction of P6-C4 chemistry typical throughput per SMRT Cell increased to 500 million bases to 1 billion bases. Sequel In September 2015, the company announced the launch of a new sequencing instrument, the Sequel System, that increased capacity to 1 million ZMW holes. With the Sequel instrument initial read lengths were comparable to the RS, then later chemistry releases increased read length. 
On January 23, 2017, the V2 chemistry was released. It increased average read lengths to between 10,000 and 18,000 bases. On March 8, 2018, the 2.1 chemistry was released. It increased average read length to 20,000 bases and half of all reads above 30,000 bases in length. Yield per SMRT Cell increased to 10 or 20 billion bases, for either large-insert libraries or shorter-insert (e.g. amplicon) libraries respectively. On 19 September 2018, the company announced the Sequel 6.0 chemistry with average read lengths increased to 100,000 bases for shorter-insert libraries and 30,000 for longer-insert libraries. SMRT Cell yield increased up to 50 billion bases for shorter-insert libraries. 8M Chip In April 2019 the company released a new SMRT Cell with eight million ZMWs, increasing the expected throughput per SMRT Cell by a factor of eight. Early access customers in March 2019 reported throughput over 58 customer run cells of 250 GB of raw yield per cell with templates about 15 kb in length, and 67.4 GB yield per cell with templates in higher weight molecules. System performance is now reported in either high-molecular-weight continuous long reads or in pre-corrected HiFi (also known as Circular Consensus Sequence (CCS)) reads. For high-molecular-weight reads roughly half of all reads are longer than 50 kb in length. The HiFi performance includes corrected bases with quality above Phred score Q20, using repeated amplicon passes for correction. These take amplicons up to 20kb in length. Application Single-molecule real-time sequencing may be applicable for a broad range of genomics research. For de novo genome sequencing, read lengths from the single-molecule real-time sequencing are comparable to or greater than that from the Sanger sequencing method based on dideoxynucleotide chain termination. The longer read length allows de novo genome sequencing and easier genome assemblies. Scientists are also using single-molecule real-time sequencing in hybrid assemblies for de novo genomes to combine short-read sequence data with long-read sequence data. In 2012, several peer-reviewed publications were released demonstrating the automated finishing of bacterial genomes, including one paper that updated the Celera Assembler with a pipeline for genome finishing using long SMRT sequencing reads. In 2013, scientists estimated that long-read sequencing could be used to fully assemble and finish the majority of bacterial and archaeal genomes. The same DNA molecule can be resequenced independently by creating the circular DNA template and utilizing a strand displacing enzyme that separates the newly synthesized DNA strand from the template. In August 2012, scientists from the Broad Institute published an evaluation of SMRT sequencing for SNP calling. The dynamics of polymerase can indicate whether a base is methylated. Scientists demonstrated the use of single-molecule real-time sequencing for detecting methylation and other base modifications. In 2012 a team of scientists used SMRT sequencing to generate the full methylomes of six bacteria. In November 2012, scientists published a report on genome-wide methylation of an outbreak strain of E. coli. Long reads make it possible to sequence full gene isoforms, including the 5' and 3' ends. This type of sequencing is useful to capture isoforms and splice variants. SMRT sequencing has several applications in reproductive medical genetics research when investigating families with suspected parental gonadal mosaicism. 
Long reads enable haplotype phasing in patients to investigate parent-of-origin of mutations. Deep sequencing enables determination of allele frequencies in sperm cells, of relevance for estimation of recurrence risk for future affected offspring. References External links Report from the BioIT World.com Report from New York Times Bioinformatics DNA sequencing methods Genomics
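One way to see how the circular-consensus ("HiFi"/CCS) reads described above can exceed Phred Q20 is to treat each pass over the circular template as an independent observation of the same base. The sketch below uses that deliberately simplified binomial majority-vote model with a hypothetical 10% per-pass error rate; real CCS software models errors far more carefully, so the numbers are illustrative only.

```python
import math
from math import comb

def majority_error(per_pass_error, passes):
    """P(the majority call over `passes` independent observations is wrong), binomial model."""
    return sum(comb(passes, k) * per_pass_error**k * (1 - per_pass_error)**(passes - k)
               for k in range(passes // 2 + 1, passes + 1))

def phred(p_error):
    """Phred quality score Q = -10 * log10(error probability)."""
    return -10 * math.log10(p_error)

for n in (1, 5, 9, 15):                      # odd pass counts avoid ties in the majority vote
    p = majority_error(0.10, n)              # assumed 10% raw error per pass
    print(f"{n:2d} passes: error {p:.2e}, ~Q{phred(p):.0f}")
```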
Single-molecule real-time sequencing
[ "Engineering", "Biology" ]
2,277
[ "Genetics techniques", "Biological engineering", "Bioinformatics", "DNA sequencing methods", "DNA sequencing" ]
16,020,896
https://en.wikipedia.org/wiki/HD%2029697
HD 29697 (Gliese 174, V834 Tauri) is a variable star of BY Draconis type in the constellation Taurus. It has an apparent magnitude around 8 and is approximately 43 ly away. Description HD 29697 is the Henry Draper Catalogue number of this star. It is also known by its designation in the Gliese Catalogue of Nearby Stars, Gliese 174, and its variable star designation V834 Tauri. V834 Tauri is a BY Draconis variable with maximum and minimum apparent magnitudes of 7.94 and 8.33 respectively, so it is never visible to the naked eye. The star has been examined for indications of a circumstellar disk using the Spitzer Space Telescope, but no statistically-significant infrared excess was detected. References BY Draconis variables Taurus (constellation) Tauri, V834 029697 021818 Gliese and GJ objects Durchmusterung objects K-type main-sequence stars
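As a rough illustration of what the figures above imply, the standard distance-modulus relation M = m - 5*log10(d_pc) + 5 converts the star's apparent magnitude and distance into an absolute magnitude. The sketch below uses the approximate values quoted above (magnitude about 7.94 at maximum, distance about 43 light-years); it is a textbook formula applied to rounded inputs, not a published value for this star.

```python
import math

LIGHT_YEARS_PER_PARSEC = 3.2616

def absolute_magnitude(apparent_mag, distance_ly):
    """Absolute magnitude from the distance modulus: M = m - 5*log10(d_pc) + 5."""
    d_pc = distance_ly / LIGHT_YEARS_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc) + 5

print(round(absolute_magnitude(7.94, 43), 1))   # about 7.3 for the rounded inputs above
```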
HD 29697
[ "Astronomy" ]
210
[ "Taurus (constellation)", "Constellations" ]
16,021,556
https://en.wikipedia.org/wiki/DialogOS
DialogOS is a graphical programming environment for designing computer systems that can converse with the user through voice. Dialogs are clicked together in a flowchart. DialogOS includes bindings to control Lego Mindstorms robots by voice and has bindings to SQL databases, as well as a generic plugin architecture to integrate with other types of backends. DialogOS is used in computer science courses in schools and universities to teach programming and to introduce beginners to the basic principles of human/computer interaction and dialog design. It has also been used in research systems. DialogOS was initially developed commercially by CLT Sprachtechnologie GmbH until its liquidation in 2017. The rights were then acquired by Saarland University and the software was released as open-source. Bindings to Lego Mindstorms NXT DialogOS can control the LEGO Mindstorms NXT Series. It uses sensor nodes to obtain values for the following sensors: noise sensor ultrasonic sensor touch sensor luminosity sensor References External links User interfaces Speech synthesis Applications of artificial intelligence Computational linguistics Robotics suites Pedagogic integrated development environments
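Since the article notes that dialogs are built as flowcharts and can be run as state machines, a minimal text-based sketch of that idea is shown below. It does not use the actual DialogOS API (whose classes and methods are not described here); the state names and transition table are purely illustrative.

```python
# A tiny flowchart-style dialog: each state has a prompt and maps user input to the next state.
DIALOG = {
    "greet": {"prompt": "Hello! Do you want the robot to move? (yes/no)",
              "next": {"yes": "move", "no": "bye"}},
    "move":  {"prompt": "Which direction? (left/right)",
              "next": {"left": "bye", "right": "bye"}},
    "bye":   {"prompt": "Goodbye!", "next": {}},
}

def run_dialog(start="greet"):
    state = start
    while True:
        node = DIALOG[state]
        print(node["prompt"])
        if not node["next"]:                 # terminal state: no outgoing edges
            return
        answer = input("> ").strip().lower()
        # Unrecognized input loops back to the same node, like a flowchart edge to itself.
        state = node["next"].get(answer, state)

if __name__ == "__main__":
    run_dialog()
```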
DialogOS
[ "Technology" ]
232
[ "User interfaces", "Natural language and computing", "Interfaces", "Computational linguistics" ]
16,021,637
https://en.wikipedia.org/wiki/Sliding%20window%20based%20part-of-speech%20tagging
Sliding window based part-of-speech tagging is used to part-of-speech tag a text. A high percentage of words in a natural language are words which out of context can be assigned more than one part of speech. The percentage of these ambiguous words is typically around 30%, although it depends greatly on the language. Solving this problem is very important in many areas of natural language processing. For example, in machine translation changing the part-of-speech of a word can dramatically change its translation. Sliding window based part-of-speech taggers are programs which assign a single part-of-speech to a given lexical form of a word, by looking at a fixed-size "window" of words around the word to be disambiguated. The two main advantages of this approach are: It is possible to automatically train the tagger, getting rid of the need to manually tag a corpus. The tagger can be implemented as a finite state automaton (Mealy machine). Formal definition Let $\Gamma = \{\gamma_1, \ldots, \gamma_{|\Gamma|}\}$ be the set of grammatical tags of the application, that is, the set of all possible tags which may be assigned to a word, and let $W = \{w_1, w_2, \ldots\}$ be the vocabulary of the application. Let $T : W \to P(\Gamma)$ be a function for morphological analysis which assigns each word $w$ its set of possible tags, $T(w) \subseteq \Gamma$, which can be implemented by a full-form lexicon or a morphological analyser. Let $\Sigma = \{\sigma_1, \ldots, \sigma_{|\Sigma|}\}$ be the set of word classes, which in general will be a partition of $W$ with the restriction that for each class $\sigma$ all of the words $w \in \sigma$ will receive the same set of tags, that is, all of the words in each word class belong to the same ambiguity class. Normally, $\Sigma$ is constructed in a way that for high-frequency words, each word class contains a single word, while for low-frequency words, each word class corresponds to a single ambiguity class. This allows good performance for high-frequency ambiguous words, and doesn't require too many parameters for the tagger. With these definitions it is possible to state the problem in the following way: given a text $w[1] w[2] \cdots w[L]$, each word $w[t]$ is assigned a word class $\sigma[t]$ (either by using the lexicon or the morphological analyser) in order to get an ambiguously tagged text $\sigma[1] \sigma[2] \cdots \sigma[L]$. The job of the tagger is to get a tagged text $\gamma[1] \gamma[2] \cdots \gamma[L]$ (with $\gamma[t] \in T(w[t])$) as correct as possible. A statistical tagger looks for the most probable tag sequence for an ambiguously tagged text: $\gamma^{*} = \arg\max_{\gamma[1] \cdots \gamma[L]} P(\gamma[1] \cdots \gamma[L] \mid \sigma[1] \cdots \sigma[L])$. Using Bayes' formula, this is converted into $\gamma^{*} = \arg\max_{\gamma[1] \cdots \gamma[L]} P(\gamma[1] \cdots \gamma[L])\, P(\sigma[1] \cdots \sigma[L] \mid \gamma[1] \cdots \gamma[L])$, where $P(\gamma[1] \cdots \gamma[L])$ is the probability of a particular tag sequence (syntactic probability) and $P(\sigma[1] \cdots \sigma[L] \mid \gamma[1] \cdots \gamma[L])$ is the probability that this tag sequence corresponds to the text (lexical probability). In a Markov model, these probabilities are approximated as products. The syntactic probabilities are modelled by a first-order Markov process: $P(\gamma[1] \cdots \gamma[L]) = \prod_{t=1}^{L+1} p(\gamma[t] \mid \gamma[t-1])$, where $\gamma[0]$ and $\gamma[L+1]$ are delimiter symbols. Lexical probabilities are independent of context: $P(\sigma[1] \cdots \sigma[L] \mid \gamma[1] \cdots \gamma[L]) = \prod_{t=1}^{L} p(\sigma[t] \mid \gamma[t])$. One form of tagging is to approximate the first probability formula by a local estimate, $p(\gamma[t] \mid C_{(-)}[t]\, \sigma[t]\, C_{(+)}[t])$, where $C_{(-)}[t]$ is the left context and $C_{(+)}[t]$ is the right context of the window, of sizes $N_{(-)}$ and $N_{(+)}$ respectively. In this way the sliding window algorithm only has to take into account a context of size $N_{(-)} + N_{(+)}$. For most applications $N_{(-)} = N_{(+)} = 1$. For example, to tag the ambiguous word "runs" in the sentence "He runs from danger", only the tags of the words "He" and "from" need to be taken into account. Further reading Sanchez-Villamil, E., Forcada, M. L., and Carrasco, R. C. (2005). "Unsupervised training of a finite-state sliding-window part-of-speech tagger". Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence, vol. 3230, p. 454-463 Computational linguistics
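A minimal sketch of the sliding-window idea with a context of one tag on each side (N(-) = N(+) = 1) is shown below. It is not the trained finite-state tagger of the cited paper: the toy lexicon and the hand-written context table are invented for illustration, and unseen contexts simply fall back to the word's first listed tag.

```python
# Toy lexicon: each word maps to its ambiguity class (list of possible tags).
LEXICON = {
    "he": ["PRON"], "runs": ["VERB", "NOUN"], "from": ["ADP"],
    "danger": ["NOUN"], "the": ["DET"],
}

# Disambiguation table keyed by (left tag, ambiguity class, right tag).
CONTEXT_RULES = {
    ("PRON", ("VERB", "NOUN"), "ADP"): "VERB",   # e.g. "he runs from"
    ("DET",  ("VERB", "NOUN"), "ADP"): "NOUN",   # e.g. "the runs from"
}

def tag(sentence):
    words = [w.lower() for w in sentence.split()]
    classes = [tuple(LEXICON[w]) for w in words]
    tags = []
    for i, cls in enumerate(classes):
        if len(cls) == 1:                        # unambiguous word
            tags.append(cls[0])
            continue
        left = classes[i - 1][0] if i > 0 else "#"              # '#' = sentence delimiter
        right = classes[i + 1][0] if i + 1 < len(classes) else "#"
        tags.append(CONTEXT_RULES.get((left, cls, right), cls[0]))
    return list(zip(words, tags))

print(tag("He runs from danger"))
# [('he', 'PRON'), ('runs', 'VERB'), ('from', 'ADP'), ('danger', 'NOUN')]
```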
Sliding window based part-of-speech tagging
[ "Technology" ]
736
[ "Natural language and computing", "Computational linguistics" ]
16,021,852
https://en.wikipedia.org/wiki/Mood%20board
A mood board is a type of visual presentation or 'collage' consisting of images, text, and samples of objects in a composition. It can be based on a set topic or can be any material chosen at random. A mood board can be used to convey a general idea or feeling about a particular topic. They may be physical or digital, and can be effective presentation tools. Uses Graphic designers, interior designers, industrial designers, photographers, user interface designers and other creative artists use mood boards to visually illustrate the style they wish to pursue. Amateur and professional designers alike may use them as an aid for more subjective purposes such as how they want to decorate their bedroom, or the vibe they want to convey through their fashion. Mood boards can also be used by authors to visually explain a certain style of writing, or an imaginary setting for a story line. In short, mood boards are not limited to interior decorating purposes, but serve as a visual tool to quickly inform others of the overall "feel" (or "flow") of an idea. In creative processes, mood boards can balance coordination and creative freedom. Mood boards can be used in marketing for advertisements and branding. They are used to help creative teams stay on the same page while also adhering to the image that the brand wants to project outward. They can also be helpful for sticking to a specific creative concept when creating a series of ads. Types Physical One way of creating a mood board is using a foam board which can be cut up with a scalpel and can also have spray mounted cut-outs put onto it. Cardboard, paper, and cork-board can also be used as an alternative base for a mood board. Some examples of ideas used to convey a mood are food, music, and colors. Mood boards can be decorated with string, stickers, pretty tape, magazine pictures, original art, original pictures, and fabrics, as well as any other decoration that happens to inspire the creator. They can take the form of various shapes and sizes. Digital Creating mood boards in a digital form allows for easier collaboration and modification. They can be created with digital design software, such as Adobe Creative Cloud Express and Figma, or online via sites such as Pinterest, Shuffles (a product of Pinterest), and ShopLook. Users of these platforms often use images that others have shared online to create a vision that they might not have necessarily been able to create themselves with the physical objects around them. See also Scrapbooking Concept art Aesthetics References Design Posters
Mood board
[ "Engineering" ]
513
[ "Design" ]
16,022,035
https://en.wikipedia.org/wiki/Planetary%20mnemonic
A planetary mnemonic refers to a phrase created to remember the planets and dwarf planets of the Solar System, with the order of words corresponding to increasing sidereal periods of the bodies. One simple visual mnemonic is to hold out both hands side-by-side with thumbs in the same direction (typically left-hand facing palm down, and right-hand palm up). The fingers of hand with palm down represent the terrestrial planets where the left pinkie represents Mercury and its thumb represents the asteroid belt, including Ceres. The other hand represents the giant planets, with its thumb representing trans-Neptunian objects, including Pluto. Nine planets Before 2006, Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto were considered as planets. Below are partial list of these mnemonics: "Men Very Easily Make Jugs Serve Useful Needs, Perhaps" – The structure of this sentence, which is current in the 1950s, suggests that it may have originated before Pluto's discovery. It can easily be trimmed back to reflect Pluto's demotion to dwarf planet. "My Very Elegant Mother Just Sat Upon Nine Porcupines" "Mary's violet eyes make Johnnie stay up nights pondering" With the IAU's 2006 definition of planet which reclassified Pluto as a dwarf planet, along with Ceres and Eris, these mnemonics became obsolete. Eight planets When Pluto's significance was changed to dwarf planet, mnemonics could no longer include the final "P". The first notable suggestion came from Kyle Sullivan of Lumberton, Mississippi, USA, whose mnemonic was published in the Jan. 2007 issue of Astronomy magazine: "My Violent Evil Monster Just Scared Us Nuts". In August 2006, for the eight planets recognized under the new definition, Phyllis Lugger, professor of astronomy at Indiana University suggested the following modification to the common mnemonic for the nine planets: "My Very Educated Mother Just Served Us Nachos". She proposed this mnemonic to Owen Gingerich, Chair of the International Astronomical Union (IAU) Planet Definition Committee and published the mnemonic in the American Astronomical Society Committee on the Status of Women in Astronomy Bulletin Board on August 25, 2006. It also appeared in Indiana University's IU News Room Star Trak on August 30, 2006. This mnemonic is used by the IAU on their website for the public. Others angry at the IAU's decision to "demote" Pluto composed sarcastic mnemonics in protest: "Many Very Educated Men Justify Stealing Unique Ninth" – found in Schott's Miscellany by Ben Schott. "Many Very Educated Men Just Screwed Up Nature" – this mnemonic is mentioned by Mike Brown, who discovered Eris. Another mnemonic which was changed from 9 to 8 planets was , "Most Very Elderly Men Just Slept Under Newspapers". Slightly risque versions include, "Mary's 'Virgin' Explanation Made Joseph Suspect Upstairs Neighbor". Eleven planets and dwarf planets In 2007, the National Geographic Society sponsored a contest for a new mnemonic of MVEMCJSUNPE, incorporating the then-eleven known planets and dwarf planets, including Eris, Ceres, and the newly demoted Pluto. On February 22, 2008, "My Very Exciting Magic Carpet Just Sailed Under Nine Palace Elephants", coined by 10-year-old Maryn Smith of Great Falls, Montana, was announced as the winner. The phrase was featured in the song 11 Planets by Grammy-nominated singer and songwriter Lisa Loeb and in the book 11 Planets: A New View of the Solar System by David Aguilar (). 
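A mnemonic of this kind works only if the initials of its words match the order of the bodies, which is easy to check mechanically. The short sketch below verifies the eleven-body MVEMCJSUNPE ordering discussed above; the helper function is purely illustrative.

```python
BODIES = ["Mercury", "Venus", "Earth", "Mars", "Ceres", "Jupiter",
          "Saturn", "Uranus", "Neptune", "Pluto", "Eris"]

def matches(mnemonic, bodies=BODIES):
    """True if the mnemonic's word initials spell the bodies' initials in order."""
    initials = [word[0].upper() for word in mnemonic.split()]
    return initials == [b[0] for b in bodies]

print(matches("My Very Exciting Magic Carpet Just Sailed Under Nine Palace Elephants"))  # True
```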
Thirteen planets and dwarf planets Since the National Geographic competition, two additional bodies were designated as dwarf planets, Makemake and Haumea, on July 11 and September 17, 2008 respectively. A 2015 New York Times article suggested some mnemonics including, "My Very Educated Mother Cannot Just Serve Us Nine Pizzas—Hundreds May Eat!" Longer mnemonics will be required in the future, if more of the possible dwarf planets are recognized as such by the IAU. However, at some point enthusiasm for new mnemonics will wane as the number of dwarf planets exceeds the number that people will want to learn (it is estimated that there may be up to 200 dwarf planets). See also Lists of astronomical objects References Science mnemonics Mnemonic Mnemonic Mnemonic Solar System
Planetary mnemonic
[ "Astronomy" ]
919
[ "Definition of planet", "Outer space", "Pluto's planethood", "Astronomical controversies", "Astronomical classification systems", "Solar System" ]
16,022,113
https://en.wikipedia.org/wiki/Transition%20metal%20hydride
Transition metal hydrides are chemical compounds containing a transition metal bonded to hydrogen. Most transition metals form hydride complexes and some are significant in various catalytic and synthetic reactions. The term "hydride" is used loosely: some of them are acidic (e.g., H2Fe(CO)4), whereas some others are hydridic, having H−-like character (e.g., ZnH2). Classes of metal hydrides Binary metal hydrides Many transition metals form compounds with hydrogen. These materials are called binary hydrides, because they contain only two elements. The hydrogenic ligand is assumed to have hydridic (H−-like) character. These compounds are invariably insoluble in all solvents, reflecting their polymeric structures. They often exhibit metal-like electrical conductivity. Many are nonstoichiometric compounds. Electropositive metals (Ti, Zr, Hf, Zn) and some other metals form hydrides with the stoichiometry MH or sometimes MH2 (M = Ti, Zr, Hf, V, Zn). The best studied are the binary hydrides of palladium, which readily forms a limiting monohydride. In fact, hydrogen gas diffuses through Pd windows via the intermediacy of PdH. Ternary metal hydrides Ternary metal hydrides have the formula AxMHn, where A is an alkali or alkaline earth metal cation, e.g. K+ and Mg2+. A celebrated example is K2ReH9, a salt containing two K+ ions and the ReH92− anion. Other homoleptic metal hydrides include the anions in Mg2FeH6 and Mg2NiH4. Some of these anionic polyhydrides satisfy the 18-electron rule, many do not. Because of their high lattice energy, these salts are typically not soluble in any solvents, a well-known exception being K2ReH9. Coordination complexes The most prevalent hydrides of the transition metals are metal complexes that contain a mix of ligands in addition to hydride. The range of coligands is large. Virtually all of the metals form such derivatives. The main exceptions include the late metals silver, gold, cadmium, and mercury, which form few or unstable complexes with direct M-H bonds. Examples of industrially useful hydrides are HCo(CO)4 and HRh(CO)(PPh3)3, which are catalysts for hydroformylation. The first molecular hydrides of the transition metals were reported in the 1930s by Walter Hieber and coworkers. They described H2Fe(CO)4 and HCo(CO)4. After a hiatus of several years, and following the release of German war documents on the postulated role of HCo(CO)4 in hydroformylation, several new hydrides were reported in the mid-1950s by three prominent groups in organometallic chemistry: HRe(C5H5)2 by Geoffrey Wilkinson, HMo(C5H5)(CO)3 by E. O. Fischer, and HPtCl(PEt3)2 by Joseph Chatt. Thousands of such compounds are now known. Cluster hydrides Like hydrido coordination complexes, many clusters feature terminal (bound by one M–H bond) hydride ligands. Hydride ligands can also bridge pairs of metals, as illustrated by [HW2(CO)10]−. Osmium carbonyl cluster hydrides can feature both terminal and doubly bridging hydride ligands. Hydrides can also span the triangular face of a cluster as in [Ag3{(PPh2)2CH2}(μ-H)(μ-Cl)]BF4. In the cluster [Co6H(CO)15]−, the hydride is "interstitial", occupying a position at the center of the Co6 octahedron. The assignment for cluster hydrides can be challenging, as illustrated by studies on Stryker's reagent, [Cu(PPh3)H]6. 
Synthesis Hydride transfer Nucleophilic main group hydrides convert many transition metal halides and cations into the corresponding hydrides: MLnX + LiBHEt3 → HMLn + BEt3 + LiX These conversions are metathesis reactions, and the hydricity of the product is generally less than that of the hydride donor. Classical (and relatively cheap) hydride donor reagents include sodium borohydride and lithium aluminium hydride. In the laboratory, more control is often offered by "mixed hydrides" such as lithium triethylborohydride and Red-Al. Alkali metal hydrides, e.g. sodium hydride, are not typically useful reagents. Elimination reactions Beta-hydride elimination and alpha-hydride elimination are processes that afford hydrides. The former is a common termination pathway in homogeneous polymerization. It also allows some transition metal hydride complexes to be synthesized from organolithium and Grignard reagents: MLnX + LiC4H9 → C4H9MLn + LiX C4H9MLn → HMLn + C4H8 Oxidative additions Oxidative addition of dihydrogen to a low-valent transition metal center is common. Several metals react directly with H2, though usually heat to a few hundred degrees is required. One example is titanium dihydride, which forms when titanium sponge is heated to 400-700 °C under an atmosphere of hydrogen. These reactions typically require high surface area metals. The direct reaction of metals with H2 is a step in catalytic hydrogenation. For solutions, a classic example involves Vaska's complex: IrCl(CO)(PPh3)2 + H2 ⇌ H2IrCl(CO)(PPh3)2 Oxidative addition also can occur to dimetallic complexes, e.g.: Co2(CO)8 + H2 ⇌ 2 HCo(CO)4 Many acids participate in oxidative additions, as illustrated by the addition of HCl to Vaska's complex: Ir(I)Cl(CO)(PPh3)2 + HCl → HIr(III)Cl2(CO)(PPh3)2 Heterolytic cleavage of dihydrogen Some metal hydrides form when a metal complex is treated with hydrogen in the presence of a base. The reaction involves no changes in the oxidation state of the metal and can be viewed as splitting H2 into a hydride (which binds to the metal) and a proton (which binds to the base). MLnx+ + base + H2 ⇌ HMLn(x-1)+ + Hbase+ Such reactions are assumed to involve the intermediacy of dihydrogen complexes. Bifunctional catalysts activate H2 in this way. Thermodynamic considerations M-H bond dissociation energies shift by <6 kJ/mol upon substitution of CO by a phosphine ligand. The M-H bond can in principle cleave to produce a proton, a hydrogen radical, or a hydride. HMLn ⇌ MLn− + H+ HMLn ⇌ MLn• + H• HMLn ⇌ MLn+ + H− Although these properties are interrelated, they are not interdependent. A metal hydride can be thermodynamically a weak acid and a weak H− donor; it could also be strong in one category but not the other, or strong in both. The H− strength of a hydride, also known as its hydride donor ability or hydricity, corresponds to the hydride's Lewis base strength. Not all hydrides are powerful Lewis bases. The base strength of hydrides varies as much as the pKa of protons. This hydricity can be measured by heterolytically cleaving hydrogen between a metal complex and a base with a known pKa and then measuring the resulting equilibrium. This presupposes that the hydride doesn't heterolytically or homolytically react with itself to re-form hydrogen. A complex would homolytically react with itself if the homolytic M-H bond is worth less than half of the homolytic H-H bond. Even if the homolytic bond strength is above that threshold, the complex is still susceptible to radical reaction pathways. 
2 HMLnz ⇌ 2 MLnz + H2 A complex will heterolytically react with itself when it is simultaneously a strong acid and a strong hydride donor. This conversion results in disproportionation, producing a pair of complexes with oxidation states that differ by two electrons. Further electrochemical reactions are possible. 2 HMLnz ⇌ MLnz+1 + MLnz-1 + H2 As noted, some complexes heterolytically cleave dihydrogen in the presence of a base. A portion of these complexes give hydride complexes acidic enough to be deprotonated a second time by the base. In this situation the starting complex can be reduced by two electrons with hydrogen and base. Even if the hydride is not acidic enough to be deprotonated, it can homolytically react with itself as discussed above for an overall one-electron reduction. Two deprotonations: MLnz + H2 + 2 base ⇌ MLnz-2 + 2 H+base Deprotonation followed by homolysis: 2 MLnz + H2 + 2 base ⇌ 2 MLnz-1 + 2 H+base Hydricity The affinity of a hydride ligand for a Lewis acid is called its hydricity: HMLn ⇌ MLn+ + H− Since hydride does not exist as a stable anion in solution, this equilibrium constant (and its associated free energy) are calculated from measurable equilibria. The reference point is the hydricity of a proton, which in acetonitrile solution is calculated at −76 kcal mol−1: H+ + H− ⇌ H2 ΔG298 = −76 kcal mol−1 Relative to a proton, most cations exhibit a lower affinity for H−. Some examples include: [Ni(dppe)2]2+ + H− ⇌ [HNi(dppe)2]+ ΔG298 = −63 kcal mol−1 [Ni(dmpe)2]2+ + H− ⇌ [HNi(dmpe)2]+ ΔG298 = −50.7 kcal mol−1 [Pt(dppe)2]2+ + H− ⇌ [HPt(dppe)2]+ ΔG298 = −53 kcal mol−1 [Pt(dmpe)2]2+ + H− ⇌ [HPt(dmpe)2]+ ΔG298 = −42.6 kcal mol−1 These data suggest that [HPt(dmpe)2]+ would be a strong hydride donor, reflecting the relatively high stability of [Pt(dmpe)2]2+. Kinetics and mechanism The rates of proton transfer to and between metal complexes are often slow. Many hydrides are inaccessible to study through Bordwell thermodynamic cycles. As a result, kinetic studies are employed to elucidate the relevant thermodynamic parameters. Generally, hydrides derived from first-row transition metals display the most rapid kinetics, followed by the second- and third-row metal complexes. Structure and bonding The determination of structures of metal hydrides can be challenging since hydride ligands do not scatter X-rays well, especially in comparison to the attached metal. Consequently, M-H distances are often underestimated, especially in early studies. Often the presence of a hydride ligand was deduced by the absence of a ligand at an apparent coordination site. Classically, the structures of metal hydrides were addressed by neutron diffraction, since hydrogen strongly scatters neutrons. Metal complexes containing terminal hydrides are common. In bi- and polynuclear compounds, hydrides usually are bridging ligands. Many of these bridging hydrides are oligomeric, such as Stryker's reagent, [(Ph3P)CuH]6, and clusters such as [Rh6(PR3)6H12]2+. The final bonding motif is the non-classical dihydride, also known as a sigma-bond dihydrogen adduct or simply a dihydrogen complex. The [W(PR3)2(CO)3(H2)] complex was the first well-characterized example of both a non-classical dihydride and a sigma-bond complex in general. X-ray diffraction is generally insufficient to locate hydrides in crystal structures and thus their location must be assumed. Neutron diffraction is required to unambiguously locate a hydride near a heavy atom crystallographically. 
Non-classical hydrides have also been studied with a variety of variable-temperature NMR techniques and H-D couplings. Classical terminal: M—H Classical bridging: M—H—M Non-classical: M—H2 Spectroscopy Late transition metal hydrides characteristically show upfield shifts in their proton NMR spectra. It is common for the M-H signal to appear between δ -5 and -25, with many examples outside this range, but generally all appear below 0 ppm. The large shifts arise from the influence of the excited states and from strong spin–orbit coupling (in contrast, 1H NMR shifts for organic compounds typically occur in the range δ 12 to 1). At one extreme is the 16e complex IrHCl2(PMe(t-Bu)2)2 with a shift of δ -50.5. The signals often exhibit spin–spin coupling to other ligands, e.g. phosphines. Metal hydrides exhibit IR bands near 2000 cm−1 for νM-H, although the intensities are variable. These signals can be identified by deuterium labeling. History An ill-defined copper hydride had been described in 1844 as resulting from treatment of copper salts with hypophosphorous acid. It was subsequently found that hydrogen gas was absorbed by mixtures of transition metal salts and Grignard reagents. The first well-defined metal hydrido complex was H2Fe(CO)4, obtained by the low-temperature protonation of an iron carbonyl anion. The next reported hydride complex was (C5H5)2ReH. The latter complex was characterized by NMR spectroscopy, which demonstrated the utility of this technique in the study of metal hydride complexes. In 1957, Joseph Chatt, Bernard L. Shaw, and L. A. Duncanson described trans-PtHCl(PEt3)2, the first non-organometallic hydride (i.e., lacking a metal-carbon bond). It was shown to be air-stable, correcting the long-held prejudice that metal hydrides would be unstable. References Transition metals Metal hydrides
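The hydricity free energies quoted above translate directly into equilibrium constants through the standard relation ΔG = -RT ln K. The sketch below performs that conversion at 298 K for the [HNi(dppe)2]+ value; it is a plain thermodynamic conversion of the figure already given in the article, with no additional data assumed.

```python
import math

R_KCAL = 1.987e-3   # gas constant, kcal mol^-1 K^-1

def equilibrium_constant(delta_g_kcal_per_mol, temperature_k=298.0):
    """Equilibrium constant from ΔG = -RT ln K."""
    return math.exp(-delta_g_kcal_per_mol / (R_KCAL * temperature_k))

# ΔG298 = -63 kcal/mol for [Ni(dppe)2]2+ + H- ⇌ [HNi(dppe)2]+, as quoted above
print(f"{equilibrium_constant(-63):.1e}")   # on the order of 10^46: hydride binding is very favorable
```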
Transition metal hydride
[ "Chemistry" ]
3,207
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
16,022,841
https://en.wikipedia.org/wiki/Shaft%20passer
A shaft passer is a device that allows a spoked wheel to rotate despite having a shaft (such as the axle of another wheel) passing between its spokes. The device is usually mentioned as a joke between nerds, in the manner of a fool's errand, however, examples do exist. In ~100 C.E. Heron describes a horse statue with the neck connected to its body with a shaft passer. A sword (acting as the "shaft") could slice through the neck but the head would not detach. In 2023 Blonder created a two and three dimensional shaft passer that allows a wire mesh cube to penetrate a mesh screen under its own weight. One of the earliest modern references to these devices was made by Richard Feynman, who was told by a colleague at Frankford Arsenal in Philadelphia that the cable-passing version of the device had been used during both world wars on German naval mine mooring cables, to prevent the mines from being caught by British cables swept along the sea bottom. The device was supposed to work using a spoked, rimless wheel that allows cables to pass through as it rotates. The ends of the spokes are widened, and the cable is held together by a short curved sleeve through which these spoke ends slide. External links , with diagram of the device. Tesseract cube, Blonder, 2023 References Practical jokes Wheels Mechanisms (engineering)
Shaft passer
[ "Engineering" ]
291
[ "Mechanical engineering", "Mechanisms (engineering)" ]
16,024,245
https://en.wikipedia.org/wiki/Airborne%20wind%20shear%20detection%20and%20alert%20system
The airborne wind shear detection and alert system, fitted in an aircraft, detects and alerts the pilot both visually and aurally of a wind shear condition. A reactive wind shear detection system is activated by the aircraft flying into an area with a wind shear condition of sufficient force to pose a hazard to the aircraft. A predictive wind shear detection system is activated by the presence of a wind shear condition ahead of the aircraft. In 1988, the U.S. Federal Aviation Administration (FAA) mandated that all turbine-powered commercial aircraft must have on-board wind shear detection systems by 1993. Airlines successfully lobbied to have commercial turbo-prop aircraft exempted from this requirement. In the predictive wind shear detection mode, the weather radar processor of the aircraft detects the presence of a microburst, a type of vertical wind shear condition, by detecting the Doppler frequency shift of the microwave pulses caused by the microburst ahead of the aircraft, and displays the area where it is present in the Navigation Display Unit (of the Electronic Flight Instrument System) along with an aural warning. History of development In June 1975, Eastern Air Lines Flight 66 crashed on approach to New York JFK Airport due to microburst-induced wind shear. Then, in July 1982, Pan Am Flight 759 crashed on takeoff from New Orleans International Airport in similar weather conditions. Finally, in August 1985, wind shear and inadequate reactions by the pilots caused the crash of Delta Air Lines Flight 191 on approach to Dallas/Fort Worth International Airport in a thunderstorm. On July 24, 1986, the FAA and NASA signed a memorandum of agreement to formally begin the Airborne Wind-Shear Detection and Avoidance Program (AWDAP). As a result, a wind-shear program was established in the Flight Systems Directorate of NASA's Langley Research Center. After five years of intensely studying various weather phenomena and sensor technologies, the researchers decided to validate their findings in actual flight conditions. They chose an extensively modified Boeing 737, which was equipped with a rear research cockpit in place of the forward section of the passenger cabin. A modified Rockwell Collins model 708 X-band ground-based radar unit was used in the AWDAP experiments. The real-time radar processor system used during 1992 flight experiments was a VME bus-based system with a Motorola 68030 host processor and three DSP boards. On September 1, 1994, the weather radar model RDR-4B of the Allied-Signal/Bendix (now Honeywell) became the first predictive wind-shear system to be certified for commercial airline operations. In the same year, Continental Airlines became the first commercial carrier to install an airborne predictive wind-shear detection system on its aircraft. By June 1996, Rockwell Collins and Westinghouse's Defense and Electronics Group (now Grumman/Martin) also came up with FAA-certified predictive wind-shear detection systems. The IEEE Intelligent Transportation Systems Society is conducting research for further development of this system. See also USAir Flight 1016 Delta Air Lines Flight 191 Pan Am Flight 759 Eastern Air Lines Flight 66 Terminal Doppler Weather Radar Low-level windshear alert system References Avionics Aircraft instruments Warning systems
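The Doppler principle mentioned above can be illustrated with a back-of-the-envelope calculation. The sketch below computes the two-way Doppler shift f_d = 2v/λ for a radial wind component; the 9.3 GHz carrier and the 25 m/s outflow are hypothetical values typical of an X-band weather radar and a strong microburst, not figures from any certified system.

```python
C = 3.0e8   # speed of light, m/s

def doppler_shift_hz(radial_velocity_ms, carrier_hz):
    """Two-way Doppler shift seen by a monostatic radar: f_d = 2*v/lambda."""
    wavelength = C / carrier_hz
    return 2.0 * radial_velocity_ms / wavelength

shift = doppler_shift_hz(radial_velocity_ms=25.0, carrier_hz=9.3e9)
print(f"{shift:.0f} Hz")   # about 1550 Hz for these assumed values
```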
Airborne wind shear detection and alert system
[ "Technology", "Engineering" ]
650
[ "Safety engineering", "Avionics", "Measuring instruments", "Aircraft instruments", "Warning systems" ]
16,025,799
https://en.wikipedia.org/wiki/Desktop%20video
Desktop video refers to a phenomenon lasting from the mid-1980s to the early 1990s when the graphics capabilities of personal computers such as the Commodore Amiga, the Apple Macintosh II and specially-upgraded IBM PC compatibles had advanced to the point where individuals and local broadcasters could use them for analog non-linear editing and vision mixing in video production. Despite the use of computers, desktop video should not be confused with digital video since the video data remained analog and it uses items like a VCR and a camcorder to record the video. Full-screen, full-motion video's vast storage requirements meant that the promise of digital encoding would not be realized on desktop computers for at least another decade. Description There were multiple models of genlock cards available to synchronize the content; the Newtek Video Toaster was commonly used in Amiga and PC systems, while Mac systems had the SuperMac Video Spigot and Radius VideoVision cards. Apple later introduced the Macintosh Quadra 840AV and Centris 660AV systems to specifically address this market. Desktop video was a parallel development to desktop publishing and enabled many small production houses and local TV stations to produce their own original content for the first time. Along with the advent of public-access cable channels, desktop video meant that television advertising became affordable for local businesses such as retailers, restaurants, real estate agents, contractors and auto dealers. As with the phrase desktop publishing, use of the term died out as the technologies to which it referred become the norm for any kind of video production. Broadcasting Home video Film and video terminology Film and video technology Multimedia Video editing software References
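The storage argument in the paragraph above is easy to quantify. The sketch below estimates the raw data rate of uncompressed full-motion video at a resolution plausible for the period; the 640x480, 24-bit, 30 fps figures are assumptions chosen for illustration, not a specification of any particular system.

```python
def uncompressed_rate_mb_per_s(width, height, bits_per_pixel, fps):
    """Raw video data rate in megabytes per second (no compression)."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1_000_000

rate = uncompressed_rate_mb_per_s(640, 480, 24, 30)
print(f"{rate:.1f} MB/s, {rate * 60:.0f} MB per minute")   # ~27.6 MB/s, ~1659 MB per minute
```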
Desktop video
[ "Technology" ]
325
[ "Multimedia" ]
5,507,044
https://en.wikipedia.org/wiki/Cyclic%20nucleotide%20phosphodiesterase
3′,5′-cyclic-nucleotide phosphodiesterases (EC 3.1.4.17) are a family of phosphodiesterases. Generally, these enzymes hydrolyze a nucleoside 3′,5′-cyclic phosphate to a nucleoside 5′-phosphate: nucleoside 3′,5′-cyclic phosphate + H2O = nucleoside 5′-phosphate They thus control the cellular levels of the cyclic second messengers and the rates of their degradation. Some examples of nucleoside 3′,5′-cyclic phosphates include: 3′,5′-cyclic AMP 3′,5′-cyclic dAMP 3′,5′-cyclic IMP 3′,5′-cyclic GMP 3′,5′-cyclic CMP There are 11 distinct phosphodiesterase families (PDE1–PDE11), with a variety of isoforms and splice variants, each having unique three-dimensional structure, kinetic properties, modes of regulation, intracellular localization, cellular expression, and inhibitor sensitivities. Nomenclature The systematic name for this enzyme is 3′,5′-cyclic-nucleotide 5′-nucleotidohydrolase. Other names in use include: PDE, cyclic 3′,5′-mononucleotide phosphodiesterase, cyclic 3′,5′-nucleotide phosphodiesterase, cyclic 3′,5′-phosphodiesterase, 3′,5′-nucleotide phosphodiesterase, 3′:5′-cyclic nucleotide 5′-nucleotidohydrolase, 3′,5′-cyclonucleotide phosphodiesterase, 3′,5′-cyclic nucleoside monophosphate phosphodiesterase, 3′:5′-monophosphate phosphodiesterase (cyclic CMP), cytidine 3′:5′-monophosphate phosphodiesterase (cyclic CMP), cyclic 3′,5′-nucleotide monophosphate phosphodiesterase, nucleoside 3′,5′-cyclic phosphate diesterase, and nucleoside-3′,5′-monophosphate phosphodiesterase. Function Phototransduction Retinal 3′,5′-cGMP phosphodiesterase (PDE) is located in photoreceptor outer segments and is an important enzyme in phototransduction. 3′,5′-cyclic-nucleotide phosphodiesterases in rod cells are oligomeric, made up of two heavy catalytic subunits, α (90 kDa) and β (85 kDa), and two lighter inhibitory γ subunits (11 kDa each). PDE in rod cells is activated by transducin, a G protein that is itself activated by GDP/GTP exchange in its α subunit, an exchange catalyzed by photolyzed rhodopsin. The transducin α subunit (Tα) is released from the β and γ complex and diffuses into the cytoplasmic solution to interact with and activate PDE. Activation by Tα There are two proposed mechanisms for the activation of PDE. The first proposes that the two inhibitory subunits are differentially bound, sequentially removable, and exchangeable between the native complex PDEαβγ2 and PDEαβ. GTP-bound Tα removes the inhibitory γ subunits one at a time from the αβ catalytic subunits. The second and more likely mechanism states that the GTP-Tα complex binds to the γ subunits but, rather than dissociating from the catalytic subunits, stays with the PDEαβ complex. Binding of the GTP-Tα complex to the PDE γ subunits likely causes a conformational shift in the PDE, allowing better access to the site of cGMP hydrolysis on PDEαβ. Structure The binding site for the PDE α and β subunits is likely to be in the central region of the PDE γ subunits. The C-terminus of the PDE γ subunit is likely to be involved in the inhibition of the PDE α and β subunits, in binding Tα, and in the GTPase-accelerating activity toward GTP-bound Tα. In cones, PDE is a homodimer of α chains, associated with several smaller subunits. Both rod and cone PDEs catalyze the hydrolysis of cAMP or cGMP to their 5′-monophosphate form. Both enzymes also bind cGMP with high affinity. The cGMP-binding sites are located in the N-terminal half of the protein sequence, while the catalytic core resides in the C-terminal portion. 
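The "kinetic properties" that distinguish the PDE families are usually summarized by Michaelis-Menten parameters, and the dependence of the hydrolysis rate on cyclic-nucleotide concentration follows directly from them. The sketch below evaluates v = Vmax·[S]/(Km + [S]); the Km and Vmax numbers are hypothetical placeholders, not measured values for any PDE isoform.

```python
def michaelis_menten_rate(substrate_uM, km_uM, vmax):
    """Hydrolysis rate v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate_uM / (km_uM + substrate_uM)

# Hypothetical parameters for a cGMP-hydrolyzing PDE: Km = 2 uM, Vmax = 100 (arbitrary units)
for cgmp in (0.5, 2.0, 10.0, 50.0):
    print(cgmp, round(michaelis_menten_rate(cgmp, km_uM=2.0, vmax=100.0), 1))
```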
Examples Human genes encoding proteins containing this domain include: PDE1A, PDE1B, PDE1B2, PDE1C, PDE2A, PDE3A, PDE3B, PDE4A, PDE4B, PDE4B5, PDE4C, PDE4D, PDE5A, PDE6A, PDE6B, PDE6C, PDE7A, PDE7B, PDE8A, PDE8B, PDE9A, PDE10A, PDE10A2, PDE11A, References Protein domains
Cyclic nucleotide phosphodiesterase
[ "Biology" ]
1,168
[ "Protein domains", "Protein classification" ]
5,507,084
https://en.wikipedia.org/wiki/H.%20Newell%20Martin
Henry Newell Martin, FRS (1 July 1848 – 27 October 1896) was a British physiologist and vivisection activist. Biography He was born in Newry, County Down, the son of Henry Martin, a Congregational minister. He was educated at University College, London and Christ's College, Cambridge, where he matriculated in 1870, took the Part I Natural Sciences in 1873, and graduated B.A. in 1874. At the University of London, where he had graduated B.Sc. in 1870, he went on to become M.B. in 1871, and D.Sc. in 1872. Martin worked as demonstrator to Michael Foster of Trinity College from 1870 to 1876; and was a Fellow of Christ's College for five years from 1874. Daniel Coit Gilman of Johns Hopkins University, on advice from Foster and Thomas Huxley, hired Martin in 1876 and set up the university's Biology Department around him. Martin was appointed to the university's first professorship of physiology, one of the first five full professors appointed to the Hopkins faculty. It was understood that he would be laying the foundation for a medical school: Johns Hopkins School of Medicine eventually opened in 1893. Having delivered the Croonian Lecture in 1883 on "The Direct Influence of Gradual Variations of Temperature upon the Rate of Beat of the Dog's Heart", Martin was elected a Fellow of the Royal Society in 1885. Martin's scientific career was curtailed around 1893, by alcoholism. He died on 27 October 1896 in Burley-in-Wharfedale, Yorkshire. Work Martin developed the first isolated mammalian heart lung preparation (described in 1881), which Ernest Henry Starling later used. He collaborated with George Nuttall, at Baltimore for a year around 1885. With the hiring of William Keith Brooks came the opening of the Chesapeake Zoological Laboratory. It conducted its work at stations from Beaufort, North Carolina, to the Bahamas, studying marine life and interdependencies between species. Views Martin represented and spread the views of the Cambridge school of physiology around Michael Foster, which took account in a basic way of the theory of evolution. He co-wrote A Course of Practical Instruction in Elementary Biology (1875) with Thomas Huxley, a leading proponent of evolution. It was based on Huxley's annual summer course, given since 1871, of laboratory teaching for future science teachers; and concentrated on a small number of types of plants and animals. Biology labs were under attack by those opposed to experiments on live animals, a procedure known as vivisection. Martin defended vivisection, stating "Physiology is concerned with the phenomena going on in living things, and vital processes cannot be observed in dead bodies." He invited visitors to his lab to observe experiments. Selected publications Introductory lecture, 23 October 1876. Various co-authors (including his wife for the 1st edition).10th edition online. Quoted by Fye. Collected articles. Personal life In 1879, Martin married Hetty Cary, widow of Confederate General John Pegram. References External links H. Newell Martin bibliography, medicalarchives.jhmi.edu. Retrieved 6 May 2014. 1848 births 1896 deaths British physiologists Fellows of the Royal Society People from Newry Vivisection activists
H. Newell Martin
[ "Chemistry" ]
665
[ "Vivisection activists", "Vivisection" ]
5,507,437
https://en.wikipedia.org/wiki/Blue%20pages
Blue pages are a telephone directory listing of American and Canadian state agencies, government agencies, federal government and other official entities, along with specific offices, departments, or bureaus located therein. Canada Canadian yellow-page listings currently indicate "Government Of Canada-See Government Listings In The Blue Pages"; in markets where the local telephone directory is a single volume, the blue pages and community information normally appear after the alphabetical white-page listings but before the yellow pages advertising. The blue page listings include both provincial and federal entities. United States In the United States, the blue pages included state, federal, and local offices, including service districts such as school districts, port authorities, public utility providers, parks districts, fire districts, and the like. The blue pages also provided information about government services, in addition to officials' names, addresses, telephone numbers, and other contact information. The color blue is likely derived from so-called government blue books, official publications printed by a government (such as that of a state) describing its organization, and providing a list of contact information. (The blue pages published in a printed telephone directory is usually quite abridged, compared to official blue books). Other The name "blue pages" has been used for various specialised directories by private-sector entities such as the internal IBM Staff directory. References External links "USA Blue Pages" - officialusa.com, an unofficial blue pages directory for US federal and state agencies Telephone numbers Directories
Blue pages
[ "Mathematics" ]
300
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
5,507,641
https://en.wikipedia.org/wiki/Context-sensitive%20half-life
Context-sensitive half-life or context-sensitive half-time is defined as the time taken for blood plasma concentration of a drug to decline by one half after an infusion designed to maintain a steady state (i.e. a constant plasma concentration) has been stopped. The "context" is the duration of infusion.

When a drug which has a multicompartmental pharmacokinetic model is given by intravenous infusion, it will initially distribute to the central compartment and then move out of this compartment into one or two peripheral compartments. Once this infusion is discontinued, drug continues to move into the peripheral compartments until an equilibrium is reached. At this time, the only way drug may leave plasma is by metabolism or excretion. As the plasma concentration falls, the concentration gradient of drug reverses and drug moves from the peripheral compartments back into plasma, maintaining the plasma concentration of the drug and often prolonging the pharmacological effect.

If an infusion has reached steady state, then the context-sensitive half-life is equal to the terminal plasma half-life of the drug. Otherwise it will be shorter than the terminal elimination half-life. Remifentanil is relatively context-insensitive, whilst fentanyl and thiopentone are examples of drugs which have significant context-sensitive changes in their half-life.

Definition
The context-sensitive half-time is the time required for the plasma drug concentration to decline by 50% after terminating an infusion of a particular duration; the drug is administered continuously until that point.

Pharmacokinetics
Many drugs follow the multi-compartment model. In other words, a drug may have a preference for a particular body compartment, which results in the majority of that drug ultimately settling in that compartment.
For highly polar drugs, very little will be in the tissues of the body.
For highly lipophilic drugs, the majority of the drug will be located in body tissues.

Initially, because the drug is given intravenously, the drug will distribute to the central compartment (i.e. the circulatory system). The drug (e.g. a lipophilic drug) will then move out of the central compartment and into the peripheral compartments. The movement from one compartment to another is driven by passive diffusion (movement from an area of high concentration to one of low concentration). A drug like fentanyl is very fat soluble. Initial doses ‘wear off’ relatively quickly because the drug redistributes to adipose tissue. However, if the infusion is ongoing, the peripheral compartment (body tissue) will build up a large store of fentanyl.

The duration of infusion determines whether or not steady state was reached. Once a drug enters the body, elimination and distribution begin: the drug present in the central compartment (i.e. the circulatory system) is being distributed into the tissues and eliminated at the same time. At steady state, the concentration of free drug in the central compartment (i.e. the circulatory system) is equal to the concentration of free drug in the peripheral compartment (i.e. body tissues).

If steady state is reached, the context-sensitive half-life is equal to the elimination half-life:
Only free drug that is in the plasma is metabolised.
Metabolism causes the concentration of free drug in the central compartment to decrease.
Due to passive diffusion, free drug will leave the peripheral compartment (i.e. tissues) and enter the central compartment, replenishing any drug that was metabolised from the plasma.

If steady state is not reached, the context-sensitive half-life is shorter than the elimination half-life:
Only free drug that is in the plasma is metabolised.
Overall, the entire body has less lipophilic drug: the infusion was stopped earlier, so not as much drug was able to enter the peripheral compartment. Because steady state is not reached, the peripheral compartment (i.e. tissues) has less free drug than the central compartment.
The drug continues to move into the peripheral compartment until equilibrium is reached; it moves by passive diffusion into the peripheral compartment because that compartment has less free drug.
Once equilibrium is reached, the only other way the drug is able to leave the plasma is by elimination. This causes the free drug concentration in the central compartment to fall.
As the plasma concentration falls, the concentration gradient of drug reverses and drug moves from the peripheral compartment (i.e. tissues) back into plasma, maintaining the plasma concentration of the drug.

Remifentanil is relatively context-insensitive. Fentanyl and thiopental (thiopentone) are examples of drugs which have significant context-sensitive changes in their half-life.

References

Pharmacokinetics
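To make the behaviour above concrete, the following is a minimal numerical sketch, not part of the original text, of a two-compartment pharmacokinetic model integrated with a simple Euler scheme. The infusion rate, rate constants, and central volume are hypothetical values chosen only to mimic a lipophilic drug with a large peripheral store; the script shows that the time needed for the plasma concentration to fall by 50% after stopping the infusion grows as the infusion is run for longer.

# Minimal sketch (not from the article): two-compartment PK model, Euler integration.
# All parameters are hypothetical and do not describe any real drug.

def context_sensitive_half_time(infusion_hours, rate_mg_per_h=10.0,
                                k10=1.2, k12=2.0, k21=0.25, v1=10.0,
                                dt=0.001):
    """Time (h) for plasma concentration to fall by 50% after an infusion
    of the given duration is stopped."""
    a1 = a2 = 0.0                 # drug amount in central / peripheral compartment (mg)
    t = 0.0
    # Phase 1: constant-rate infusion into the central compartment.
    while t < infusion_hours:
        da1 = rate_mg_per_h - (k10 + k12) * a1 + k21 * a2
        da2 = k12 * a1 - k21 * a2
        a1 += da1 * dt
        a2 += da2 * dt
        t += dt
    c_stop = a1 / v1              # plasma concentration at the moment the infusion stops
    # Phase 2: no input; find when the concentration reaches half of c_stop.
    elapsed = 0.0
    while a1 / v1 > 0.5 * c_stop:
        da1 = -(k10 + k12) * a1 + k21 * a2
        da2 = k12 * a1 - k21 * a2
        a1 += da1 * dt
        a2 += da2 * dt
        elapsed += dt
    return elapsed

for hours in (0.5, 1, 2, 4, 8, 16):
    print(f"{hours:>4} h infusion -> half-time ~ {context_sensitive_half_time(hours):.2f} h")

Running the script prints one half-time per infusion duration; with these made-up constants the printed half-time increases steadily with the duration of the infusion, which is the qualitative behaviour described above for drugs such as fentanyl and thiopental.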
Context-sensitive half-life
[ "Chemistry" ]
956
[ "Pharmacology", "Pharmacokinetics" ]
5,507,751
https://en.wikipedia.org/wiki/Race%2C%20Evolution%2C%20and%20Behavior
Race, Evolution, and Behavior: A Life History Perspective is a book by Canadian psychologist and author J. Philippe Rushton. Rushton was a professor of psychology at the University of Western Ontario for many years, and the head of the controversial Pioneer Fund. The first unabridged edition of the book came out in 1995, and the third, latest unabridged edition came out in 2000; abridged versions were also distributed. Rushton argues that race is a valid biological concept and that racial differences frequently range in a continuum across 60 different behavioral and anatomical variables, with Mongoloids (East Asians) at one end of the continuum, Negroids (Sub-Saharan Black Africans) at the opposite extreme, and Caucasoids (Europeans) in the middle. The book was generally received negatively, its methodology and conclusions being criticized by many experts. The aggressive marketing strategy also received a lot of criticism. The book received positive reviews by some researchers, many of whom were personally associated with Rushton and with the Pioneer Fund which funded much of Rushton's research. The book has been examined as an example of Pioneer's funding of scientific racism, while psychologist Michael Howe has identified the book as part of a movement, begun in the 1990s, to promote a racial agenda in social policy. Summary The book grew out of Rushton's 1989 paper, "Evolutionary Biology and Heritable Traits (With Reference to Oriental-White-Black Difference)". The 1st unabridged edition was published in 1995, the 2nd unabridged edition in 1997, and the 3rd unabridged edition in 2000. Rushton argues that Mongoloid, Caucasoid and Negroid populations fall consistently into the same one-two-three way pattern when compared on a list of sixty distinctly different behavioral and anatomical traits and variables. Rushton uses averages of hundreds of studies, modern and historical, to assert the existence of this pattern. Rushton's book is focused on what he considers the three broadest racial groups, and does not address other populations such as Southeast Asians and Australian Aborigines. The book argues that Mongoloids, on average, are at one end of a continuum, that Negroids, on average, are at the opposite end of that continuum, and that Caucasoids rank in between Mongoloids and Negroids, but closer to Mongoloids. His continuum includes both external physical characteristics and personality traits. Differential K theory Differential K theory is a debunked theory proposed by Rushton, which attempts to apply r/K selection theory to human races. According to Rushton, this theory explains race differences in fertility, IQ, criminality, and sexual anatomy and behavior. The theory also hypothesizes that a single factor, the "K factor", affects multiple population statistics Rushton referred to as "life-history traits". This theory has been widely rejected as unscientific or pseudoscientific. Rushton's work includes logical errors, cites poor-quality sources, ignored contrary sources, and cites sources which Rushton had misinterpreted or misunderstood. Responses According to Richard R. Valencia, the response to the first edition of Rushton's book was "overwhelmingly negative", with only a small number of supporters, many being, like Rushton, Pioneer Fund grantees, such as psychologists Arthur Jensen, Michael Levin, Richard Lynn, and Linda Gottfredson. 
Valencia identified the main areas of criticism as focusing on Rushton's use of "race" as a biological concept, a failure to appreciate the extent of variation within populations compared with that between populations, a false separation of genetics and environment, poor statistical methodology, a failure to consider alternative hypotheses, and the use of unreliable and inappropriate data to draw conclusions about the relationship between brain size and intelligence. According to Valencia, "experts in life history conclude that Rushton's (1995) work is pseudoscientific and racist." A more favorable review of the book came from Gottfredson, who wrote in Politics and the Life Sciences that the book "confronts us as few books have with the dilemmas wrought in a democratic society by individual and group differences in key human traits". Another favorable review of the book appeared in the National Review. Richard Lewontin (1996) argued that in claiming the existence of "major races", and that these categories reflected large biological differences, "Rushton moves in the opposite direction from the entire development of physical anthropology and human genetics for the last thirty years. Anthropologists no longer regard "race" as a useful concept in understanding human evolution and variation." The anthropologist C. Loring Brace (1996) concurred, stating that the book was an amalgamation of bad biology and inexcusable anthropology. It is not science but advocacy, and advocacy of 'racialism'". Similarly, anthropologist John Relethford (1995) criticized Rushton's model as "faulty at many points." Mailing controversy The first special abridged edition published under the Transaction Press name in 1999 caused considerable controversy when 40,000 copies were "mailed, unsolicited, to psychologists, anthropologists, and sociologists, many of whom were angered when they discovered that their identities and addresses had been obtained from their respective professional associations' mailing lists." The director of Transaction Press Irving Louis Horowitz, although he had defended the original edition of the book, "condemned the abridged edition as a 'pamphlet' that he had never seen or approved prior to its publication." A subsequent 2nd special abridged edition was published in 2000 with a rejoinder to Horowitz's criticisms under a new entity called The Charles Darwin Research Institute. According to Tucker, many academics who received the book unsolicited were outraged at its content, calling it "racial pornography" and a "vile piece of work"; at least one insisting on returning it to the publisher. Hermann Helmuth, a professor of anthropology at Trent University, said, "It is in a way personal and political propaganda. There is no basis to his scientific research." As an example of Pioneer Fund activity Race, Evolution, and Behavior has been cited as an example of the Pioneer Fund's activities in promoting scientific racism. Valencia notes that many of the supportive comments for the book come from Pioneer grantees like Rushton himself, and that a 100,000 copy print-run of the third edition was financed by Pioneer. The book is cited by psychologist William H. Tucker as an example of the Pioneer Fund's continued role "to subsidize the creation and distribution of literature to support racial superiority and racial purity." 
He described the mass distribution of the abridged third edition as part of a "public relations effort", and "the latest attempt to convince the nation of 'the completely different nature' of blacks and whites." He notes that bulk rates were offered "for distribution to media figures, especially columnists who write on race issues".

Reviews
- review of Race, Evolution, and Behavior and two other books
- review of Race, Evolution, and Behavior and three other books
- discusses the links of the Pioneer Fund to the distribution and positive reviews for the book

See also
Behavioural genetics
Behaviorism
Evolutionary psychology
Race and intelligence
Scientific racism

References

External links
Race, Evolution, and Behavior: A Life History Perspective - Copy of the 2nd Special Abridged Edition that the author put on his personal website

1995 non-fiction books
American non-fiction books
Books about evolutionary psychology
Books about human intelligence
Books about race and ethnicity
English-language non-fiction books
Forensic psychology
Human evolution books
Pseudoscience literature
Race and intelligence controversy
Scientific racism
Race, Evolution, and Behavior
[ "Biology" ]
1,568
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
5,507,760
https://en.wikipedia.org/wiki/Nu%CA%BButele
Nuutele is an island that consists of a volcanic tuff ring. It lies 1.3 km off the eastern end of Upolu island, Samoa, in the central South Pacific Ocean. It is the largest of the four Aleipata Islands. Nuutele and Nuulua, a smaller island in the Aleipata group, are significant conservation areas for native species of bird life. Nuutele features steep terrain, with vertical marine cliffs up to 180 m high. Nuutele is famous as a highly appreciated scenic landmark when viewed in the distance from the popular Lalomanu beach area on nearby Upolu Island, across the water.

See also
Samoa Islands
List of islands
Desert island

References

Uninhabited islands of Samoa
Volcanoes of Samoa
Tuff cones
Nature conservation in Samoa
Biota of Samoa
Atua (district)
Nuʻutele
[ "Biology" ]
195
[ "Biota by country", "Biota of Samoa" ]
5,508,354
https://en.wikipedia.org/wiki/Vasoactivity
A vasoactive substance is an endogenous agent or pharmaceutical drug that has the effect of either increasing or decreasing blood pressure and/or heart rate through its vasoactivity, that is, vascular activity (effect on blood vessels). By adjusting vascular compliance and vascular resistance, typically through vasodilation and vasoconstriction, it helps the body's homeostatic mechanisms (such as the renin–angiotensin system) to keep hemodynamics under control. For example, angiotensin, bradykinin, histamine, nitric oxide, and vasoactive intestinal peptide are important endogenous vasoactive substances.

Vasoactive drug therapy is typically used when a patient has the blood pressure and heart rate monitored constantly. The dosage is typically titrated (adjusted up or down) to achieve a desired effect or range of values as determined by competent clinicians. Vasoactive drugs are typically administered using a volumetric infusion device (IV pump). This category of drugs requires close observation of the patient, with near-immediate intervention required by the clinicians in charge of the patient's care.

Important vasoactive substances are angiotensin II, endothelin-1, and alpha-adrenergic agonists. Various vasoactive agents, such as prostanoids, phosphodiesterase inhibitors, and endothelin antagonists, are approved for the treatment of pulmonary arterial hypertension. The use of vasoactive agents for patients with pulmonary hypertension may cause harm and unnecessary expense to persons with left heart disease or hypoxemic types of lung diseases.

References

Drugs
Vasoactivity
[ "Chemistry" ]
346
[ "Pharmacology", "Products of chemical industry", "Medicinal chemistry stubs", "Chemicals in medicine", "Pharmacology stubs", "Drugs" ]
5,508,726
https://en.wikipedia.org/wiki/Expansive%20homeomorphism
In mathematics, the notion of expansivity formalizes the idea of points moving away from one another under the action of an iterated function. The idea of expansivity is fairly rigid, as the definition of positive expansivity, below, as well as the Schwarz–Ahlfors–Pick theorem demonstrate.

Definition
If $(X, d)$ is a metric space, a homeomorphism $f\colon X \to X$ is said to be expansive if there is a constant $\varepsilon_0 > 0$, called the expansivity constant, such that for every pair of distinct points $x, y$ in $X$ there is an integer $n$ such that
$$d(f^n(x), f^n(y)) \geq \varepsilon_0.$$
Note that in this definition, $n$ can be positive or negative, and so $f$ may be expansive in the forward or backward directions.
The space $X$ is often assumed to be compact, since under that assumption expansivity is a topological property; i.e. if $d'$ is any other metric generating the same topology as $d$, and if $f$ is expansive in $(X, d)$, then $f$ is expansive in $(X, d')$ (possibly with a different expansivity constant).
If $f\colon X \to X$ is a continuous map, we say that $f$ is positively expansive (or forward expansive) if there is a $\varepsilon_0 > 0$ such that, for any pair of distinct points $x, y$ in $X$, there is an $n \geq 0$ such that $d(f^n(x), f^n(y)) \geq \varepsilon_0$.

Theorem of uniform expansivity
Given $f$ an expansive homeomorphism of a compact metric space $X$, the theorem of uniform expansivity states that for every $\varepsilon > 0$ and $\delta > 0$ there is an $N > 0$ such that for each pair $x, y$ of points of $X$ such that $d(x, y) > \varepsilon$, there is an $n \in \mathbb{Z}$ with $|n| \leq N$ such that
$$d(f^n(x), f^n(y)) > c - \delta,$$
where $c$ is the expansivity constant of $f$ (proof).

Discussion
Positive expansivity is much stronger than expansivity. In fact, one can prove that if $X$ is compact and $f$ is a positively expansive homeomorphism, then $X$ is finite (proof).

External links
Expansive dynamical systems on scholarpedia

Dynamical systems
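As a concrete illustration (an addition, not part of the original article), the angle-doubling map T(x) = 2x mod 1 on the circle is a standard example of a positively expansive continuous map: any two distinct points are eventually separated by a circle distance of at least 1/4. The short Python sketch below, with the constant 1/4 chosen for this example, iterates two points and reports the first iterate at which that separation is reached. Because the doubling map is not invertible it is not a homeomorphism, so it illustrates the positively expansive case rather than the two-sided definition.

# Illustrative sketch: the doubling map T(x) = 2x mod 1 on the circle R/Z is
# positively expansive; distinct points eventually separate by at least 1/4.

def circle_distance(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def separation_time(x, y, constant=0.25, max_iter=200):
    """First n >= 0 with d(T^n x, T^n y) >= constant, or None if not reached."""
    for n in range(max_iter):
        if circle_distance(x, y) >= constant:
            return n
        x = (2.0 * x) % 1.0
        y = (2.0 * y) % 1.0
    return None

for gap in (0.3, 0.01, 1e-6):
    print(f"initial gap {gap:g}: separated after {separation_time(0.2, 0.2 + gap)} iterations")

The smaller the initial gap, the more doublings are needed before the separation constant is exceeded, but for any pair of distinct points the separation always occurs, which is exactly the content of the definition of positive expansivity.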
Expansive homeomorphism
[ "Physics", "Mathematics" ]
352
[ "Mechanics", "Dynamical systems" ]
5,508,909
https://en.wikipedia.org/wiki/Melanocortin%20receptor
Melanocortin receptors are members of the rhodopsin family of 7-transmembrane G protein-coupled receptors. There are five known members of the melanocortin receptor system, each with differing specificities for melanocortins:
MC1R is associated with pigmentation genetics.
MC2R is also known as the ACTH receptor or corticotropin receptor because it is specific for ACTH alone.
MC3R is associated with childhood growth, accrual of lean mass and onset of puberty.
MC4R: defects in MC4R are a cause of autosomal dominant obesity, accounting for 6% of all cases of early-onset obesity.
MC5R

These receptors are inhibited by the endogenous inverse agonists agouti signalling peptide and agouti-related peptide, and activated by synthetic (e.g. afamelanotide) and endogenous agonist melanocyte-stimulating hormones.

Selective ligands
Several selective ligands for the melanocortin receptors are known, and some synthetic compounds have been investigated as potential tanning, anti-obesity and aphrodisiac drugs, with tanning effects mainly from stimulation of MC1, while anorectic and aphrodisiac effects appear to involve both MC3 and MC4. MC1, MC3 and MC4 are widely expressed in the brain, and are also thought to be responsible for effects on mood and cognition.

Agonists
Non-selective
α-MSH
β-MSH
γ-MSH
Afamelanotide
Bremelanotide
Melanotan II
Modimelanotide
Setmelanotide
MC1-selective
BMS-470,539
MC4-selective
PF-00446687
PL-6983
THIQ
Unknown (but for certain MC2-acting)
Alsactide
Tetracosactide

Antagonists and inverse agonists
Non-selective
Agouti-related peptide
Agouti signalling peptide
MC2-selective
Atumelnant (CRN04894)
MC4-selective
HS-014
HS-024
MCL-0042
MCL-0129
MPB-10
SHU-9119 (agonist at MC1 and MC5, antagonist at MC3 and MC4)
Unknown
Semax

References

External links
Calculated spatial position of melanocortin-4 receptor in the lipid bilayer, inactive state with antagonist and active state with agonist

G protein-coupled receptors
Melanocortin receptor
[ "Chemistry" ]
511
[ "G protein-coupled receptors", "Signal transduction" ]
5,508,956
https://en.wikipedia.org/wiki/Ishango%20bone
The Ishango bone, discovered at the "Fisherman Settlement" of Ishango in the Democratic Republic of the Congo, is a bone tool and possible mathematical device that dates to the Upper Paleolithic era. The curved bone is dark brown in color, about 10 centimeters in length, and features a sharp piece of quartz affixed to one end, perhaps for engraving. Because the bone has been narrowed, scraped, polished, and engraved to a certain extent, it is no longer possible to determine what animal the bone belonged to, although it is assumed to have been a mammal. The ordered engravings have led many to speculate the meaning behind these marks, including interpretations like mathematical significance or astrological relevance. It is thought by some to be a tally stick, as it features a series of what has been interpreted as tally marks carved in three columns running the length of the tool, though it has also been suggested that the scratches might have been to create a better grip on the handle or for some other non-mathematical reason. Others argue that the marks on the object are non-random and that it was likely a kind of counting tool and used to perform simple mathematical procedures. Other speculations include the engravings on the bone serving as a lunar calendar. Dating to 20,000 years before present, it has been described as 'the oldest mathematical tool of humankind', though older engraved bones are also known, such as the approximately 26,000 year-old 'Wolf Bone' from Dolni Vestonice in the Czech Republic, and the approximately 40,000-year-old Lebombo bone from southern Africa. History Archaeological discovery The Ishango bone was found in 1950 by Belgian Jean de Heinzelin de Braucourt while exploring what was then the Belgian Congo. It was discovered in the area of Ishango near the Semliki River. Lake Edward empties into the Semliki which forms part of the headwaters of the Nile River (now on the border between modern-day Uganda and D.R. Congo). Some archaeologists believe the prior inhabitants of Ishango were a "pre-sapiens species". However, the most recent inhabitants, who gave the area its name, have no immediate connections with the primary settlement, which was "buried in a volcanic eruption". On an excavation, de Heinzelin discovered a bone about the "size of a pencil" amongst human remains and many stone tools in a small community that fished and gathered in this area of Africa. Professor de Heinzelin brought the Ishango bone to Belgium, where it was stored in the treasure room of the Royal Belgian Institute of Natural Sciences in Brussels. Several molds and copies were created from the petrified bone in order to preserve the delicate nature of the fragile artifact while being exported. A written request to the museum was required to see the artifact, as it was no longer on display for the public eye. Dating The artifact was first estimated to have originated between 9,000 BCE and 6,500 BCE, with numerous other analyses debating the bone to be as old as 44,000 years. However, the dating of the site where it was discovered was re-evaluated, and it is now believed to be about 20,000 years old (dating from between 18,000 BCE and 20,000 BCE). The dating of this bone is widely debated in the archaeological community as the ratio of Carbon-14 isotopes was upset by nearby volcanic activity. Interpretations Mathematical The 168 etchings on the bone are ordered in three parallel columns along the length of the bone, each marking with a varying orientation and length. 
The first column, or central column along the most curved side of the bone, is referred to as the M column, from the French word milieu (middle). The left and right columns are respectively referred to as G and D, from the French gauche (left) and droite (right). The parallel markings have led to various tantalizing hypotheses, such as that the implement indicates an understanding of decimals or prime numbers. Though these propositions have been questioned, it is considered likely by many scholars that the tool was used for mathematical purposes, perhaps including simple mathematical procedures or to construct a numeral system. Discoverer of the Ishango bone, de Heinzelin, suggested that the bone was evidence of knowledge of simple arithmetic, or at least that the markings were "deliberately planned". He based his interpretation on archaeological evidence, comparing "Ishango harpoon heads to those found in northern Sudan and ancient Egypt". This comparison led to the suggestion of a link between arithmetic processes conducted at Ishango with the "commencement of mathematics in ancient Egypt." The third column has been interpreted as a "table of prime numbers", as column G appears to illustrate prime numbers between 10 and 20, but this may be a coincidence. Historian of mathematics Peter S. Rudman argues that prime numbers were probably not understood until the early Greek period of about 500 BCE, and were dependent on the concept of division, which he dates to no earlier than 10,000 BCE. More recently, mathematicians Dirk Huylebrouck and Vladimir Pletser have proposed that the Ishango bone is a counting tool using the base 12 and sub-bases 3 and 4, and involving simple multiplication, somewhat comparable to a primitive slide rule. However, they have concluded that there is not sufficient evidence to confirm an understanding of prime numbers during this time period. Anthropologist Caleb Everett has also provided insight into interpretations of the bone, explaining that "the quantities evident in the groupings of marks are not random", and are likely evidence of prehistoric numerals. Everett suggests that the first column may reflect some "doubling pattern" and that the tool may have been used for counting and multiplication and also possibly as a "numeric reference table".

Astronomical
Alexander Marshack, an archaeologist from Harvard University, speculated that the Ishango bone represents numeric notation of a six-month lunar calendar after conducting a "detailed microscopic examination" of the bone. This idea arose from the fact that the markings on the first two rows add up to 60, corresponding with two lunar months, while the carvings on the last row sum to 48, or a month and a half. Marshack generated a diagram comparing the different sizes and phases of the Moon with the notches of the Ishango bone. There is some circumstantial evidence to support this alternate hypothesis, being that present day African societies utilize bones, strings, and other devices as calendars. However, critics in the field of archaeology have concluded that Marshack's interpretation is flawed, arguing that his analysis of the Ishango bone confines itself to a simple search for a pattern, rather than an actual test of his hypothesis. This has also led Claudia Zaslavsky to suggest that the creator of the tool may have been a woman, tracking the lunar phase in relation to the menstrual cycle.
Other explanations Mathematician Olivier Keller warns against the urge to project modern culture's perception of numbers onto the Ishango bone. Keller explains that this practice encourages observers to negate and possibly ignore alternative symbolic materials, those which are present in a range of media (on human remains, stones and cave art) from the Upper Paleolithic era and beyond which also deserve equitable investigation. Dirk Huylebrouck, in a review of the research on the object, favors the idea that the Ishango bone had some advanced mathematical use, stating that "whatever the interpretation, the patterns surely show the bone was more than a simple tally stick." He also remarks that "to credit the computational and astronomical reading simultaneously would be far-fetched", quoting mathematician George Joseph, who stated that "a single bone may well collapse under the heavy weight of conjectures piled onto it." Similarly, George Joseph, in "The Crest of the Peacock: Non-European Roots of Mathematics" also stated that the Ishango bone was "more than a simple tally." Moreover, he states that "certain underlying numerical patterns may be observed within each of the rows marked." But, regarding various speculative theories of its exact mathematical use, concluded that several are plausible but uncertain. See also Lebombo bone History of mathematics Paleolithic tally sticks References Further reading O. Keller, "The fables of Ishango, or the irresistible temptation of mathematical fiction" V. Pletser, D. Huylebrouck, "Contradictions and narrowness of views in "The fables of Ishango, or the irresistible temptation of mathematical fiction", answers and updates" Archaeological discoveries in Africa History of Africa History of mathematics Mathematical tools Bone carvings Upper Paleolithic 1950 archaeological discoveries
Ishango bone
[ "Mathematics", "Technology" ]
1,772
[ "Applied mathematics", "Mathematical tools", "History of computing", "nan" ]
5,509,033
https://en.wikipedia.org/wiki/Delegated%20administration
In computing, delegated administration or delegation of control describes the decentralization of role-based-access-control systems. Many enterprises use a centralized model of access control. For large organizations, this model scales poorly and IT teams become burdened with menial role-change requests. These requests — typically triggered when hire, fire, and role-change events occur in an organization — can incur high latency times or suffer from weak security practices.

Such delegation involves assigning a person or group specific administrative permissions for an Organizational Unit. In information management, this is used to create teams that can perform specific (limited) tasks for changing information within a user directory or database. The goal of delegation is to create groups with minimum permissions that grant the ability to carry out authorized tasks. Granting extraneous or superfluous permissions would create abilities beyond the authorized scope of work.

One best practice for enterprise role management entails the use of LDAP groups. Delegated administration refers to a decentralized model of role or group management. In this model, the application or process owner creates, manages and delegates the management of roles. A centralized IT team simply operates the service of directory, metadirectory, web interface for administration, and related components. Allowing the application or business process owner to create, manage and delegate groups supports a much more scalable approach to the administration of access rights. In a metadirectory environment, these roles or groups could also be "pushed" or synchronized with other platforms. For example, groups can be synchronized with native operating systems such as Microsoft Windows for use on an access control list that protects a folder or file. With the metadirectory distributing groups, the central directory is the central repository of groups. Some enterprise applications (e.g., PeopleSoft) support LDAP groups inherently. These applications are capable of using LDAP to call the directory for its authorization activities.

Web-based group management tools — used for delegated administration — therefore provide the following capabilities using a directory as the group repository:
Decentralized management of groups (roles) and access rights by business- or process-owners
Categorizing or segmenting users by characteristic, not by enumeration
Grouping users for e-mail, subscription, and access control
Reducing work process around maintenance of groups
Reproducing groups on multiple platforms and into disparate environments

Active Directory
In Microsoft Active Directory, delegation of administrative permissions is accomplished using the Delegation of Control Wizard. Types of permissions include managing and viewing user accounts, managing groups, managing group policy links, generating Resultant Set of Policy, and managing and viewing inetOrgPerson accounts. A use of Delegation of Control could be to give managers complete control of users in their own department. With this arrangement managers can create new users, groups, and computer objects, but only in their own OU.

See also
Access control
Identity management
User provisioning
RBAC

Reading list
Delegating Authority in Active Directory, TechNet Magazine
Built-in Groups vs. Delegation, WindowsSecurity.Com

References

Operating system technology
Computer access control
Decentralization
Active Directory
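As an illustration of the LDAP-group model described above, the sketch below uses the third-party Python ldap3 package to add a user to a group whose membership management has been delegated to a group owner. The host name, credentials, and distinguished names are hypothetical placeholders, and whether the operation succeeds depends entirely on the access-control rules configured in the directory; this is a sketch of the idea, not a reference implementation for any particular directory product.

# Hypothetical sketch: adding a user to an LDAP group with the third-party "ldap3" package.
# Host, credentials and DNs are placeholders.  In a delegated-administration model the
# bind account would be a group owner whose rights are limited to this one group.

from ldap3 import Server, Connection, MODIFY_ADD

server = Server("ldaps://directory.example.com")
conn = Connection(server,
                  user="cn=sales-group-owner,ou=People,dc=example,dc=com",
                  password="changeit",          # placeholder credential
                  auto_bind=True)

group_dn = "cn=sales-app-users,ou=Groups,dc=example,dc=com"
user_dn = "uid=jdoe,ou=People,dc=example,dc=com"

# Add the user to the group; the directory's ACLs decide whether the bound
# (delegated) account is actually allowed to modify this particular group.
conn.modify(group_dn, {"member": [(MODIFY_ADD, [user_dn])]})
print(conn.result)    # inspect the outcome reported by the server

conn.unbind()

The point of the design is visible in the bind account: it is not a directory-wide administrator, only the owner of one group, so the blast radius of a compromised or careless delegate is limited to that group's membership.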
Delegated administration
[ "Engineering" ]
635
[ "Cybersecurity engineering", "Computer access control" ]
5,509,325
https://en.wikipedia.org/wiki/Agouti-signaling%20protein
Agouti-signaling protein is a protein that in humans is encoded by the ASIP gene. It is responsible for the distribution of melanin pigment in mammals. Agouti interacts with the melanocortin 1 receptor to determine whether the melanocyte (pigment cell) produces phaeomelanin (a red to yellow pigment), or eumelanin (a brown to black pigment). This interaction is responsible for making distinct light and dark bands in the hairs of animals such as the agouti, which the gene is named after. In other species such as horses, agouti signalling is responsible for determining which parts of the body will be red or black. Mice with wildtype agouti will be grey-brown, with each hair being partly yellow and partly black. Loss of function mutations in mice and other species cause black fur coloration, while mutations causing expression throughout the whole body in mice cause yellow fur and obesity. The agouti-signaling protein (ASIP) is a competitive antagonist with alpha-Melanocyte-stimulating hormone (α-MSH) to bind with melanocortin 1 receptor (MC1R) proteins. Activation by α-MSH causes production of the darker eumelanin, while activation by ASIP causes production of the redder phaeomelanin. This means where and while agouti is being expressed, the part of the hair that is growing will come out yellow rather than black. Function In mice, the agouti gene encodes a paracrine signalling molecule that causes hair follicle melanocytes to synthesize the yellow pigment pheomelanin instead of the black or brown pigment eumelanin. Pleiotropic effects of constitutive expression of the mouse gene include adult-onset obesity, increased tumor susceptibility, and premature infertility. This gene is highly similar to the mouse gene and encodes a secreted protein that may (1) affect the quality of hair pigmentation, (2) act as an inverse agonist of alpha-melanocyte-stimulating hormone, (3) play a role in neuroendocrine aspects of melanocortin action, and (4) have a functional role in regulating lipid metabolism in adipocytes. In mice, the wild type agouti allele (A) presents a grey phenotype, however, many allele variants have been identified through genetic analyses, which result in a wide range of phenotypes distinct from the typical grey coat. The most widely studied allele variants are the lethal yellow mutation (Ay) and the viable yellow mutation (Avy) which are caused by ectopic expression of agouti. These mutations are also associated with yellow obese syndrome which is characterized by early onset obesity, hyperinsulinemia and tumorigenesis. The murine agouti gene locus is found on chromosome 2 and encodes a 131 amino acid protein. This protein signals the distribution of melanin pigments in epithelial melanocytes located at the base of hair follicles with expression being more sensitive on ventral hair than on dorsal hair. Agouti is not directly secreted in the melanocyte as it works as a paracrine factor on dermal papillae cells to inhibit release of melanocortin. Melanocortin acts on follicular melanocytes to increase production of eumelanin, a melanin pigment responsible for brown and black hair. When agouti is expressed, production of pheomelanin dominates, a melanin pigment that produces yellow or red colored hair. Structure Agouti signalling peptide adopts an inhibitor cystine knot motif. Along with the homologous Agouti-related peptide, these are the only known mammalian proteins to adopt this fold. The peptide consists of 131 amino acids. 
Mutations The lethal yellow mutation (Ay) was the first embryonic mutation to be characterized in mice, as homozygous lethal yellow mice (Ay/ Ay) die early in development, due to an error in trophectoderm differentiation. Lethal yellow homozygotes are rare today, while lethal yellow and viable yellow heterozygotes (Ay/a and Avy/a) remain more common. In wild-type mice agouti is only expressed in the skin during hair growth, but these dominant yellow mutations cause it to be expressed in other tissues as well. This ectopic expression of the agouti gene is associated with the yellow obese syndrome, characterized by early onset obesity, hyperinsulinemia and tumorigenesis. The lethal yellow (Ay) mutation is due to an upstream deletion at the start site of agouti transcription. This deletion causes the genomic sequence of agouti to be lost, except the promoter and the first non-encoding exon of Raly, a ubiquitously expressed gene in mammals. The coding exons of agouti are placed under the control of the Raly promoter, initiating ubiquitous expression of agouti, increasing production of pheomelanin over eumelanin and resulting in the development of a yellow phenotype. The viable yellow (Avy) mutation is due to a change in the mRNA length of agouti, as the expressed gene becomes longer than the normal gene length of agouti. This is caused by the insertion of a single intracisternal A particle (IAP) retrotransposon upstream to the start site of agouti transcription. In the proximal end of the gene, an unknown promoter then causes agouti to be constitutionally activated, and individuals to present with phenotypes consistent with the lethal yellow mutation. Although the mechanism for the activation of the promoter controlling the viable yellow mutation is unknown, the strength of coat color has been correlated with the degree of gene methylation, which is determined by maternal diet and environmental exposure. As agouti itself inhibits melanocortin receptors responsible for eumelanin production, the yellow phenotype is exacerbated in both lethal yellow and viable yellow mutations as agouti gene expression is increased. Viable yellow (Avy/a) and lethal yellow (Ay/a) heterozygotes have shortened life spans and increased risks for developing early onset obesity, type II diabetes mellitus and various tumors. The increased risk of developing obesity is due to the dysregulation of appetite, as agouti agonizes the agouti-related protein (AGRP), responsible for the stimulation of appetite via hypothalamic NPY/AGRP orexigenic neurons. Agouti also promotes obesity by antagonizing melanocyte-stimulating hormone (MSH) at the melanocortin receptor (MC4R), as MC4R is responsible for regulating food intake by inhibiting appetite signals. The increase in appetite is coupled to alterations in nutrient metabolism due to the paracrine actions of agouti on adipose tissue, increasing levels of hepatic lipogenesis, decreasing levels of lipolysis and increasing adipocyte hypertrophy. This increases body mass and leads to difficulties with weight loss as metabolic pathways become dysregulated. Hyperinsulinemia is caused by mutations to agouti, as the agouti protein functions in a calcium dependent manner to increase insulin secretion in pancreatic beta cells, increasing risks of insulin resistance. Increased tumor formation is due to the increased mitotic rates of agouti, which are localized to epithelial and mesenchymal tissues. Methylation and diet intervention Correct functioning of agouti requires DNA methylation. 
Methylation occurs in six guanine-cytosine (GC) rich sequences in the 5’ long terminal repeat of the IAP element in the viable yellow mutation. Methylation on a gene causes the gene to not be expressed because it will cause the promoter to be turned off. In utero, the mother's diet can cause methylation or demethylation. When this area is unmethylated, ectopic expression of agouti occurs, and yellow phenotypes are shown because the phaeomelanin is expressed instead of eumelanin. When the region is methylated, agouti is expressed normally, and grey and brown phenotypes (eumelanin) occur. The epigenetic state of the IAP element is determined by the level of methylation, as individuals show a wide range of phenotypes based on their degree of DNA methylation. Increased methylation is correlated with increased expression of the normal agouti gene. Low levels of methylation can induce gene imprinting which results in offspring displaying consistent phenotypes to their parents, as ectopic expression of agouti is inherited through non-genomic mechanisms. DNA methylation is determined in utero by maternal nutrition and environmental exposure. Methyl is synthesized de novo but attained through the diet by folic acid, methionine, betaine, and choline, as these nutrients feed into a consistent metabolic pathway for methyl synthesis. Adequate zinc and vitamin B12 are required for methyl synthesis as they act as cofactors for transferring methyl groups. When inadequate methyl is available during early embryonic development, DNA methylation cannot occur, which increases ectopic expression of agouti and results in the presentation of the lethal yellow and viable yellow phenotypes which persist into adulthood. This leads to the development of the yellow obese syndrome, which impairs normal development and increases susceptibility to the development of chronic disease. Ensuring maternal diets are high in methyl equivalents is a key preventive measure for reducing ectopic expression of agouti in offspring. Diet intervention through methyl supplementation reduces imprinting at the agouti locus, as increased methyl consumption causes the IAP element to become completely methylated and ectopic expression of agouti to be reduced. This lowers the proportion of offspring that present with the yellow phenotype and increases the number offspring that resemble agouti wild type mice with grey coats. Two genetically identical mice could look very different phenotypically due to the mothers' diets while the mice were in utero. If the mice has the agouti gene it can be expressed due to the mother eating a typical diet and the offspring would have a yellow coat. If the same mother had eaten a methyl-rich diet supplemented with zinc, vitamin B12, and folic acid then the offspring's agouti gene would likely become methylated, it wouldn't be expressed, and the coat color would be brown instead. In mice, the yellow coat color is also associated with health problems in mice including obesity and diabetes. Human homologue Agouti signaling protein (ASP) is the human homologue of murine agouti. It is encoded by the human agouti gene on chromosome 20 and is a protein consisting of 132 amino acids. It is expressed much more broadly than murine agouti and is found in adipose tissue, pancreas, testes, and ovaries, whereas murine agouti is solely expressed in melanocytes. ASP has 85% similarity to the murine form of agouti. 
As ectopic expression of murine agouti leads to the development of the yellow obese syndrome, this is expected to be consistent in humans. The yellow obese syndrome increases the development of many chronic diseases, including obesity, type II diabetes mellitus and tumorigenesis. ASP has similar pharmacological activation to murine agouti, as melanocortin receptors are inhibited through competitive antagonism. Inhibition of melanocortin by ASP can also be through non-competitive methods, broadening its range of effects. The function of ASP differs to murine agouti. ASP effects the quality of hair pigmentation whereas murine agouti controls the distribution of pigments that determine coat color. ASP has neuroendocrine functions consistent with murine agouti, as it agonizes via AgRP neurons in the hypothalamus and antagonizes MSH at MC4Rs which reduce satiety signals. AgRP acts as an appetite stimulator and increases appetite while decreasing metabolism. Because of these mechanisms, AgRP may be linked to increased body mass and obesity in both humans and mice. Over-expression of AgRP has been linked to obesity in males, while certain polymorphisms of AgRP have been linked to eating disorders like anorexia nervosa. The mechanism underlying hyperinsulinemia in humans is consistent with murine agouti, as insulin secretion is heightened through calcium sensitive signaling in pancreatic beta cells. The mechanism for ASP induced tumorigenesis remains unknown in humans. See also Agouti coloration genetics Agouti-related peptide Genomic imprinting Methylation Epigenetics References Further reading External links Peptides Peptide hormones Mammal genes Melanocortin receptor antagonists
Agouti-signaling protein
[ "Chemistry" ]
2,662
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
5,509,703
https://en.wikipedia.org/wiki/Species%20distribution
Species distribution, or species dispersion, is the manner in which a biological taxon is spatially arranged. The geographic limits of a particular taxon's distribution is its range, often represented as shaded areas on a map. Patterns of distribution change depending on the scale at which they are viewed, from the arrangement of individuals within a small family unit, to patterns within a population, or the distribution of the entire species as a whole (range). Species distribution is not to be confused with dispersal, which is the movement of individuals away from their region of origin or from a population center of high density. Range In biology, the range of a species is the geographical area within which that species can be found. Within that range, distribution is the general structure of the species population, while dispersion is the variation in its population density. Range is often described with the following qualities: Sometimes a distinction is made between a species' natural, endemic, indigenous, or native range, where it has historically originated and lived, and the range where a species has more recently established itself. Many terms are used to describe the new range, such as non-native, naturalized, introduced, transplanted, invasive, or colonized range. Introduced typically means that a species has been transported by humans (intentionally or accidentally) across a major geographical barrier. For species found in different regions at different times of year, especially seasons, terms such as summer range and winter range are often employed. For species for which only part of their range is used for breeding activity, the terms breeding range and non-breeding range are used. For mobile animals, the term natural range is often used, as opposed to areas where it occurs as a vagrant. Geographic or temporal qualifiers are often added, such as in British range or pre-1950 range. The typical geographic ranges could be the latitudinal range and elevational range. Disjunct distribution occurs when two or more areas of the range of a taxon are considerably separated from each other geographically. Factors affecting species distribution Distribution patterns may change by season, distribution by humans, in response to the availability of resources, and other abiotic and biotic factors. Abiotic There are three main types of abiotic factors: climatic factors consist of sunlight, atmosphere, humidity, temperature, and salinity; edaphic factors are abiotic factors regarding soil, such as the coarseness of soil, local geology, soil pH, and aeration; and social factors include land use and water availability. An example of the effects of abiotic factors on species distribution can be seen in drier areas, where most individuals of a species will gather around water sources, forming a clumped distribution. Researchers from the Arctic Ocean Diversity (ARCOD) project have documented rising numbers of warm-water crustaceans in the seas around Norway's Svalbard Islands. ARCOD is part of the Census of Marine Life, a huge 10-year project involving researchers in more than 80 nations that aims to chart the diversity, distribution and abundance of life in the oceans. Marine Life has become largely affected by increasing effects of global climate change. This study shows that as the ocean temperatures rise species are beginning to travel into the cold and harsh Arctic waters. Even the snow crab has extended its range 500 km north. 
Biotic Biotic factors such as predation, disease, and inter- and intra-specific competition for resources such as food, water, and mates can also affect how a species is distributed. For example, biotic factors in a quail's environment would include their prey (insects and seeds), competition from other quail, and their predators, such as the coyote. An advantage of a herd, community, or other clumped distribution allows a population to detect predators earlier, at a greater distance, and potentially mount an effective defense. Due to limited resources, populations may be evenly distributed to minimize competition, as is found in forests, where competition for sunlight produces an even distribution of trees. One key factor in determining species distribution is the phenology of the organism. Plants are well documented as examples showing how phenology is an adaptive trait that can influence fitness in changing climates. Physiology can influence species distributions in an environmentally sensitive manner because physiology underlies movement such as exploration and dispersal. Individuals that are more disperse-prone have higher metabolism, locomotor performance, corticosterone levels, and immunity. Humans are one of the largest distributors due to the current trends in globalization and the expanse of the transportation industry. For example, large tankers often fill their ballasts with water at one port and empty them in another, causing a wider distribution of aquatic species. Patterns on large scales On large scales, the pattern of distribution among individuals in a population is clumped. Bird wildlife corridors One common example of bird species' ranges are land mass areas bordering water bodies, such as oceans, rivers, or lakes; they are called a coastal strip. A second example, some species of bird depend on water, usually a river, swamp, etc., or water related forest and live in a river corridor. A separate example of a river corridor would be a river corridor that includes the entire drainage, having the edge of the range delimited by mountains, or higher elevations; the river itself would be a smaller percentage of this entire wildlife corridor, but the corridor is created because of the river. A further example of a bird wildlife corridor would be a mountain range corridor. In the U.S. of North America, the Sierra Nevada range in the west, and the Appalachian Mountains in the east are two examples of this habitat, used in summer, and winter, by separate species, for different reasons. Bird species in these corridors are connected to a main range for the species (contiguous range) or are in an isolated geographic range and be a disjunct range. Birds leaving the area, if they migrate, would leave connected to the main range or have to fly over land not connected to the wildlife corridor; thus, they would be passage migrants over land that they stop on for an intermittent, hit or miss, visit. Patterns on small scales On large scales, the pattern of distribution among individuals in a population is clumped. On small scales, the pattern may be clumped, regular, or random. Clumped Clumped distribution, also called aggregated distribution, clumped dispersion or patchiness, is the most common type of dispersion found in nature. In clumped distribution, the distance between neighboring individuals is minimized. This type of distribution is found in environments that are characterized by patchy resources. 
Animals need certain resources to survive, and when these resources become rare during certain parts of the year animals tend to "clump" together around these crucial resources. Individuals might be clustered together in an area due to social factors such as selfish herds and family groups. Organisms that usually serve as prey form clumped distributions in areas where they can hide and detect predators easily. Other causes of clumped distributions are the inability of offspring to independently move from their habitat. This is seen in juvenile animals that are immobile and strongly dependent upon parental care. For example, the bald eagle's nest of eaglets exhibits a clumped species distribution because all the offspring are in a small subset of a survey area before they learn to fly. Clumped distribution can be beneficial to the individuals in that group. However, in some herbivore cases, such as cows and wildebeests, the vegetation around them can suffer, especially if animals target one plant in particular. Clumped distribution in species acts as a mechanism against predation as well as an efficient mechanism to trap or corner prey. African wild dogs, Lycaon pictus, use the technique of communal hunting to increase their success rate at catching prey. Studies have shown that larger packs of African wild dogs tend to have a greater number of successful kills. A prime example of clumped distribution due to patchy resources is the wildlife in Africa during the dry season; lions, hyenas, giraffes, elephants, gazelles, and many more animals are clumped by small water sources that are present in the severe dry season. It has also been observed that extinct and threatened species are more likely to be clumped in their distribution on a phylogeny. The reasoning behind this is that they share traits that increase vulnerability to extinction because related taxa are often located within the same broad geographical or habitat types where human-induced threats are concentrated. Using recently developed complete phylogenies for mammalian carnivores and primates it has been shown that in the majority of instances threatened species are far from randomly distributed among taxa and phylogenetic clades and display clumped distribution. A contiguous distribution is one in which individuals are closer together than they would be if they were randomly or evenly distributed, i.e., it is clumped distribution with a single clump. Regular or uniform Less common than clumped distribution, uniform distribution, also known as even distribution, is evenly spaced. Uniform distributions are found in populations in which the distance between neighboring individuals is maximized. The need to maximize the space between individuals generally arises from competition for a resource such as moisture or nutrients, or as a result of direct social interactions between individuals within the population, such as territoriality. For example, penguins often exhibit uniform spacing by aggressively defending their territory among their neighbors. The burrows of great gerbils for example are also regularly distributed, which can be seen on satellite images. Plants also exhibit uniform distributions, like the creosote bushes in the southwestern region of the United States. Salvia leucophylla is a species in California that naturally grows in uniform spacing. This flower releases chemicals called terpenes which inhibit the growth of other plants around it and results in uniform distribution. 
This is an example of allelopathy, which is the release of chemicals from plant parts by leaching, root exudation, volatilization, residue decomposition and other processes. Allelopathy can have beneficial, harmful, or neutral effects on surrounding organisms. Some allelochemicals even have selective effects on surrounding organisms; for example, the tree species Leucaena leucocephala exudes a chemical that inhibits the growth of other plants but not those of its own species, and thus can affect the distribution of specific rival species. Allelopathy usually results in uniform distributions, and its potential to suppress weeds is being researched. Farming and agricultural practices often create uniform distribution in areas where it would not previously exist, for example, orange trees growing in rows on a plantation.

Random
Random distribution, also known as unpredictable spacing, is the least common form of distribution in nature and occurs when the members of a given species are found in environments in which the position of each individual is independent of the other individuals: they neither attract nor repel one another. Random distribution is rare in nature as biotic factors, such as the interactions with neighboring individuals, and abiotic factors, such as climate or soil conditions, generally cause organisms to be either clustered or spread. Random distribution usually occurs in habitats where environmental conditions and resources are consistent. This pattern of dispersion is characterized by the lack of any strong social interactions between species. For example, when dandelion seeds are dispersed by wind, random distribution will often occur as the seedlings land in random places determined by uncontrollable factors. Oyster larvae can also travel hundreds of kilometers powered by sea currents, which can result in their random distribution. Random distributions exhibit chance clumps (see Poisson clumping).

Statistical determination of distribution patterns
There are various ways to determine the distribution pattern of species. The Clark–Evans nearest neighbor method can be used to determine if a distribution is clumped, uniform, or random. To utilize the Clark–Evans nearest neighbor method, researchers examine a population of a single species. The distance of an individual to its nearest neighbor is recorded for each individual in the sample. For two individuals that are each other's nearest neighbor, the distance is recorded twice, once for each individual. To receive accurate results, it is suggested that the number of distance measurements is at least 50. The average distance between nearest neighbors is compared to the expected distance in the case of random distribution to give the ratio
$$R = \bar{r}_A / \bar{r}_E,$$
where $\bar{r}_A$ is the mean observed nearest-neighbor distance and $\bar{r}_E = 1/(2\sqrt{\rho})$ is the mean distance expected under complete spatial randomness for a population of density $\rho$. If this ratio R is equal to 1, then the population is randomly dispersed. If R is significantly greater than 1, the population is evenly dispersed. Lastly, if R is significantly less than 1, the population is clumped. Statistical tests (such as t-test, chi squared, etc.) can then be used to determine whether R is significantly different from 1.

The variance/mean ratio method focuses mainly on determining whether a species fits a randomly spaced distribution, but can also be used as evidence for either an even or clumped distribution. To utilize the variance/mean ratio method, data is collected from several random samples of a given population. In this analysis, it is imperative that data from at least 50 sample plots is considered.
The number of individuals present in each sample is compared to the expected counts in the case of random distribution. The expected distribution can be found using the Poisson distribution. If the variance/mean ratio is equal to 1, the population is found to be randomly distributed. If it is significantly greater than 1, the population is found to have a clumped distribution. Finally, if the ratio is significantly less than 1, the population is found to be evenly distributed. Typical statistical tests used to find the significance of the variance/mean ratio include Student's t-test and chi squared. However, many researchers believe that species distribution models based on statistical analysis, without including ecological models and theories, are too incomplete for prediction. Instead of conclusions based on presence-absence data, probabilities that convey the likelihood a species will occupy a given area are preferred, because these models include an estimate of confidence in the likelihood of the species being present or absent. They are also more valuable than data collected based on simple presence or absence because models based on probability allow the formation of spatial maps that indicate how likely a species is to be found in a particular area. Similar areas can then be compared to see how likely it is that a species will occur there also; this leads to a relationship between habitat suitability and species occurrence. Species distribution models Species distribution can be predicted based on the pattern of biodiversity at spatial scales. A general hierarchical model can integrate disturbance, dispersal and population dynamics. Based on factors such as dispersal, disturbance, limiting resources and climate, and the distributions of other species, predictions of species distribution can create a bio-climate range, or bio-climate envelope. The envelope can range from a local to a global scale, or from density independence to density dependence. The hierarchical model takes into consideration the requirements, impacts or resources as well as local extinctions in disturbance factors. Models can integrate the dispersal/migration model, the disturbance model, and the abundance model. Species distribution models (SDMs) can be used to assess climate change impacts and conservation management issues. Species distribution models include: presence/absence models, dispersal/migration models, disturbance models, and abundance models. A prevalent way of creating predicted distribution maps for different species is to reclassify a land cover layer depending on whether or not the species in question would be predicted to inhabit each cover type. This simple SDM is often modified through the use of range data or ancillary information, such as elevation or distance to water. Recent studies have indicated that the grid size used can have an effect on the output of these species distribution models. The standard 50x50 km grid size can select up to 2.89 times more area than when modeled with a 1x1 km grid for the same species. This has several effects on species conservation planning under climate change predictions (global climate models, which are frequently used in the creation of species distribution models, usually consist of 50–100 km size grids), which could lead to over-prediction of future ranges in species distribution modeling. This can result in the misidentification of protected areas intended for a species' future habitat.
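As an editorial illustration of the two dispersion statistics described above (not part of the original article), the sketch below computes the Clark–Evans nearest-neighbor ratio R and the variance/mean ratio from raw field data. The coordinates and quadrat counts are invented, and the expected nearest-neighbor distance is taken as 1/(2√ρ) for a randomly dispersed population of density ρ, as in the Clark–Evans method.

```python
import numpy as np

def clark_evans_R(points, area):
    """Clark–Evans ratio: mean observed nearest-neighbor distance divided by the
    mean distance expected under complete spatial randomness, 1 / (2 * sqrt(density))."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Pairwise distances; a point's distance to itself is excluded from the minimum.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_observed = d.min(axis=1).mean()
    mean_expected = 1.0 / (2.0 * np.sqrt(n / area))
    return mean_observed / mean_expected   # < 1 clumped, ~1 random, > 1 uniform

def variance_mean_ratio(quadrat_counts):
    """Variance/mean ratio of counts per sample plot (~1 random, > 1 clumped, < 1 uniform)."""
    counts = np.asarray(quadrat_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Invented example data: 60 plant coordinates in a 100 m x 100 m plot
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(60, 2))
print(round(clark_evans_R(pts, area=100 * 100), 2))
print(round(variance_mean_ratio([3, 1, 4, 0, 2, 5, 1, 2, 3, 4]), 2))
```

In practice the resulting values would still be compared against critical values (t-test, chi squared) before a pattern is declared clumped, random, or uniform, as the article notes.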
Species Distribution Grids Project The Species Distribution Grids Project is an effort led out of the University of Columbia to create maps and databases of the whereabouts of various animal species. This work is centered on preventing deforestation and prioritizing areas based on species richness. As of April 2009, data are available for global amphibian distributions, as well as birds and mammals in the Americas. The map gallery Gridded Species Distribution contains sample maps for the Species Grids data set. These maps are not inclusive but rather contain a representative sample of the types of data available for download: See also Geographic range limit Animal migration Biogeography Colonisation Cosmopolitan distribution Occupancy frequency distribution Notes External links Livestock Grazing Distribution Patterns: Does Animal Age Matter? Discrete Uniform Random Distribution Animal migration Biogeography Ecology terminology Population ecology Population genetics
Species distribution
[ "Biology" ]
3,465
[ "Ecology terminology", "Behavior", "Biogeography", "Animal migration", "Ethology" ]
5,509,737
https://en.wikipedia.org/wiki/Guanylin
Guanylin is a 15-amino-acid peptide that is secreted by goblet cells in the colon. Guanylin acts as an agonist of the guanylyl cyclase receptor GC-C and regulates electrolyte and water transport in intestinal and renal epithelia. Upon receptor binding, guanylin increases the intracellular concentration of cGMP, induces chloride secretion and decreases intestinal fluid absorption, ultimately causing diarrhoea. The peptide stimulates the enzyme through the same receptor binding region as the heat-stable enterotoxins. Researchers have found that a loss in guanylin expression can lead to colorectal cancer due to guanylyl cyclase C's function as an intestinal tumor suppressor. When guanylin expression was measured in over 250 colon cancer patients, more than 85% of patients showed a 100–1,000-fold loss of guanylin expression in cancerous tissue samples compared with the same patients' nearby healthy colon tissue. Another study, done on genetically engineered mice, found that mice on a high-calorie diet had reduced guanylin expression in the colon. This loss of expression then resulted in guanylyl cyclase C inhibition and the formation of tumors, thereby linking diet-induced obesity with colorectal cancer. Human proteins containing this domain GUCA2A; GUCA2B; Structure The peptide has two topologies, one for each of its two isoforms. References External links Peptides Protein domains
Guanylin
[ "Chemistry", "Biology" ]
310
[ "Biomolecules by chemical classification", "Protein classification", "Protein domains", "Molecular biology", "Peptides" ]
5,509,769
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von%20Mises%20criterion
In statistics, the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function F^* compared to a given empirical distribution function F_n, or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as \omega^2 = \int_{-\infty}^{\infty} \left[ F_n(x) - F^*(x) \right]^2 \, \mathrm{d}F^*(x). In one-sample applications F^* is the theoretical distribution and F_n is the empirically observed distribution. Alternatively the two distributions can both be empirically estimated ones; this is called the two-sample case. The criterion is named after Harald Cramér and Richard Edler von Mises who first proposed it in 1928–1930. The generalization to two samples is due to Anderson. The Cramér–von Mises test is an alternative to the Kolmogorov–Smirnov test (1933). Cramér–von Mises test (one sample) Let x_1, x_2, \ldots, x_n be the observed values, in increasing order. Then the statistic is T = n\omega^2 = \frac{1}{12n} + \sum_{i=1}^{n} \left[ \frac{2i-1}{2n} - F(x_i) \right]^2. If this value is larger than the tabulated value, then the hypothesis that the data came from the distribution F can be rejected. Watson test A modified version of the Cramér–von Mises test is the Watson test which uses the statistic U^2, where U^2 = T - n\left( \bar{F} - \tfrac{1}{2} \right)^2, where \bar{F} = \frac{1}{n} \sum_{i=1}^{n} F(x_i). Cramér–von Mises test (two samples) Let x_1, x_2, \ldots, x_N and y_1, y_2, \ldots, y_M be the observed values in the first and second sample respectively, in increasing order. Let r_1, r_2, \ldots, r_N be the ranks of the x's in the combined sample, and let s_1, s_2, \ldots, s_M be the ranks of the y's in the combined sample. Anderson shows that T = \frac{U}{NM(N+M)} - \frac{4MN - 1}{6(M+N)}, where U is defined as U = N \sum_{i=1}^{N} (r_i - i)^2 + M \sum_{j=1}^{M} (s_j - j)^2. If the value of T is larger than the tabulated values, the hypothesis that the two samples come from the same distribution can be rejected. (Some books give critical values for U, which is more convenient, as it avoids the need to compute T via the expression above. The conclusion will be the same.) The above assumes there are no duplicates in the x, y, and combined sequences, so that each value is unique and its rank is its position in the sorted combined list. If there are duplicates, so that a run of values in the sorted list are identical, then one common approach is the midrank method: assign each member of the run the average of the ranks that the run occupies. In the expressions (r_i - i) and (s_j - j) above, duplicates can modify all four variables r_i, i, s_j, and j. References Further reading Statistical distance Nonparametric statistics Normality tests
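As an editorial illustration (not part of the original article), the one-sample statistic above can be computed directly from its defining formula. In this sketch the sample and the hypothesised distribution (the standard normal) are arbitrary placeholders; recent SciPy releases also expose a comparable routine as scipy.stats.cramervonmises.

```python
import numpy as np
from scipy.stats import norm

def cramer_von_mises_T(sample, cdf):
    """One-sample Cramér–von Mises statistic
    T = 1/(12 n) + sum_i [ (2 i - 1)/(2 n) - F(x_(i)) ]^2,
    with x_(1) <= ... <= x_(n) the ordered observations and F the hypothesised CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - cdf(x)) ** 2)

# Illustrative use: an invented sample tested against the standard normal CDF.
rng = np.random.default_rng(1)
sample = rng.normal(size=200)
print(cramer_von_mises_T(sample, norm.cdf))
```

The resulting value would then be compared with tabulated critical values, as described above.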
Cramér–von Mises criterion
[ "Physics" ]
461
[ "Physical quantities", "Statistical distance", "Distance" ]
5,510,125
https://en.wikipedia.org/wiki/Transrepression
In the field of molecular biology, transrepression is a process whereby one protein represses (i.e., inhibits) the activity of a second protein through a protein-protein interaction. Since this repression occurs between two different protein molecules (intermolecular), it is referred to as a trans-acting process. The protein that is repressed is usually a transcription factor whose function is to up-regulate (i.e., increase) the rate of gene transcription. Hence the net result of transrepression is down regulation of gene transcription. An example of transrepression is the ability of the glucocorticoid receptor to inhibit the transcriptional promoting activity of the AP-1 and NF-κB transcription factors. In addition to transactivation, transrepression is an important pathway for the anti-inflammatory effects of glucocorticoids. Other nuclear receptors such as LXR and PPAR have been demonstrated to also have the ability to transrepress the activity of other proteins. See also Selective glucocorticoid receptor agonist References Molecular biology
Transrepression
[ "Chemistry", "Biology" ]
231
[ "Biochemistry", "Molecular biology stubs", "Molecular biology" ]
5,510,563
https://en.wikipedia.org/wiki/Rollover%20cable
A rollover cable (also known as a Yost cable, Cisco cable, or a console cable) is a type of null-modem cable that is used to connect a computer terminal to a router's console port. This cable is typically flat (and has a light blue color) to help distinguish it from other types of network cabling. It gets the name rollover because the pinouts on one end are reversed from the other, as if the flat cable had been rolled over. This cabling system was invented to eliminate the differences in RS-232 wiring systems. Any two RS-232 systems can be directly connected by a standard rollover cable and a standard connector. For legacy equipment, an adapter is permanently attached to the legacy port. See also 8P8C Serial cable RS-232 External links Cabling Guide for Console and AUX Ports Document ID: 12223 Zonker's Cisco Console Server Connections Guide Dave Yost Serial Device Wiring Standard Out-of-band management Signal cables
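As an editorial illustration of the "rolled over" pinout described above (not from the original article): the eight conductors are fully reversed end to end, so pin 1 on one 8P8C connector lands on pin 8 on the other, pin 2 on pin 7, and so on.

```python
# Rollover wiring: conductor order is fully reversed end to end, so pin i on one
# 8P8C connector maps to pin 9 - i on the other connector.
rollover = {pin: 9 - pin for pin in range(1, 9)}
print(rollover)  # {1: 8, 2: 7, 3: 6, 4: 5, 5: 4, 6: 3, 7: 2, 8: 1}
```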
Rollover cable
[ "Technology" ]
207
[ "Computing stubs", "Computer network stubs" ]
5,510,937
https://en.wikipedia.org/wiki/F-plasmid
The F-plasmid (first named F by one of its discoverers Esther Lederberg;also called the sex factor in E. coli,the F sex factor, or the fertility factor) allows genes to be transferred from one bacterium carrying the factor to another bacterium lacking the factor by conjugation. The F factor was the first plasmid to be discovered. Unlike other plasmids, F factor is constitutive for transfer proteins due to a mutation in the gene finO. The F plasmid belongs to F-like plasmids, a class of conjugative plasmids that control sexual functions of bacteria with a fertility inhibition (Fin) system. Discovery Esther M. Lederberg and Luigi L. Cavalli-Sforza discovered "F," subsequently publishing with Joshua Lederberg. Once her results were announced, two other labs joined the studies. "This was not a simultaneous independent discovery of F (I named this as Fertility Factor until it was understood.) We wrote to Hayes, Jacob, & Wollman who then proceeded with their studies." The discovery of "F" has sometimes been confused with William Hayes' discovery of "sex factor", though he never claimed priority. Indeed, "he [Hayes] thought F was really lambda, and when we convinced him [that it was not], he then began his work." Structure The most common functional segments constituting F factors are: OriT (Origin of Transfer): The sequence which marks the starting point of conjugative transfer. OriV (Origin of Vegetative Replication): The sequence starting with which the plasmid-DNA will be replicated in the recipient cell. tra-region (transfer genes): Genes coding the F-Pilus and DNA transfer process. IS (Insertion Elements) composed of one copy of IS2, two copies of IS3, and one copy of IS1000: so-called "selfish genes" (sequence fragments which can integrate copies of themselves at different locations). Some F plasmid genes and their Function: traA: F-pilin, Major subunit of the F-pilus. traN: recognizes cell-surface receptors Relation to the genome The episome that harbors the F factor can exist as an independent plasmid or integrate into the bacterial cell's genome. There are several names for the possible states: Hfr bacteria possess the entire F episome integrated into the bacterial genome. F+ bacteria possess F factor as a plasmid independent of the bacterial genome. The F plasmid contains only F factor DNA and no DNA from the bacterial genome. F' (F-prime) bacteria are formed by incorrect excision from the chromosome, resulting in F plasmid carrying bacterial sequences that are next to where the F episome has been inserted. F− bacteria do not contain F factor and act as the recipients. Function When an F+ cell conjugates/mates with an F− cell, the result is two F+ cells, both capable of transmitting the plasmid to other F− cells by conjugation. A pilus on the F+ cell interacts with the recipient cell allowing formation of a mating junction, the DNA is nicked on one strand, unwound and transferred to the recipient. The F-plasmid belongs to a class of conjugative plasmids that control sexual functions of bacteria with a fertility inhibition (Fin) system. In this system, a trans-acting factor, FinO, and antisense RNAs, FinP, combine to repress the expression of the activator gene TraJ. TraJ is a transcription factor that upregulates the tra operon. The tra operon includes genes required for conjugation and plasmid transfer. This means that an F+ bacteria can always act as a donor cell. The finO gene of the original F plasmid (in E. 
coli K12) is interrupted by an IS3 insertion, resulting in constitutive tra operon expression. F+ cells also have the surface exclusion proteins TraS and TraT on the bacterial surface. These proteins prevent secondary mating events involving plasmids belonging to the same incompatibility (Inc) group. Thus, each F+ bacterium can host only a single plasmid type of any given incompatibility group. In the case of Hfr transfer, the resulting transconjugates are rarely Hfr. The result of Hfr/F− conjugation is a F− strain with a new genotype. When F-prime plasmids are transferred to a recipient bacterial cell, they carry pieces of the donor's DNA that can become important in recombination. Bioengineers have created F plasmids that can contain inserted foreign DNA; this is called a bacterial artificial chromosome. The first DNA helicase ever described is encoded on the F-plasmid and is responsible for initiating plasmid transfer. It was originally called E. coli DNA Helicase I, but is now known as F-plasmid TraI. In addition to being a helicase, the 1756 amino acid (one of the largest in E. coli) F-plasmid TraI protein is also responsible for both specific and non-specific single-stranded DNA binding as well as catalyzing the nicking of single-stranded DNA at the origin of transfer. See also FlmB RNA Fosmid Hfr cell References External links Bacteriology Molecular genetics Plasmids
F-plasmid
[ "Chemistry", "Biology" ]
1,178
[ "Plasmids", "Molecular genetics", "Bacteria", "Molecular biology" ]
5,512,015
https://en.wikipedia.org/wiki/Rosa%20%C3%97%20centifolia
Rosa × centifolia (lit. hundred leaved rose; syn. R. gallica var. centifolia (L.) Regel), the Provence rose, cabbage rose or Rose de Mai, is a hybrid rose developed by Dutch breeders in the period between the 17th century and the 19th century, possibly earlier. History Its parentage includes Rosa × damascena, but it may be a complex hybrid; its exact hereditary history is not well documented or fully investigated, but it now appears that this is not the "hundred-leaved" (centifolia) rose mentioned by Theophrastus and Pliny: "no unmistakable reference can be traced earlier than about 1580". The original plant was sterile, but a sport with single flowers appeared in 1769, from which various cultivars known as centifolia roses were developed, many of which are further hybrids. Other cultivars have appeared as further sports from these roses. Rosa × centifolia 'Muscosa' is a sport with a thick covering of resinous hairs on the flower buds, from which most (but not all) "moss roses" are derived. Dwarf or miniature sports have been known for almost as long as the larger forms, including a miniature moss rose 'Moss de Meaux'. In 1783 the French artist Élisabeth Vigée Le Brun painted a famous portrait of Marie Antoinette holding a pink centifolia rose. Growth Individual plants are shrubby in appearance, growing to 1.5–2 m tall, with long drooping canes and greyish green pinnate leaves with 5–7 leaflets. The flowers are round and globular, with numerous thin overlapping petals that are highly scented; they are usually pink, less often white to dark red-purple. Cultivation and uses R. × centifolia is particular to the French city of Grasse, known as the perfume capital of the world. It is widely cultivated for its singular fragrance — clear and sweet, with light notes of honey. The flowers are commercially harvested for the production of rose oil, which is commonly used in perfumery. Centifolia cultivars Cultivars of Rosa × centifolia that are still grown include: 'Bullata', also called 'Lettuce Rose' and 'À Feuilles de Laitue', known since 1801 'Cristata', also called 'Chapeau de Napoleon' 'Fantin-Latour', blush pink, fragrant 'Petite de Hollande', also called 'Pompon des Dames', known since the 18th century 'Rose de Meaux', also called "Rosa pomponia", known since 1637 'Unique Blanche', also called 'Mutabilis', 'White Provence', 'Vièrge de Cléry' and other names 'Village Maid', introduced by Vibert in 1845, a striped flower Both 'Centifola' and 'Fantin-Latour' are recipients of the Royal Horticultural Society's Award of Garden Merit. References External links Plants for a Future: Rosa centifolia Centifolia: The Hundred-Petalled Rose Grasse: Villages Beyond Provence centifolia Medicinal plants Hybrid plants
Rosa × centifolia
[ "Biology" ]
642
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
5,512,120
https://en.wikipedia.org/wiki/Alcohol%20and%20Drugs%20History%20Society
The Alcohol and Drugs History Society (ADHS) is a scholarly organization whose members study the history of a variety of illegal, regulated, and unregulated drugs such as opium, alcohol, and coffee. Organized in 2004, the ADHS is the successor to a society with a more limited scope, the Alcohol and Temperance History Group, which existed for 25 years. The last ATHG president and the first ADHS president was Ian R. Tyrrell, Professor of History at the University of New South Wales, in Australia. In July 2006 he was succeeded by W.J. Rorabaugh, Professor of History at the University of Washington. In July, 2008, David T. Courtwright, Professor of History at the University of North Florida became president. In 2011 Joseph Spillane, Associate Professor of History at the University of Florida, succeeded him as president. The ADHS sponsors the academic journal The Social History of Alcohol and Drugs: an Interdisciplinary Journal (SHAD). The journal appears in both printed and electronic formats, published twice a year, in the winter and the summer. Starting in 2019, the journal was published by the University of Chicago Press. SHAD's editorial structure recently changed and its current co-editors are David Herzberg (SUNY-Buffalo), Nancy Campbell (Rensselaer Polytechnic Institute), and Lucas Richert (University of Strathclyde). Its former editor-in-chief is Dan Malleck, from Brock University, St. Catharines, Canada. Other former editors include Jim Mills, from University of Strathclyde in Glasgow, Scotland; W. Scott Haine, from California, and the Reviews Editors are James Nicholls, from Bath Spa University, England and Alex Mold, from the London School of Hygiene and Tropical Medicine. In November, 2009 it created an editorial board, which is populated by some of the most prominent alcohol and drugs history scholars in the world. The ADHS is affiliated with the American Historical Association and sponsors sessions at the January meetings of the AHA. The ADHS held an international conference in Canada in August 2007, with Guelph University (Ontario) as the host The next international conference was in 2009 at Glasgow, Scotland, with the University of Strathclyde as the host. In 2011 an international conference was held at the State University of New York, Buffalo. In 2015 the ADHS met at Bowling Green State University in Ohio, USA. In 2017 there was an international conference at the University of Utrecht, The Netherlands. Shanghai will serve as host for the 2019 meeting. Also Drunken monkey hypothesis Stoned ape theory References External links Organizations established in 2004 Historical societies of the United States Pharmacological societies Alcohol
Alcohol and Drugs History Society
[ "Chemistry" ]
550
[ "Pharmacology", "Pharmacological societies" ]
5,512,301
https://en.wikipedia.org/wiki/Levomethorphan
Levomethorphan (LVM) (INN, BAN) is an opioid analgesic of the morphinan family that has never been marketed. It is the L-stereoisomer of racemethorphan (methorphan). The effects of the two isomers of racemethorphan are quite different, with dextromethorphan (DXM) being an antitussive at low doses and a dissociative hallucinogen at much higher doses. Levomethorphan is about five times stronger than morphine. Levomethorphan is a prodrug to levorphanol, analogously to DXM acting as a prodrug to dextrorphan or codeine behaving as a prodrug to morphine. As such, levomethorphan has similar effects to levorphanol but is less potent as it must be demethylated to the active form by liver enzymes before being able to produce its effects. As a prodrug of levorphanol, levomethorphan functions as a potent agonist of all three of the opioid receptors, μ, κ (κ1 and κ3 but notably not κ2), and δ, as an NMDA receptor antagonist, and as a serotonin-norepinephrine reuptake inhibitor. Via activation of the κ-opioid receptor, levomethorphan can produce dysphoria and psychotomimetic effects such as dissociation and hallucinations. Levomethorphan is listed under the Single Convention on Narcotic Drugs 1961 and is regulated like morphine in most countries. In the United States it is a Schedule II Narcotic controlled substance with a DEA ACSCN of 9210 and a 2014 annual aggregate manufacturing quota of 195 grams, up from 6 grams the year before. The salts in use are the tartrate (free base conversion ratio 0.644) and hydrobromide (0.958). At the current time, no levomethorphan pharmaceuticals are marketed in the United States. See also Butorphanol Cyclorphan Levallorphan Levorphanol Nalbuphine Oxilorphan Proxorphan Racemorphan Xorphanol References Delta-opioid receptor agonists Dissociative drugs Enantiopure drugs GABA receptor antagonists Glycine receptor antagonists Kappa-opioid receptor agonists NMDA receptor antagonists Morphinans Mu-opioid receptor agonists Nociceptin receptor agonists Phenol ethers Prodrugs Semisynthetic opioids Serotonin–norepinephrine reuptake inhibitors
Levomethorphan
[ "Chemistry" ]
589
[ "Prodrugs", "Chemicals in medicine", "Stereochemistry", "Enantiopure drugs" ]
5,512,894
https://en.wikipedia.org/wiki/Kronecker%20limit%20formula
In mathematics, the classical Kronecker limit formula describes the constant term at s = 1 of a real analytic Eisenstein series (or Epstein zeta function) in terms of the Dedekind eta function. There are many generalizations of it to more complicated Eisenstein series. It is named for Leopold Kronecker. First Kronecker limit formula The (first) Kronecker limit formula states that E(\tau,s) = \frac{\pi}{s-1} + 2\pi\left(\gamma - \log 2 - \log\left(\sqrt{y}\,|\eta(\tau)|^2\right)\right) + O(s-1), where E(τ,s) is the real analytic Eisenstein series, given by E(\tau,s) = \sum_{(m,n) \neq (0,0)} \frac{y^s}{|m\tau + n|^{2s}} for Re(s) > 1, and by analytic continuation for other values of the complex number s. γ is the Euler–Mascheroni constant. τ = x + iy with y > 0. \eta(\tau) = q^{1/24} \prod_{n \geq 1} (1 - q^n), with q = e^{2\pi i \tau}, is the Dedekind eta function. So the Eisenstein series has a pole at s = 1 of residue π, and the (first) Kronecker limit formula gives the constant term of the Laurent series at this pole. This formula has an interpretation in terms of the spectral geometry of the elliptic curve associated to the lattice \mathbb{Z} + \mathbb{Z}\tau: it says that the zeta-regularized determinant of the Laplace operator associated to the flat metric on \mathbb{C}/(\mathbb{Z} + \mathbb{Z}\tau) is given by y^2 |\eta(\tau)|^4. This formula has been used in string theory for the one-loop computation in Polyakov's perturbative approach. Second Kronecker limit formula The second Kronecker limit formula states that E_{u,v}(\tau,1) = -2\pi \log \left| f(u - v\tau; \tau)\, q^{v^2/2} \right|, where u and v are real and not both integers. q = e^{2\pi i \tau} and q_a = e^{2\pi i a \tau}. p = e^{2\pi i z} and p_a = e^{2\pi i a z}. E_{u,v}(\tau,s) = \sum_{(m,n) \neq (0,0)} e^{2\pi i (mu + nv)} \frac{y^s}{|m\tau + n|^{2s}} for Re(s) > 1, and is defined by analytic continuation for other values of the complex number s. f(z, \tau) = q^{1/12}\,\left(p^{1/2} - p^{-1/2}\right) \prod_{a \geq 1} (1 - q_a p)\left(1 - q_a p^{-1}\right). See also Herglotz–Zagier function References Serge Lang, Elliptic functions, C. L. Siegel, Lectures on advanced analytic number theory, Tata Institute, 1961. External links Chapter0.pdf Theorems in analytic number theory Modular forms
Kronecker limit formula
[ "Mathematics" ]
402
[ "Theorems in mathematical analysis", "Theorems in analytic number theory", "Theorems in number theory", "Modular forms", "Number theory" ]
5,513,803
https://en.wikipedia.org/wiki/Oxazole%20%28data%20page%29
References Zoltewicz, J. A. & Deady, L. W. Quaternization of heteroaromatic compounds. Quantitative aspects. Adv. Heterocycl. Chem. 22, 71-121 (1978). Chemical data pages Chemical data pages cleanup
Oxazole (data page)
[ "Chemistry" ]
61
[ "Chemical data pages", "nan" ]
5,514,192
https://en.wikipedia.org/wiki/Symmetric%20hypergraph%20theorem
The Symmetric hypergraph theorem is a theorem in combinatorics that puts an upper bound on the chromatic number of a graph (or hypergraph in general). The original reference for the theorem is unknown, and it has been described as folklore. Statement A group G acting on a set S is called transitive if, given any two elements s_1 and s_2 in S, there exists an element g of G such that g(s_1) = s_2. A graph (or hypergraph) is called symmetric if its automorphism group is transitive. Theorem. Let H be a symmetric hypergraph. Let m denote the number of vertices of H, let χ(H) denote the chromatic number of H, and let α(H) denote the independence number of H. Then Applications This theorem has applications to Ramsey theory, specifically graph Ramsey theory. Using this theorem, a relationship between the graph Ramsey numbers and the extremal numbers can be shown (see Graham–Rothschild–Spencer for the details). See also Ramsey theory Notes Graph coloring Theorems in graph theory
Symmetric hypergraph theorem
[ "Mathematics" ]
186
[ "Graph theory stubs", "Graph coloring", "Graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Theorems in graph theory" ]
14,490,007
https://en.wikipedia.org/wiki/Internet%20Mapping%20Project
The Internet Mapping Project was started by William Cheswick and Hal Burch at Bell Labs in 1997. It has collected and preserved traceroute-style paths to some hundreds of thousands of networks almost daily since 1998. The project included visualization of the Internet data, and the Internet maps were widely disseminated. The technology is now used by Lumeta, a spinoff of Bell Labs, to map corporate and government networks. Although Cheswick left Lumeta in September 2006, Lumeta continues to map both the IPv4 and IPv6 Internet. The data allows for both a snapshot and view over time of the routed infrastructure of a particular geographical area, company, organization, etc. Cheswick continues to collect and preserve the data, and it is available for research purposes. According to Cheswick, a main goal of the project was to collect the data over time, and make a time-lapse movie of the growth of the Internet. Techniques The techniques available for network discovery rely on hop-limited probes of the type used by the Unix traceroute utility or the Windows NT tracert.exe tool. A Traceroute-style network probe follows the path that network packets take from a source node to a destination node. This technique uses Internet Protocol packets with an 8-bit time to live (TTL) header field. As a packet passes through routers on the Internet, each router decreases the TTL value by one until it reaches zero. When a router receives a packet with a TTL value of zero, it drops the packet instead of forwarding it. At this point, it sends an Internet Control Message Protocol (ICMP) error message to the source node where the packet originated indicating that the packet exceeded its maximum transit time. Active Probing – Active probing is a series of probes set out through a network to obtain data. Active probing is used in internet mapping to discover the topology of the Internet. Topology maps of the Internet are an important tool for characterizing the infrastructure and understanding the properties, behavior and evolution of the Internet. Other internet mapping projects Hand Drawn Maps of Internet from 1973. The Center for Applied Internet Data Analysis (CAIDA) collects, monitors, analyzes, and maps several forms of Internet traffic data concerning network topology. Their "Internet Topology Maps also referred to as AS-level Internet Graphs [are being generated] in order to visualize the shifting topology of the Internet over time." The Opte Project, started in 2003 by engineer Barrett Lyon, using traceroute and BGP routes for mapping. New Hampshire Project – In 2010, the U.S. Department of Commerce has awarded the University of New Hampshire's Geographically Referenced Analysis and Information Transfer (NH GRANIT) project approximately $1.7 million to manage a program that will inventory and map current and planned broadband coverage available to the state's businesses, educators, and citizens. As a part of this project, The New Hampshire Broadband Mapping Program (NHBMP) was created as a coordinated, multi-agency initiative funded by the American Recovery and Reinvestment Act through the National Telecommunications and Information Administration (NTIA), and is part of a national effort to expand high-speed Internet access and adoption through improved data collection and broadband planning. In 2009, Kevin Kelly (editor), cofounder of Wired Magazine, started his own Internet Mapping Project to understand how people conceive the internet. 
He wanted to discover the maps that people have in their mind as they navigate the vast internet by having them submit hand drawn pictures. So far, he has collected close to 80 submissions by people of all ages, nationalities and expertise levels, ranging from the concrete to the conceptual to the comic. See also Network mapping Route analytics References Internet architecture 1997 establishments in the United States
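To make the hop-limited probing described in the Techniques section concrete, here is a minimal editorial sketch of a traceroute-style probe (not from the original article). It sends empty UDP packets with increasing TTL values and listens for the ICMP replies that routers return when the TTL expires. The destination host is a placeholder, port 33434 is the conventional traceroute base port, and the raw ICMP socket generally requires administrator/root privileges.

```python
import socket

def probe_path(dest_name, max_hops=30, port=33434, timeout=2.0):
    """Traceroute-style probe: raise the IP TTL one hop at a time and record
    which router answers with an ICMP time-exceeded message."""
    dest_addr = socket.gethostbyname(dest_name)
    path = []
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.getprotobyname("icmp"))
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                  socket.getprotobyname("udp"))
        recv_sock.settimeout(timeout)
        recv_sock.bind(("", port))
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))  # packet is dropped after `ttl` hops
        hop_addr = None
        try:
            _, addr = recv_sock.recvfrom(512)     # ICMP time-exceeded or port-unreachable
            hop_addr = addr[0]
        except socket.timeout:
            pass                                  # this router did not answer
        finally:
            send_sock.close()
            recv_sock.close()
        path.append((ttl, hop_addr))
        if hop_addr == dest_addr:                 # destination reached
            break
    return path

# Example (placeholder host; requires root for the raw ICMP socket):
# print(probe_path("example.com"))
```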
Internet Mapping Project
[ "Technology" ]
773
[ "Internet architecture", "IT infrastructure" ]
14,490,084
https://en.wikipedia.org/wiki/Outline%20of%20meteorology
The following outline is provided as an overview of and topical guide to the field of Meteorology. Meteorology The interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction. Essence of meteorology Meteorology Climate – the average and variations of weather in a region over long periods of time. Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology). Weather – the set of all the phenomena in a given atmosphere at a given time. Branches of meteorology Microscale meteorology – the study of atmospheric phenomena on scales of about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features Mesoscale meteorology – the study of weather systems on scales of about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes Synoptic scale meteorology – the study of weather systems at a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more Methods in meteorology Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations Weather forecasting Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location Data collection Pilot Reports Weather maps Weather map Surface weather analysis Forecasts and reporting of Atmospheric pressure Dew point High-pressure area Ice Black ice Frost Low-pressure area Precipitation Temperature Weather front Wind chill Wind direction Wind speed Instruments and equipment of meteorology Anemometer – a device for measuring wind speed; used in weather stations Barograph – an aneroid barometer that records the barometric pressure over time and produces a paper or foil chart called a barogram Barometer – an instrument used to measure atmospheric pressure using either water, air, or mercury; useful for forecasting short term changes in the weather Ceiling balloon – a balloon, with a known ascent rate, used to measure the height of the base of clouds during daylight Ceiling projector – a device that is used, in conjunction with an alidade, to measure the height of the base of clouds Ceilometer – a device that uses a laser or other light source to measure the height of the base of clouds.
Dark adaptor goggles – clear, red-tinted plastic goggles used either for adapting the eyes to dark prior to night observation or to help identify clouds during bright sunshine or glare from snow Disdrometer – an instrument used to measure the drop size, distribution, and velocity of falling hydrometeors Field mill – an instrument used to measure the strength of electric fields in the atmosphere near thunderstorm clouds Hygrometer – an instrument used to measure humidity Ice Accretion Indicator – an L-shaped piece of aluminum 15 inches (38 cm) long by 2 inches (5 cm) wide used to indicate the formation of ice, frost, or the presence of freezing rain or freezing drizzle Lidar (LIght raDAR) – an optical remote sensing technology used in atmospheric physics (among other fields) that measures the properties of scattered light to find information about a distant target Lightning detector – a device, either ground-based, mobile, or space-based, that detects lightning produced by thunderstorms Nephelometer – an instrument used to measure suspended particulates in a liquid or gas colloid. Gas-phase nephelometers are used to provide information on atmospheric visibility and albedo Nephoscope – an instrument for measuring the altitude, direction, and velocity of clouds Pyranometer – A type of actinometer found in many meteorological stations used to measure broadband solar irradiance Radar – see Weather radar Radiosonde – an instrument used in weather balloons that measures various atmospheric parameters and transmits them to a fixed receiver Rain gauge – an instrument that gathers and measures the amount of liquid precipitation over a set period of time Snow gauge – an instrument that gathers and measures the amount of solid precipitation over a set period of time SODAR (SOnic Detection And Ranging) – an instrument that measures the scattering of sound waves by atmospheric turbulence Solarimeter – a pyranometer, an instrument used to measure combined direct and diffuse solar radiation Sounding rocket – an instrument-carrying sub-orbital rocket designed to take measurements and perform scientific experiments Stevenson screen – part of a standard weather station, it shields instruments from precipitation and direct heat radiation while still allowing air to circulate freely Sunshine recorders – devices used to indicate the amount of sunshine at a given location Thermograph – a chart recorder that measures and records both temperature and humidity Thermometer – a device that measures temperature or temperature gradient Weather balloon – a high-altitude balloon that carries instruments aloft and uses a radiosonde to send back information on atmospheric pressure, temperature, and humidity Weather radar – a type of radar used to locate precipitation, calculate its motion, estimate its type (rain, snow, hail, etc.) 
and forecast its future position and intensity Weather vane – a movable device attached to an elevated object such as a roof that shows the direction of the wind Windsock – a conical textile tube designed to indicate wind direction and relative wind speed Wind profiler – equipment that uses radar or SODAR to detect wind speed and direction at various elevations History of meteorology History of weather forecasting – prior to the invention of meteorological instruments, weather analysis and prediction relied on pattern recognition, which was not always reliable History of surface weather analysis – initially used to study storm behavior, now used to explain current weather and as an aid in short term weather forecasting Meteorological phenomena Atmospheric pressure – the pressure at any given point in the Earth's atmosphere Cloud – a visible mass of droplets or frozen crystals floating in the atmosphere above the surface of a planet Rain – precipitation in which separate drops of water fall to the Earth from clouds, a product of the condensation of atmospheric water vapor Snow – precipitation in the form of crystalline water ice, consisting of a multitude of snowflakes that fall from clouds Freezing rain – precipitation that falls from a cloud as snow, melts completely on its way down, then passes through a layer of below-freezing air becoming supercooled, at which point it will freeze upon impact with any object encountered Sleet – term used in the United States and Canada for precipitation consisting of small, translucent ice balls, usually smaller than hailstones Tropical cyclone – a storm system with a low-pressure center and numerous thunderstorms that produce strong winds and flooding rain Extratropical cyclone – a low-pressure weather system occurring in the middle latitudes of the Earth having neither tropical nor polar characteristics Weather front – a boundary separating two masses of air of different densities; the principal cause of meteorological phenomena Low pressure – a region where the atmospheric pressure is lower in relation to the surrounding area Storm – any disturbed state of the atmosphere and strongly implying severe weather Flooding – an overflow of an expanse of water that submerges the land; a deluge Nor'easter – a macro-scale storm along the East Coast of the United States, named for the winds that come from the northeast Wind – the flow of air or other gases that compose an atmosphere; caused by rising heated air and cooler air rushing in to occupy the vacated space. Temperature – a physical property that describes our common notions of hot and cold Invest (meteorology) – An area with the potential for tropical cyclone development Weather-related disasters Weather disasters Extreme weather List of floods List of natural disasters by death toll List of severe weather phenomena Leaders in meteorology William M. 
Gray (October 9, 1929 – April 16, 2016) – has been involved in forecasting hurricanes since 1984 Francis Galton (February 16, 1822 - January 17, 1911) – was a polymath, and devised the first weather map, proposed a theory of anticyclones, and was the first to establish a complete record of short-term climatic phenomena on a European scale Herbert Saffir (March 29, 1917 – November 21, 2007) – was the developer of the Saffir-Simpson Hurricane Scale for measuring the intensity of hurricanes Bob Simpson (November 19, 1912 – December 18, 2014) – was a meteorologist, hurricane specialist, first director of the National Hurricane Research Project, former director of the National Hurricane Center, and co-developer of the Saffir-Simpson Hurricane Scale. See also Meteorology Glossary of meteorology Index of meteorology articles Standard day Jet stream Heat index Equivalent potential temperature (Theta-e) Primitive equations Climate: El Niño Monsoon Flood Drought Global warming Effect of sun angle on climate Other phenomena: Deposition Dust devil Fog Tide Air mass Evaporation Sublimation Crepuscular rays Anticrepuscular rays External links See weather forecasting#External links for weather forecast sites Air Quality Meteorology - Online course that introduces the basic concepts of meteorology and air quality necessary to understand meteorological computer models. Written at a bachelor's degree level. The GLOBE Program - (Global Learning and Observations to Benefit the Environment) An international environmental science and education program that links students, teachers, and the scientific research community in an effort to learn more about the environment through student data collection and observation. Glossary of Meteorology - From the American Meteorological Society, an excellent reference of nomenclature, equations, and concepts for the more advanced reader. JetStream - An Online School for Weather - National Weather Service Learn About Meteorology - Australian Bureau of Meteorology The Weather Guide - Weather Tutorials and News at About.com Meteorology Education and Training (MetEd) - The COMET Program NOAA Central Library - National Oceanic & Atmospheric Administration The World Weather 2010 Project The University of Illinois at Urbana-Champaign Ogimet - online data from meteorological stations of the world, obtained through NOAA free services National Center for Atmospheric Research Archives, documents the history of meteorology Weather forecasting and Climate science - United Kingdom Meteorological Office Meteorology Meteorology
Outline of meteorology
[ "Physics" ]
2,089
[ "Meteorology", "Applied and interdisciplinary physics" ]
14,490,701
https://en.wikipedia.org/wiki/Dream%20guide
A dream guide is a spirit guide dream character encountered in a dream, particularly a fully lucid dream. On the scale of lucidity, "full" lucidity requires that all characters in a dream, not just the dreamer, be aware that they are in a dream. In this case, "another dream character not only becomes lucid before the dream-ego, he also possesses a higher degree of lucidity than the dream-ego later achieves." Anthony Shafton gives the following example of encountering a dream guide: Generally, before a dream guide can put in such an appearance and inform an unwitting dreamer of the fact that this is a dream, there must have been an earlier stage (achieved on some previous nights) in which the witting dreamer informed prospective dream guides, in a manner acceptable, of course, to themselves, of the fact of this being a dream, and secured their agreement to this fact. This stage, in turn, is quite likely to have been preceded by a still earlier stage in which the witting dreamer endeavored to secure the agreement of prospective dream guides to the fact of this being a dream, but was rebuffed by them (the rebuff being due merely to the statement not having been made in a style suitable to their literary fashion, which can be quite punctilious). References Guide Lucid dreams Spiritism
Dream guide
[ "Biology" ]
294
[ "Dream", "Behavior", "Sleep" ]
14,492,178
https://en.wikipedia.org/wiki/Chemical%20Facility%20Anti-Terrorism%20Standards
The Chemical Facility Anti-Terrorism Standards (CFATS), codified at 6 C.F.R. part 27, are a set of United States federal government security regulations for certain high-risk chemical facilities that possess particular chemicals, called chemicals of interest (COI) at particular concentrations. The CFATS regulations apply across a number of industries, ranging from chemical plants and chemical storage facilities to electrical generating facilities, refineries, and universities. Adoption The U.S. Department of Homeland Security promulgated the Final Rule on April 9, 2007. The regulations came into effect on June 8, 2007, apart from material covered in Appendix A, which took effect upon its publication in the Federal Register on November 20, 2007. The new rules apply to any "Chemical Facility," which the regulation defines as follows: Chemical Facility or facility shall mean any establishment that possesses or plans to possess, at any relevant point in time, a quantity of a chemical substance determined by the Secretary to be potentially dangerous or that meets other risk-related criteria identified by the Department. As used herein, the term chemical facility or facility shall also refer to the owner or operator of the chemical facility. Where multiple owners and/or operators function within a common infrastructure or within a single fenced area, the Assistant Secretary may determine that such owners and/or operators constitute a single chemical facility or multiple chemical facilities depending on the circumstances. The response from the US chemical community to the initial legislation was rather critical, but the revisions introduced in November appear to have addressed many of the concerns of both industry and academia. For example, certain routine chemicals of low toxicity, such as acetone or urea, have been removed from the list, since record-keeping for such common compounds was considered an excessive burden. However, some environmental groups believe the exemption quantities of certain substances, especially chlorine (set at ), have been set too high. Application The CFATS regulations specify over 300 chemicals of interest (COI) and "screening threshold quantities" (STQ) for each. COI are designated based on hazards associated with release (i.e., substances that are toxic, flammable, or explosive), theft or diversion, or sabotage. Legislation On February 6, 2014, Rep. Patrick Meehan (R, PA-7) introduced into the United States House of Representatives the Chemical Facility Anti-Terrorism Standards Program Authorization and Accountability Act of 2014 (H.R. 4007; 113th Congress). The bill would make permanent the United States Department of Homeland Security’s (DHS's) authority to regulate security at certain chemical facilities in the United States. Under the Chemical Facility Anti-Terrorism Standards (CFATS) program, DHS collects and reviews information from chemical facilities in the United States to determine which facilities present security risks and then requires them to write and enact security plans. The DHS National Protection and Programs Directorate's Office of Infrastructure Protection Assistant Secretary Caitlin Durkovich testified in favor of the bill before the United States House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies. On June 23, 2014, it was reported (amended) alongside House Report 113-491 part 1. On July 8, 2014, the House voted in a voice vote to pass the bill. 
On January 18, 2019, one day before the Chemical Facility Anti-Terrorism Standards Program was set to expire, President Donald Trump signed into law the Chemical Facility Anti-Terrorism Standards Program Extension Act, introduced to the House by Rep. Bennie G. Thompson (D-MS), which extended the program by 15 months. See also Toxic Substances Control Act (TSCA) Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) - EU legislation) Dangerous Substances Directive (67/548/EEC) - EU legislation References External links US Dept. of Homeland Security Website United States federal defense and national security legislation Chemical safety
Chemical Facility Anti-Terrorism Standards
[ "Chemistry" ]
800
[ "Chemical safety", "Chemical accident", "nan" ]
14,493,178
https://en.wikipedia.org/wiki/Quasi-star
A quasi-star (also called black hole star) is a hypothetical type of extremely large and luminous star that may have existed early in the history of the Universe. They are thought to have existed for around 7–10 million years due to their immense mass. Unlike modern stars, which are powered by nuclear fusion in their cores, a quasi-star's energy would come from material falling into a black hole at its core. They were first proposed in the 1960s and have since provided valuable insights into the early universe, galaxy formation, and the behavior of black holes. Although they have not been observed, they are considered to be a possible progenitor of supermassive black holes. Formation and properties A quasi-star would have resulted from the core of a large protostar collapsing into a black hole, where the outer layers of the protostar are massive enough to absorb the resulting burst of energy without being blown away or falling into the black hole, as occurs with modern supernovae. Such a star would have to be at least . Quasi-stars may have also formed from dark matter halos drawing in enormous amounts of gas via gravity, which can produce supermassive stars with tens of thousands of solar masses. Formation of quasi-stars could only happen early in the development of the Universe before hydrogen and helium were contaminated by heavier elements; thus, they may have been very massive Population III stars. Such stars would dwarf VY Canis Majoris, Mu Cephei and VV Cephei A, three among the largest known modern stars. Once the black hole had formed at the protostar's core, it would continue generating a large amount of radiant energy from the infall of stellar material. This constant outburst of energy would counteract the force of gravity, creating an equilibrium similar to the one that supports modern fusion-based stars. Quasi-stars would have had a short maximum lifespan, approximately 7 million years, during which the core black hole would have grown to about . These intermediate-mass black holes have been suggested as the progenitors of modern supermassive black holes such as the one in the center of the Galaxy. Quasi-stars are predicted to have had surface temperatures higher than . At these temperatures, each one would be about as luminous as a small galaxy. As a quasi-star cools over time, its outer envelope would become transparent, until further cooling to a limiting temperature of . This limiting temperature would mark the end of the quasi-star's life since there is no hydrostatic equilibrium at or below this limiting temperature. The object would then quickly dissipate, leaving behind the intermediate mass black hole. See also References Further reading External links Black holes Star types Hypothetical stars
Quasi-star
[ "Physics", "Astronomy" ]
549
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Astronomical classification systems", "Stellar phenomena", "Astronomical objects", "Star types" ]
14,493,316
https://en.wikipedia.org/wiki/Etravirine
Etravirine (ETR), sold under the brand name Intelence, is an antiretroviral medication used for the treatment of HIV. Etravirine is a human immunodeficiency virus type 1 (HIV-1) non-nucleoside reverse transcriptase inhibitor (NNRTI). Unlike other agents in the class, resistance to other NNRTIs does not seem to confer resistance to etravirine. Etravirine is marketed by Janssen, a subsidiary of Johnson & Johnson. In January 2008, the US Food and Drug Administration (FDA) approved its use for people with established resistance to other drugs, making it the 30th anti-HIV drug approved in the United States and the first to be approved in 2008. It was also approved for use in Canada in April 2008. Etravirine is licensed in the United States, Canada, Israel, Russia, Australia, New Zealand, and the European Union, and is under regulatory review in Switzerland. Medical uses In the US, etravirine is indicated for the treatment of HIV-1 infection in treatment-experienced patients aged two years and older. In the EU, etravirine, in combination with a boosted protease inhibitor and other antiretrovirals, is indicated for the treatment of human-immunodeficiency-virus-type-1 (HIV-1) infection in antiretroviral-treatment-experienced people aged six years and older. Contraindication People with rare hereditary problems of galactose intolerance, the Lapp lactase deficiency or glucose-galactose malabsorption should not take etravirine. Adverse effects In 2009, the FDA prescribing information for etravirine was modified to include "postmarketing reports of cases of Stevens–Johnson syndrome, toxic epidermal necrolysis, and erythema multiforme, as well as hypersensitivity reactions characterized by rash, constitutional findings, and sometimes organ dysfunction, including liver failure." Mechanism of action Etravirine is a second-generation non-nucleoside reverse transcriptase inhibitor (NNRTI), designed to be active against HIV with mutations that confer resistance to the two most commonly prescribed first-generation NNRTIs, mutation K103N for efavirenz and Y181C for nevirapine. This potency appears to be related to etravirine's flexibility as a molecule. Etravirine is a diarylpyrimidine (DAPY), a type of organic molecule with some conformational isomerism that can bind the enzyme reverse transcriptase in multiple conformations, allowing for a more robust interaction between etravirine and the enzyme, even in the presence of mutations. Chemistry Etravirine forms colourless orthorhombic crystals in space group Pna21. The structures of these and of a number of solvate and salt forms have been reported. Research Etravirine has been studied for use in a drug repositioning application. Etravirine was shown to cause an increase in frataxin production. References CYP3A4 inducers Hepatotoxins Non-nucleoside reverse transcriptase inhibitors Aminopyrimidines Drugs developed by Johnson & Johnson Nitriles Organobromides Belgian inventions Diaryl ethers
Etravirine
[ "Chemistry" ]
700
[ "Nitriles", "Functional groups" ]
14,494,205
https://en.wikipedia.org/wiki/Octanal
Octanal is an organic compound, an aldehyde, with the chemical formula CH3(CH2)6CHO. A colorless fragrant liquid with a fruit-like odor, it occurs naturally in citrus oils. It is used commercially as a component in perfumes and in flavor production for the food industry. It is usually produced by the hydroformylation of heptene or by the dehydrogenation of 1-octanol. Octanal can also be referred to as caprylic aldehyde or C8 aldehyde. References Silberberg, 2006, Principles of Chemistry Octanal Fatty aldehydes Alkanals
Octanal
[ "Chemistry" ]
135
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
14,495,121
https://en.wikipedia.org/wiki/Closed%20user%20group
Closed user group (CUG) is a supplementary service provided by mobile operators that allows subscribers to make and receive calls to and from any member associated with the group. The service applies to SMS as well. There is an administrative owner who is responsible for invoicing. Irrespective of this, a CUG member can also make and receive calls to and from networks outside the CUG, although calls outside the CUG may not be invoiced by the administrative owner. A subscriber may: be a member of more than one but not more than ten closed user groups; be permitted to make calls outside of the closed user group (outgoing access); be permitted to receive calls from outside of the closed user group (incoming access); be allowed to make emergency calls irrespective of the group subscription; be allowed to make and receive calls within the closed user group. If the user is a member of multiple closed user groups there will be a preferred CUG assigned by the network that will be used by default. However, it is possible on a per-call basis to specify a different closed user group (of which the user is a member) for the call. It is also possible on a per-call basis to suppress the use of the preferred CUG, i.e. act as if the user is not a member of the closed user group, and to suppress the outgoing access permission, i.e. to insist that the call only go through if the destination is a member of the CUG. When an incoming call is received it is possible for the network to indicate to the called user the closed user group that is being applied to the call. For example: Mr Smith, a senior member at a pizza delivery outlet, could be a member of two closed user groups: his own team of pizza delivery agents; his peer group of senior pizza delivery executives. Mr Smith's preferred CUG would be that of his team. But based on whom Mr Smith is calling, he can either suppress or enable the preferred CUG. Also, when Mr Smith receives a call, the network would indicate which user group the call originated from. As can be seen, this supplementary service is intended for use by organizations rather than by the general public. However, there are handsets that support closed user group applications. Technical references 3GPP 22.085 Closed user group (CUG) supplementary services; Stage 1 3GPP 24.085 Closed user group (CUG) supplementary services; Stage 3 References Mobile technology Telephone services
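A toy model of the per-call screening rules described above, added as an editorial sketch: the function name, data structures, and rule ordering are simplifying assumptions and are not taken from the 3GPP specifications cited in the article.

```python
def call_allowed(caller_cugs, caller_preferred, callee_cugs,
                 selected_cug=None, suppress_preferred=False,
                 outgoing_access=True, suppress_oa=False, emergency=False):
    """Very simplified screening decision for one outgoing call.

    caller_cugs / callee_cugs: sets of CUG identifiers each party belongs to.
    selected_cug: group chosen for this call instead of the caller's preferred CUG.
    suppress_preferred: place the call as if the caller were not in any CUG.
    outgoing_access / suppress_oa: subscription-level permission for calls that
    leave the group, and its per-call suppression.
    """
    if emergency:                      # emergency calls are always allowed
        return True
    if suppress_preferred:             # treated as an ordinary, non-CUG call
        return True
    cug = selected_cug if selected_cug is not None else caller_preferred
    if cug in caller_cugs and cug in callee_cugs:
        return True                    # both parties share the group
    # The call would leave the group: allowed only with outgoing access,
    # and only if outgoing access has not been suppressed for this call.
    return outgoing_access and not suppress_oa

# Mr Smith's preferred group is his delivery team, but he selects his
# executive group for this particular call to a fellow executive.
print(call_allowed({"team", "exec"}, "team", {"exec"}, selected_cug="exec"))  # True
```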
Closed user group
[ "Technology" ]
540
[ "nan" ]
14,495,472
https://en.wikipedia.org/wiki/Augmented%20tree-based%20routing
The augmented tree-based routing (ATR) protocol, first proposed in 2007, is a multi-path DHT-based routing protocol for scalable networks. ATR relies on an augmented tree-based address space structure and a hierarchical multi-path routing scheme in order to gain scalability and good resilience against node failure/mobility and link congestion/instability. See also List of ad hoc routing protocols Mobile ad hoc network References Routing algorithms Wireless networking Ad hoc routing protocols
Augmented tree-based routing
[ "Technology", "Engineering" ]
97
[ "Computing stubs", "Wireless networking", "Computer networks engineering", "Computer network stubs" ]
14,495,617
https://en.wikipedia.org/wiki/Lacosamide
Lacosamide, sold under the brand name Vimpat among others, is a medication used for the treatment of partial-onset seizures and primary generalized tonic-clonic seizures. It is used by mouth or intravenously. It is available as a generic medication. Medical uses Lacosamide is indicated for the treatment of partial-onset seizures and as adjunctive therapy in the treatment of primary generalized tonic-clonic seizures. Off-label use As with other anti-epileptic drugs (AEDs), lacosamide may have a variety of off-label uses, including for pain management and treatment of mental health disorders. Lacosamide and other AEDs have been used off-label in the management of bipolar disorder, cocaine addiction, dementia, depression, diabetic peripheral neuropathy, fibromyalgia, headache, hiccups, Huntington's disease, mania, migraine, obsessive-compulsive disorder, panic disorder, restless leg syndrome, and tinnitus. Combinations of AEDs are often employed for seizure reduction. Studies are underway for the use of lacosamide as a monotherapy for partial-onset seizures, diabetic neuropathy, and fibromyalgia. Contraindications The FDA has assigned lacosamide to pregnancy category C. Animal studies have reported incidences of fetal mortality and growth deficit. Lacosamide has not been tested during human pregnancy, and should be administered with caution. In addition, it has not been determined whether lacosamide is excreted in breast milk. Side effects Lacosamide was generally well tolerated in adult patients with partial-onset seizures. The side-effects most commonly leading to discontinuation were dizziness, ataxia, diplopia (double vision), nystagmus, nausea, vertigo and drowsiness. These adverse reactions were observed in at least 10% of patients. Less common side-effects include tremors, blurred vision, vomiting and headache. Gastrointestinal Lacosamide is a generally well-tolerated drug; the most commonly reported gastrointestinal side effects are nausea, vomiting, and diarrhea. Central nervous system Dizziness was the most common treatment-related adverse event. Other CNS effects are headache, drowsiness, blurred vision, involuntary movements, memory problems, diplopia (double vision), trembling or shaking of the hands, unsteadiness, ataxia. Psychiatric Panic attacks; agitation or restlessness; irritability and aggression, anxiety, or depression; suicidality; insomnia and mania; altered mood; false and unusual sense of well-being. Lacosamide appears to have a low incidence of psychiatric side effects, with psychosis reported in only 0.3% of patients. Cardiovascular There is the risk of postural hypotension as well as arrhythmias. In addition, there is the possibility of atrioventricular block. There have also been post-marketing reports of lacosamide causing atrial fibrillation and atrial flutter in some populations, namely those with diabetic neuropathy. Allergies There have been reports of rash and pruritus. Warnings Suicidal behavior and ideation have been observed as early as one week after starting treatment with lacosamide, an adverse reaction associated with the use of most AEDs. In clinical trials with a median treatment duration of 12 weeks, the incidence of suicidal ideation was 0.43% among 27,863 patients as opposed to 0.24% among 16,029 placebo-treated patients. Suicidal behavior was observed in 1 of every 530 patients treated.
In pregnancy In a study conducted to assess the teratogenic potential of AEDs in the zebrafish embryo, the teratogenicity index of lacosamide was found to be higher than that of lamotrigine, levetiracetam, and ethosuximide. Lacosamide administration resulted in different malformations in the neonatal zebrafish depending on dosage. Overdose There is no known antidote in the event of an overdose. Pharmacology Pharmacodynamics Lacosamide is a functionalized amino acid that produces activity in the maximal electroshock seizure (MES) test and, like some other antiepileptic drugs (AEDs), is believed to act through voltage-gated sodium channels. Lacosamide enhances the slow inactivation of voltage-gated sodium channels without affecting the fast inactivation of voltage-gated sodium channels. This inactivation prevents the channel from opening, helping end the action potential. Many antiepileptic drugs, like carbamazepine or lamotrigine, slow the recovery from inactivation and hence reduce the ability of neurons to fire action potentials. Inactivation only occurs in neurons firing action potentials; this means that drugs that modulate fast inactivation selectively reduce the firing in active cells. Slow inactivation is similar but does not produce complete blockade of voltage-gated sodium channels, with both activation and inactivation occurring over hundreds of milliseconds or more. Lacosamide makes this inactivation happen at less depolarized membrane potentials. This means that lacosamide only affects neurons which are depolarized or active for long periods of time, typical of neurons at the focus of epilepsy. Lacosamide administration results in the inhibition of repetitive neuronal firing, the stabilization of hyperexcitable neuronal membranes, and the reduction of long-term channel availability, but does not affect physiological function. Lacosamide has a dual mechanism of action. It also modulates collapsin response mediator protein 2 (CRMP-2), preventing the formation of abnormal neuronal connections in the brain. Lacosamide does not affect AMPA, kainate, NMDA, GABAA, GABAB or a variety of dopaminergic, serotonergic, adrenergic, muscarinic or cannabinoid receptors and does not block potassium or calcium currents. Lacosamide does not modulate the reuptake of neurotransmitters including norepinephrine, dopamine, and serotonin. In addition, it does not inhibit GABA transaminase. Preclinical research In preclinical trials, the effect of lacosamide administration on animal models of epilepsy was tested using the Frings audiogenic seizures (AGS)-susceptible mouse model of seizure activity, with an effective dose (ED50) of 0.63 mg/kg, i.p. The effect of lacosamide was also assessed using the MES test to detect inhibition of seizure spread. Lacosamide administration was successful in preventing the spread of seizures induced by MES in mice (ED50 = 4.5 mg/kg, i.p.) and rats (ED50 = 3.9 mg/kg, p.o.). In preclinical trials, administration of lacosamide in combination with other AEDs resulted in synergistic anticonvulsant effects. Lacosamide produced effects in animal models of essential tremor, tardive dyskinesia, schizophrenia, and anxiety. Preclinical trials found the S-stereoisomer to be less potent than the R-stereoisomer in the treatment of seizures. Pharmacokinetics When administered orally in healthy individuals, lacosamide is rapidly absorbed from the gastrointestinal tract. Little of the drug is lost via the first-pass effect, and the drug thus has an oral bioavailability of nearly 100%.
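As an illustration of the pharmacokinetic behaviour discussed in this section, the following Python sketch simulates plasma accumulation under twice-daily dosing with a simple one-compartment model (instantaneous absorption, first-order elimination). The half-life of 13 hours (within the 12–16 hour range reported below), the dose, and the volume of distribution are illustrative assumptions only, not dosing guidance.

```python
import math

# Illustrative one-compartment model of repeated oral dosing (assumptions, not clinical guidance).
HALF_LIFE_H = 13.0      # assumed, within the reported 12-16 hour range
DOSE_MG = 100.0         # assumed per-dose amount
V_D_LITRES = 42.0       # assumed volume of distribution (0.6 L/kg for a 70 kg adult)
INTERVAL_H = 12.0       # twice-daily dosing
K_ELIM = math.log(2) / HALF_LIFE_H

def concentration(t_hours, n_doses=14):
    """Plasma concentration (mg/L) at time t, superimposing first-order decay of every dose given so far."""
    total = 0.0
    for i in range(n_doses):
        t_dose = i * INTERVAL_H
        if t_hours >= t_dose:
            total += (DOSE_MG / V_D_LITRES) * math.exp(-K_ELIM * (t_hours - t_dose))
    return total

# Trough levels rise toward steady state over a few days, as expected for a ~13 h half-life.
for day in (1, 2, 3, 5, 7):
    t = day * 24.0 - 1e-9   # just before the dose given at the start of that day
    print(f"day {day}: trough ~ {concentration(t):.2f} mg/L")
```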
In adults, lacosamide demonstrates a low plasma protein binding of <15%, which reduces the potential for interaction with other drugs. Lacosamide is at its highest concentration in blood plasma approximately 1 to 4 hours after oral administration. Lacosamide has a half-life of about 12–16 hours, which remains unchanged if the patient is also taking enzyme inducers. Consequently, the drug is administered twice per day at 12-hour intervals. Lacosamide is excreted renally, with 95% of the drug eliminated in the urine. 40% of the compound remains unchanged from its original structure, while the rest of the elimination product consists of metabolites of lacosamide. Just 0.5% of the drug is eliminated in the feces. The major metabolic pathway of lacosamide is CYP2C9-, CYP2C19-, and CYP3A4-mediated demethylation. The dose-response curve for lacosamide is linear and proportional for oral doses of up to 800 mg and intravenous doses of up to 300 mg. Lacosamide has low potential for drug-drug interactions, and no pharmacokinetic interactions have been found to occur with other AEDs that act on sodium channels. A study on the binding of lacosamide to CRMP-2 in Xenopus oocytes showed both competitive and specific binding. Lacosamide has a Kd value just under 5 μM and a Bmax of about 200 pM/mg. The volume of distribution (Vd) of lacosamide in plasma is 0.6 L/kg, which is close to the volume of total body water. Lacosamide is amphiphilic and is thus hydrophilic while also lipophilic enough to cross the blood-brain barrier. Chemistry Lacosamide is a powdery, white to light yellow crystalline compound. The chemical name of lacosamide is (R)-2-acetamido-N-benzyl-3-methoxypropionamide and the systematic name is N2-Acetyl-N-benzyl-O-methyl-D-serinamide. Lacosamide is a functionalized amino acid molecule that has high solubility in water and DMSO, with a solubility of 20.1 mg/mL in phosphate-buffered saline (PBS, pH 7.5, 25 °C). The molecule has six rotatable bonds and one aromatic ring. Lacosamide melts at 143–144 °C and boils at 536.447 °C at a pressure of 760 mmHg. Synthesis The following three-step synthesis of lacosamide was proposed in 1996. (R)-2-amino-3-hydroxypropanoic acid is treated with acetic anhydride and acetic acid. The product is treated first with N-methylmorpholine, isobutyl chloroformate, and benzylamine, next with methyl iodide and silver oxide, forming lacosamide. More efficient routes to synthesis have been proposed in recent years. History Lacosamide was discovered at the University of Houston in 1996. Researchers there hypothesized that modified amino acids may be therapeutically useful in the treatment of epilepsy. A few hundred such molecules were synthesized over several years and these were tested phenotypically in an epilepsy disease model performed in rats. N-benzyl-2-acetamido-3-methoxypropionamide was found to be highly efficacious in this model, with the biological activity traced specifically to its R enantiomer. This compound was to become lacosamide after being licensed by Schwarz Pharma, which completed its pre-clinical and early clinical development. After its purchase of Schwarz Pharma in 2006, UCB completed the clinical development program and obtained marketing approval for lacosamide. Its precise mechanism of action was unknown at the time of approval, and the exact amino acid targets involved remain uncertain to this day. The U.S.
Food and Drug Administration (FDA) accepted UCB's New Drug Application for lacosamide on November 29, 2007, beginning the approval process for the drug. UCB also filed for marketing approval in the European Union (EU); the European Medicines Agency accepted the marketing application for review in May 2007. The drug was approved in the EU on September 3, 2008. It was approved in the US on October 29, 2008. The release of lacosamide was delayed owing to an objection about its placement into schedule V of the Controlled Substances Act. The FDA issued their final rule of placement into Schedule V on June 22, 2009. Lacosamide's US patent expired on March 17, 2022. Partial-onset seizures Lacosamide was tested in three placebo-controlled, double-blind, randomized trials involving at least 1300 patients. In a multicenter, multinational, placebo-controlled, double-blind, randomized clinical trial conducted to determine the efficacy and safety of different doses of lacosamide in individuals with poorly controlled partial-onset seizures, lacosamide was found to significantly reduce seizure frequency when given in addition to other antiepileptics, at doses of 400 and 600 milligrams a day. Peripheral neuropathy In a smaller trial of patients with diabetic neuropathy, lacosamide also provided significantly better pain relief when compared with placebo. Lacosamide administration in combination with 1–3 other AEDs was well tolerated in patients. Lacosamide administered at 400 mg/day was found to significantly reduce pain in patients with diabetic neuropathy in a multicenter, double-blind, placebo-controlled Phase III trial with a treatment duration of 18 weeks. A small (n=24) study for small fiber peripheral neuropathy also showed positive results. Society and culture Names Lacosamide is the international nonproprietary name (INN). It was formerly known as erlosamide, harkoseride, SPM-927, and ADD 234037. Lacosamide is sold under the brand name Vimpat by UCB, and under the brand name Motpoly XR by Acute Pharmaceuticals. In Pakistan, it is marketed by G.D. Searle as Lacolit. Research Clinical trials are underway for the use of lacosamide as monotherapy for partial-onset seizures. There is no evidence that lacosamide provides additional value over current antiepileptic drugs (AEDs) for the treatment of partial-onset seizures, but it may offer a safety advantage. Newer AEDs, including lacosamide, vigabatrin, felbamate, gabapentin, tiagabine, and rufinamide, have been found to be more tolerable and safer than older drugs such as carbamazepine, phenytoin, and valproate. References Further reading Acetamides Anticonvulsants Belgian inventions Benzyl compounds Ethers Sodium channel blockers
Lacosamide
[ "Chemistry" ]
3,045
[ "Organic compounds", "Functional groups", "Ethers" ]
14,495,904
https://en.wikipedia.org/wiki/Ralf%20Brown%27s%20Interrupt%20List
Ralf Brown's Interrupt List (aka RBIL, x86 Interrupt List, MS-DOS Interrupt List or INTER) is a comprehensive list of interrupts, calls, hooks, interfaces, data structures, CMOS settings, memory and port addresses, as well as processor opcodes for x86 machines from the 1981 IBM PC up to 2000 (including many clones), most of it still applying to IBM PC compatibles today. It also lists some special function registers for the NEC V25 and V35 microcontrollers. Overview The list covers operating systems, device drivers, and application software; both documented and undocumented information including bugs, incompatibilities, shortcomings, and workarounds, with version, locale, and date information, often at a detail level far beyond that found in the contemporary literature. A large part of it covers system BIOSes and internals of operating systems such as DOS, OS/2, and Windows, as well as their interactions. It was a widely used resource for IBM PC system developers and analysts, as well as application programmers, in the pre-Windows era. Parts of the compiled information have been used in the creation of several books on systems programming, some of which have also been translated into Chinese, Japanese and Russian. As such, the compilation has proven to be an important resource in developing various closed and open source operating systems, including Linux and FreeDOS. Today it is still used as a reference for BIOS calls and to develop programs for DOS as well as other system-level software. The project is the result of the research and collaborative effort of more than 650 listed contributors worldwide over a period of 15 years, of whom about 290 provided significant information (and some 55 of them even more than once). The original list was created in January 1985 by Janet Jack and others, and, named "Interrupt List for MS-DOS", it was subsequently maintained and mailed to requestors on Usenet by Ross M. Greenberg until 1986. Since October 1987 it has been maintained by Ralf D. Brown, a researcher at Carnegie Mellon University's Language Technologies Institute. Information from several other interrupt listings was merged into the list in order to establish one comprehensive reference compilation. Over the years, Michael A. Shiels, Timothy Patrick Farley, Matthias R. Paul, Robin Douglas Howard Walker, Wolfgang Lierz and Tamura Jones became major contributors to the project, providing information throughout the list. The project was also expanded to include other PC development-related information and therefore absorbed a number of independently maintained lists on PC I/O ports (by Wim Osterholt and Matthias R. Paul), BIOS CMOS memory contents (by Atley Padgett Peterson), processor opcodes (by Alex V. Potemkin) and bugs (by Harald Feldmann). Brown and Paul also conducted several systematic surveys on specific hardware and software details among a number of dedicated user groups in order to validate some information and to help fill gaps in the list. Originally, the list was distributed in an archive named INTERRUP in various compression formats as well as in the form of diffs. The distribution file name was changed to include a version in the form INTERnyy (with n = issue number, and yy = 2-digit release year) in 1988. In mid-1989 the distribution settled on using only ZIP compression.
When the archive reached the size of a 360 KB floppy in June 1991, the distribution was split into several files following an INTERrrp.ZIP naming scheme (with rr = revision, starting with 26 for version 91.3, and p = part indicator of the package, starting with letter A). While the compilation was officially named "MS-DOS Interrupt List" and "x86 Interrupt List" (abbreviated as "INTER") by its maintainer, the community coined the unofficial name "Ralf Brown's Interrupt List" (abbreviated as "RBIL") in the 1990s. The publication is currently at revision 61, dated 17 July 2000, with almost 8 MB of ASCII text comprising a large number of entries and tables, fully cross-linked, which would result in more than 3700 pages (at 60 lines per page) of condensed information when printed. Of this, the interrupt list itself makes up some 5.5 MB for more than 2500 pages printed. While the project is not officially abandoned and the website is still maintained, new releases have not been forthcoming for a very long time, despite the fact that information was still pending release even before the INTER61 release in 2000. New releases were planned several times in 2001 and 2002, but when they did not materialize, portions of the new information on DOS and PC internals provided by Paul were circulated in preliminary form in the development community for peer review and to assist in operating system development. See also BIOS interrupt call DOS API INT (x86 instruction) Malware analysis Notes References External links (NB. Delorie Software's HTML-converted version of INTER61.) (NB. Computer Tyme's HTML-converted version of INTER61.) Interrupts x86 architecture IBM PC compatibles History of computing
Ralf Brown's Interrupt List
[ "Technology" ]
1,059
[ "Interrupts", "Events (computing)", "Computers", "History of computing" ]
14,496,121
https://en.wikipedia.org/wiki/Conductance%20%28graph%20theory%29
In theoretical computer science, graph theory, and mathematics, the conductance is a parameter of a Markov chain that is closely tied to its mixing time, that is, how rapidly the chain converges to its stationary distribution, should it exist. Equivalently, the conductance can be viewed as a parameter of a directed graph, in which case it can be used to analyze how quickly random walks in the graph converge. The conductance of a graph is closely related to the Cheeger constant of the graph, which is also known as the edge expansion or the isoperimetric number. However, due to subtly different definitions, the conductance and the edge expansion do not generally coincide if the graphs are not regular. On the other hand, the notion of electrical conductance that appears in electrical networks is unrelated to the conductance of a graph. History The conductance was first defined by Mark Jerrum and Alistair Sinclair in 1988 to prove that the permanent of a matrix with entries from $\{0,1\}$ has a polynomial-time approximation scheme. In the proof, Jerrum and Sinclair studied the Markov chain that switches between perfect and near-perfect matchings in bipartite graphs by adding or removing individual edges. They defined and used the conductance to prove that this Markov chain is rapidly mixing. This means that, after running the Markov chain for a polynomial number of steps, the resulting distribution is guaranteed to be close to the stationary distribution, which in this case is the uniform distribution on the set of all perfect and near-perfect matchings. This rapidly mixing Markov chain makes it possible in polynomial time to draw approximately uniform random samples from the set of all perfect matchings in the bipartite graph, which in turn gives rise to the polynomial-time approximation scheme for computing the permanent. Definition For undirected $d$-regular graphs $G$ without edge weights, the conductance $\varphi(G)$ is equal to the Cheeger constant $h(G)$ divided by $d$, that is, we have $\varphi(G) = h(G)/d$. More generally, let $G$ be a directed graph with $n$ vertices, vertex set $V$, edge set $E$, and non-negative real weights $a_{ij}$ on each edge $(i,j) \in E$. Let $S \subseteq V$ be any vertex subset. The conductance $\varphi(S)$ of the cut $(S, \bar S)$ is defined via $$\varphi(S) = \frac{a(S, \bar S)}{a(S)},$$ where $$a(S, \bar S) = \sum_{i \in S} \sum_{j \in \bar S} a_{ij},$$ and so $a(S, \bar S)$ is the total weight of all edges that are crossing the cut from $S$ to $\bar S$, and $$a(S) = \sum_{i \in S} \sum_{j \in V} a_{ij}$$ is the volume of $S$, that is, the total weight of all edges that start at $S$. If $a(S)$ equals $0$, then $a(S, \bar S)$ also equals $0$ and $\varphi(S)$ is defined as $1$. The conductance of the graph is now defined as the minimum conductance over all possible cuts: $$\varphi(G) = \min_{\emptyset \neq S \subsetneq V} \max\{\varphi(S), \varphi(\bar S)\}.$$ Equivalently, the conductance satisfies $$\varphi(G) = \min_{\substack{S \subseteq V \\ 0 < a(S) \le a(V)/2}} \varphi(S),$$ where $a(V)$ denotes the total weight of all edges of the graph. Generalizations and applications In practical applications, one often considers the conductance only over a cut. A common generalization of conductance is to handle the case of weights assigned to the edges: then the weights are added; if the weight is in the form of a resistance, then the reciprocal weights are added. The notion of conductance underpins the study of percolation in physics and other applied areas; thus, for example, the permeability of petroleum through porous rock can be modeled in terms of the conductance of a graph, with weights given by pore sizes. Conductance also helps measure the quality of a spectral clustering. The maximum among the conductance of clusters provides a bound which can be used, along with inter-cluster edge weight, to define a measure on the quality of clustering. Intuitively, the conductance of a cluster (which can be seen as a set of vertices in a graph) should be low. Apart from this, the conductance of the subgraph induced by a cluster (called "internal conductance") can be used as well.
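To make the weighted definition above concrete, the following Python sketch computes the conductance of a single cut and, by brute force over all cuts, the conductance of a small weighted graph; it follows the definitions given above and is only practical for very small vertex sets.

```python
from itertools import combinations

def cut_conductance(weights, S, vertices):
    """phi(S) = a(S, S_bar) / a(S); weights[(i, j)] is the weight of the directed edge i -> j."""
    S = set(S)
    S_bar = set(vertices) - S
    a_S = sum(w for (i, j), w in weights.items() if i in S)               # volume of S
    a_cross = sum(w for (i, j), w in weights.items() if i in S and j in S_bar)
    return 1.0 if a_S == 0 else a_cross / a_S

def graph_conductance(weights, vertices):
    """Minimum over all cuts of max(phi(S), phi(S_bar)) -- brute force, exponential in |V|."""
    vertices = list(vertices)
    best = 1.0
    for r in range(1, len(vertices)):
        for S in combinations(vertices, r):
            phi = max(cut_conductance(weights, S, vertices),
                      cut_conductance(weights, set(vertices) - set(S), vertices))
            best = min(best, phi)
    return best

# Small symmetric example: two triangles joined by a single light edge (a bottleneck).
w = {}
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    w[(a, b)] = w[(b, a)] = 1.0
w[(2, 3)] = w[(3, 2)] = 0.1
print(graph_conductance(w, range(6)))  # small value, reflecting the bottleneck between the triangles
```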
Markov chains For an ergodic reversible Markov chain with an underlying graph G, the conductance is a way to measure how hard it is to leave a small set of nodes. Formally, the conductance of the graph is defined as the minimum over all sets $S$ of the ergodic flow out of $S$ divided by the capacity (stationary probability) of $S$. Alistair Sinclair showed that conductance is closely tied to mixing time in ergodic reversible Markov chains. We can also view conductance in a more probabilistic way, as the probability of leaving a set of nodes given that we started in that set to begin with. This may also be written as $$\Phi(S) = \frac{\sum_{x \in S,\, y \notin S} \pi(x) P(x, y)}{\pi(S)},$$ where $\pi$ is the stationary distribution of the chain and $P$ is its transition matrix. In some literature, this quantity is also called the bottleneck ratio of G. Conductance is related to Markov chain mixing time in the reversible setting. Precisely, for any irreducible, reversible Markov chain with self-loop probabilities $P(x, x) \ge 1/2$ for all states $x$ and an initial state $x_0$, the mixing time satisfies $$t_{\mathrm{mix}}(\varepsilon) \le \frac{2}{\Phi^2} \log \frac{1}{\varepsilon\, \pi(x_0)},$$ where $\Phi = \min_{S :\, \pi(S) \le 1/2} \Phi(S)$ is the conductance of the whole chain. See also Resistance distance Percolation theory Krackhardt E/I Ratio Notes References Markov processes Algebraic graph theory Matrices Graph invariants
Conductance (graph theory)
[ "Mathematics" ]
965
[ "Mathematical objects", "Graph theory", "Matrices (mathematics)", "Graph invariants", "Mathematical relations", "Algebra", "Algebraic graph theory" ]
14,496,397
https://en.wikipedia.org/wiki/Snarfing
Snarf is a term used by computer programmers and the UNIX community meaning to copy a file or data over a network, for any purpose, with additional specialist meanings to access data without appropriate permission. It also refers to using command line tools to transfer files through the HTTP, gopher, finger, and FTP protocols without user interaction, and to a method of achieving cache coherence in a multiprocessing computer architecture through observation of writes to cached data. Example An example of a snarf is the Evil twin attack, using a simple shell script running software like AirSnarf to create a wireless hotspot complete with a captive portal. Wireless clients that associate to a snarf access point will receive an IP, DNS, and gateway and appear completely normal. Users will have all of their DNS queries resolve to the attacker's IP number, regardless of their DNS settings, so any website they attempt to visit will bring up a snarf "splash page", requesting a username and password. The username and password entered by unsuspecting users will be mailed to root@localhost. The reason this works is: Legitimate access points can be impersonated and/or drowned out by rogue access points, and Users without a means to validate the authenticity of access points will nevertheless give up their hotspot credentials when asked for them See also Bluejacking Bluesnarfing Pod slurping References External links Airsnarf Attack Wiktionary "snarf" Privacy of telecommunications Web security exploits Cybercrime
Snarfing
[ "Technology" ]
324
[ "Computer security exploits", "Web security exploits" ]
14,496,433
https://en.wikipedia.org/wiki/Coreu
COREU (French: Correspondance européenne – telex network of European correspondents, also EUKOR-Netzwerk in Austria) is a communication network of the European Union connecting the Council of the European Union, the European correspondents of the foreign ministries of the EU member states, the permanent representatives of member states in Brussels, the European Commission, and the General Secretariat of the Council of the European Union. The European Parliament is not among the participants. COREU is the European equivalent of the American Secret Internet Protocol Router Network (SIPRNet, also known as Intelink-S). COREU's official aim is fast communication in case of crisis. The network enables closer cooperation in matters regarding foreign affairs. In actuality, the system's function exceeds that of mere communication; it also enables decision-making. COREU's first goal is to enable the exchange of information before and after decisions. Relaying upfront negotiations in preparation for meetings is the second goal. In addition, the system also allows the editing of documents and decision-making itself, especially when time is short. While the first two goals are preparatory measures for a shared foreign policy, the third is a working method, shaped by practice, that defines the character of the Common Foreign and Security Policy. Members (The following information dates from 2013): There is one representative in each of the capital cities in the EU (since 1973). In Germany, for example, this is the European correspondent (EU-KOR) from the Foreign Office. In Austria it is the European correspondent from the Referat II.1.a in the Federal Ministry for Europe, Integration and Foreign Affairs. They are the correspondents (since 1982) for the European Commission. They comprise the secretariat for the European Council. They also make up the European External Action Service (EEAS) (responsible for foreign policy issues, since 1987). Data volume and technical details COREU functions as a spoke-hub distribution paradigm system with the hub in Brussels. The network is operated by the European Union Intelligence and Situation Centre (formerly Joint Situation Center, JSC). The technical infrastructure is located in a building of the European Council. COREU may be described as an advanced telex system with encrypted messages via dedicated terminals. Once a message has reached the destination, it is then redistributed via the local media. In contrast, messages of governments are transmitted via local media to the correspondents and from there delivered point-to-point to Brussels via COREU. In 2010, approximately 8500 communications were distributed over this network. History A telex-based communication system under the name COREU was established in 1973. Originally, only the ministries of Foreign Affairs in the European capitals were connected to it. This telex system was replaced in 1997 by the mail system CORTESY (COREU Terminal Equipment System). The name was retained despite the technical innovation. COREU was reportedly compromised by hackers working for the People's Liberation Army Strategic Support Force, allowing for the theft of thousands of low-classified documents and diplomatic cables. References External links European Union glossary: Coreu European Union Foreign relations of the European Union Networking standards Diplomacy Telecommunications techniques Computer networks Computer networks engineering
Coreu
[ "Technology", "Engineering" ]
656
[ "Networking standards", "Computer standards", "Computer networks engineering", "Computer engineering" ]
14,498,136
https://en.wikipedia.org/wiki/Biogenesis%20of%20lysosome-related%20organelles%20complex%201
BLOC-1 or biogenesis of lysosome-related organelles complex 1 is a ubiquitously expressed multisubunit protein complex in a group of complexes that also includes BLOC-2 and BLOC-3. BLOC-1 is required for normal biogenesis of specialized organelles of the endosomal-lysosomal system, such as melanosomes and platelet dense granules. These organelles are called LROs (lysosome-related organelles) which are apparent in specific cell-types, such as melanocytes. The importance of BLOC-1 in membrane trafficking appears to extend beyond such LROs, as it has demonstrated roles in normal protein-sorting, normal membrane biogenesis, as well as vesicular trafficking. Thus, BLOC-1 is multi-purposed, with adaptable function depending on both organism and cell-type. Mutations in all BLOC complexes lead to diseased states characterized by Hermansky-Pudlak Syndrome (HPS), a pigmentation disorder subdivided into multiple types depending on the mutation, highlighting the role of BLOC-1 in proper LRO-function. BLOC-1 mutations also are thought to be linked to schizophrenia, and BLOC-1 dysfunction in the brain has important ramifications in neurotransmission. Much effort has been given to uncovering the molecular mechanisms of BLOC-1 function to understand its role in these diseases. Ultracentrifugation coupled with electron microscopy demonstrated that BLOC-1 has 8 subunits (pallidin, cappuccino, dysbindin, Snapin, Muted, BLOS1, BLOS2, and BLOS3) that are linked linearly to form a complex of roughly 300 angstroms in length and 30 angstroms in diameter. Bacterial recombination also demonstrated heterotrimeric subcomplexes containing pallidin, cappuccino, and BLOS1 as well as dysbindin, Snapin, and BLOS2 as important intermediate structures. These subcomplexes may explain different functional outcomes observed by altering different BLOC-1 subunits. Furthermore, dynamic bending of the complex by as much as 45 degrees indicates flexibility is likely linked to proper BLOC-1 function. Within the endomembrane system, BLOC-1 acts at the early endosome, as witnessed in electron microscopy experiments, where it helps coordinate protein-sorting of LAMPs (lysosome-associated membrane proteins). Multiple studies recapitulate an association with the adaptor complex AP-3, a protein complex involved in vesicular trafficking of cargo from the early endosome to lysosomal compartments. BLOC-1 demonstrates physical association with AP-3 and BLOC-2 upon immunoprecipitation, although not to both complexes at the same time. Indeed, BLOC-1 functions in an AP-3 dependent route to sort CD63 (LAMP3) and Tyrp1. Furthermore, another study suggests an AP-3 dependent route of BLOC-1 also facilitates trafficking of LAMP1 and Vamp7-T1, a SNARE protein. An AP-3-independent, BLOC-2-dependent route of BLOC-1 sorting of Tyrp1 is also observed. Therefore, BLOC-1 appears to have multifaceted trafficking behavior. Indeed, AP-3 knockout mice maintain the ability to deliver Tyrp1 to melanosomes, supporting the existence of multiple BLOC-1 trafficking pathways. Evidence, however, suggests BLOC-2 may directly or indirectly intersect BLOC-1 trafficking downstream of early endosomes; BLOC-1 deficiency promotes missorted Tyrp1 at the plasma membrane, while BLOC-2 deficiency promotes Tyrp1 concentration at intermediate endosomal compartments. These studies demonstrate that BLOC-1 facilitates protein transport to lysosomal compartments, such as melanosomes, via multiple routes, although the exact functional association with BLOC-2 is unclear.
The majority of studies have focused on mammalian BLOC-1, presumably because of its association with multiple disease states in humans. Still, it is clear BLOC-1 has an evolutionarily conserved importance in trafficking because its yeast homolog, which contains Vab2, has been proposed to modulate Rab5 (Vps21), which is essential for its membrane localization, by acting as a receptor on early endosomes for Rab5-GAP Msb3. Although this study purports the function of BLOC-1 on early endosomes, it has recently been argued that yeast do not contain an early endosome. In light of these newer findings, it appears, BLOC-1 may actually act at the TGN in yeast. Nevertheless, BLOC-1 is important for proper endomembrane function in both lower and higher order eukaryotes. In mammalian cells, most studies have focused on the ability of BLOC-1 to sort proteins. However, recent findings indicate that BLOC-1 has more complex functions in membrane biogenesis by associating with the cytoskeleton. Recycling endosome biogenesis is mediated by BLOC-1 as a hub for cytoskeletal activity. The kinesin KIF13A and actin machinery (AnxA2 and Arp2/3) appear to interact with BLOC-1 to generate recycling endosomes/recycling endosome tubules where microtubule action may lengthen tubules and microfilament action may stabilize or excise tubules. The BLOC-1 subunit pallidin associates with synaptic cytoskeletal components in Drosophila melanogaster neurons. Thus, BLOC-1 appears to engage in both protein sorting as well as membrane biogenesis via diverse mechanisms. Further study will be required to synthesize any of these molecular interactions into possible unified mechanisms. Studies of BLOC-1 in the nervous system have begun to link numerous molecular and cellular mechanisms to its proposed contribution to schizophrenia. Knock-down studies of the dysbindin gene DTNBP1 via siRNA demonstrated that the dysbindin subunit is integral for the signaling and recycling of the D2 receptor (DRD2) but not the D1 receptor. BLOC-1 mutations in dysbindin therefore can alter dopaminergic signaling in the brain which may confer symptoms of schizophrenia. These results appear to be relevant to the whole complex as the majority of expressed dysbindin localized to the BLOC-1 complex in the mouse brain. Furthermore, proper neurite extension appears to be regulated by BLOC-1, which may have molecular links to the ability of BLOC-1 to physically associate in vitro with SNARE proteins such as SNAP-25, SNAP-17, and syntaxin 13. This interaction with SNAREs could aid in membrane trafficking toward neurite extensions. Studies in Drosophila melanogaster indicate pallidin is non-essential for synaptic vesicle homeostasis or anatomy but is essential under conditions of increased neuronal signaling to maintain vesicular trafficking from endosomes via recycling mechanisms. The effects of a non-functional Bloc1s6 gene (encoding for pallidin) on the metabolome of the post-natal mouse hippocampus were explored using LC-MS, revealing altered levels of a variety of metabolites. Particularly intriguing effects include an increase in glutamate (and its precursor glutamine), an excitatory neurotransmitter linked to schizophrenia, as well as decreases in the neurotransmitters phenylalanine and tryptophan. Overall, modifications in the metabolome of these mice extend to nucleobase molecules and lysophospholipids as well, implicating further dysregulation effects of BLOC-1 deficiencies to plausible molecular contributions of schizophrenia. 
Complex components The identified protein subunits of BLOC-1 include: pallidin muted (protein) dysbindin cappuccino (protein) Snapin BLOS1 BLOS2 BLOS3 References Cell biology
Biogenesis of lysosome-related organelles complex 1
[ "Biology" ]
1,652
[ "Cell biology" ]
14,498,167
https://en.wikipedia.org/wiki/Goldbach%E2%80%93Euler%20theorem
In mathematics, the Goldbach–Euler theorem (also known as Goldbach's theorem) states that the sum of 1/(p − 1) over the set of perfect powers p, excluding 1 and omitting repetitions, converges to 1: $$\sum_{p} \frac{1}{p-1} = \frac{1}{3} + \frac{1}{7} + \frac{1}{8} + \frac{1}{15} + \frac{1}{24} + \frac{1}{26} + \frac{1}{31} + \cdots = 1.$$ This result was first published in Euler's 1737 paper "Variæ observationes circa series infinitas". Euler attributed the result to a letter (now lost) from Goldbach. Proof Goldbach's original proof to Euler involved assigning a constant to the harmonic series: $x = 1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots$, which is divergent. Such a proof is not considered rigorous by modern standards. There is a strong resemblance between the method of sieving out powers employed in his proof and the method of factorization used to derive Euler's product formula for the Riemann zeta function. Let $x$ be given by $$x = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots$$ Since the sum of the reciprocal of every power of 2 is $\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1$, subtracting the terms with powers of 2 from $x$ gives $$x - 1 = 1 + \frac{1}{3} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{9} + \frac{1}{10} + \cdots$$ Repeat the process with the terms with the powers of 3, whose reciprocals sum to $\tfrac{1}{3} + \tfrac{1}{9} + \tfrac{1}{27} + \cdots = \tfrac{1}{2}$: $$x - 1 - \frac{1}{2} = 1 + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{10} + \frac{1}{11} + \cdots$$ Absent from the above sum are now all terms with powers of 2 and 3. Continue by removing terms with powers of 5, 6 and so on until the right side is exhausted to the value of 1. Eventually, we obtain the equation $$x - 1 - \frac{1}{2} - \frac{1}{4} - \frac{1}{5} - \frac{1}{6} - \frac{1}{9} - \cdots = 1,$$ which we rearrange into $$x - 1 = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{9} + \cdots,$$ where the denominators consist of all positive integers that are the non-powers minus 1. By subtracting the previous equation from the definition of $x$ given above, we obtain $$1 = \frac{1}{3} + \frac{1}{7} + \frac{1}{8} + \frac{1}{15} + \frac{1}{24} + \frac{1}{26} + \frac{1}{31} + \cdots,$$ where the denominators now consist only of perfect powers minus 1. While lacking mathematical rigor, Goldbach's proof provides a reasonably intuitive argument for the theorem's truth. Rigorous proofs require proper and more careful treatment of the divergent terms of the harmonic series. Other proofs make use of the fact that the sum of 1/(p − 1) over the set of perfect powers p, excluding 1 but including repetitions, converges to 1 by demonstrating the equivalence: $$\sum_{m=2}^{\infty} \sum_{n=2}^{\infty} \frac{1}{m^{n}} = \sum_{m=2}^{\infty} \frac{1}{m(m-1)} = 1.$$ See also Goldbach's conjecture List of sums of reciprocals References Theorems in analysis Mathematical series Articles containing proofs
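The theorem is easy to check numerically; the Python sketch below sums 1/(p − 1) over the distinct perfect powers p up to a cutoff and shows the partial sums approaching 1 (convergence is slow, so each truncated sum stays slightly below 1).

```python
def perfect_powers(limit):
    """Distinct perfect powers m**n (m >= 2, n >= 2) not exceeding limit."""
    powers = set()
    m = 2
    while m * m <= limit:
        p = m * m
        while p <= limit:
            powers.add(p)
            p *= m
        m += 1
    return sorted(powers)

for limit in (10**2, 10**4, 10**6, 10**8):
    partial = sum(1.0 / (p - 1) for p in perfect_powers(limit))
    print(f"perfect powers up to {limit:>9}: sum of 1/(p-1) = {partial:.6f}")
# The partial sums creep up toward 1, in line with the Goldbach-Euler theorem.
```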
Goldbach–Euler theorem
[ "Mathematics" ]
428
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Series (mathematics)", "Mathematical structures", "Calculus", "Articles containing proofs", "Mathematical problems" ]
14,498,600
https://en.wikipedia.org/wiki/Chromo%E2%80%93Weibel%20instability
The Chromo–Weibel instability is a plasma instability present in homogeneous or nearly homogeneous non-abelian plasmas which possess an anisotropy in momentum space. In the linear limit it is similar to the Weibel instability in electromagnetic plasmas but due to non-linear interactions present in non-abelian plasmas the late development of this instability is characterized by a turbulent cascade of modes. This instability is relevant in the understanding of the early-time dynamics of the quark-gluon plasma as produced in heavy-ion collisions. See also Weibel instability References Quantum chromodynamics Plasma instabilities
Chromo–Weibel instability
[ "Physics" ]
129
[ "Physical phenomena", "Plasma physics", "Plasma phenomena", "Plasma instabilities", "Plasma physics stubs" ]
14,499,186
https://en.wikipedia.org/wiki/Eradication%20of%20infectious%20diseases
The eradication of infectious diseases is the reduction of the prevalence of an infectious disease in the global host population to zero. Two infectious diseases have successfully been eradicated: smallpox in humans, and rinderpest in ruminants. There are four ongoing programs, targeting the human diseases poliomyelitis (polio), yaws, dracunculiasis (Guinea worm), and malaria. Five more infectious diseases have been identified as potentially eradicable with current technology by the Carter Center International Task Force for Disease Eradication — measles, mumps, rubella, lymphatic filariasis (elephantiasis) and cysticercosis (pork tapeworm). The concept of disease eradication is sometimes confused with disease elimination, which is the reduction of an infectious disease's prevalence in a regional population to zero, or the reduction of the global prevalence to a negligible amount. Further confusion arises from the use of the term 'eradication' to refer to the total removal of a given pathogen from an individual (also known as clearance of an infection), particularly in the context of HIV and certain other viruses where such cures are sought. The targeting of infectious diseases for eradication is based on narrow criteria, as both biological and technical features determine whether a pathogenic organism is (at least potentially) eradicable. The targeted pathogen must not have a significant non-human (or non-human-dependent) reservoir (or, in the case of animal diseases, the infection reservoir must be an easily identifiable species, as in the case of rinderpest). This requires sufficient understanding of the life cycle and transmission of the pathogen. An efficient and practical intervention (such as a vaccine or antibiotic) must be available to interrupt transmission. Studies of measles in the pre-vaccination era led to the concept of the critical community size, the minimal size of the population below which a pathogen ceases to circulate. The use of vaccination programs before the introduction of an eradication campaign can reduce the susceptible population. The disease to be eradicated should be clearly identifiable, and an accurate diagnostic tool should exist. Economic considerations, as well as societal and political support and commitment, are other crucial factors that determine eradication feasibility. Eradicated diseases So far, only two diseases have been successfully eradicated—one specifically affecting humans (smallpox) and one affecting cattle (rinderpest). Smallpox Smallpox is the first disease, and so far the only infectious disease of humans, to be eradicated by deliberate intervention. It became the first disease for which there was an effective vaccine in 1798 when Edward Jenner showed the protective effect of inoculation (vaccination) of humans with material from cowpox lesions. Smallpox (variola) occurred in two clinical varieties: variola major, with a mortality rate of up to 40 percent, and variola minor, also known as alastrim, with a mortality rate of less than one percent. The last naturally occurring case of variola major was diagnosed in October 1975 in Bangladesh. The last naturally occurring case of smallpox (variola minor) was diagnosed on 26 October 1977, in Ali Maow Maalin, in the Merca District, of Somalia. The source of this case was an outbreak in the nearby district of Kurtunwarey. All 211 contacts were traced, revaccinated, and kept under surveillance. 
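One way to see why the feasibility criteria above translate into concrete vaccination targets is the classical herd-immunity threshold 1 − 1/R0, where R0 is the basic reproduction number of the pathogen; the Python sketch below evaluates it for a few illustrative R0 values (the R0 figures used are commonly quoted rough ranges, not precise measurements).

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune for each case to infect, on average, fewer than one other person."""
    return 1.0 - 1.0 / r0

# Illustrative basic reproduction numbers (commonly quoted rough values, not precise measurements).
illustrative_r0 = {"smallpox": 5.0, "polio": 6.0, "measles": 15.0}

for disease, r0 in illustrative_r0.items():
    print(f"{disease:>8}: R0 ~ {r0:>4}  ->  immunity needed ~ {herd_immunity_threshold(r0):.0%}")
# A higher R0 implies a higher coverage target, one reason measles elimination demands
# very high vaccination coverage while smallpox eradication proved achievable.
```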
After two years' detailed analysis of national records, the global eradication of smallpox was certified by an international commission of smallpox clinicians and medical scientists on 9 December 1979, and endorsed by the General Assembly of the World Health Organization on 8 May 1980. However, there is an ongoing debate regarding the continued storage of the smallpox virus by labs in the US and Russia, as any accidental or deliberate release could create a new epidemic in people born since the late 1980s due to the cessation of vaccinations against the smallpox virus. Rinderpest During the twentieth century, there were a series of campaigns to eradicate rinderpest, a viral disease that infected cattle and other ruminants and belonged to the same family as measles, primarily through the use of a live attenuated vaccine. The final, successful campaign was led by the Food and Agriculture Organization of the United Nations. On 14 October 2010, with no diagnoses for nine years, the FAO announced that the disease had been completely eradicated, making this the first (and so far the only) disease of livestock to have been eradicated by human undertakings. Global eradication underway Moribund diseases A few diseases are commonly-regarded as moribund, in the sense that they are on the path to eradication. Poliomyelitis (polio) A dramatic reduction of the incidence of poliomyelitis in industrialized countries followed the development of a vaccine in the 1950s. In 1960, Czechoslovakia became the first country certified to have eliminated polio. In 1988, the World Health Organization (WHO), Rotary International, the United Nations Children's Fund (UNICEF), and the United States Centers for Disease Control and Prevention (CDC) passed the Global Polio Eradication Initiative. Its goal was to eradicate polio by the year 2000. The updated strategic plan for 2004–2008 expects to achieve global eradication by interrupting poliovirus transmission, using the strategies of routine immunization, supplementary immunization campaigns, and surveillance of possible outbreaks. The WHO estimates that global savings from eradication, due to forgone treatment and disability costs, could exceed one billion U.S. dollars per year. The following world regions have been declared polio-free: The Americas (1994) Western Pacific region, including China (2000) Europe (2002) Southeast Asia region (2014), including India Africa (2020) The lowest annual wild polio prevalence seen so far was in 2021, with only 6 reported cases. Only two countries remain in which poliovirus transmission may never have been interrupted: Pakistan and Afghanistan. (There have been no cases caused by wild strains of poliovirus in Nigeria since August 2016, though cVDPV2 was detected in environmental samples in 2017.) Nigeria was removed from the WHO list of polio-endemic countries in September 2015 but added back in 2016, and India was removed in 2014 after no new cases were reported for one year. On 20 September 2015, the World Health Organization announced that wild poliovirus type 2 had been eradicated worldwide, as it has not been seen since 1999. On 24 October 2019, the World Health Organization announced that wild poliovirus type 3 had also been eradicated worldwide. This leaves only wild poliovirus type 1 and vaccine-derived polio circulating in a few isolated pockets, with all wild polio cases after August 2016 in Afghanistan and Pakistan. 
Dracunculiasis Dracunculiasis, also called Guinea worm disease, is a painful and disabling parasitic disease caused by the nematode Dracunculus medinensis. It is spread through consumption of drinking water infested with copepods hosting Dracunculus larvae. The Carter Center has led the effort to eradicate the disease, along with the CDC, the WHO, UNICEF, and the Bill and Melinda Gates Foundation. Unlike for diseases such as smallpox and polio, there is no vaccine or drug therapy for guinea worm. Eradication efforts have been based on making drinking water supplies safer (e.g. by provision of borehole wells, or through treating the water with larvicide), on containment of infection and on education for safe drinking water practices. These strategies have produced many successes: two decades of eradication efforts have reduced Guinea worm's global incidence dramatically from over 100,000 in 1995 to less than 100 cases since 2015. While success has been slower than was hoped (the original goal for eradication was 1995), the WHO has certified 180 countries free of the disease, and in 2020 six countries—South Sudan, Ethiopia, Mali, Angola, Cameroon and Chad—reported cases of guinea worm. The WHO predicted it would be "a few years yet" before eradication is achieved, on the basis that it took 6–12 years for the countries that have so far eliminated guinea worm transmission to do so after reporting a similar number of cases to that reported by Sudan in 2009. Nonetheless, the last 1% of the effort may be the hardest, with cases not substantially decreasing from 2015 (22) to 2020 (24). As a result of missing the 2020 target, the WHO has revised its target for eradication to 2030. The worm is now understood to be able to infect dogs, domestic cats and baboons as well as humans, providing a natural reservoir for the pathogen and thus complicating eradication efforts. In response, the eradication effort is now also targeting animals (especially wild dogs) for treatment and isolation, since animal infections now far outnumber human infections (in 2020 Chad reported 1570 animal infections and 12 human infections). Yaws Yaws is a rarely fatal but highly disfiguring disease caused by the spiral-shaped bacterium (spirochete) Treponema pallidum pertenue, a close relative of the syphilis bacterium Treponema pallidum pallidum, spread through skin-to-skin contact with infectious lesions. The global prevalence of this disease and the other endemic treponematoses, bejel and pinta, was reduced by the Global Control of Treponematoses programme between 1952 and 1964 from about 50 million cases to about 2.5 million (a 95% reduction). However, following the cessation of this program these diseases remained at a low prevalence in parts of Asia, Africa and the Americas with sporadic outbreaks. In 2012, the WHO targeted the disease for eradication by 2020, a goal that was missed. There were 15 countries known to be endemic for yaws, with the recent discovery of endemic transmission in Liberia and the Philippines. In 2020, 82,564 cases of yaws were reported to the WHO and 153 cases were confirmed. The majority of cases are reported from Papua New Guinea, with over 80% of all cases coming from one of three countries in the 2010–2013 period: Papua New Guinea, Solomon Islands, and Ghana. A WHO meeting report in 2018 estimated the total cost of elimination to be US$175 million (excluding Indonesia).
In the South-East Asian Regional Office of the WHO, the eradication efforts are focused on the remaining endemic countries in this region (Indonesia and East Timor) after India was declared free of yaws in 2016. The discovery that oral antibiotic azithromycin can be used instead of the previous standard, injected penicillin, was tested on Lihir Island from 2013 to 2014; a single oral dose of the macrolide antibiotic reduced disease prevalence from 2.4% to 0.3% at 12 months. The WHO now recommends both treatment courses (oral azithromycin and injected penicillin), with oral azithromycin being the preferred treatment. Others Malaria Malaria has been eliminated from most of Europe, North America, Australia, North Africa and the Caribbean, and parts of South America, Asia and Southern Africa. The WHO defines "elimination" (or "malaria free") as having no domestic transmission (indigenous cases) for the past three years. They also define "pre-elimination" and "elimination" stages when a country has fewer than 5 or 1, respectively, cases per 1000 people at risk per year. In 1955, WHO launched the Global Malaria Eradication Program. Support waned, and the program was suspended in 1969. Since 2000, support for eradication has increased, although some actors in the global health community (including voices within the WHO) thought that eradication as goal was premature and that setting strict deadlines for eradication may be counterproductive as they are likely to be missed. According to the WHO's World Malaria Report 2015, the global mortality rate for malaria fell by 60% between 2000 and 2015. The WHO targeted a further 90% reduction between 2015 and 2030, with a 40% reduction and eradication in 10 countries by 2020. However, the 2020 goal was missed with a slight increase in cases compared to 2015. While 31 out of 92 endemic countries were estimated to be on track with the WHO goals for 2020, 15 countries reported an increase of 40% or more between 2015 and 2020. Between 2000 and 30 June 2021, twelve countries were certified by the WHO as being malaria-free. Argentina and Algeria were declared free of malaria in 2019. El Salvador and China were declared malaria free in the first half of 2021. Regional disparities were evident: Southeast Asia was on track to meet WHO's 2020 goals, while Africa, Americas, Eastern Mediterranean and West Pacific regions were off-track. The six Greater Mekong Subregion countries aim for elimination of P. falciparum transmitted malaria by 2025 and elimination of all malaria by 2030, having achieved a 97% and 90% reduction of cases respectively since 2000. Ahead of World Malaria Day, 25 April 2021, WHO named 25 countries in which it is working to eliminate malaria by 2025 as part of its E-2025 initiative. A major challenge to malaria elimination is the persistence of malaria in border regions, making international cooperation crucial. Lymphatic filariasis Lymphatic filariasis is an infection of the lymph system by mosquito-borne microfilarial worms which can cause elephantiasis. Studies have demonstrated that transmission of the infection can be broken when a single dose of combined oral medicines is consistently maintained annually for approximately seven years. The strategy for eliminating transmission of lymphatic filariasis is mass distribution of medicines that kill the microfilariae and stop transmission of the parasite by mosquitoes in endemic communities. 
In sub-Saharan Africa, albendazole is being used with ivermectin to treat the disease, whereas elsewhere in the world albendazole is used with diethylcarbamazine. Using a combination of treatments better reduces the number of microfilariae in blood. Avoiding mosquito bites, such as by using insecticide-treated mosquito bed nets, also reduces the transmission of lymphatic filariasis. In the Americas, 95% of the burden of lymphatic filariasis is on the island of Hispaniola (comprising Haiti and the Dominican Republic). An elimination effort to address this is currently under way alongside the malaria effort described above; both countries intend to eliminate the disease by 2020. , the efforts of the Global Programme to Eliminate LF are estimated to have already prevented 6.6 million new filariasis cases from developing in children, and to have stopped the progression of the disease in another 9.5 million people who have already contracted it. Overall, of 83 endemic countries, mass treatment has been rolled out in 48, and elimination of transmission reportedly achieved in 21. Regional elimination established or underway Some diseases have already been eliminated from large regions of the world, and/or are currently being targeted for regional elimination. This is sometimes described as "eradication", although technically the term only applies when this is achieved on a global scale. Even after regional elimination is successful, interventions often need to continue to prevent a disease becoming re-established. Three of the diseases here listed (lymphatic filariasis, measles, and rubella) are among the diseases believed to be potentially eradicable by the International Task Force for Disease Eradication, and if successful, regional elimination programs may yet prove a stepping stone to later global eradication programs. This section does not cover elimination where it is used to mean control programs sufficiently tight to reduce the burden of an infectious disease or other health problem to a level where they may be deemed to have little impact on public health, such as the leprosy, neonatal tetanus, or obstetric fistula campaigns. Other worm infections Other than Dracunculiasis and lymphatic filariasis, there is no global commitment to eliminate helminthiasis (worm infections); however, the London Declaration on Neglected Tropical Diseases and the WHO aim to control worm infections, including schistosomiasis and soil-transmitted helminthiasis (which are caused by roundworms, whipworms and hookworms). It is estimated that between 576 and 740 million individuals are infected with hookworm. Of these infected individuals, about 80 million are severely affected. Soil-transmitted helminthiasis The current WHO goals are to control soil-transmitted helminthiasis (STH) by 2020 to a point where it does not pose a serious public health problem any more in children and 75% of children have received deworming interventions. By 2018, an average of 60% of school children were reached, however only 16 countries reached more than 75% coverage of pre-school children and 28 countries reached over 75% coverage of school-age children. In 2018, the number of countries with endemic STH was estimated to be 96 (down from 112 in 2010). Sizeable donations of a total of 3.3 billion deworming tablets by GlaxoSmithKline and Johnson & Johnson since 2010 to the WHO allowed progress on its goals. 
In 2019, the WHO targets were updated to eliminate STH morbidity by 2030, with less than 2% of all children infected by that date in all 98 currently endemic countries.

Schistosomiasis

The WHO set a goal to control the morbidity of schistosomiasis by 2020 and to eliminate the public health problems associated with it by 2025 (bringing infections down to less than 1% of the population). The effort is assisted by the Schistosomiasis Control Initiative. In 2018, a total of 63% of all school-age children were treated.

Hookworm

In North American countries, such as the United States, elimination of hookworm had been attained due to scientific advances. Despite the United States declaring that it had eliminated hookworm decades ago, a 2017 study showed it was present in Lowndes County, Alabama. The Rockefeller Foundation's hookworm campaign in the 1920s was supposed to focus on the eradication of hookworm infections for those living in Mexico and other rural areas. However, the campaign was politically influenced, causing it to be less successful, and regions such as Mexico still deal with these infections from parasitic worms. This use of health campaigns by political leaders for political and economic advantages has been termed the science-politics paradox.

Measles

As of 2018, all six WHO regions have goals to eliminate measles, and at the 63rd World Health Assembly in May 2010, delegates agreed to move towards eventual eradication, although no specific global target date has yet been agreed. The Americas set a goal in 1994 to eliminate measles and rubella transmission by 2000, and successfully reduced cases from over 250,000 in 1990 to only 105 cases in 2003. However, while eradication in the Americas was certified in 2015, the certification was lost in 2018 due to endemic measles transmission in Venezuela and its subsequent spread to Brazil and Colombia; additional limited outbreaks have occurred elsewhere as well. Europe had set a goal to eliminate measles transmission by 2010, which was missed due to the MMR vaccine controversy and low uptake in certain groups; despite achieving low levels by 2008, European countries have since experienced a small resurgence in cases. The Eastern Mediterranean also had goals to eliminate measles by 2010 (later revised to 2015), the Western Pacific aimed to eliminate the disease by 2012, and in 2009 the regional committee for Africa agreed a goal of measles elimination by 2020. In 2019, the WHO South-East Asian region set a target to eliminate measles by 2023. A total of 82 countries have been certified to have eliminated endemic measles transmission.

In 2005, a global target was agreed for a 90% reduction in measles deaths by 2010 from the 757,000 deaths in 2000 (later updated to 95% by 2015). Estimates in 2008 showed a 78% decline to 164,000 deaths, which further declined to 145,700 in 2013. However, progress has since stalled and both the 2010 and 2015 targets were missed: in 2018, over 140,000 deaths were still reported. As of 2018, global vaccination efforts have reached 86% coverage of the first dose of the measles vaccine and 68% coverage of the second dose.

The WHO region of the Americas declared on 27 September 2016 that it had eliminated measles. The last confirmed endemic case of measles in the Americas was in Brazil in July 2015. May 2017 saw a return of measles to the US after an outbreak in Minnesota among unvaccinated children.
Another outbreak occurred in the state of New York between 2018 and 2019, causing over 200 confirmed measles cases, mostly in ultra-Orthodox Jewish communities. Subsequent outbreaks occurred in New Jersey and Washington state, with over 30 cases reported in the Pacific Northwest. The WHO European region missed its elimination target of 2010 as well as the new target of 2015, despite overall coverage of 90% for the first dose of the measles vaccine. In 2018, 84,000 cases were reported in the European region (an increase from 25,000 in 2017), with the majority of cases originating in Ukraine. By the end of 2021, the WHO's European regional office considered endemic measles eliminated in 33 out of 53 member states, with transmission interrupted in one more and re-established in five others.

Rubella

Four out of six WHO regions have goals to eliminate rubella, with the WHO recommending the use of existing measles programmes for vaccination with combined vaccines such as the MMR vaccine. The number of reported cases dropped from 670,000 in the year 2000 to below 15,000 in 2018, and the global coverage of rubella vaccination was estimated at 69% in 2018 by the WHO. The WHO region of the Americas declared on 29 April 2015 that it had eliminated rubella and congenital rubella syndrome. The last confirmed endemic case of rubella in the Americas was in Argentina in February 2009. Australia achieved eradication in 2018. A total of 82 countries have been certified to have eliminated rubella. The WHO European region missed its elimination target of 2010 as well as the new target of 2015 due to undervaccination in Central and Western Europe. As of 2018, 39 of the 53 European countries had eliminated endemic rubella and three more had interrupted transmission; a total of 850 confirmed rubella cases were reported in the European region in 2018, with 438 of these in Poland. European countries with endemic rubella in 2018 were Belgium, Bosnia and Herzegovina, Denmark, France, Germany, Italy, Poland, Romania, Serbia, Turkey and Ukraine. The disease remains problematic in other regions as well; the WHO regions of Africa and South-East Asia have the highest rates of congenital rubella syndrome, and a 2013 outbreak of rubella in Japan resulted in 15,000 cases.

Onchocerciasis

Onchocerciasis (river blindness) is the world's second leading cause of infectious blindness. It is caused by the nematode Onchocerca volvulus, which is transmitted to people via the bite of a black fly. The current WHO goal is to increase the number of countries free of transmission from 4 (in 2020) to 12 in 2030. Elimination of this disease is under way in the region of the Americas, where it was endemic to Brazil, Colombia, Ecuador, Guatemala, Mexico and Venezuela. The principal tool being used is mass ivermectin treatment. If successful, the only remaining endemic locations would be in Africa and Yemen. In Africa, it is estimated that more than 102 million people in 19 countries are at high risk of onchocerciasis infection, and in 2008, 56.7 million people in 15 of these countries received community-directed treatment with ivermectin. Since adopting such treatment measures in 1997, the African Programme for Onchocerciasis Control has reported a reduction in the prevalence of onchocerciasis in the countries under its mandate from a pre-intervention level of 46.5% in 1995 to 28.5% in 2008.
Some African countries, such as Uganda, are also attempting elimination, and successful elimination was reported in 2009 from two endemic foci in Mali and Senegal. On 29 July 2013, the Pan American Health Organization (PAHO) announced that after 16 years of effort, Colombia had become the first country in the world to eliminate the parasitic disease onchocerciasis. It has also been eliminated in Ecuador (2014), Mexico (2015), and Guatemala (2016). As of 2021, the only remaining countries in the Americas in which the disease is endemic are Brazil and Venezuela.

Prion diseases

Following an epidemic of variant Creutzfeldt–Jakob disease in the UK in the 1990s, there have been campaigns to eliminate bovine spongiform encephalopathy in cattle across the European Union and beyond, which have achieved large reductions in the number of cattle with this disease. Cases of variant Creutzfeldt–Jakob disease have also fallen since then, from an annual peak of 29 cases in 2000 to five in 2008 and none in 2012. Two cases each were reported in 2013 and 2014: two in France, one in the United Kingdom and one in the United States. As a result of the ongoing eradication effort, only seven cases of bovine spongiform encephalopathy were reported worldwide in 2013: three in the United Kingdom, two in France, one in Ireland and one in Poland. This was the lowest number of cases since at least 1988. In 2015, there were at least six reported cases (three of the atypical H-type). Four cases were reported globally in 2017, and the condition is considered to be nearly eradicated. With the cessation of cannibalism among the Fore people, the last known victims of kuru died in 2005 or 2009, but the disease has a very long incubation period.

Syphilis

In 2007, the WHO launched a roadmap for the elimination of congenital syphilis (mother-to-child transmission). In 2015, Cuba became the first country in the world to eliminate mother-to-child transmission of syphilis. In 2017, the WHO declared that Antigua and Barbuda, Saint Kitts and Nevis and four British Overseas Territories (Anguilla, Bermuda, the Cayman Islands, and Montserrat) had been certified as having ended mother-to-child transmission of syphilis and HIV. In 2018, Malaysia also achieved certification. Nevertheless, eradication of syphilis by all transmission methods remains unresolved, and many questions about the eradication effort remain to be answered.

African trypanosomiasis

Early planning by the WHO for the eradication of African trypanosomiasis, also known as sleeping sickness, is underway as the rate of reported cases continues to decline and passive treatment is continued. The WHO aims to eliminate transmission of the Trypanosoma brucei gambiense parasite by 2030, though it acknowledges that this goal "leaves no room for complacency." The eradication and control efforts have been progressing well, with the number of reported cases dropping below 10,000 in 2009 for the first time, and only 992 cases reported in 2019 and 565 in 2020. The majority of the 565 cases in 2020 (over 60%) were recorded in the Democratic Republic of the Congo. However, some researchers have argued that total elimination may not be achievable due to human asymptomatic carriers of T. b. gambiense and non-tsetse modes of transmission.
The Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC) works to eradicate tsetse fly (vector) populations and, subsequently, the protozoan disease, by use of insecticide-impregnated targets, fly traps, insecticide-treated cattle, ultra-low-dose aerial/ground spraying (SAT) of tsetse resting sites, and the sterile insect technique (SIT). The use of SIT in Zanzibar proved effective in eliminating the entire population of tsetse flies but was expensive and is relatively impractical to use in many of the endemic countries afflicted with African trypanosomiasis.

Rabies

Because the rabies virus is almost always caught from animals, rabies eradication has focused on reducing the populations of wild and stray animals, on controls and compulsory quarantine for animals entering the country, and on vaccination of pets and wild animals. Many island nations, including Iceland, Ireland, Japan, Malta, and the United Kingdom, managed to eliminate rabies during the twentieth century, and more recently much of continental Europe has been declared rabies-free.

Chagas disease

Chagas disease is caused by Trypanosoma cruzi and is mostly spread by Triatominae. It is endemic to 21 countries in Latin America. There are over 30,000 new cases per year and 12,000 deaths due to the disease. Eradication efforts focus on the elimination of vector-borne transmission and the elimination of the vectors themselves.

Leprosy

Since the introduction of multi-drug therapy in 1981, the prevalence of leprosy has been reduced by over 95%. The success of the treatment prompted the WHO in 1991 to set a target of less than one case per 10,000 people (elimination of the disease as a public health risk), which was achieved in 2000. The elimination of leprosy transmission is part of the WHO's "Towards zero leprosy" strategy, to be implemented until 2030. It aims to reduce transmission to zero in 120 countries and to reduce the number of new cases to about 60,000 per year (from ca. 200,000 cases in 2019). These goals are supported by the Global Partnership for Zero Leprosy (GPZL) and the London Declaration on Neglected Tropical Diseases. However, a lack of understanding of the disease and its transmission, and the long incubation period of the M. leprae pathogen, have so far prevented the formulation of a full-scale eradication strategy.

Eradicable diseases in animals

Following the eradication of rinderpest, many experts believe that ovine rinderpest, or peste des petits ruminants (PPR), is the next disease amenable to global eradication. PPR is a highly contagious viral disease of goats and sheep characterized by fever, painful sores in the mouth, tongue and feet, diarrhea, pneumonia and death, especially in young animals. It is caused by a virus of the genus Morbillivirus that is related to rinderpest, measles and canine distemper. The World Organisation for Animal Health (WOAH) prioritises African swine fever, bovine tuberculosis, foot and mouth disease, and PPR.

Eradication difficulties

Public upheaval caused by war, famine, political action, and infrastructure destruction can disrupt or eliminate eradication efforts altogether.
See also
Drugs for Neglected Diseases Initiative
Globalization and disease
Kigali Declaration on Neglected Tropical Diseases
List of diseases eliminated from the United States
Neglected tropical diseases
Planned extinction
Sanitation
Tuberculosis elimination

Explanatory notes

References

Further reading

External links
Carter Center International Task Force for Disease Eradication
Website of the Global Polio Eradication Initiative

Articles containing video clips
Epidemiology
Health campaigns
Infection-control measures
Eradication of infectious diseases
[ "Biology", "Environmental_science" ]
6,479
[ "Epidemiology", "Vaccination", "Environmental social science", "Eradicated diseases" ]
14,499,741
https://en.wikipedia.org/wiki/List%20of%20Allis-Chalmers%20tractors
This is a list of farm and industrial tractors produced by Allis-Chalmers Corporation, as well as tractors that were produced by other manufacturers and then sold under the Allis-Chalmers brand name. For clarity, tractors are listed by series and separated by major models as needed.

Tractors (wheeled)

Lawn/garden tractor series

B-Series
B-1
B-7 (prototype, only 3 built)
B-10 (early, 9 hp)
Big Ten
B-10 (late, 10 hp)
B-12
B-110
B-112
B-206
B-207
B-208
B-208S
B-210
B-212
HB-112
HB-212

Numbered series
Homesteader
310
310D
312
312D
312H
314H
410S
414S
416S
416H
608 LTD
610
611 LT
614
616
620
710
712
712S
714
716
718H
720
T-811
808 GT
810 GT
816 GT
912
914
916
917
919
920

Allis-Chalmers 4W series
4W-220 (1981-1984) (articulated)
4W-305 (1981-1985) (articulated)

Allis-Chalmers 100 series
160 (1969-1973): Also known as One-Sixty; imported from Renault (France)
170 (1967-1973): Also known as One-Seventy
175 (1970-1980)
180 (1967-1973): Also known as One-Eighty
185 (1970-1981)
190 (1964-1973): Also known as One-Ninety
190XT
200 (1972-1975)
210
220 (1969-1973): Also known as Two-Twenty
440 (1972-1976): Built by Steiger

Allis-Chalmers 5000 series
5015 (1982-1985): Imported from Toyosha (Japan)
5020 (1977-1985): Imported from Toyosha (Japan)
5030 (1978-1985): Imported from Toyosha (Japan)
5040 (1975-1980): Imported from UTB (Romania)
5045 (1981): Imported from Fiat (Italy)
5050 (1976-1983): Imported from Fiat (Italy)

Allis-Chalmers 6000 series
6040 (1974): Imported from Renault (France)
6060 (1980-1984)
6070 (1984-1985)
6080 (1980-1985)
6140 (1982-1985): Imported from Toyosha (Japan)

Allis-Chalmers 7000 series
7000 (1975-1979)
7010 (1979-1981)
7020 (1977-1981)
7030 (1973-1974)
7040 (1974-1977)
7045 (1977-1981)
7050 (1973-1974)
7060 (1974-1981)
7080 (1974-1981)
7580 4WD (1976-1981) (articulated)

Allis-Chalmers 8000 series
8010 (1981-1985)
8030 (1981-1985)
8050 (1981-1985)
8070 (1981-1985)
8550 (1977-1981)

Allis-Chalmers B series
Model B (1937-1957)
Model IB (1945-1958)

Allis-Chalmers C series
Model C (1940-1950)
Model CA (1950-1958)

Allis-Chalmers D series
Model D10 (1959-1968; Series I, II and III)
Model D12 (1959-1968; Series I, II and III)
Model D14 (1957-1960)
Model D15 (1960-1968; Series I and II)
Model D17 (1957-1967; Series I, II, III and IV)
Model D19 (1961-1964)
Model D21 (1963-1969; Series I and II)

Allis-Chalmers I series: Industrial tractors
Model I40 (1964-1966)
Model I60 (1965-1966)
Model I400 (1966-1968)
Model I600 (1966-1968)

Allis-Chalmers Model 6-12 (1918–1923)
Allis-Chalmers Model 10-10 (1914–1923)
Allis-Chalmers Model A (1936–1942)
Allis-Chalmers Model E (1918–1936): Also known as Model 15-30, 18-30, 20-35, 25-40, 30-60 (the 30-60 is a rare variation of the 25-40, also known as the "Thresherman's Special")
Allis-Chalmers Model ED40 (1964): 200 imported from Allis-Chalmers International (United Kingdom, Essendine factory) through Canadian dealerships.
Allis-Chalmers Model G (1948–1955)
Allis-Chalmers Model L (1920–1927): Also known as Model 12-20, 15-25
Allis-Chalmers Model T16 "Sugar Babe"

Allis-Chalmers U Series
Model U (1929-1952): Also known as United
Model UC (1930-1953): Also known as All-Crop or Cultivator
Model UI (1937-1947)

Allis-Chalmers W series
Model WC (1933-1948)
Model WD (1948-1953)
Model WD45 (1953-1957)
Model WF (1937-1951)
Model RC (1938-1941)

After the Second World War, Allis-Chalmers operated factories in the United Kingdom at Totton in Hampshire (until 1949) and at Essendine in Rutland, formerly the Minneapolis-Moline factory.
Model EB (1950-1955): British-built Model B with a straight front axle. EB serial numbers from the Essendine works began at EB-4001. Some 2,000 were assembled at the Totton, Southampton facility between 1947 and 1949 from imported CKD kits, but using US serial numbers locally stamped with an additional E prefix. Theoretically there may be duplication of serial numbers with later English production tractors.
Model D270 (1954-1957)
Model D272 (1957-1960)
Model ED40 (1960-1968)

Tractors (tracked)

Allis-Chalmers HD series
Model H3 (1960-1968)
Model HD3 (1942)
Model HD3 (1960-1968)
Model HD4 (1965-1969?)
Model HD5 (1946-1955)
Model HD6 (1955-1974)
Model HD7 (1940-1950)
Model HD9 (1950-1955)
Model HD10 (1940-1950)
Model HD11 (1955-1975?)
Model HD14 (1939-1947)
Model HD15 (1950-1955)
Model HD16 (1955-1975?)
Model HD19 (1947-1950)
Model HD20 (1951-1954)
Model HD21 (1954-1975)
Model HD31
Model HD41 (1969-1974)

Allis-Chalmers K series
Model K (1929-1941): formerly Monarch 35
Model KO (1934-1943)

Allis-Chalmers L series
Model L (1931-1942)
Model LD (1939)
Model LO (1934)

Allis-Chalmers Model M (1932–1942)

Allis-Chalmers Monarch Series: formerly built by Monarch Tractor Corporation
Monarch Model F (1926-1931)
Monarch Model G (1926-1927)
Monarch Model H (1927-1931)

Allis-Chalmers S series
Model S (1937-1942)
Model SO (1937-1942)

Harvesters
Gleaner C
Gleaner E
Gleaner E-III
Gleaner K
Gleaner L2
Gleaner A85

All-Crop harvesters
All-Crop 40
All-Crop 60
All-Crop 66
All-Crop 72
All-Crop 90
All-Crop 100

Military production
M1 tractor medium model HD7W
M1 tractor heavy model HD10W
M4 tractor high-speed 18-ton artillery tractor, manufactured from 1943
M6 tractor high-speed 38-ton (artillery tractor)
M7 snow tractor
M19 snow trailer, 1-ton
M50 Ontos – a light anti-tank vehicle, 297 units produced from 1955 to 1957

Other related equipment
Allis-Chalmers Speed Patrol: road grading and maintenance tractors
Speed Patrol model H (1932–1933)
Speed Patrol model 42 (1933–1940)
Speed Patrol model 54 (1934–1940)

Prototype models
Allis-Chalmers fuel-cell tractor (1959)
Allis-Chalmers model D (ca. 1944–1945): not related to the later production D series
Allis-Chalmers model F (ca. 1947)
Allis-Chalmers model H (ca. 1942–1945): four-wheel-drive tractor, based on the Bonham Power Horse design

See also
List of Allis-Chalmers engines

References
Swinford, Norm (1994). Allis-Chalmers Farm Equipment 1914-1985.
Dean, Terry (2000). Allis-Chalmers Farm Tractors and Crawlers Data Book.

Allis-Chalmers
Allis-Chalmers Manufacturing Company
Agriculture-related lists
List of Allis-Chalmers tractors
[ "Engineering" ]
1,791
[ "Engineering vehicles", "Tractors" ]