Columns: id (int64) | url (string) | text (string) | source (string) | categories (string, 160 classes) | token_count (int64)
38,269,892
https://en.wikipedia.org/wiki/Crushing%20plant
A crushing plant is a one-stop crushing installation that can be used for rock crushing, garbage crushing, building-materials crushing and other similar operations. Crushing plants may be either fixed or mobile. A crushing plant has different stations (primary, secondary, tertiary, ...) where different crushing, selection and transport cycles are performed in order to obtain the different stone sizes or the required granulometry. Components Crushing plants make use of a large range of equipment, such as a pre-screener, loading conveyor, intake hopper, magnetic separator, and crushing units such as jaw crushers and cone crushers. Vibration feeder: These machines feed the jaw and impact crushers with the rocks and stones to be crushed. Crushers: These are the machines in which the rocks and stones are crushed. There are different types of crushers for different types of rocks and stones and for different sizes of input and output material. Each plant incorporates one or several crushing machines, depending on the required final material (small stones or sand). Vibrating screen: These machines separate the different sizes of material produced by the crushers. Belt conveyor: These belts transport the material from one machine to another during the different phases of the process. Central electric control system: This controls and monitors the operation of the entire plant. Process of crushing plant Raw materials are evenly and gradually conveyed into the jaw crusher for primary crushing via the hopper of the vibrating feeder. The crushed stone is conveyed by belt conveyor for secondary crushing before being sent to the vibrating screen to be separated. After separation, material of acceptable size is taken away as final product, while oversize material is carried back to the crusher for recrushing. The final products can be classified according to different size ranges, and, depending on requirements, the size of the final products from the crushing plant can be adjusted. Dust is generated during the working process, so dust-control units are needed. See also Crusher References Mining equipment Manufacturing
Crushing plant
Engineering
448
11,422,045
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD60
In molecular biology, Small nucleolar RNA SNORD60 (also known as U60) is a non-coding RNA that belongs to the C/D class of small nucleolar RNA (snoRNA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. References External links Small nuclear RNA
Small nucleolar RNA SNORD60
Chemistry
83
69,043,471
https://en.wikipedia.org/wiki/Ilmor-Chevrolet%20265-A%20engine
The Ilmor 265-A is a turbocharged, 2.65-liter, V-8 Indy car racing engine, designed and developed by Ilmor for use in the CART PPG Indy Car World Series between 1986 and 1993. Mario Illien and Paul Morgan were working at Cosworth on the Cosworth DFX turbocharged methanol engine for the CART Indy Car World Series; differences of opinion over the direction in which DFX development should go (Cosworth was inherently conservative, as it held a near monopoly) led them to break away from their parent company to pursue their own ideas. There was some acrimony in their split from Cosworth, their former employer claiming that the Ilmor engine was little different from their planned modifications to the DFX. Founded as an independent British engine manufacturer in 1983, Ilmor started building engines for Indy cars with the backing of team owner and chassis manufacturer Roger Penske. The Ilmor 265-A, badged initially as the Ilmor-Chevrolet Indy V-8, debuted at the 1986 Indianapolis 500 with Team Penske driver Al Unser. In 1987, the engine program expanded to all three Penske team drivers (Rick Mears, Danny Sullivan, and Al Unser), Patrick Racing, and Newman/Haas Racing. Mario Andretti, driving for Newman/Haas, won at Long Beach, the engine's first Indy car victory. He also won the pole position for the 1987 Indianapolis 500. A year later, Rick Mears won the 1988 Indianapolis 500, the engine's first win at Indianapolis. The engine went on to have a stellar record in CART. From 1987 to 1991, the "Chevy-A" engine won 64 of 78 races. In 1992, the 265-A engine was followed up by the 265-B engine. The "Chevy-B" was fielded exclusively by Penske Racing (Rick Mears and Emerson Fittipaldi) in 1992 and won four CART series races. All other Ilmor teams remained with the venerable "Chevy-A" for 1992. Bobby Rahal, driving a "Chevy-A", won the 1992 CART championship, the fifth consecutive (and final) for the 265-A. Al Unser Jr. won the 1992 Indianapolis 500 driving a "Chevy-A", also the fifth consecutive (and final) Indy 500 win for the 265-A. Emerson Fittipaldi drove a "Chevy-B" to 4th place in points, but both he and Mears dropped out of that year's Indy 500 due to crashes. It was at this time that Ilmor was facing new competition from Cosworth, which had just introduced its new powerplant, the Ford-Cosworth XB. For the 1993 season, the 265-C engine was introduced, intended to replace both the 265-A and the 265-B. The "Chevy-C" was used widely and produced continued success for Ilmor. Some backmarker teams continued to use the "A" and "B" engines during the 1993 season, but neither the "A" nor the "B" would win another Indy car race. Chevrolet dropped its badging support after the 1993 season. Applications Truesports 91C Truesports 92C Rahal-Hogan R/H-001 Lola T87/00 Lola T88/00 Lola T89/00 Lola T90/00 Lola T91/00 Galmer G92 Lola T92/00 Lola T93/00 March 86C March 87C March 88C March 89C Penske PC-12 Penske PC-15 Penske PC-16 Penske PC-17 Penske PC-18 Penske PC-19 Penske PC-20 Penske PC-21 Penske PC-22 References External links Chevrolet Motorsport's Official Website Chevrolet IndyCar official website on chevrolet.com Engines by model Chevrolet engines IndyCar Series Champ Car V8 engines
Ilmor-Chevrolet 265-A engine
Technology
834
72,622,694
https://en.wikipedia.org/wiki/WR%20101-2
WR 101-2, also known as CXOGC J174516.1-284909, is a Wolf-Rayet star located in the Galactic Center, about 8,000 pc away from Earth. Its size has been estimated at . Properties WR 101-2's spectral type is Ofpe/WN9, identifying it as a slash star, a Wolf-Rayet star which in this case shows extra nitrogen and helium emission in its spectrum as well as a P Cygni profile. Assuming a distance of 8,000 pc (appropriate, as the massive star is apparently located in the Galactic Center, a structure known to be about 8,000 pc away), a K-band magnitude of 7.89, a K-band extinction of 1.7, and a K-band bolometric correction of -2.9, the luminosity turns out to be 2.4 million times that of the Sun (log(L) = 6.38), making it one of the most luminous stars known, and certainly one of the most luminous in the Galactic Center. WR 101-2's effective temperature has been estimated at about 20,000 K, one of the coolest for any Wolf-Rayet star. The resulting radius is . References Wolf–Rayet stars Sagittarius (constellation)
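To make the arithmetic above explicit, the following Python sketch reproduces the quoted luminosity estimate from the K-band photometry; the solar bolometric magnitude of 4.74 is a conventional value assumed here, not stated in the article.

```python
import math

# Values quoted in the article for WR 101-2
m_K, A_K, BC_K, d_pc = 7.89, 1.7, -2.9, 8000.0
M_BOL_SUN = 4.74          # conventional solar bolometric magnitude (assumed)

mu = 5 * math.log10(d_pc / 10.0)   # distance modulus, ~14.52 mag
M_K = m_K - A_K - mu               # absolute K magnitude, ~ -8.33
M_bol = M_K + BC_K                 # bolometric magnitude, ~ -11.23

log_L = (M_BOL_SUN - M_bol) / 2.5
print(f"log(L/Lsun) = {log_L:.2f}")   # ~6.39, i.e. roughly 2.4 million Lsun
```

This reproduces the article's log(L) of about 6.38 to within rounding.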
WR 101-2
Astronomy
273
17,231,324
https://en.wikipedia.org/wiki/Mismatch%20loss
Mismatch loss in transmission line theory is the amount of power, expressed in decibels, that will not be available at the output due to impedance mismatches and signal reflections. A transmission line that is properly terminated, that is, terminated with the same impedance as the characteristic impedance of the transmission line, will have no reflections and therefore no mismatch loss. Mismatch loss represents the amount of power wasted in the system. It can also be thought of as the amount of power that would be gained if the system were perfectly matched. Impedance matching is an important part of RF system design; however, in practice there will likely be some degree of mismatch loss. In real systems, relatively little loss is due to mismatch, often on the order of 1 dB. According to Walter Maxwell, mismatch does not result in any loss ("wasted" signal), except through the transmission line. This is because the signal reflected from the load is transmitted back to the source, where it is re-reflected, due to the reactive impedance presented by the source, back to the load, until all of the signal's power is emitted or absorbed by the load. Calculation Mismatch loss (ML) is the ratio of the difference between incident and reflected power to incident power, expressed in decibels: ML (dB) = -10 log10((Pi - Pr)/Pi), where Pi = incident power, Pr = reflected power, and Pd = Pi - Pr = delivered power (also called the accepted power). The fraction of incident power delivered to the load is Pd/Pi = 1 - |Γ|^2, where |Γ| is the magnitude of the reflection coefficient. Note that as the reflection coefficient approaches zero, power to the load is maximized. If the reflection coefficient is known, mismatch loss can be calculated by ML (dB) = -10 log10(1 - |Γ|^2). In terms of the voltage standing wave ratio (VSWR): ML (dB) = -10 log10(1 - ((VSWR - 1)/(VSWR + 1))^2). Sources of mismatch loss Any component of the transmission line that has an input and output will contribute to the overall mismatch loss of the system. For example, in mixers mismatch loss occurs when there is an impedance mismatch between the RF port and IF port of the mixer. This is one of the principal reasons for losses in mixers. Likewise, a large amount of the loss in amplifiers comes from the mismatch between the input and output. Consequently, not all of the available power generated by the amplifier gets transferred to the load. This is most important in antenna systems, where mismatch loss in the transmitting and receiving antennas directly contributes to the losses of the system, including the system noise figure. Other common RF system components such as filters, attenuators, splitters, and combiners will generate some amount of mismatch loss. While completely eliminating mismatch loss in these components is nearly impossible, mismatch loss contributions by each component can be minimized by selecting quality components for use in a well-designed system. Mismatch error If there are two or more components in cascade, as is often the case, the resultant mismatch loss is not only due to the mismatches from the individual components, but also to how the reflections from each component combine with each other. The overall mismatch loss cannot be calculated by just adding up the individual loss contributions from each component. The difference between the sum of the mismatch loss in each component and the total mismatch loss due to the interactions of the reflections is known as mismatch error. Depending on how the multiple reflections combine, the overall system loss may be lower or higher than the sum of the mismatch loss from each component. Mismatch error occurs in pairs as the signal reflects off of each mismatched component. 
So, for the example in Figure 3, there are mismatch errors generated by each pair of components. The mismatch uncertainty increases as the frequency increases, and in wide-band applications. The phasing of the reflections makes it particularly hard to model. The general case for calculating mismatch error (ME) between two reflections Γ1 and Γ2 is: ME (dB) = 20 log10|1 - Γ1 Γ2 e^(jθ)|, where θ is the complex phase change due to the second reflection. See also Insertion loss References Telecommunications engineering
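The formulas above translate directly into code. A minimal Python sketch, assuming only the relations stated in the article (the function names are illustrative, not from any standard library):

```python
import math

def mismatch_loss_db(gamma_mag: float) -> float:
    """Mismatch loss in dB from the reflection-coefficient magnitude."""
    return -10 * math.log10(1 - gamma_mag ** 2)

def mismatch_loss_from_vswr_db(vswr: float) -> float:
    """Mismatch loss in dB from the voltage standing wave ratio."""
    return mismatch_loss_db((vswr - 1) / (vswr + 1))

def mismatch_error_limits_db(g1: float, g2: float) -> tuple:
    """Worst-case mismatch-error bounds (dB) for one pair of reflections,
    i.e. the phase angle theta at 180 and 0 degrees respectively."""
    return (20 * math.log10(1 - g1 * g2), 20 * math.log10(1 + g1 * g2))

print(f"{mismatch_loss_from_vswr_db(2.0):.2f} dB")  # 2:1 VSWR -> about 0.51 dB
print(mismatch_error_limits_db(0.2, 0.2))           # about (-0.35, +0.34) dB
```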
Mismatch loss
Engineering
806
30,825,113
https://en.wikipedia.org/wiki/Netty%20%28software%29
Netty is a non-blocking I/O client-server framework for the development of Java network applications such as protocol servers and clients. The asynchronous event-driven network application framework and tools are used to simplify network programming such as TCP and UDP socket servers. Netty includes an implementation of the reactor pattern of programming. Originally developed by JBoss, Netty is now developed and maintained by the Netty Project Community. Besides being an asynchronous network application framework, Netty also includes built-in implementations of SSL/TLS, HTTP, HTTP/2, HTTP/3, WebSockets, DNS, Protocol Buffers, SPDY and other protocols. Netty is not a Java web container, but is able to run inside one, and supports message compression. Netty has been actively developed since 2004. Beginning with version 4.0.0, Netty also supports the usage of NIO.2 as a backend, along with NIO and blocking Java sockets. See also Application server Node.js Twisted (software) Apache MINA References External links Java platform Message-oriented middleware
Netty (software)
Technology
238
74,041,250
https://en.wikipedia.org/wiki/Bean%20prize
The Bean prize, also known as the William B. Bean Student Research Award and named for William Bennett Bean, is awarded annually to medical students by the American Osler Society (AOS) for research in the history of medicine and the humanities. Background The Bean prize is named for William Bennett Bean, who was a resident physician under Sir William Osler. Bean became the first president of the American Osler Society, which created the award for medical students. References External links History of medicine Medicine awards
Bean prize
Technology
99
41,831,706
https://en.wikipedia.org/wiki/List%20of%20edible%20salts
Edible salts, also known as table salts, are salts generally derived from mining (rock salt) or evaporation (including sea salt). Edible salts may be identified by such characteristics as their geographic origin, method of preparation, natural impurities, additives, flavourings, or intended purpose (such as pickling or curing). Common terms and mass-produced seasoned salts Artisanal or geographical-indication salts Artisanal salts are produced using specific, often traditional, methods, resulting in unique flavor profiles and textures. They may be sourced from specific geographical locations, such as coastal regions or salt flats. Geographical indication (GI) salts are salts that can only be produced in a specific geographical area. These regions often have unique environmental conditions, such as soil composition, climate, or mineral content, that contribute to the salt's distinct characteristics. To protect their authenticity and quality, many are legally protected, for example under the EU's Protected Designation of Origin scheme. References Salts
List of edible salts
Chemistry
201
23,291,018
https://en.wikipedia.org/wiki/Wireless%20Medical%20Telemetry%20Service
Wireless Medical Telemetry Service (WMTS) is a wireless service specifically defined in the United States by the Federal Communications Commission (FCC) for transmission of data related to a patient's health (biotelemetry). It was created in 2000 because of interference issues arising from the establishment of digital television. The bands defined are 608-614 MHz, 1395-1400 MHz and 1427-1432 MHz. Devices using these bands are typically proprietary. Further, the use of these bands has not been internationally agreed upon, so devices often cannot be marketed or used freely in countries other than the United States. Because of this, in addition to WMTS, many manufacturers have created devices that transmit data in the ISM bands such as 902-928 MHz and, more typically, 2.4-2.5 GHz, often using IEEE 802.11 or Bluetooth radios. FCC statements There is an FCC statement on coexistence of WMTS in various frequency bands. Prior to the establishment of the WMTS, medical telemetry devices generally could be operated on an unlicensed basis on vacant television channels 7-13 (174-216 MHz) and 14-46 (470-668 MHz), or on a licensed but secondary basis to private land mobile radio operations in the 450-470 MHz frequency band. This meant that wireless medical telemetry operations had to accept interference from the primary users of these frequency bands, i.e., the television broadcasters and private land mobile radio licensees. Further, if a wireless medical telemetry operation caused interference to television or private land mobile radio transmissions, the user of the wireless medical telemetry equipment would be responsible for rectifying the problem, even if that meant shutting down the medical telemetry operation. The FCC was concerned that certain regulatory developments, including the advent of digital television (DTV) service, would result in more intensive use of these frequencies by the primary services, subjecting wireless medical telemetry operations to greater interference than before and perhaps precluding such operations entirely in many instances. To ensure that wireless medical telemetry devices can operate free of harmful interference, the FCC decided to establish the WMTS. In a Report and Order released on June 12, 2000, the FCC allocated a total of 14 megahertz of spectrum to WMTS on a primary basis. At the same time, it adopted a number of regulations to ensure that the WMTS frequencies are used effectively and efficiently for their intended medical purpose. The WMTS rules took effect on October 16, 2000. WMTS rules by FCC Band Plan: The frequencies currently allocated for WMTS are divided into three blocks: the 608-614 MHz frequency band (which corresponds to UHF TV channel 37 but is not used by any TV station because it is used for radio astronomy) and the 1395-1400 MHz and 1427-1432 MHz frequency bands (both of which had been used by the Federal Government but were reallocated to the private sector under the Omnibus Budget Reconciliation Act of 1993). The frequencies in the 1427-1432 MHz band are shared by WMTS with non-medical telemetry operations, such as utility telemetry operations, that are regulated under Part 90 of the FCC's Rules. Generally, WMTS operations are accorded primary status over non-medical telemetry operations in the 1427-1429.5 MHz band, but are treated as secondary to non-medical telemetry operations in the 1429.5-1432 MHz band. 
However, there are seven geographical areas in which WMTS and non-medical telemetry operations have "flipped" the bands in which each enjoys primary status. These seven areas, termed the "carve-out" areas, are (1) Pittsburgh, PA; (2) the Washington, D.C. metropolitan area; (3) Richmond/Norfolk, VA; (4) Austin/Georgetown, TX; (5) Battle Creek, MI; (6) Detroit, MI; and (7) Spokane, WA. In these seven areas, in contrast to the rest of the country, WMTS has primary status in the 1429-1431.5 MHz band, but is secondary to non-medical telemetry operations in the 1427-1429 MHz band. FDA comments Comments from US FDA, in part: Because of concerns for interference with the present wireless medical telemetry systems, and the introduction of the WMTS, CDRH has issued a public health advisory to hospital administrators, risk managers, directors of biomedical/clinical engineering, and nursing home directors. In general, CDRH encourages manufacturers and users of medical telemetry devices to move to the new spectrum because of its protections against interference from other intentional transmitters and because frequency coordination will be provided. See also Medical Device Radiocommunications Service References Health informatics Telemedicine Telecommunication services Wireless networking standards Radio regulations
Wireless Medical Telemetry Service
Technology,Biology
1,024
11,527
https://en.wikipedia.org/wiki/Fundamental%20theorem%20on%20homomorphisms
In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism. The homomorphism theorem is used to prove the isomorphism theorems. Similar theorems are valid for vector spaces, modules, and rings. Group-theoretic version Given two groups G and H and a group homomorphism f : G → H, let N be a normal subgroup in G and φ : G → G/N the natural surjective homomorphism (where G/N is the quotient group of G by N). If N is a subset of ker(f) (where ker(f) denotes the kernel of f) then there exists a unique homomorphism h : G/N → H such that f = h ∘ φ. In other words, the natural projection φ is universal among homomorphisms on G that map N to the identity element. The situation is described by the following commutative diagram (not reproduced here). h is injective if and only if N = ker(f). Therefore, by setting N = ker(f), we immediately get the first isomorphism theorem. We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group". Proof The proof follows from two basic facts about homomorphisms, namely their preservation of the group operation, and their mapping of the identity element to the identity element. We need to show that if f : G → H is a homomorphism of groups, then: 1. im(f) is a subgroup of H. 2. G/ker(f) is isomorphic to im(f). Proof of 1 The operation that is preserved by f is the group operation. If a, b ∈ im(f), then there exist elements g1, g2 ∈ G such that f(g1) = a and f(g2) = b. For these a and b, we have ab = f(g1)f(g2) = f(g1 g2) (since f preserves the group operation), and thus the closure property is satisfied in im(f). The identity element is also in im(f) because f maps the identity element of G to it. Since every element f(g) in im(f) has an inverse f(g)⁻¹ = f(g⁻¹) (because f preserves the inverse property as well), we have an inverse for each element in im(f); therefore, im(f) is a subgroup of H. Proof of 2 Construct a map ψ : G/ker(f) → im(f) by ψ(g ker(f)) = f(g). This map is well-defined, as if g1 ker(f) = g2 ker(f), then g2⁻¹ g1 ∈ ker(f) and so f(g2⁻¹ g1) = e, which gives f(g1) = f(g2). This map is an isomorphism. ψ is surjective onto im(f) by definition. To show injectiveness, if ψ(g1 ker(f)) = ψ(g2 ker(f)), then f(g1) = f(g2), which implies g2⁻¹ g1 ∈ ker(f), so g1 ker(f) = g2 ker(f). Finally, ψ((g1 ker(f))(g2 ker(f))) = ψ(g1 g2 ker(f)) = f(g1 g2) = f(g1) f(g2) = ψ(g1 ker(f)) ψ(g2 ker(f)), hence ψ preserves the group operation. Hence ψ is an isomorphism between G/ker(f) and im(f), which completes the proof. Applications The group-theoretic version of the fundamental homomorphism theorem can be used to show that two selected groups are isomorphic. Two examples are shown below. Integers modulo n For each n ∈ ℕ, consider the groups (ℤ, +) and (ℤn, +) and a group homomorphism f : ℤ → ℤn defined by m ↦ m mod n (see modular arithmetic). Next, consider the kernel of f, ker(f) = nℤ, which is a normal subgroup in ℤ. There exists a natural surjective homomorphism φ : ℤ → ℤ/nℤ defined by m ↦ m + nℤ. The theorem asserts that there exists an isomorphism h between ℤn and ℤ/nℤ, or in other words ℤn ≅ ℤ/nℤ. The commutative diagram is illustrated below. N / C theorem Let G be a group with subgroup H. Let CG(H), NG(H) and Aut(H) be the centralizer, the normalizer and the automorphism group of H in G, respectively. Then, the theorem states that NG(H)/CG(H) is isomorphic to a subgroup of Aut(H). Proof We are able to find a group homomorphism f : NG(H) → Aut(H) defined by f(g) = (h ↦ g h g⁻¹), for all g ∈ NG(H). Clearly, the kernel of f is CG(H). Hence, we have a natural surjective homomorphism φ : NG(H) → NG(H)/CG(H) defined by g ↦ g CG(H). The fundamental homomorphism theorem then asserts that there exists an isomorphism between NG(H)/CG(H) and the image f(NG(H)), which is a subgroup of Aut(H). See also Quotient category References Theorems in abstract algebra
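The integers-modulo-n example can be checked mechanically. Below is a small Python sanity check of the induced map h : ℤ/nℤ → ℤn for n = 6 (the value is arbitrary); it verifies well-definedness, bijectivity on coset representatives, and preservation of the group operation.

```python
from itertools import product

n = 6
f = lambda m: m % n          # the homomorphism f : Z -> Z_n

# Well-definedness of h(m + nZ) = f(m): representatives of one coset agree.
window = range(-3 * n, 3 * n)
assert all(f(a) == f(b) for a, b in product(window, repeat=2)
           if (a - b) % n == 0)

# h is a bijection from the n cosets (representatives 0..n-1) onto Z_n ...
assert {f(r) for r in range(n)} == set(range(n))

# ... and preserves the operation: h((a+nZ) + (b+nZ)) = h(a+nZ) + h(b+nZ).
assert all(f(a + b) == (f(a) + f(b)) % n
           for a, b in product(range(n), repeat=2))

print(f"Z/{n}Z is isomorphic to Z_{n} (checked on representatives)")
```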
Fundamental theorem on homomorphisms
Mathematics
693
7,956,443
https://en.wikipedia.org/wiki/Cereals%20%26%20Grains%20Association
Cereals & Grains Association (formerly AACC International, formerly the American Association of Cereal Chemists) is a non-profit professional organization of members who are specialists in the use of cereal grains in foods. Founded in 1916, they are headquartered in Eagan, Minnesota. Sections Cereals & Grains Association has nine active sections. Four of the nine active sections are located outside of the United States and they are located in western Canada, Australia, Japan, and Europe. Divisions Cereals & Grains Association has eleven divisions. These include biotechnology, carbohydrate, engineering/processing, milling/baking, nutrition, protein, rheology, rice, food safety and quality, pet and animal food, and pulses. Publications Cereals & Grains Association publishes Cereal Chemistry, a bimonthly publication in cereal science, including processing, oils, and laboratory tests on these grains (corn, oat, barley, rye, etc.), Cereal Foods World, the bi-monthly magazine of the association that deals with research papers and professional issues related to those who are involved in cereal science, and books on different issues relating to grains and cereals (storage, milling, processing, food quality, food safety, ingredients, dietary fiber, and nutrition). Continuing Education Throughout its existence, Cereals & Grains Association has offered continuing education or professional development courses to its members and non-members on issues dealing with cereal science and grain processing issues. These courses have included food safety, employee safety, extrusion, processing, and more. References External links Official website 1916 establishments in the United States Agricultural organizations based in the United States Food technology organizations Chemistry organizations Chemical engineering organizations Scientific organizations established in 1916 Organizations based in Minnesota Professional associations based in the United States Dakota County, Minnesota Cereals
Cereals & Grains Association
Chemistry,Engineering
357
31,428,107
https://en.wikipedia.org/wiki/Oil%20well%20control
Oil well control is the management of the dangerous effects caused by the unexpected release of formation fluid, such as natural gas and/or crude oil, upon the surface equipment of oil or gas drilling rigs and its escape into the atmosphere. Technically, oil well control involves preventing the formation gas or fluid (hydrocarbons), usually referred to as a kick, from entering the wellbore during drilling or well interventions. Formation fluid can enter the wellbore if the pressure exerted by the column of drilling fluid is not great enough to overcome the pressure exerted by the fluids in the formation being drilled (pore pressure). Oil well control also includes monitoring a well for signs of impending influx of formation fluid into the wellbore during drilling, and procedures to stop the well from flowing when it happens by taking proper remedial actions. Failure to manage and control these pressure effects can cause serious equipment damage and injury, or loss of life. Improperly managed well control situations can cause blowouts, which are uncontrolled and explosive expulsions of formation hydrocarbons from the well, potentially resulting in a fire. Importance of oil well control Oil well control is one of the most important aspects of drilling operations. Improper handling of kicks can result in blowouts with very grave consequences, including the loss of valuable resources and the lives of field personnel. Even though the cost of a blowout (as a result of improper or absent oil well control) can easily reach several million US dollars, the monetary loss is not as serious as the other damage that can occur: irreparable harm to the environment, waste of valuable resources, ruined equipment, and, most importantly, injury and loss of life among personnel on the drilling rig. In order to avert the consequences of a blowout, the utmost attention must be given to oil well control. Oil well control procedures should therefore be in place before any abnormal situation arises within the wellbore, ideally from the time a new rig position is sited. In other words, this covers the time the new location is picked, and all drilling, completion, workover, snubbing and other drilling-related operations, which should be executed with proper oil well control in mind. This type of preparation involves widespread training of personnel, the development of strict operational guidelines and the design of drilling programs, maximizing the probability of successfully regaining hydrostatic control of a well after a significant influx of formation fluid has taken place. Fundamental concepts and terminology Pressure is a very important concept in the oil and gas industry. Pressure can be defined as the force exerted per unit area. Its SI unit is newtons per square metre, or pascals. Another unit, the bar, is also widely used as a measure of pressure, with 1 bar equal to 100 kilopascals. In the U.S. petroleum industry, pressure is normally measured in units of pounds force per square inch of area, or psi; 1000 psi equals 6894.76 kilopascals. Hydrostatic pressure Hydrostatic pressure (HSP) is defined as the pressure due to a column of fluid that is not moving. That is, a column of fluid that is static, or at rest, exerts pressure due to the local force of gravity acting on the column of fluid. The formula for calculating hydrostatic pressure in SI units (N/m2) is: Hydrostatic pressure = Height (m) × Density (kg/m3) × Gravity (m/s2). 
All fluids in a wellbore exert hydrostatic pressure, which is a function of the density and vertical height of the fluid column. In US oil field units, hydrostatic pressure can be expressed as: HSP = 0.052 × MW × TVD, where MW (mud weight) is the drilling-fluid density in pounds per gallon (ppg), TVD is the true vertical depth in feet and HSP is the hydrostatic pressure in psi. The 0.052 is the conversion factor needed to obtain HSP in psi.Schlumberger Limited article, "Hydrostatic pressure", "Schlumberger OilField Glossary". Retrieved 9 April 2011. To convert these units to SI units, one can use: 1 ppg ≈ 119.8 kg/m3; 1 ft = 0.3048 metres; 1 psi = 0.0689475729 bar; 1 bar = 10^5 pascals ≈ 14.5 psi. Pressure gradient The pressure gradient is the pressure per unit length; its SI unit is pascals per metre. Often in oil well control, the pressure exerted by a fluid is expressed in terms of its pressure gradient. The hydrostatic pressure gradient can be written as: Pressure gradient (psi/ft) = HSP/TVD = 0.052 × MW (ppg). Formation pressure Formation pressure is the pressure exerted by the formation fluids, which are the liquids and gases contained in the geologic formations encountered while drilling for oil or gas. It can also be described as the pressure contained within the pores of the formation or reservoir being drilled. Formation pressure is a result of the hydrostatic pressure of the formation fluids above the depth of interest, together with pressure trapped in the formation. Formation pressure falls into three classes: normally pressured, abnormally pressured, and subnormally pressured. Normally pressured formation A normally pressured formation has a formation pressure that is the same as the hydrostatic pressure of the fluids above it. As the fluids above the formation are usually some form of water, this pressure can be defined as the pressure exerted by a column of water from the formation's depth to sea level. The normal hydrostatic pressure gradient for freshwater is 0.433 pounds per square inch per foot (psi/ft), or 9.792 kilopascals per metre (kPa/m), and 0.465 psi/ft (10.516 kPa/m) for water with dissolved solids such as Gulf Coast waters. The density of formation water in saline or marine environments, such as along the Gulf Coast, is about 9.0 ppg, or 1078.43 kg/m3. Since this is the higher of the two values, a normally pressured formation can be controlled with a 9.0 ppg mud. Sometimes the weight of the overburden, which refers to the rocks and fluids above the formation, will tend to compact the formation, resulting in pressure build-up within the formation if the fluids are trapped in place. The formation in this case will retain its normal pressure only if there is communication with the surface; otherwise, an abnormal formation pressure will result. Abnormal formation pressure As discussed above, once the fluids are trapped within the formation and not allowed to escape, there is a pressure build-up leading to abnormally high formation pressure. This will generally require a mud weight of greater than 9.0 ppg to control. Excess pressure, called "overpressure" or "geopressure", can cause a well to blow out or become uncontrollable during drilling. Subnormal formation pressure Subnormal formation pressure is a formation pressure that is less than the normal pressure for the given depth. 
It is common in formations that have undergone production of their original hydrocarbons or formation fluid.Schlumberger Limited article, "Abnormal Pressure", "Schlumberger OilField Glossary". Retrieved 2011-04-09.Schlumberger Limited article, "Normal Pressure", "Schlumberger OilField Glossary". Retrieved 2011-04-09. Overburden pressure Overburden pressure is the pressure exerted by the weight of the rocks and contained fluids above the zone of interest. Overburden pressure varies in different regions and formations. It is the force that tends to compact a formation vertically. The bulk density of these rocks usually ranges from about 18 to 22 ppg (2,157 to 2,636 kg/m3). This range of densities generates an overburden pressure gradient of about 1 psi/ft (22.7 kPa/m). The 1 psi/ft figure is usually not applicable for shallow marine sediments or massive salt. Offshore, however, there is a lighter column of sea water, and the column of underwater rock does not go all the way to the surface. Therefore, a lower overburden pressure is usually generated at a given offshore depth than would be found at the same depth on land. Mathematically, overburden pressure can be derived as: S = ρb × D × g, where g = acceleration due to gravity, S = overburden pressure, ρb = average formation bulk density, and D = vertical thickness of the overlying sediments. The bulk density of the sediment is a function of rock matrix density, porosity, and pore-fluid density. This can be expressed as ρb = φρf + (1 − φ)ρm, where φ = rock porosity, ρf = formation fluid density and ρm = rock matrix density.Rehm, Bill; Schubert, Jerome; Haghshenas, Arash; Paknejad, Amir Saman; Hughes, Jim (2008). Managed Pressure Drilling. Gulf Publishing Company. Online version available at: Knovel-48, pp. 22/23, section 1.7 (online version). Fracture pressure Fracture pressure can be defined as the pressure required to cause a formation to fail or split. As the name implies, it is the pressure that causes the formation to fracture and the circulating fluid to be lost. Fracture pressure is usually expressed as a gradient, with the common units being psi/ft (kPa/m) or ppg (kg/m3). To fracture a formation, three things are generally needed: pumping into the formation, which requires a pressure in the wellbore greater than the formation pressure; a wellbore pressure that exceeds the rock matrix strength; and a wellbore pressure greater than one of the three principal stresses in the formation.Rehm, Bill; et al. (2008). Managed Pressure Drilling, p. 23, section 1.8.1 (online version). Pump pressure (system pressure losses) Pump pressure, which is also referred to as system pressure loss, is the sum total of all the pressure losses from the oil well surface equipment, the drill pipe, the drill collar, the drill bit, and annular friction losses around the drill collar and drill pipe. It measures the system pressure loss at the start of the circulating system and measures the total friction pressure. Slow pump pressure (SPP) Slow pump pressure is the circulating pressure (the pressure used to pump fluid through the whole active fluid system, including the borehole and all the surface tanks that constitute the primary system during drilling) at a reduced rate. 
SPP is very important during a well kill operation, in which circulation (a process in which drilling fluid is circulated out of the suction pit, down the drill pipe and drill collars, out the bit, up the annulus, and back to the pits while drilling proceeds) is done at a reduced rate to allow better control of circulating pressures and to enable the mud properties (density and viscosity) to be kept at desired values. The slow pump pressure may also be referred to as the "kill rate pressure", "slow circulating pressure", "kill speed pressure", and so on.Schlumberger Limited article, "Circulate", "Schlumberger OilField Glossary". Retrieved 9 April 2011. Shut-in drill pipe pressure Shut-in drill pipe pressure (SIDPP), which is recorded when a well is shut in on a kick, is a measure of the difference between the pressure at the bottom of the hole and the hydrostatic pressure (HSP) in the drillpipe. During a well shut-in, the pressure of the wellbore stabilizes, and the formation pressure equals the pressure at the bottom of the hole. The drillpipe at this time should be full of fluid of known density; therefore, the formation pressure can be easily calculated using the SIDPP. This means that the SIDPP gives a direct measure of formation pressure during a kick. Shut-in casing pressure (SICP) The shut-in casing pressure (SICP) is a measure of the difference between the formation pressure and the HSP in the annulus when a kick occurs. The pressures encountered in the annulus can be estimated using the following equation: FP = HSPmud + HSPinflux + SICP, where FP = formation pressure (psi), HSPmud = hydrostatic pressure of the mud in the annulus (psi), HSPinflux = hydrostatic pressure of the influx (psi), and SICP = shut-in casing pressure (psi). Bottom-hole pressure (BHP) Bottom-hole pressure (BHP) is the pressure at the bottom of a well, usually measured at the bottom of the hole. This pressure may be calculated in a static, fluid-filled wellbore with the equation: BHP = D × ρ × C, where BHP = bottom-hole pressure, D = the vertical depth of the well, ρ = density, and C = a units conversion factor (in US oil field units, BHP = D × MW × 0.052). In Canada, the formula is depth in metres × density in kg/m3 × the gravity constant (0.00981), which gives the hydrostatic pressure of the wellbore (hp); hp = BHP with the pumps off. The bottom-hole pressure depends on the following: hydrostatic pressure (HSP); shut-in surface pressure (SIP); friction pressure; surge pressure (a transient pressure that increases the bottom-hole pressure); and swab pressure (a transient pressure that reduces the bottom-hole pressure). Therefore, BHP can be said to be the sum of all pressures at the bottom of the wellhole: BHP = HSP + SIP + friction + surge − swab.Rehm, Bill; et al. (2008). Managed Pressure Drilling, p. 11, section 1.4.1 (online version). Basic calculations in oil well control There are some basic calculations that need to be carried out during oil well control. A few of these essential calculations are discussed below. Most of the units here are US oil field units, but they can be converted to their SI equivalents. Capacity The capacity of the drill string is an essential quantity in oil well control. The capacity of the drillpipe, drill collars or hole is the volume of fluid that can be contained within them. 
The capacity formula is as shown below: Capacity (bbl/ft) = ID^2 / 1029.4, where ID is the inside diameter in inches and 1029.4 is a units conversion factor. The total pipe or hole volume is given by: Volume (bbl) = Capacity (bbl/ft) × length (ft). The feet of pipe occupied by a given volume is given by: Feet of pipe (ft) = Volume of mud (bbl) / Capacity (bbl/ft). Capacity calculation is important in oil well control for the following reasons: the volume of the drillpipe and the drill collars must be pumped to get kill-weight mud to the bit during a kill operation, and it is used to spot pills and plugs at various depths in the wellbore. Annular capacity This is the volume contained between the inside diameter of the hole and the outside diameter of the pipe. Annular capacity is given by: Annular capacity (bbl/ft) = (IDhole^2 − ODpipe^2) / 1029.4, where IDhole is the inside diameter of the casing or open hole in inches and ODpipe is the outside diameter of the pipe in inches. Similarly, Annular volume (bbl) = Annular capacity (bbl/ft) × length (ft), and the feet occupied by a volume of mud in the annulus = Volume of mud (bbl) / Annular capacity (bbl/ft). Fluid level drop Fluid level drop is the distance the mud level will drop when a dry string (a bit that is not plugged) is being pulled from the wellbore, and it is given by: Fluid level drop = Bbl disp / (CSG cap + Pipe disp), or Fluid level drop = Bbl disp / (Ann cap + Pipe cap), and the resulting loss of HSP is given by: Lost HSP = 0.052 × MW × Fluid drop, where Fluid drop = distance the fluid falls (ft), Bbl disp = displacement of the pulled pipe (bbl), CSG cap = casing capacity (bbl/ft), Pipe disp = pipe displacement (bbl/ft), Ann cap = annular capacity between casing and pipe (bbl/ft), Pipe cap = pipe capacity (bbl/ft), Lost HSP = lost hydrostatic pressure (psi), and MW = mud weight (ppg). When pulling a wet string (the bit is plugged), the fluid from the drillpipe is not returned to the hole, and the fluid level drop changes to: Fluid level drop = Bbl disp / Ann cap. Kill mud weight (KMW) Kill mud weight is the density of the mud required to balance formation pressure during a kill operation. The kill weight mud can be calculated by: KWM = SIDPP / (0.052 × TVD) + OWM, where KWM = kill weight mud (ppg), SIDPP = shut-in drillpipe pressure (psi), TVD = true vertical depth (ft), and OWM = original weight mud (ppg). When the formation pressure can be determined from data sources such as bottom-hole pressure, KWM can be calculated as follows: KWM = FP / (0.052 × TVD), where FP = formation pressure. Kicks A kick is the entry of formation fluid into the wellbore during drilling operations. It occurs because the pressure exerted by the column of drilling fluid is not great enough to overcome the pressure exerted by the fluids in the formation being drilled. The whole essence of oil well control is to prevent a kick from occurring and, if one happens, to prevent it from developing into a blowout. An uncontrolled kick usually results from not deploying the proper equipment, using poor practices, or a lack of training of the rig crews. Loss of oil well control may lead to a blowout, which represents one of the most severe threats associated with the exploration of petroleum resources, involving risk to lives as well as environmental and economic consequences.IDPT/IPM article, "Basic Well Control", Scribd site. Accessed 10/04/2011, p. 3. 
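As a worked illustration of the formulas above, here is a short Python sketch; the well parameters are hypothetical, and the helper names are illustrative rather than from any standard library.

```python
def hydrostatic_pressure_psi(mw_ppg: float, tvd_ft: float) -> float:
    """HSP = 0.052 x MW x TVD (US oil-field units)."""
    return 0.052 * mw_ppg * tvd_ft

def pipe_capacity_bbl_per_ft(id_in: float) -> float:
    """Capacity = ID^2 / 1029.4, in barrels per foot."""
    return id_in ** 2 / 1029.4

def annular_capacity_bbl_per_ft(id_hole_in: float, od_pipe_in: float) -> float:
    """Annular capacity = (ID_hole^2 - OD_pipe^2) / 1029.4."""
    return (id_hole_in ** 2 - od_pipe_in ** 2) / 1029.4

def kill_mud_weight_ppg(sidpp_psi: float, tvd_ft: float, omw_ppg: float) -> float:
    """KWM = SIDPP / (0.052 x TVD) + original mud weight."""
    return sidpp_psi / (0.052 * tvd_ft) + omw_ppg

# Hypothetical well: 10,000 ft TVD, 10 ppg mud, 520 psi shut-in drillpipe pressure
print(hydrostatic_pressure_psi(10.0, 10_000))    # 5200.0 psi
print(kill_mud_weight_ppg(520.0, 10_000, 10.0))  # 11.0 ppg kill mud
print(pipe_capacity_bbl_per_ft(4.276))           # ~0.0178 bbl/ft for a 4.276-in ID
```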
Causes of kicks A kick will occur when the bottom-hole pressure (BHP) of a well falls below the formation pressure and the formation fluid flows into the wellbore. The usual causes of kicks include: failure to keep the hole full during a trip; swabbing while tripping; lost circulation; insufficient density of fluid; abnormal pressure; drilling into an adjacent well; lost control during a drill stem test; and improper fill on trips. Failure to keep the hole full during a trip Tripping is the complete operation of removing the drillstring from the wellbore and running it back in the hole. This operation is typically undertaken when the bit (the tool used to crush or cut rock during drilling) becomes dull or broken and no longer drills the rock efficiently. A typical drilling operation for a deep oil or gas well may require up to 8 or more trips of the drill string to replace a dull rotary bit. Tripping out of the hole means that the entire volume of steel (of the drillstring) is being removed, or has been removed, from the well. This displacement of the drill string (the steel) leaves a volume of space that must be replaced with an equal volume of mud. If the replacement is not done, the fluid level in the wellbore will drop, resulting in a loss of hydrostatic pressure (HSP) and bottom-hole pressure (BHP). If this bottom-hole pressure reduction goes below the formation pressure, a kick will occur. Swabbing while tripping Swabbing occurs when bottom-hole pressure is reduced due to the effects of pulling the drill string upward in the bored hole. During tripping out of the hole, the space formed by the drillpipe, drill collars, or tubing being removed must be replaced by something, usually mud. If the rate of tripping out is greater than the rate at which mud is being pumped into the void space (created by the removal of the drill string), then swabbing will occur. If the reduction in bottom-hole pressure caused by swabbing takes it below formation pressure, then a kick will occur. Lost circulation Lost circulation usually occurs when the hydrostatic pressure fractures an open formation. When this occurs, there is a loss in circulation, and the height of the fluid column decreases, leading to lower HSP in the wellbore. A kick can occur if steps are not taken to keep the hole full. Lost circulation can be caused by: excessive mud weights; excessive annular friction loss; excessive surge pressure during trips, or "spudding" the bit; and excessive shut-in pressures. Insufficient density of fluid If the density of the drilling fluid or mud in the wellbore is not sufficient to keep the formation pressure in check, then a kick can occur. Insufficient density of the drilling fluid can be a result of the following: attempting to drill with an underbalanced-weight solution; excessive dilution of the mud; heavy rains in the pits; barite settling in the pits; and spotting low-density pills in the well. Abnormal pressure Another cause of kicks is drilling accidentally into abnormally pressured permeable zones. The increased formation pressure may be greater than the bottom-hole pressure, resulting in a kick. Drilling into an adjacent well Drilling into an adjacent well is a potential problem, particularly in offshore drilling where a large number of directional wells are drilled from the same platform. 
If the drilling well penetrates the production string of a previously completed well, the formation fluid from the completed well will enter the wellbore of the drilling well, causing a kick. If this occurs at a shallow depth, it is an extremely dangerous situation and could easily result in an uncontrolled blowout with little to no warning. Lost control during drill stem test A drill-stem test is performed by setting a packer above the formation to be tested and allowing the formation to flow. During the course of the test, the borehole or casing below the packer, and at least a portion of the drill pipe or tubing, is filled with formation fluid. At the conclusion of the test, this fluid must be removed by proper well control techniques to return the well to a safe condition. Failure to follow the correct procedures to kill the well could lead to a blowout. Improper fill on trips Improper fill on a trip occurs when the volume of drilling fluid needed to keep the hole full on a trip (the complete operation of removing the drillstring from the wellbore and running it back in the hole) is less than calculated or less than the trip book record. This condition is usually caused by formation fluid entering the wellbore due to the swabbing action of the drill string, and, if action is not taken soon, the well will enter a kick state. Kick warning signs In oil well control, a kick should be detected promptly, and if a kick is detected, proper kick prevention operations must be taken immediately to avoid a blowout. There are various tell-tale signs that alert a crew that a kick is about to start. Knowing these signs helps keep a kicking oil well under control and avoid a blowout: Sudden increase in drilling rate A sudden increase in penetration rate (drilling break) is usually caused by a change in the type of formation being drilled. However, it may also signal an increase in formation pore pressure, which may indicate a possible kick. Increase in annulus flow rate If the rate at which the pumps are running is held constant, then the flow from the annulus should be constant. If the annulus flow increases without a corresponding change in pumping rate, the additional flow is caused by formation fluid(s) feeding into the wellbore or by gas expansion. This indicates an impending kick. Gain in pit volume If there is an unexplained increase in the volume of surface mud in the pit (a large tank that holds drilling fluid on the rig), it could signify an impending kick. This is because, as the formation fluid feeds into the wellbore, it causes more drilling fluid to flow from the annulus than is pumped down the drill string; thus the volume of fluid in the pit(s) increases. Change in pump speed/pressure A decrease in pump pressure or an increase in pump speed can happen as a result of a decrease in hydrostatic pressure of the annulus as the formation fluid enters the wellbore. As the lighter formation fluid flows into the wellbore, the hydrostatic pressure exerted by the annular column of fluid decreases, and the drilling fluid in the drill pipe tends to U-tube into the annulus. When this occurs, the pump pressure will drop, and the pump speed will increase. Lower pump pressure and increased pump speed can also be indicative of a hole in the drill string, commonly referred to as a washout. Until it can be confirmed whether a washout or a well kick has occurred, a kick should be assumed. 
Categories of oil well control There are basically three types of oil well control: primary, secondary, and tertiary. These are explained below. Primary oil well control Primary oil well control is the process that maintains a hydrostatic pressure in the wellbore greater than the pressure of the fluids in the formation being drilled, but less than the formation fracture pressure. It uses the mud weight to provide sufficient pressure to prevent an influx of formation fluid into the wellbore. If hydrostatic pressure is less than formation pressure, then formation fluids will enter the wellbore. If the hydrostatic pressure of the fluid in the wellbore exceeds the fracture pressure of the formation, then the fluid in the well could be lost into the formation. In an extreme case of lost circulation, the formation pressure may exceed hydrostatic pressure, allowing formation fluids to enter the well. Secondary oil well control Secondary oil well control is applied after primary oil well control has failed to prevent formation fluids from entering the wellbore. This process uses a blowout preventer (BOP) to prevent the escape of wellbore fluids from the well. With the rams and choke of the BOP closed, a pressure build-up test is carried out, and a kill mud weight is calculated and pumped into the well to kill the kick and circulate it out. Tertiary (or shearing) oil well control Tertiary oil well control describes the third line of defense, where the formation cannot be controlled by primary or secondary well control (hydrostatic pressure and equipment). This happens in underground blowout situations. The following are examples of tertiary well control: drilling a relief well to intersect an adjacent well that is flowing and kill it with heavy mud; rapid pumping of heavy mud to control the well with equivalent circulating density; pumping barite or heavy weighting agents to plug the wellbore in order to stop the flow; and pumping cement to plug the wellbore. Shut-in procedures Using shut-in procedures is one of the oil well control measures to curtail kicks and prevent a blowout from occurring. Shut-in procedures are specific procedures for closing a well in case of a kick. When any positive indication of a kick is observed, such as a sudden increase in flow or an increase in pit level, the well should be shut in immediately. If a well shut-in is not done promptly, a blowout is likely to happen. Shut-in procedures are usually developed and practiced for every rig activity, such as drilling, tripping, logging, running tubulars, performing a drill stem test, and so on. The primary purpose of a specific shut-in procedure is to minimize the kick volume entering the wellbore when a kick occurs, regardless of what phase of rig activity is under way. However, a shut-in procedure is a company-specific procedure, and the policy of a company will dictate how a well should be shut in. There are generally two types of shut-in procedure: the soft shut-in and the hard shut-in. Of these two methods, the hard shut-in is the faster method to shut in the well; it therefore minimizes the volume of kick allowed into the wellbore. Well kill procedures A well kill procedure is an oil well control method. Once the well has been shut in on a kick, proper kill procedures must be carried out immediately. 
The general idea of a well kill procedure is to circulate out any formation fluid already in the wellbore from the kick, and then circulate a sufficient weight of kill mud, called kill weight mud (KWM), into the well without allowing further fluid into the hole. If this can be done, then once the kill mud has been fully circulated around the well, it is possible to open up the well and restart normal operations. Generally, a kill weight mud (KWM) mix, which provides just hydrostatic balance for the formation pressure, is circulated. This maintains an approximately constant bottom-hole pressure, slightly greater than formation pressure, as the kill circulation proceeds, because of the additional small circulating friction pressure loss. After circulation, the well is opened up again. The major well kill procedures used in oil well control are: wait and weight; driller's method; circulate and weight; concurrent method; reverse circulation; dynamic kill; bullheading; volumetric method; and lubricate and bleed. Oil well control incidents - root causes There will always be potential oil well control problems as long as there are drilling operations anywhere in the world. Most of these well control problems result from errors and can be eliminated, even though some are unavoidable. Since the consequences of failed well control are severe, efforts should be made to prevent the human errors that are the root causes of these incidents. These causes include: lack of knowledge and skills of rig personnel; improper work practices; lack of understanding of oil well control training; lack of application of policies, procedures, and standards; and inadequate risk management. Organizations for building well-control culture An effective oil-well-control culture can be established within a company by requiring well control training of all rig workers, by assessing well control competence at the rigsite, and by supporting qualified personnel in carrying out safe well control practices during the drilling process. Such a culture also requires personnel involved in oil well control to commit to following the right procedures at the right time. Clearly communicated policies and procedures, credible training, competence assurance, and management support can minimize and mitigate well control incidents. An effective well control culture is built upon technically competent personnel who are also trained and skilled in crew resource management (a discipline within human factors), which comprises situation awareness, decision-making (problem-solving), communication, teamwork, and leadership. Training programs are developed and accredited by organizations such as the International Association of Drilling Contractors (IADC) and the International Well Control Forum (IWCF). IADC, headquartered in Houston, TX, is a nonprofit industry association that accredits well control training through a program called WellSharp, which is aimed at providing the knowledge and practical skills critical to successful well control. This training covers drilling and well servicing activities, with course levels applicable to everyone involved in supporting or conducting drilling operations, from the office support staff to the floorhands and drillers and up to the most experienced supervisory personnel. 
Training such as that included in the WellSharp program and the courses offered by IWCF contributes to the competence of personnel, but true competence can be assessed only at the jobsite during operations. Therefore, IADC also accredits industry competence assurance programs to help ensure quality and consistency of the competence assurance process for drilling operations. IADC has regional offices all over the world and accredits companies worldwide. IWCF is an NGO, headquartered in Europe, whose main aim is to develop and administer well-control certification programs for personnel employed in oil-well drilling and in workover and well-intervention operations. See also Blowout (well drilling) Drilling formula sheets Formation fluid Oil well Oil well fire References Oil spills Oilfield terminology Oil wells Petroleum geology Drilling technology Petroleum engineering
Oil well control
Chemistry,Engineering,Environmental_science
6,742
23,547,165
https://en.wikipedia.org/wiki/Structural%20rigidity
In discrete geometry and mechanics, structural rigidity is a combinatorial theory for predicting the flexibility of ensembles formed by rigid bodies connected by flexible linkages or hinges. Definitions Rigidity is the property of a structure that it does not bend or flex under an applied force. The opposite of rigidity is flexibility. In structural rigidity theory, structures are formed by collections of objects that are themselves rigid bodies, often assumed to take simple geometric forms such as straight rods (line segments), with pairs of objects connected by flexible hinges. A structure is rigid if it cannot flex; that is, if there is no continuous motion of the structure that preserves the shape of its rigid components and the pattern of their connections at the hinges. There are two essentially different kinds of rigidity. Finite or macroscopic rigidity means that the structure will not flex, fold, or bend by a positive amount. Infinitesimal rigidity means that the structure will not flex by even an amount that is too small to be detected even in theory. (Technically, that means certain differential equations have no nonzero solutions.) The importance of finite rigidity is obvious, but infinitesimal rigidity is also crucial because infinitesimal flexibility in theory corresponds to real-world minuscule flexing, and consequent deterioration of the structure. A rigid graph is an embedding of a graph in a Euclidean space which is structurally rigid. That is, a graph is rigid if the structure formed by replacing the edges by rigid rods and the vertices by flexible hinges is rigid. A graph that is not rigid is called flexible. More formally, a graph embedding is flexible if the vertices can be moved continuously, preserving the distances between adjacent vertices, with the result that the distances between some nonadjacent vertices are altered. The latter condition rules out Euclidean congruences such as simple translation and rotation. It is also possible to consider rigidity problems for graphs in which some edges represent compression elements (able to stretch to a longer length, but not to shrink to a shorter length) while other edges represent tension elements (able to shrink but not stretch). A rigid graph with edges of these types forms a mathematical model of a tensegrity structure. Mathematics of rigidity The fundamental problem is how to predict the rigidity of a structure by theoretical analysis, without having to build it. Key results in this area include the following: In any dimension, the rigidity of rod-and-hinge linkages is described by a matroid. The bases of the two-dimensional rigidity matroid (the minimally rigid graphs in the plane) are the Laman graphs. Cauchy's theorem states that a three-dimensional convex polyhedron constructed with rigid plates for its faces, connected by hinges along its edges, forms a rigid structure. Flexible polyhedra, non-convex polyhedra that are not rigid, were constructed by Raoul Bricard, Robert Connelly, and others. The bellows conjecture, now proven, states that every continuous motion of a flexible polyhedron preserves its volume. In the grid bracing problem, where the framework to be made rigid is a square grid with added diagonals as cross bracing, the rigidity of the structure can be analyzed by translating it into a problem on the connectivity of an underlying bipartite graph. 
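The two-dimensional counting condition mentioned above (Laman's characterization of the minimally rigid graphs in the plane) can be tested by brute force on small graphs. A minimal Python sketch, exponential in the number of vertices and intended only as an illustration:

```python
from itertools import combinations

def is_laman(vertices, edges):
    """Laman test for generic minimal rigidity in the plane:
    |E| = 2|V| - 3, and every k >= 2 vertices span at most 2k - 3 edges."""
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if sum(1 for a, b in edges if a in s and b in s) > 2 * k - 3:
                return False
    return True

triangle = ([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
square   = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (1, 4)])
braced   = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (1, 4), (1, 3)])

print(is_laman(*triangle))  # True: the triangle is minimally rigid
print(is_laman(*square))    # False: the unbraced square flexes
print(is_laman(*braced))    # True: one diagonal brace restores minimal rigidity
```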
However, in many other simple situations it is still not known how to analyze the rigidity of a structure mathematically, despite the existence of considerable mathematical theory. History One of the founders of the mathematical theory of structural rigidity was the physicist James Clerk Maxwell. The late twentieth century saw an efflorescence of the mathematical theory of rigidity, which continues in the twenty-first century. "[A] theory of the equilibrium and deflections of frameworks subjected to the action of forces ... in cases in which the framework ... is strengthened by additional connecting pieces ... in cases of three dimensions, by the regular method of equations of forces, every point would have three equations to determine its equilibrium, so as to give 3s equations between e unknown quantities, if s be the number of points and e the number of connexions[sic]. There are, however, six equations of equilibrium of the system which must be fulfilled necessarily by the forces, on account of the equality of action and reaction in each piece. Hence if e = 3s − 6, the effect of any external force will be definite in producing tensions or pressures in the different pieces; but if e > 3s − 6, these forces will be indeterminate...." See also Chebychev–Grübler–Kutzbach criterion Counting on Frameworks Kempe's universality theorem Notes References Mathematics of rigidity Mechanics
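Maxwell's three-dimensional count e = 3s − 6 has a planar analogue, e = 2s − 3, and infinitesimal rigidity in the plane can be tested numerically: for a framework with s joints in general position, infinitesimal rigidity holds exactly when the rigidity matrix has rank 2s − 3. The following is a minimal sketch of that test, not code from any rigidity library; the function names are invented for the example.

```python
import numpy as np

def rigidity_matrix(points, edges):
    """Rigidity matrix of a 2-D bar-and-joint framework: one row per
    edge (i, j), holding the gradient of the squared bar length with
    respect to all joint coordinates."""
    points = np.asarray(points, dtype=float)
    R = np.zeros((len(edges), 2 * len(points)))
    for row, (i, j) in enumerate(edges):
        d = points[i] - points[j]
        R[row, 2 * i:2 * i + 2] = d
        R[row, 2 * j:2 * j + 2] = -d
    return R

def is_infinitesimally_rigid(points, edges):
    # Rigid-body motions of the plane span a 3-dimensional space
    # (two translations plus one rotation), so full rank is 2s - 3.
    R = rigidity_matrix(points, edges)
    return np.linalg.matrix_rank(R) == 2 * len(points) - 3

# A triangle is rigid; a four-bar square flexes into a rhombus.
triangle = [(0, 0), (1, 0), (0.5, 1)]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_infinitesimally_rigid(triangle, [(0, 1), (1, 2), (2, 0)]))        # True
print(is_infinitesimally_rigid(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
```

The square has only 4 edges against the 2·4 − 3 = 5 required, matching the counting condition as well as the rank test.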
Structural rigidity
Physics,Engineering
985
68,604,778
https://en.wikipedia.org/wiki/DashO%20%28software%29
DashO is a code obfuscator, compactor, optimizer, watermarker, and encryptor for Java, Kotlin and Android applications. It aims to achieve little or no performance loss even as the code complexity increases. DashO can also statically analyze the code to find unused types, methods, and fields, and delete them, thereby making the application smaller. It can likewise remove calls to methods that are not needed in published applications, such as debugging and logging calls. See also Dotfuscator — a code obfuscator for .NET. ProGuard (software) — a code obfuscator for Java. References Software obfuscation Java development tools Android (operating system) development software
DashO (software)
Technology,Engineering
155
70,579,771
https://en.wikipedia.org/wiki/Boris%20Vainshtein
Boris Konstantinovich Vainshtein (Russian: Бори́с Константи́нович Вайнште́йн, 10 July 1921 – 28 October 1996) was a Russian crystallographer. He headed the Laboratory of Protein Crystallography of the Shubnikov Institute of Crystallography RAS, and was the director of the institute, where he spent the majority of his career. Vainshtein studied at the Lomonosov Moscow State University and the Institute of Steel. In 1990 Vainshtein won the second IUCr Ewald Prize "for his contributions to the development of theories and methods of structure analysis by electron and X-ray diffraction and for his applications of his theories to structural investigations of polymers, liquid crystals, peptides and proteins". See also Alexei Vasilievich Shubnikov Bibliography References 1921 births 1996 deaths Crystallographers Russian physical chemists Members of the Russian Academy of Sciences Moscow State University alumni
Boris Vainshtein
Chemistry,Materials_science
205
7,762,273
https://en.wikipedia.org/wiki/Cheeger%20constant%20%28graph%20theory%29
In mathematics, the Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connected networks of computers, card shuffling. The graph theoretical notion originated after the Cheeger isoperimetric constant of a compact Riemannian manifold. The Cheeger constant is named after the mathematician Jeff Cheeger. Definition Let $G = (V, E)$ be an undirected finite graph with vertex set $V(G)$ and edge set $E(G)$. For a collection of vertices $A \subseteq V(G)$, let $\partial A$ denote the collection of all edges going from a vertex in $A$ to a vertex outside of $A$ (sometimes called the edge boundary of $A$): $\partial A := \{\{x, y\} \in E(G) : x \in A,\ y \in V(G) \setminus A\}$. Note that the edges are unordered, i.e., $\{x, y\} = \{y, x\}$. The Cheeger constant of $G$, denoted $h(G)$, is defined by $h(G) := \min\left\{ \frac{|\partial A|}{|A|} : A \subseteq V(G),\ 0 < |A| \leq \tfrac{1}{2}|V(G)| \right\}$. The Cheeger constant is strictly positive if and only if $G$ is a connected graph. Intuitively, if the Cheeger constant is small but positive, then there exists a "bottleneck", in the sense that there are two "large" sets of vertices with "few" links (edges) between them. The Cheeger constant is "large" if any possible division of the vertex set into two subsets has "many" links between those two subsets. Example: computer networking In applications to theoretical computer science, one wishes to devise network configurations for which the Cheeger constant is high (at least, bounded away from zero) even when $N$ (the number of computers in the network) is large. For example, consider a ring network of $N$ computers, thought of as a graph $G_N$. Number the computers $1, 2, \dots, N$ clockwise around the ring. Mathematically, the vertex set and the edge set are given by: $V(G_N) = \{1, 2, \dots, N\}$ and $E(G_N) = \{\{i, i+1\} : 1 \leq i \leq N-1\} \cup \{\{N, 1\}\}$. Take $A$ to be a collection of $\lfloor N/2 \rfloor$ of these computers in a connected chain: $A = \{1, 2, \dots, \lfloor N/2 \rfloor\}$. So, $|A| = \lfloor N/2 \rfloor$ and $|\partial A| = 2$, giving $h(G_N) \leq \frac{2}{\lfloor N/2 \rfloor}$. This example provides an upper bound for the Cheeger constant $h(G_N)$, which also tends to zero as $N \to \infty$. Consequently, we would regard a ring network as highly "bottlenecked" for large $N$, and this is highly undesirable in practical terms. We would only need one of the computers on the ring to fail, and network performance would be greatly reduced. If two non-adjacent computers were to fail, the network would split into two disconnected components. Cheeger inequalities The Cheeger constant is especially important in the context of expander graphs as it is a way to measure the edge expansion of a graph. The so-called Cheeger inequalities relate the eigenvalue gap of a graph with its Cheeger constant. More explicitly, $\frac{\lambda_2}{2} \leq h(G) \leq \sqrt{2 \Delta(G) \lambda_2}$, in which $\Delta(G)$ is the maximum degree for the nodes in $G$ and $\lambda_2$ is the spectral gap of the Laplacian matrix of the graph. The Cheeger inequality is a fundamental result and motivation for spectral graph theory. See also Spectral graph theory Algebraic connectivity Cheeger bound Conductance Connectivity Expander graph Notes References Computer network analysis Graph invariants
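For small graphs, the definition above can be evaluated directly by brute force, and the Cheeger inequality can be checked against the spectrum of the unnormalized Laplacian. A minimal sketch follows; it is illustrative code written for this example, not taken from any graph library.

```python
import itertools
import numpy as np

def cheeger_constant(n, edges):
    """Brute-force h(G): minimize |boundary(A)| / |A| over all
    vertex subsets A with 0 < |A| <= n/2."""
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in itertools.combinations(range(n), size):
            A = set(subset)
            boundary = sum(1 for (x, y) in edges if (x in A) != (y in A))
            best = min(best, boundary / len(A))
    return best

# Ring network on N computers, as in the example above.
N = 8
ring = [(i, (i + 1) % N) for i in range(N)]
h = cheeger_constant(N, ring)   # equals 2 / (N // 2) = 0.5 here

# Cheeger inequality: lambda2 / 2 <= h(G) <= sqrt(2 * d_max * lambda2),
# with lambda2 the spectral gap of the graph Laplacian.
L = np.zeros((N, N))
for x, y in ring:
    L[x, x] += 1; L[y, y] += 1
    L[x, y] -= 1; L[y, x] -= 1
lam2 = sorted(np.linalg.eigvalsh(L))[1]
d_max = max(np.diag(L))
assert lam2 / 2 <= h <= np.sqrt(2 * d_max * lam2)
print(h, lam2)
```

For the 8-cycle this prints h = 0.5 and a spectral gap of 2 − 2·cos(2π/8) ≈ 0.586, and the inequality holds as expected.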
Cheeger constant (graph theory)
Mathematics
599
61,776,737
https://en.wikipedia.org/wiki/Agnostic%20%28data%29
In computing, a device or software program is said to be agnostic or data agnostic if the method or format of data transmission is irrelevant to the device or program's function. This means that the device or program can receive data in multiple formats or from multiple sources, and still process that data effectively. Definition Many devices or programs need data to be presented in a specific format to process the data. For example, Apple Inc devices generally require applications to be downloaded from their App Store. This is a non data-agnostic method, as it uses a specified file type, downloaded from a specific location, and does not function unless those requirements are met. Non data-agnostic devices and programs can present problems. For example, if a file contains the right type of data (such as text), but in the wrong format, the user may have to create a new file and enter the text manually in the proper format in order to use that program. Various file conversion programs exist because people need to convert their files to a different format in order to use them effectively. Implementation Data agnostic devices and programs work to solve these problems in a variety of ways. Devices can treat files in the same way whether they are downloaded over the internet or transferred over a USB or other cable. Devices and programs can become more data-agnostic by using a generic storage format to create, read, update and delete files. Formats like XML and JSON can store information in a data agnostic manner. For example, XML is data agnostic in that it can save any type of information. However, if Document Type Definitions (DTD) or XML Schema Definitions (XSD) are used to define what data should be placed where, it becomes non-data agnostic; it produces an error if the wrong type of data is placed in a field. Once data is saved in a generic storage format, this source can act as an entity synchronization layer. The generic storage format can interface with a variety of different programs, with the data extraction method formatting the data in a way that the specific program can understand. This allows two programs that require different data formats to access the same data. Multiple devices and programs can create, read, update and delete (CRUD) the same information from the same storage location without formatting errors. When multiple programs are accessing the same records, they may have different defined fields for the same type of concept. Where the fields are differently labelled but contain the same data, the program pulling the information can ensure the correct data is used. If one program contains fields and information that another does not, those fields can be saved to the record and pulled for that program, but ignored by other programs. As the entity synchronization layer is data agnostic, additional fields can be added without recoding the whole database, and concepts created in other programs (that do not contain that field) remain valid. Since the information formatting is imposed on the data by the program extracting it, the format can be customized to the device or program extracting and displaying that data. The information extracted from the entity synchronization layer can therefore be dynamically rendered to display on the user's device, regardless of the device or program being used. Having data agnostic devices and programs allows data to be transferred easily between them, without having to convert that data.
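As a concrete illustration of the entity-synchronization-layer idea described above, the sketch below stores records as generic field-to-value maps and lets each consuming program pull only the fields it understands. Everything here is hypothetical: the store, field names, and helper functions are invented for illustration, not the API of any product mentioned in this article.

```python
import json

# Hypothetical "entity synchronization layer": records are generic
# field-name -> value maps, with no schema enforced on write.
store = []

def save(record):
    # Round-trip through JSON so only format-agnostic data is kept.
    store.append(json.loads(json.dumps(record)))

def load(wanted_fields):
    """Each consumer pulls only the fields it understands and ignores
    the rest, so programs with different schemas share the same records."""
    return [{f: r[f] for f in wanted_fields if f in r} for r in store]

save({"name": "Rex",   "species": "dog", "licence_no": 42})
save({"name": "Tibbs", "species": "cat", "indoor": True})

# A licensing app and a vet app read the same store with different views.
print(load(["name", "licence_no"]))   # [{'name': 'Rex', 'licence_no': 42}, {'name': 'Tibbs'}]
print(load(["name", "species"]))      # both records, with species fields
```

Fields unknown to one consumer are simply carried along for others, which is the point made above about adding fields without recoding the whole database.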
Companies like Great Ideaz provide data agnostic services by storing the data in an entity synchronization layer. This acts as a compatibility layer, as T-SQL statements can retrieve, update, sort, and write data regardless of the format employed. It also allows data to be synchronized between multiple applications, as the applications can all pull data from the same location. This prevents compatibility problems between different programs that have to access the same data, as well as reducing data replication. Benefits Keeping devices and programs as data agnostic as possible has some clear advantages. Since the data is stored in an agnostic format, developers do not need to hard-code ways to deal with all different kinds of data. A table with information about dogs and one with information about cats can be treated in the same way; extract the field definitions and the field content from the data agnostic storage format and display it based on the field definitions. Using the same code for the different concepts to CRUD, the amount of code is significantly reduced, and what remains is tested with each concept extracted from the entity synchronization layer. The field definitions and formatting can be stored in the entity synchronization layer with the data they are acting on. This allows fields and formatting to change without having to hardcode and recompile programs. The data and formatting are then generated dynamically by the code used to extract the data and the formatting information. The data itself only needs to be distinguished when it is being acted on or displayed in a specific way. If the data is being transferred between devices or databases, it does not need to be interpreted as a specific object. Whenever the data can be treated as agnostic, the coding is simplified, as it only has to deal with one case (the data agnostic case) rather than multiple (PNG, PDF, etc.). When the data must be displayed or acted on, then it is interpreted based on the field definitions and formatting information, and returned to a data agnostic format as soon as possible to reduce the number of individual cases that must be accounted for. Risks There are, however, a few problems introduced when attempting to make a device or program data agnostic. Since only one piece of code is being used for CRUD operations (regardless of the type of concept), there is a single point of failure. If that code breaks down, the whole system is broken. This risk is mitigated because the code is tested so many times (as it is used every time a record is stored or retrieved). Additionally, data agnostic storage media can increase load times, as the code has to search for the field definitions and display format as well as the specific data to be displayed. The load speed can be improved by pre-shredding the data. This uses a copy of the record with the data already extracted to index the fields, instead of having to extract the fields and formatting information at the same time as the data. While this improves the speed, it adds a non-data agnostic element to the process; however, it can be created easily through code generation. References Computer data Computer science
Agnostic (data)
Technology
1,340
31,351,992
https://en.wikipedia.org/wiki/Model%20steam%20engine
A model steam engine is a small steam engine not built for serious use. Often they are built as an educational toy for children, in which case it is also called a toy steam engine, or for live steam enthusiasts. Between the 18th and early 20th centuries, demonstration models were also in use at universities and engineering schools, frequently designed and built by students as part of their curriculum. Model steam engines have been made in many forms by a number of manufacturers, but building model steam engines from scratch is popular among adult steam enthusiasts, although this generally requires access to a lathe and/or milling machine. Those without a lathe can alternatively purchase prefabricated parts. History In the late 19th century, manufacturers such as German toy company Bing introduced the two main types of model/toy steam engines, namely stationary engines with accessories that were supposed to mimic a 19th-century factory, and mobile engines such as steam locomotives and boats. Later, especially in the early 20th century, steam rollers, fire engines, traction engines and steam wagons began to appear. At the peak of their popularity, around the mid 20th century, there were hundreds of companies making steam toys and models. Today, companies such as Wilesco (Germany), Mamod (UK), and Jensen (US) continue to produce model/toy steam engines. Design features Toy steam engines will commonly have fewer features (such as mechanical lubricators or governors), and operate at lower pressures, while model steam engines will place more emphasis on similarity to life-sized engines. Manufacturers such as Wilesco sell both simple toy engines for beginners (e.g. the D3) and more intricate model engines that are meant to be used to drive things like workshops or boats. Model steam engines typically use hexamine fuel tablets, methylated spirits (aka meths or denatured alcohol), butane gas, or electricity to heat the boiler. Cylinders are either oscillating (single-acting or double-acting) or fixed cylinder using slide-valves, piston valves or poppet valves (normally double-acting). Spring safety valves and steam whistles are other common features of model steam engines. Some stationary engines also have feedwater pumps to replenish boiler water, allowing them to run indefinitely as long as sufficient fuel is available. Gallery See also Live steam Model engineering Notes References and further reading Stan Bray: Making Simple Model Steam Engines, 192 pp, Tubal Cain: Building Simple Model Steam Engines, 112 pp, Paul Hasluck: The Model Engineer's Handybook: A Practical Manual on Model Steam Engines. Archive.org e-book Bob Gordon: Toy Steam Engines, Bob Gordon: Model Steam Engines, :Category:Toy steam engine manufacturers Steam engines Educational toys Live steam Steam Powered toys Scale modeling Articles containing video clips
Model steam engine
Physics,Technology
573
5,589,115
https://en.wikipedia.org/wiki/Camino%20Real%20de%20Tierra%20Adentro
El Camino Real de Tierra Adentro, also known as the Silver Route, was a Spanish road between Mexico City and San Juan Pueblo (Ohkay Owingeh), New Mexico (in the modern U.S.), that was used from 1598 to 1882. It was the northernmost of the four major "royal roads" that linked Mexico City to its major tributaries during and after the Spanish colonial era. In 2010, 55 sites and five existing UNESCO World Heritage Sites along the Mexican section of the route were collectively added to the World Heritage List, including historic cities, towns, bridges, haciendas and other monuments along the route between the Historic Center of Mexico City (also a World Heritage Site on its own) and the town of Valle de Allende, Chihuahua. The section of the route within the United States was proclaimed the El Camino Real de Tierra Adentro National Historic Trail, a part of the National Historic Trail system, on October 13, 2000. The historic route is overseen by both the National Park Service and the U.S. Bureau of Land Management with aid from the El Camino Real de Tierra Adentro Trail Association (CARTA). A portion of the trail near San Acacia, New Mexico, was listed on the U.S. National Register of Historic Places in 2014. Route The road is identified as beginning at the Plaza Santo Domingo, very close to the present Zócalo and Mexico City Metropolitan Cathedral in Mexico City. From there it ran north through San Miguel de Allende, Guanajuato, to its northern terminus at Ohkay Owingeh, New Mexico. History Pre-Columbian history Long before Europeans arrived, the various indigenous tribes and kingdoms that had arisen throughout the northern central steppe of Mexico had established the route that would later become the Camino Real de Tierra Adentro as a major thoroughfare for hunting and trading. The route connected the peoples of the Valley of Mexico with those of the north through the exchange of products such as turquoise, obsidian, salt and feathers. By the year AD 1000, a flourishing trade network existed from Mesoamerica to the Rocky Mountains. European incursion After Tenochtitlan was subdued in 1521, Spanish conquistadors and colonists began a series of expeditions with the purpose of expanding their domains and obtaining greater wealth for the Spanish Crown. Their initial efforts led them to follow the trails established by the natives who exchanged goods between the north and the south. In April 1598, a group of military scouts led by Juan de Oñate, the newly appointed colonial governor of the province of Santa Fe de Nuevo México, became lost in the desert south of Paso del Norte while seeking the best route to the Río del Norte. A captured local Indian named Mompil drew in the sand a map of the only safe passage to the river. The group arrived at the Río del Norte just south of present-day El Paso and Ciudad Juárez in late April, where they celebrated the Catholic Feast of the Ascension on April 30, before crossing the river. They then mapped and extended the route to what is now Española, where Oñate would establish the capital of the new province. This trail became the Camino Real de Tierra Adentro, the northernmost of the four main "royal roads" – the Caminos Reales – that linked Mexico City to its major tributaries in Acapulco, Veracruz, Audiencia (Guatemala) and Santa Fe.
After the Pueblo Revolt of 1680, which violently forced the Spanish out of Nuevo México, the Spanish Crown decided not to abandon the province altogether and instead maintained a supply channel so as not to cut off the subjects who remained there. The Viceroyalty organized a system, the so-called conducta, to supply the missions, presidios, and northern ranchos. The conducta consisted of wagon caravans that departed every three years from Mexico City to Santa Fe along the Camino Real de Tierra Adentro. The trip required a long and difficult journey of six months, including 2–3 weeks of rest along the way. The conducta and other travelers faced many uncertainties. River floods could force weeks of waiting on the banks until the caravan could wade across. At other times, prolonged droughts in the area could make water scarce and difficult to find. The most feared section of the journey was the crossing of the Jornada del Muerto beyond El Paso del Norte: a long stretch of expansive, barren desert without any water sources to hydrate the men and beasts. Beyond the sustenance needs, the greatest danger to the caravan was that of local assaults. Groups of bandits roamed throughout the territory and threatened the caravan from the current state of Mexico to the state of Querétaro, seeking articles of value. And from the southern part of Zacatecas onward to the north, the greatest threat was the native Chichimecas, who became more likely to attack as the caravan progressed further north. The main objective of the Chichimecas was horses, but they would also often take women and children. A series of presidios along the way allowed for relays of troops to provide additional protection to the caravans. At night in the most dangerous areas, the caravans would form a circle with their wagons with the people and animals inside. The Camino Real was actively used as a commercial route for more than 300 years, from the middle of the 16th century to the 19th century, mainly for the transport of silver extracted from northern mines. During this time, the road was continuously improved, and over time the risks became smaller as haciendas and population centers emerged.
In the 18th century, the Spanish Crown authorized the establishment of fairs along the Camino Real to promote commerce (although some form of these had already been existing for some time prior). Some of the most important Fairs along the Camino Real included the Fair de San Juan de los Lagos in Jalisco, the Fair de Saltillo, and the Fair de Chihuahua, which was of great importance to Nuevo México merchants. The Fair de Taos was also an important annual event where the Comanches and the Utes traded weapons, ammunition, horses, agricultural products, furs, and meats with the Spanish. Spain at the same time maintained a monopoly on the products of its northern provinces, thus no trade occurred with the French colony of Louisiana. For the second half of the 18th century, the northern frontier of New Spain represented a fundamental interest for the Spanish Empire and its reformist policy, with the aim of ensuring Spanish sovereignty over its northern provinces, highly coveted geopolitically by other European powers – especially the English and the French. The Spanish Crown labored to incorporate the natives into the social and economic welfare of its provinces and give them reasons to participate in the defense of the Spanish border. Thus, Captain Nicolás de Lafora (assigned by the then Marqués de Rubí) gives a description of the frontier of New Spain in his "Viaje a los presidios internos de la América septentrional", the product of an expedition that took place between 1766 and 1768. This expedition was part of a larger commission on the defensive issues and military capabilities entrusted by the Spanish Crown to the Marquis of Rubí, to assess the tactical placement of the Presidios, inspect troop readiness, review military regulations and propose what might be done to strengthen the government and the defense of the State. From its review, the Marquis proposed a line of Presidios along the northern frontier of New Spain, to be established from the Gulf of Mexico to the Gulf of California to protect itself from the Utes, Apaches, Comanches, and Navajos. Don José de Gálvez, special commissioner to New Spain for Charles III, promoted a "Comandancia General de las Provincias Internas" ("General Commander of the Internal Provinces") for the northern provinces of New Spain. However, he also recognized that a long war with the natives would be impossible to win or sustain due to the lack of military resources in the area. With that view, he himself promoted the establishment of a strong peace in the provinces and a greater commercial presence in 1779. In 1786, the nephew of José de Gálvez, Bernardo de Gálvez, viceroy of New Spain published his "Instructions" which included three strategies for dealing with the Natives: Continuing the military pressure on hostile and unaligned tribes; Pursuing the formation of alliances with friendly tribes; and promoting economic dependency with those natives who had entered into peace treaties with the Spanish Crown. In the last decade of the 18th century, a tenuous peace was achieved between the Spaniards and the Apache tribes as a result of the aforementioned administrative and strategic changes. As a consequence, commerce along the Camino Real greatly expanded with products from all over the world, including products from the other provinces of New Spain, brought in over land; European products brought in by the Spanish fleet; and even those that came from the Manila galleon that arrived annually at Acapulco from the western Pacific. 
As an example, for this time, the most typical products sold by the merchants in the city of Parral along the "Chihuahua Trail" included: Platoncillos from Michoacán; Jarrillos from Cuautitlán of the State of Mexico; Majolica from the State of Puebla; Porcelain junks from China; and clay products from Guadalajara. 19th century The 19th century brought many changes for both Mexico and its northern border. From the Napoleonic Wars to the start of the Mexican War of Independence, the colonial government was unstable and struggled to continue sending resources to the northern provinces. This void led to the establishment of alternate suppliers and supply routes into those provinces. In 1807, American merchant and military agent Zebulon Pike was sent to explore the southwestern borders between the US and New Spain with the intention to find a trail to bring US commerce into Nuevo México and Nueva Vizcaya (Chihuahua). Pike was captured on 26 February 1807 by the Spanish authorities in northern Nuevo México, who sent him on the Camino Real to the city of Chihuahua for interrogation. While Pike was in this city, he gained access to several maps of México and learned of the discontent with Spanish domination. In 1821, after 11 years of struggle, Mexico gained its independence from Spain. The Camino Real maintained an important role in this period, since travelers brought communication about the events that were taking place in the center of the country to the towns and villages of the internal provinces. During the Mexican War of Independence, the Camino Real was used by both forces, rebels and royal forces. For example, after the liberator Miguel Hidalgo y Costilla launched the war of independence, he used the road to retreat from the Battle of the Bridge of Calderón fought on the banks of the Calderón River 60 km (37 mi) east of Guadalajara in present-day Zapotlanejo, Jalisco, northward, eventually arriving at the Wells of Baján in Coahuila where he was captured and executed by royal forces. Between 1821 and 1822, after the end of the war for the Independence of Mexico, the Santa Fe Trail was established to connect the US territory of Missouri with Santa Fe. At first, US merchants were arrested and imprisoned for bringing contraband into Mexican territory; however, the growing economic crisis in northern Mexico gave rise to an increased tolerance of this type of trade. In fact, the Santa Fe Trail (Sendero de Santa Fe) provided needed markets for local products (such as cotton) and manufactured products from New Mexico, so New Mexicans looked favorably on this new trade route. By 1827, a lucrative and commercial connection had been forged between Missouri, New Mexico, and Chihuahua. In 1846, the dispute over the Texas-Mexico border with the United States gave rise to the subsequent invasion by US military forces and the Mexican–American War began. One of these forces was commanded by the general Stephen Kearny, who traveled by the Santa Fe Trail to seize the capital of New Mexico. Another of the forces commanded by Colonel Alexander William Doniphan defeated a small group of Mexican contingents on the Camino Real in the Los Brazitos area south of what is now Las Cruces, New Mexico. Doniphan's forces went on to capture El Paso del Norte and, later, the city of Chihuahua. During 1846–1847, the Camino Real de Tierra Adentro became a path of continuous use, with American forces using it to travel into the interior of Mexico. 
On their journey, many American travelers kept journals and wrote home about what they saw as they travelled. One of the soldiers provided an estimate of the population of several cities along the Camino, including: Algodones, New Mexico, with 1,000 inhabitants; Bernalillo with 500; Sandía Pueblo with 300 to 400; Albuquerque, with no estimated number but extending for seven or eight miles along the Rio Grande; Rancho de los Placeres with 200 or 300; Tomé with 2,000; Socorro, described as a "considerable city"; Paso del Norte with 5,000 to 6,000; and Carrizal, Chihuahua, with 400 inhabitants. The soldiers even kept notes of the products, prices, and animals that they found on their journeys. With the Treaty of Guadalupe Hidalgo signed in February 1848, the war officially ended, with Mexico ceding most of its northern territories to the US, including parts of what are now the US states of New Mexico, Colorado, Arizona, and all of California, Nevada and Utah. Uses of the name The name is sometimes a source of confusion, since during the Viceroyalty of New Spain all roads passable by horse and cart were called "Camino Real", and a significant number of roads throughout the viceroyalty bore this designation. Similarly, all of the interior territories outside of Mexico City were once called "Tierra Adentro", and particularly the northern parts of the Kingdom. This is why the portion of the road between Santiago de Querétaro and Saltillo was alternatively called "La Puerta de Tierra Adentro" ("The Door of Tierra Adentro"). There have historically been several designated "Caminos Reales de Tierra Adentro" throughout New Spain, perhaps the second most important one after the road to Santa Fe being the one that led out of Saltillo, Coahuila, to the Province of Texas. World Heritage Site The section of the road that runs through Mexico was nominated to the UNESCO World Heritage List in November 2001, under the cultural criteria (i) and (ii), which referred to i) "Representing a masterpiece of the creative genius of man"; and ii) "Being the manifestation of a considerable exchange of influences, during a specific period or in a specific cultural area, in the development of architecture or technology, monumental arts, urban planning or landscape design". Criterion (iv), "Offering an eminent example of a type of building, architectural, technological or landscape, that illustrates a significant stage of human history", was added in 2010. On August 1, 2010, UNESCO designated this road as a World Heritage Site. The designation identified a core zone of 3,102 hectares with a buffer zone of 268,057 hectares distributed across 60 historical sites. UNESCO recognized 60 sites along the road in its declaration of the road as a World Heritage Site. Five of them (Mexico City, Querétaro, Guanajuato, San Miguel de Allende and Zacatecas) had been separately recognized in the past. The original historical route does not exactly match the route identified by UNESCO, since UNESCO's declaration omitted several sections such as the portion that ran north of Valle de Allende in Chihuahua and the portion that ran through the Hacienda de San Diego del Jaral de Berrio in Guanajuato, as well as the portion in the United States. For this reason, a possible expansion of the declaration has been proposed for the future.
The Instituto Nacional de Antropología e Historia is conducting research to find and gather evidence for additional portions and sites of the original stretches of the historical road, such as bridges, pavements, haciendas, etc. that might be added to the original UNESCO designation. Declared sites Mexico City and State of Mexico 1351-000: Historic center of Mexico City. 1351-001: Old College of Templo de San Francisco Javier (Tepotzotlán) in Tepotzotlán. 1351-002: Aculco de Espinoza. 1351-003: Bridge of Atongo. 1351-004: Section of the Camino Real between Aculco de Espinoza and San Juan del Río. State of Hidalgo 1351-005: Templo and exconvento de San Francisco in Tepeji del Río de Ocampo and bridge. 1351-006: Section of the Camino Real between the bridge of La Colmena and the Hacienda de La Cañada. State of Querétaro 1351-007: Historic center of San Juan del Río. 1351-008: Hacienda de Chichimequillas. 1351-009: Chapel of the hacienda de Buenavista. 1351-010: Historic center of Santiago de Querétaro. State of Guanajuato 1351-011: Bridge of El Fraile. 1351-012: Antiguo Real Hospital de San Juan de Dios in San Miguel de Allende. 1351-013: Bridge of San Rafael in Guanajuato. 1351-014: Bridge La Quemada. 1351-015: Sanctuario de Jesús Nazareno de Atotonilco in the Municipality of San Miguel de Allende. 1351-016: Historic center of Guanajuato and its adjacent mines. State of Jalisco 1351-017: Historic center of Lagos de Moreno and bridge. 1351-018: Historic center of Ojuelos de Jalisco. 1351-019: Bridge of Ojuelos de Jalisco. 1351-020: Hacienda de Ciénega de Mata. 1351-021: Old Cemetery of Encarnación de Díaz. State of Aguascalientes 1351-022: Hacienda de Peñuelas. 1351-023: Hacienda de Cieneguilla. 1351-024: Historic center of Aguascalientes. 1351-025: Hacienda de Pabellón de Hidalgo. State of Zacatecas 1351-026: Chapel of San Nicolás Tolentino of the Hacienda de San Nicolás de Quijas. 1351-027: Town of Pinos. 1351-028: Templo de Nuestra Señora de los Ángeles of the town of Noria de Ángeles. 1351-029: Templo de Nuestra Señora de los Dolores in Villa González Ortega. 1351-030: Colegio de Nuestra Señora de Guadalupe de Propaganda Fide. 1351-031: Historic center of Sombrerete. 1351-032: Templo de San Pantaleón Mártir in the town of Noria de San Pantaleón. 1351-033: Sierra de Órganos. 1351-034: Architectural set of the town of Chalchihuites. 1351-035: Section of the Camino Real between Ojocaliente and Zacatecas. 1351-036: Cave of Ávalos. 1351-037: Historic center of Zacatecas. 1351-038: Sanctuary of Plateros. State of San Luis Potosí 1351-039: Historic center of San Luis Potosí. State of Durango 1351-040: Chapel of San Antonio of the Hacienda de Juana Guerra. 1351-041: Churches in the town of Nombre de Dios. 1351-042: Hacienda de San Diego de Navacoyán and Bridge del Diablo. 1351-043: Historic center of Durango. 1351-044: Churches in the town of Cuencamé and Cristo de Mapimí. 1351-045: Templo de Nuestra Señora del Refugio in the Hacienda La Pedriceña in Los Cuatillos, Cuencamé Municipality. 1351-046: Iglesia Principal of the town of San José de Avino. 1351-047: Chapel of the Hacienda de la Inmaculada Concepción of Palmitos de Arriba. 1351-048: Chapel of the Hacienda de la Limpia Concepción of Palmitos de Abajo. 1351-049: Architectural set of Nazas. 1351-050: Town of San Pedro del Gallo. 1351-051: Architectural set of the town of Mapimí. 1351-052: Town of Indé. 1351-053: Chapel of San Mateo of the Hacienda de San Mateo de la Zarca. 1351-054: Hacienda de la Limpia Concepción of Canutillo. 
1351-055: Templo de San Miguel in the town of Villa Ocampo. 1351-056: Section of the Camino Real between Nazas and San Pedro del Gallo. 1351-057: Ojuela Mine. 1351-058: Cave of Las Mulas de Molino. State of Chihuahua 1351-059: Town of Valle de Allende. Undeclared historic locations of the Camino Real in State of Chihuahua Santa Bárbara Parral Chihuahua Carrizal Laguna de Patos Ojo el Lucero Puerto Ancho Ciudad Juárez Senucú San Lorenzo Misión de Nuestra Señora de Guadalupe Presidio del Nuestra Senora del Pilar del Paso del Rio Norte Location National Historic Trail In the United States, from the Texas–New Mexico border to San Juan Pueblo north of Española, the original route (at one point designated U.S. Route 85 but later superseded by US Interstate Highways 10 and 25) has been designated a National Scenic Byway called El Camino Real. Pedestrian, bicycle, and equestrian trails have been added to portions of the trade route corridor over the past few decades. These include the existing Paseo del Bosque Trail in Albuquerque and portions of the proposed Rio Grande Trail. Its northern terminus, Santa Fe, is also a terminus of the Old Spanish Trail and the Santa Fe Trail. Along the trail, parajes (stopovers) that have been preserved today include El Rancho de las Golondrinas. Fort Craig and Fort Selden are also located along the trail. CARTA The El Camino Real de Tierra Adentro Trail Association (CARTA) is a non-profit trail organization that aims to help promote, educate about, and preserve the cultural and historic trail in collaboration with the U.S. National Park Service, the Bureau of Land Management, the New Mexico Department of Cultural Affairs, and various Mexican organizations. CARTA publishes an informative quarterly journal, Chronicles of the Trail, which provides readers with further history and current affairs of the trail and what CARTA, as an organization, is doing to help preserve it. Chihuahua Trail The Chihuahua Trail is an alternate name used to describe the route as it passes from New Mexico through the state of Chihuahua to central Mexico. By the late 16th century, Spanish exploration and colonization had advanced from Mexico City northward along the great central plateau to its ultimate goal in Santa Fe. Until Mexican independence in 1821, all communications between New Mexico and the rest of the world were restricted to this trail. Over it came ox carts and mule trains, missionaries and governors, soldiers and colonists. When the Santa Fe Trail was established as an overland route between Santa Fe and Missouri, traders from the United States extended their operations southward down the Chihuahua Trail and beyond to Durango and Zacatecas. Ultimately superseded by railroads in the 19th century, the ancient Mexico City–Santa Fe road was revived in the mid-20th century as one of the great automobile highways of Mexico. The part that runs from Santa Fe, New Mexico, to El Paso, Texas (US Highway 85) was pioneered by Franciscan missionaries in 1581 and may be the oldest highway in the United States.
See also Camino Real in New Mexico - El Camino Real de Tierra Adentro El Camino Real (California) – the California Mission Trail El Camino Real de Los Tejas – El Camino Real from Texas east to Louisiana National Register of Historic Places listings in Socorro County, New Mexico Old San Antonio Road – a section of El Camino Real de Los Tejas Scenic byways in the United States Supply of Franciscan missions in New Mexico References Further reading Dictionary of American History by James Truslow Adams, New York: Charles Scribner's Sons, 1940 Boyle, Susan Calafate. Los Capitalistas: Hispano Merchants and the Santa Fe Trade. Albuquerque: University of New Mexico Press, 1997. Moorhead, Max L. New Mexico's Royal Road. Norman: University of Oklahoma Press, 1958. Palmer, Gabrielle G., et al. El Camino Real de Tierra Adentro. Santa Fe: Bureau of Land Management, 1993. Palmer, Gabrielle G. and Stephen L. Fosberg. El Camino Real de Tierra Adentro. Santa Fe: Bureau of Land Management, 1999. Preston, Douglas and José Antonio Esquibel. The Royal Road. Albuquerque: University of New Mexico Press, 1998. External links National Park Service: official El Camino Real de Tierra Adentro National Historic Trail website El Camino Real International Heritage Center El Camino Real de Tierra Adentro – Integrated education curriculum CARTA – El Camino Real de Tierra Adentro Trail Association: website N.M.-Monuments.org – "A Road Over Time" Historic trails and roads in Mexico Historic trails and roads in New Mexico Historic trails and roads in Texas Colonial Mexico Colonial New Mexico New Spain Spanish Texas National Historic Trails of the United States National Scenic Byways Bureau of Land Management areas in New Mexico Historic Civil Engineering Landmarks Protected areas established in 2000 Units of the National Landscape Conservation System Roads on the National Register of Historic Places in New Mexico New Mexico Scenic and Historic Byways World Heritage Sites in Mexico National Register of Historic Places in Socorro County, New Mexico 2000 establishments in Texas 2000 establishments in New Mexico 2000 establishments in Mexico
Camino Real de Tierra Adentro
Engineering
5,739
12,241,687
https://en.wikipedia.org/wiki/C4H6O3
{{DISPLAYTITLE:C4H6O3}} The molecular formula C4H6O3 may refer to: Acetic anhydride Acetoacetic acid Dioxanones p-Dioxanone Trimethylene carbonate trans-4-Hydroxycrotonic acid α-Ketobutyric acid 2-Methyl-3-oxopropanoic acid Methyl pyruvate Propylene carbonate Succinic semialdehyde
C4H6O3
Chemistry
101
906,511
https://en.wikipedia.org/wiki/Water%20block
A water block is the watercooling equivalent of a heatsink. It is a type of plate heat exchanger and can be used on many different computer components, including the central processing unit (CPU), GPU, PPU, and northbridge chipset on the motherboard. There are also monoblocks on the market that are mounted on PC motherboards and cover the CPU and its power delivery VRMs (Voltage Regulator Modules) that surround the CPU socket area. A water block consists of at least two main parts: the "base", which is the area that makes contact with the device being cooled and is usually manufactured from metals with high thermal conductivity such as aluminum or copper; and the "top", which ensures the water is contained safely inside the water block and has connections that allow hosing to connect it with the water cooling loop. The top can be made of the same metal as the base, transparent Perspex, Delrin, Nylon, or HDPE. Most newer high-end water blocks also contain mid-plates which serve to add jet tubes, nozzles, and other flow altering devices. The base, top, and mid-plate(s) are sealed together to form a "block" with some sort of path for water to flow through. The ends of the path have inlet/outlet connectors for the tubing that connects it to the rest of the watercooling system. Early designs included spiral or zig-zag patterns, or heatsink-like fins, to allow the largest possible surface area for heat to transfer from the device being cooled to the water. These designs were generally used because the conjecture was that maximum flow was required for high performance. Trial and error and the evolution of water block design have shown that trading flow for turbulence can often improve performance. The Storm series of water blocks is an example of this. Its jet tube mid-plate and cupped base design makes it more restrictive to the flow of water than early maze designs, but the increased turbulence results in a large increase in performance. Newer designs include "pin" style blocks, "jet cup" blocks, further refined maze designs, micro-fin designs, and variations on these designs. Increasingly restrictive designs have only been possible because of increases in the maximum head pressure of commercially viable water pumps. A water block is better at dissipating heat than an air-cooled heatsink due to water's higher specific heat capacity and thermal conductivity. The water is usually pumped through to a radiator which allows a fan pushing air through it to take the heat created by the device and expel it into the air. A radiator is more efficient than a standard CPU or GPU heatsink/air cooler at removing heat because it has a much larger surface area. Installation of a water block is similar to that of a heatsink, with a thermal pad or thermal grease placed between it and the device being cooled to aid in heat conduction. References Computer hardware cooling Heat exchangers
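The heat-transport advantage described above can be estimated from first principles with Q = m·c·ΔT, where m is the coolant mass flow rate. The sketch below uses assumed, illustrative figures for flow rate and temperature rise, not measurements of any particular water block.

```python
# Steady-state heat carried away by the coolant: Q = m_dot * c * dT.
# Flow rate and temperature rise are assumed, illustrative values.
c_water = 4186.0            # specific heat capacity of water, J/(kg*K)
flow_litres_per_min = 4.0   # assumed loop flow rate
delta_t_kelvin = 10.0       # assumed coolant temperature rise across the block

m_dot = flow_litres_per_min / 60.0      # kg/s, taking 1 litre of water as 1 kg
q_watts = m_dot * c_water * delta_t_kelvin
print(f"heat transported: {q_watts:.0f} W")   # about 2790 W at these figures
```

Even at modest flow, water's high specific heat lets a loop carry far more heat per unit volume than air can, which is the comparison the paragraph above is making.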
Water block
Chemistry,Engineering
608
11,171,340
https://en.wikipedia.org/wiki/Overtime%20rate
Overtime rate is a calculation of hours worked by a worker that exceed those hours defined for a standard workweek. This rate can have different meanings in different countries and jurisdictions, depending on how that jurisdiction's labor law defines overtime. In many jurisdictions, additional pay is mandated for certain classes of workers when this set number of hours is exceeded. In others, there is no concept of a standard workweek or analogous time period, and no additional pay for exceeding a set number of hours within that week. The overtime rate expresses the ratio of an employee's overtime hours to regular hours in a specific time period. Even if the work is planned or scheduled, it can still be considered overtime if it exceeds what is considered the standard workweek in that jurisdiction. A high overtime rate is a good indicator of a temporary or permanent high workload, and can be a contentious issue in labor-management relations. It could result in a higher illness rate, lower safety rate, higher labor costs, and lower productivity. United States In the United States a standard workweek is considered to be 40 hours. Most waged employees or so-called non-exempt workers under U.S. federal labor and tax law must be paid at a wage rate of 150% of their regular hourly rate for hours that exceed 40 in a week. The start of the pay week can be defined by the employer, and need not be a standard calendar week start (e.g., Sunday midnight). Many employees, especially shift workers in the U.S., have some amount of overtime built into their schedules so that 24/7 coverage can be obtained. Formula Following the definition above, the overtime rate for a period can be written as $\text{overtime rate} = \frac{\text{overtime hours worked}}{\text{regular hours worked}}$. References External links The Death of Overtime by Nick Hanauer Working time Human resource management Metrics Labor rights Labor relations Labor history Wages and salaries
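As an illustration of the United States rule described above, the following sketch computes weekly pay with time-and-a-half beyond a 40-hour standard week, plus the overtime-rate metric itself. The function names and default parameters are illustrative, not taken from any statute or payroll library.

```python
def weekly_pay(hours_worked, hourly_rate, standard_week=40, multiplier=1.5):
    """Pay for one week under the US non-exempt rule: hours beyond
    the standard workweek are paid at 150% of the regular rate."""
    overtime = max(0.0, hours_worked - standard_week)
    regular = hours_worked - overtime
    return regular * hourly_rate + overtime * hourly_rate * multiplier

def overtime_rate(overtime_hours, regular_hours):
    """The metric defined above: ratio of overtime to regular hours."""
    return overtime_hours / regular_hours

print(weekly_pay(48, 20.0))    # 40*20 + 8*30 = 1040.0
print(overtime_rate(8, 40))    # 0.2
```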
Overtime rate
Mathematics
356
20,375,066
https://en.wikipedia.org/wiki/Iatromathematicians
Iatromathematicians (from Greek ἰατρική "medicine" and μαθηματικά "mathematics") were a school of physicians in 17th-century Italy who tried to apply the laws of mathematics and mechanics in order to understand the functioning of the human body. They were also keen students of anatomy. These iatromathematicians made an effort to prove that applying a purely mechanical conception to the study of the human body is futile. The mechanical conceptions they referred to were Leonardo da Vinci's studies of the human body and the writings of Aristotle on the motion of animals in relation to geometric analysis. Iatromathematicians considered the body's functioning to be measurable by quantifiable numbers, weights, and measures. Iatromathematics The field of iatromathematics is allied to science; however, it lacks the applicability of the proper scientific method and is therefore considered a form of pseudoscience. It applies the study of astrology to medicine. Iatromathematicians viewed the human body through astrological reasoning as well as mechanics. They associated various stars, or zodiac signs, with the functioning of the human body. Each of the twelve astrological signs was assigned to a part of the body from head to toe. Moreover, the planets and other celestial bodies were correlated with certain parts of the body. Through examining a natal chart, iatromathematicians attempted to predict biological setbacks in an individual. Iatromathematicians examined the active and energetic temperament of the human body. Moreover, they explored the causes of various health problems and attempted to find ways to treat certain detrimental diseases. In iatromathematics, there is a particular assumption that various energetic fields, caused by the star bodies, act on the person. The star body of an individual is often referred to by astrologers as an energetic matrix and is believed to be spawned by heavenly bodies such as the sun, moon, planets, and several other astrological signs. Iatromathematicians studied these conceptions and tried to regulate the path of the star body of individuals so that it would give a positive, rather than a negative, result. By doing so, they believed that it would contribute to a healthier lifestyle. Its doctrine is based on cosmobiology, in which several emotional and physiological dilemmas in the body are associated with the positioning of celestial bodies in outer space. Iatromathematics is closely correlated with biomechanics because the field of biomechanics investigates macrobiotic bodies to a macroscopic degree through the application of several engineering principles. The perspective of iatromathematicians differed from that of iatrophysicists and iatrochemists in terms of the way human bodies function. Iatrophysicists predicted deviations from the biological norm of the body through the application of physics, while iatrochemists measured the detrimental problems of the body by chemical means. Ibn Ezra Several individuals contributed to this field of study. For example, Ibn Ezra (Rabbi Avraham Ben Meir Ibn Ezra) wrote nine different astrological treatises. He covered all the subsections of astrology, which include the branches of natal, medicinal, horary, electional, and mundane astrology. Ibn Ezra's best-known work is The Beginning of Wisdom. Over time, various individuals have studied his works comprehensively. One such person was George Sarton, the founder of the History of Science Society.
Recently, Archibald Pitcairne has been described as the "forgotten father of mathematical medicine", and his contributions praised for laying the foundations of iatromathematics. See also Medical astrology References Bibliography History of medicine History of mathematics Biomechanics
Iatromathematicians
Physics
774
918,938
https://en.wikipedia.org/wiki/DOT%20pictograms
The DOT pictograms are a set of fifty pictograms used to convey information useful to travelers without using words. Such images are often used in airports, train stations, hotels, and other public places for foreign tourists, as well as being easier to identify than strings of text. Among these pictograms are graphics representing toilets and telephones. As a result of their near-universal acceptance, some describe them as the "Helvetica" of pictograms, and the character portrayed within them as Helvetica Man. As works of the United States government, the images are in the public domain and thus can be used by anyone for any purpose, without licensing issues. History In the 1970s, the United States Department of Transportation recognized the shortcomings of pictograms drawn on an ad hoc basis at transportation-related facilities across the United States and commissioned the American Institute of Graphic Arts to produce a comprehensive set of pictograms. In collaboration with Roger Cook and Don Shanosky of Cook and Shanosky Associates, the designers conducted an exhaustive survey of pictograms already in use around the world, which drew from sources as diverse as Tokyo International Airport and the 1972 Olympic Games in Munich. The designers rated these pictograms based on criteria such as their legibility, their international recognizability and their resistance to vandalism. After determining which features were the most successful and appropriate, the designers drew a set of pictograms to represent 34 meanings requested by the DOT. The results of this research, as well as guidelines on how best to implement the symbols, were presented in a report titled Symbol Signs – The development of Passenger/Pedestrian Oriented Symbols for Use in Transportation-Related Facilities in November 1974. In 1979, 16 symbols were added, bringing the total count to 50. Development of symbols Initial groundwork Symbols were collected from a variety of sources, including railways, Olympic events, airports and government agencies, to form a catalog of each type of symbol to be created for close examination. A key goal was to avoid starting from scratch when possible, and instead build on previous development of robust symbol designs in existing systems. Evaluation The first overall step was to identify the symbols that were to be developed for the project; these were referred to as 'message areas'. The Department of Transportation's Office of Facilitation and the AIGA committee devised the initial list of 34 messages. These messages were broken into four broad categories: 'Public Services', facility services and modes of transport (telephones, toilets, first aid); 'Concessions', commercial activities (car rental, coffee shop, shops); 'Processing Activities', passenger related processes (ticket purchase, customs); 'Regulations' (no smoking, no entry). Symbols that conveyed the messages sought by the committee from the 24 sources were broken into 'concept groups', a simple grouping of symbols that used similar general designs to convey the message. For example, 'Telephone' symbols were divided into 4 concept groups: 'Telephone handset', 'Telephone dial', 'Front view of dial telephone' and 'Handset and dial'. Scoring Symbols were assessed on three characteristics: semantic, syntactic, and pragmatic. Scores for these three categories were awarded by each committee member on a scale of 1 (weak) to 5 (strong).
In addition to the individual score of each symbol, 'concept groups' were given an overall score based on how well the concept met the three categories. Recommendations Finally, the committee made recommendations and observations based on their scores and discussions about the symbols they reviewed. For the 'Telephone' symbol, the handset icon was common but an odd shape that could be confused with other items, like wrenches; while symbols with dials were easy to understand but already obsolete with the increased use of the push-button telephone. The recommendations were summarized to suggest the final course of action to be taken in designing a symbol for the concept. For "Telephone", the decision was made to "Modify Group 1 concept; experiment with front view of modern telephone." Implementation Symbol Signs provides some general guidelines on how to implement the symbols in a facility. The guidelines present guidance to a design team, rather than a strict set of requirements for typeface, sizes, colors, illumination, etc., that must be adhered to. This decision is intended to strike a balance between creating a perfect system and allowing symbols to integrate appropriately into the environment they are being used in. A typeface is not recommended, to allow flexibility for the architectural and cultural needs of the facility. Emphasis is instead placed on examining the legibility and suitability of a particular typeface in the specific environment. In the examples provided in Symbol Signs, and when designing the symbols, the designers used Helvetica Medium in initial caps/start case. This was particularly true of the design of the directional arrow. Letter size should be decided on a situational basis, using testing; however, a general rule scales the minimum letter height in proportion to the viewing distance. The 1974 edition of Symbol Signs was strict in its presentation of symbols: symbols must appear in a 'symbol field', consisting of a square with rounded corners. The 'figure' must be black on a white symbol field, and never the reverse, white symbols on a black field. Symbols were determined to be legible at distances proportional to their size, with larger symbols readable from correspondingly greater distances. Attention should also be given to the mounting height of signs, as signs mounted so that they fall outside of 10 degrees of the natural line of vision require the viewer to actively look up in order to see and read the sign. Symbols Original Set (1974) The original set of symbols developed consisted of 34 symbols, primarily intended for transportation facilities. First Aid, No Smoking, No Parking and No Entry used "Ostwald number 6 1/2 pa" for the color red. 1979 Additions In 1979, the Department of Transportation requested 16 additional symbols, to fill in gaps observed in the original set. First Aid, No Smoking, No Parking, No Dogs, and No Entry used Pantone Red 032 C and Exit used Pantone Green 340 C. 2000s An unofficial change has been forced on the original symbols following increased efforts by the American Red Cross to discourage and eliminate usage of the 'red cross' symbol as a generic symbol of first aid or medical services. For example, in 1999 the Red Cross informed Ultimate Symbol that the red cross in the AIGA pictogram collection, as reproduced in their 1996 publication Official Signs & Icons (featuring various symbol collections), was a violation of the Geneva Convention and United States trademark laws, and asked for its removal from future editions.
In 2005, in the second edition of Official Signs & Icons, the red Greek cross was replaced with an identical Greek cross colored 'Safety Green' from ANSI Z535.1–2002. The adoption of a green Greek cross or a white Greek cross on a green background is a common replacement, due to the visual similarity and wide usage: a white cross on a green background is used in ISO 7010 to represent first aid. See also ISO 7001 – The International Organization for Standardization's equivalent standard Notes References External links Symbol signs, AIGA Airport, an animated film made from AIGA pictograms Friconix board DOT 50 original set of pictograms Graphic design Infographics Pictograms Symbols introduced in 1974
DOT pictograms
Mathematics
1,521
931,064
https://en.wikipedia.org/wiki/YORP%20effect
The Yarkovsky–O'Keefe–Radzievskii–Paddack effect, or YORP effect for short, changes the rotation state of a small astronomical body – that is, the body's spin rate and the obliquity of its pole(s) – due to the scattering of solar radiation off its surface and the emission of its own thermal radiation. The YORP effect is typically considered for asteroids on heliocentric orbits in the Solar System. The effect is responsible for the creation of binary and tumbling asteroids, as well as for changing an asteroid's pole towards 0°, 90°, or 180° relative to the ecliptic plane, and so modifying its heliocentric radial drift rate due to the Yarkovsky effect. Term The term was coined by David P. Rubincam in 2000 to honor four important contributors to the concepts behind the so-named YORP effect. In the 19th century, Ivan Yarkovsky realized that the thermal radiation escaping from a body warmed by the Sun carries off momentum as well as heat. Translated into modern physics, each emitted photon possesses a momentum p = E/c, where E is its energy and c is the speed of light. Vladimir Radzievskii applied the idea to rotation based on changes in albedo, and Stephen Paddack realized that shape was a much more effective means of altering a body's spin rate. Stephen Paddack and John O'Keefe suggested that the YORP effect leads to rotational bursting and that, by repeatedly undergoing this process, small asymmetric bodies are eventually reduced to dust. Physical mechanism In principle, electromagnetic radiation interacts with the surface of an asteroid in three significant ways: radiation from the Sun is (1) absorbed and (2) diffusively reflected by the surface of the body, and the body's internal energy is (3) emitted as thermal radiation. Since photons possess momentum, each of these interactions leads to changes in the angular momentum of the body relative to its center of mass. If considered for only a short period of time, these changes are very small, but over longer periods of time, they may integrate to significant changes in the angular momentum of the body. For bodies in a heliocentric orbit, the relevant long period of time is the orbital period (i.e. the year), since most asteroids have rotation periods (i.e. days) shorter than their orbital periods. Thus, for most asteroids, the YORP effect is the secular change in the rotation state of the asteroid after averaging the solar radiation torques over first the rotational period and then the orbital period. Observations In 2007 there was direct observational confirmation of the YORP effect on the small asteroids 54509 YORP (then designated ) and 1862 Apollo. The spin rate of 54509 YORP will double in just 600,000 years, and the YORP effect can also alter the axial tilt and precession rate, so the entire suite of YORP phenomena can send asteroids into interesting resonant spin states, and helps explain the existence of binary asteroids. Observations show that asteroids larger than 125 km in diameter have rotation rates that follow a Maxwellian frequency distribution, while smaller asteroids (in the 50 to 125 km size range) show a small excess of fast rotators. The smallest asteroids (size less than 50 km) show a clear excess of very fast and slow rotators, and this becomes even more pronounced as smaller-sized populations are measured. These results suggest that one or more size-dependent mechanisms are depopulating the centre of the spin rate distribution in favour of the extremes. The YORP effect is a prime candidate.
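To make the momentum bookkeeping above concrete, the following back-of-the-envelope sketch (an illustration under simplifying assumptions, not a formula taken from the cited literature) shows how a net torque and its size scaling arise. A surface element $dA$ that absorbs, reflects, or emits radiation of flux $\Phi$ exchanges momentum at a rate of order

$$ dF \sim \frac{\Phi}{c}\, dA, \qquad \boldsymbol{\tau} = \oint_{S} \mathbf{r} \times d\mathbf{F}, $$

which vanishes for a perfectly symmetric body but not for a "windmill"-shaped one. Taking the torque to scale as $\tau \propto \Phi R^{3}/c$ for a body of radius $R$, with solar flux $\Phi \propto 1/a^{2}$ at semi-major axis $a$, and the moment of inertia as $I \propto \rho R^{5}$, the secular spin change obeys

$$ \dot{\omega} = \frac{\tau}{I} \propto \frac{1}{\rho\, c\, a^{2} R^{2}}, $$

so the characteristic spin-doubling time grows as $R^{2} a^{2}$: smaller bodies and bodies on tighter orbits evolve faster, roughly consistent with the Gaspra scalings discussed below.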
The YORP effect is not capable of significantly modifying the spin rates of large asteroids by itself, so a different explanation must be sought for objects such as 253 Mathilde. In late 2013, asteroid P/2013 R3 was observed breaking apart, likely because of a high rotation speed from the YORP effect. Examples Assume a rotating spherical asteroid has two wedge-shaped fins attached to its equator, irradiated by parallel rays of sunlight. The reaction force from photons departing from any given surface element of the spherical core will be normal to the surface, such that no torque is produced (the force vectors all pass through the centre of mass). Thermally-emitted photons reradiated from the sides of the wedges, however, can produce a torque, as the normal vectors do not pass through the centre of mass. Both fins present the same cross section to the incoming light (they have the same height and width), and so each absorbs and reflects the same amount of energy and produces an equal force. Due to the fin surfaces being oblique, however, the normal forces from the reradiated photons do not cancel out. In the diagram, fin A's outgoing radiation produces an equatorial force parallel to the incoming light and no vertical force, but fin B's force has a smaller equatorial component and a vertical component. The unbalanced forces on the two fins lead to torque and the object spins. The torque from the outgoing light does not average out, even over a full rotation, so the spin accelerates over time. An object with some "windmill" asymmetry can therefore be subjected to minuscule torque forces that will tend to spin it up or down, as well as make its axis of rotation precess. The YORP effect is zero for a rotating ellipsoid if there are no irregularities in surface temperature or albedo. In the long term, the object's changing obliquity and rotation rate may wander randomly, chaotically or regularly, depending on several factors. For example, assuming the Sun remains on its equator, asteroid 951 Gaspra, with a radius of 6 km and a semi-major axis of 2.21 AU, would in 240 Ma (240 million years) go from a rotation period of 12 h to 6 h and vice versa. If 243 Ida were given the same radius and orbit values as Gaspra, it would spin up or down twice as fast, while a body with Phobos' shape would take several billion years to change its spin by the same amount. Size as well as shape affects the amount of the effect. Smaller objects will spin up or down much more quickly. If Gaspra were smaller by a factor of 10 (to a radius of 500 m), its spin would halve or double in just a few million years. Similarly, the YORP effect intensifies for objects closer to the Sun. At 1 AU, Gaspra would double/halve its spin rate in a mere 100,000 years. After one million years, its period may shrink to ~2 h, at which point it could start to break apart. According to a 2019 model, the YORP effect is likely to cause "widespread fragmentation of asteroids" as the Sun expands into a luminous red giant, and may explain the dust disks and apparent infalling matter observed at many white dwarfs. This is one mechanism through which binary asteroids may form, and it may be more common than collisions and planetary near-encounter tidal disruption as the primary means of binary formation. The asteroid was later named 54509 YORP to honor its part in the confirmation of this phenomenon. See also 25143 Itokawa—Smallest asteroid to be visited by a spacecraft Citations General and cited references
Further reading External links Asteroid rotation discovery reported Asteroids Orbital perturbations Radiation effects Rotation
YORP effect
Physics,Materials_science,Engineering
1,521
3,704,228
https://en.wikipedia.org/wiki/Pipe%20%28fluid%20conveyance%29
A pipe is a tubular section or hollow cylinder, usually but not necessarily of circular cross-section, used mainly to convey substances which can flow — liquids and gases (fluids), slurries, powders and masses of small solids. It can also be used for structural applications; a hollow pipe is far stiffer per unit weight than solid members. In common usage the words pipe and tube are usually interchangeable, but in industry and engineering, the terms are uniquely defined. Depending on the applicable standard to which it is manufactured, pipe is generally specified by a nominal diameter with a constant outside diameter (OD) and a schedule that defines the thickness. Tube is most often specified by the OD and wall thickness, but may be specified by any two of OD, inside diameter (ID), and wall thickness. Pipe is generally manufactured to one of several international and national industrial standards. While similar standards exist for specific industry application tubing, tube is often made to custom sizes and a broader range of diameters and tolerances. Many industrial and government standards exist for the production of pipe and tubing. The term "tube" is also commonly applied to non-cylindrical sections, i.e., square or rectangular tubing. In general, "pipe" is the more common term in most of the world, whereas "tube" is more widely used in the United States. Both "pipe" and "tube" imply a level of rigidity and permanence, whereas a hose (or hosepipe) is usually portable and flexible. Pipe assemblies are almost always constructed with the use of fittings such as elbows, tees, and so on, while tube may be formed or bent into custom configurations. For materials that are inflexible, cannot be formed, or where construction is governed by codes or standards, tube assemblies are also constructed with the use of tube fittings. Uses Plumbing Tap water Irrigation Pipelines transporting gas or liquid over long distances Compressed air systems Casing for concrete pilings used in construction projects High-temperature or high-pressure manufacturing processes The petroleum industry: Oil well casing Oil refinery equipment Delivery of fluids, either gaseous or liquid, in a process plant from one point to another point in the process Delivery of bulk solids, in a food or process plant from one point to another point in the process The construction of high pressure storage vessels (large pressure vessels are constructed from plate, not pipe, owing to their wall thickness and size). Additionally, pipes are used for many purposes that do not involve conveying fluid. Handrails, scaffolding, and support structures are often constructed from structural pipes, especially in an industrial environment. History The first known use of pipes was in Ancient Egypt. The Pyramid of Sahure, completed around the 25th century BC, included a temple with an elaborate drainage system including more than of copper piping. During the Napoleonic Wars, Birmingham gunmakers tried to use rolling mills to make iron musket barrels. One of them, Henry Osborne, developed a relatively effective process in 1817, with which he started to make iron gas tubes ca. 1820, selling some to gas lighting pioneer Samuel Clegg. When steel pipes were introduced in the 19th century, they initially were riveted, and later clamped with H-shaped bars (even though methods for making weldless steel tubes were already known in the 1870s), until by the early 1930s these methods were replaced by welding, which is still widely used today.
Manufacture There are three processes for metallic pipe manufacture. Centrifugal casting of hot alloyed metal is one of the most prominent processes. Ductile iron pipes are generally manufactured in such a fashion. Seamless pipe (SMLS) is formed by drawing a solid billet over a piercing rod to create the hollow shell, in a process called rotary piercing. As the manufacturing process does not include any welding, seamless pipes are perceived to be stronger and more reliable. Historically, seamless pipe was regarded as withstanding pressure better than other types, and was often more available than welded pipe. Advances since the 1970s in materials, process control, and non-destructive testing allow correctly specified welded pipe to replace seamless in many applications. Welded pipe is formed by rolling plate and welding the seam (usually by electric resistance welding ("ERW") or electric fusion welding ("EFW")). The weld flash can be removed from both inner and outer surfaces using a scarfing blade. The weld zone can also be heat-treated to make the seam less visible. Welded pipe often has tighter dimensional tolerances than the seamless type, and can be cheaper to manufacture. There are a number of processes that may be used to produce ERW pipes. Each of these processes leads to coalescence or merging of steel components into pipes. Electric current is passed through the surfaces that have to be welded together; as the components being welded together resist the electric current, heat is generated which forms the weld. Pools of molten metal are formed where the two surfaces are connected as a strong electric current is passed through the metal; these pools of molten metal form the weld that binds the two abutted components. ERW pipes are manufactured by the longitudinal welding of steel. The welding process for ERW pipes is continuous, as opposed to welding of distinct sections at intervals. The ERW process uses steel coil as feedstock. The High Frequency Induction Technology (HFI) welding process is used for manufacturing ERW pipes. In this process, the current to weld the pipe is applied by means of an induction coil around the tube. HFI is generally considered to be technically superior to "ordinary" ERW when manufacturing pipes for critical applications, such as for usage in the energy sector, in addition to other uses in line pipe applications, as well as for casing and tubing. Large-diameter pipe ( or greater) may be ERW, EFW, or Submerged Arc Welded ("SAW") pipe. There are two technologies that can be used to manufacture steel pipes of sizes larger than the steel pipes that can be produced by seamless and ERW processes. The two types of pipes produced through these technologies are longitudinal-submerged arc-welded (LSAW) and spiral-submerged arc-welded (SSAW) pipes. LSAW pipes are made by bending and welding wide steel plates and are most commonly used in oil and gas industry applications. Due to their high cost, LSAW pipes are seldom used in lower value non-energy applications such as water pipelines. SSAW pipes are produced by spiral (helicoidal) welding of steel coil and have a cost advantage over LSAW pipes, as the process uses coils rather than steel plates. As such, in applications where spiral weld is acceptable, SSAW pipes may be preferred over LSAW pipes. Both LSAW pipes and SSAW pipes compete against ERW pipes and seamless pipes in the diameter range of 16"–24". Tubing for flow, either metal or plastic, is generally extruded.
Materials Pipe is made out of many types of material, including ceramic, glass, fiberglass, many metals, concrete and plastic. In the past, wood and lead (Latin plumbum, from which comes the word 'plumbing') were commonly used. Typically, metallic piping is made of steel or iron, such as unfinished black (lacquer) steel, carbon steel, stainless steel, galvanized steel, brass, and ductile iron. Iron-based piping is subject to corrosion if used within a highly oxygenated water stream. Aluminum pipe or tubing may be utilized where iron is incompatible with the service fluid or where weight is a concern; aluminum is also used for heat transfer tubing, such as in refrigerant systems. Copper tubing is popular for domestic water (potable) plumbing systems; copper may be used where heat transfer is desirable (e.g. radiators or heat exchangers). Inconel, chrome moly, and titanium steel alloys are used in high temperature and pressure piping in process and power facilities. When specifying alloys for new processes, the known issues of creep and sensitization must be taken into account. Lead piping is still found in old domestic and other water distribution systems, but it is no longer permitted for new potable water piping installations due to its toxicity. Many building codes now require that lead piping in residential or institutional installations be replaced with non-toxic piping or that the tubes' interiors be treated with phosphoric acid. According to a senior researcher and lead expert with the Canadian Environmental Law Association, "[...] there is no safe level of lead [for human exposure]". In 1991 the US EPA issued the Lead and Copper Rule, a federal regulation which limits the concentration of lead and copper allowed in public drinking water, as well as the permissible amount of pipe corrosion occurring due to the water itself. In the US it is estimated that 6.5 million lead service lines (pipes that connect water mains to home plumbing) installed before the 1930s are still in use. Plastic tubing is widely used for its light weight, chemical resistance, non-corrosive properties, and ease of making connections. Plastic materials include polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC), fibre reinforced plastic (FRP), reinforced polymer mortar (RPMP), polypropylene (PP), polyethylene (PE), cross-linked high-density polyethylene (PEX), polybutylene (PB), and acrylonitrile butadiene styrene (ABS), for example. In many countries, PVC pipes account for most pipe materials used in buried municipal applications for drinking water distribution and wastewater mains. Pipe may be made from concrete or ceramic, usually for low-pressure applications such as gravity flow or drainage. Pipes for sewage are still predominantly made from concrete or vitrified clay. Reinforced concrete can be used for large-diameter concrete pipes. This pipe material can be used in many types of construction, and is often used in the gravity-flow transport of storm water. Usually such pipe will have a receiving bell or a stepped fitting, with various sealing methods applied at installation. Traceability and positive material identification (PMI) When the alloys for piping are forged, metallurgical tests are performed to determine material composition, by percentage of each chemical element in the piping, and the results are recorded in a material test report, also known as a Mill Test Report (MTR). These tests can be used to prove that the alloy conforms to various specifications (e.g. 316 SS).
The tests are stamped by the mill's QA/QC department and can be used to trace the material back to the mill by future users, such as piping and fitting manufacturers. Maintaining the traceability between the alloy material and the associated MTR is an important quality assurance issue. QA often requires the heat number to be written on the pipe. Precautions must also be taken to prevent the introduction of counterfeit materials. As a backup to etching/labeling of the material identification on the pipe, positive material identification (PMI) is performed using a handheld device; the device scans the pipe material using an emitted electromagnetic wave (x-ray fluorescence/XRF) and receives a reply that is spectrographically analyzed. Sizes Pipe sizes can be confusing because the terminology may relate to historical dimensions. For example, a half-inch iron pipe does not have any dimension that is a half inch. Initially, a half inch pipe did have an inner diameter of —but it also had thick walls. As technology improved, thinner walls became possible, but the outside diameter stayed the same so it could mate with existing older pipe, increasing the inner diameter beyond half an inch. The history of copper pipe is similar. In the 1930s, the pipe was designated by its internal diameter and a wall thickness. Consequently, a copper pipe's outside diameter exceeded its nominal size by twice the wall thickness. The outside diameter was the important dimension for mating with fittings. The wall thickness on modern copper is usually thinner than , so the internal diameter is only "nominal" rather than a controlling dimension. Newer pipe technologies sometimes adopted a sizing system as their own. PVC pipe uses the Nominal Pipe Size. Pipe sizes are specified by a number of national and international standards, including API 5L, ANSI/ASME B36.10M and B36.19M in the US, and BS 1600 and BS EN 10255 in the United Kingdom and Europe. There are two common methods for designating pipe outside diameter (OD). The North American method is called NPS ("Nominal Pipe Size") and is based on inches (it is also frequently referred to as NB ("Nominal Bore")). The European version is called DN ("Diametre Nominal" / "Nominal Diameter") and is based on millimetres. Designating the outside diameter allows pipes of the same size to be fitted together no matter what the wall thickness. For pipe sizes less than NPS 14 inch (DN 350), both methods give a nominal value for the OD that is rounded off and is not the same as the actual OD. For example, NPS 2 inch and DN 50 are the same pipe, but the actual OD is . The only way to obtain the actual OD is to look it up in a reference table. For pipe sizes of NPS 14 inch (DN 350) and greater, the NPS size is the actual diameter in inches and the DN size is equal to the NPS times 25 (not 25.4), rounded to a convenient multiple of 50. For example, NPS 14 has an OD of , and is equivalent to DN 350. Since the outside diameter is fixed for a given pipe size, the inside diameter will vary depending on the wall thickness of the pipe. For example, 2" Schedule 80 pipe has thicker walls and therefore a smaller inside diameter than 2" Schedule 40 pipe. Steel pipe has been produced for about 150 years. The pipe sizes in use today for PVC and galvanized pipe were originally designed years ago for steel pipe. The schedule number system, like Sch 40, 80, and 160, was set long ago and seems a little odd. For example, Sch 20 pipe is even thinner than Sch 40, but has the same OD.
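The size relationships just described (tabulated ODs below NPS 14, and OD equal to the NPS in inches with DN = NPS × 25 at NPS 14 and above) can be captured in a small lookup sketch. The snippet below is illustrative only: the few tabulated entries are commonly quoted ASME B36.10M figures, but any real application should take its values from the standard itself.

```python
# Illustrative NPS -> (actual OD, DN) lookup, assuming ASME B36.10M conventions.
# Values below NPS 14 must come from a table; the sample entries here are
# commonly quoted figures and should be verified against the standard.
OD_TABLE_IN = {            # NPS -> actual outside diameter in inches
    0.5: 0.840,
    1.0: 1.315,
    2.0: 2.375,            # NPS 2 / DN 50: the OD is not 2 inches
    4.0: 4.500,
    12.0: 12.750,
}
DN_TABLE = {0.5: 15, 1.0: 25, 2.0: 50, 4.0: 100, 12.0: 300}   # NPS -> DN

def nps_to_od_and_dn(nps: float) -> tuple[float, int]:
    """Return (outside diameter in inches, DN designation) for a given NPS."""
    if nps >= 14:
        # At NPS 14 and above, the NPS is the actual OD in inches, and DN is
        # NPS * 25 (not 25.4), which the standard keeps on multiples of 50,
        # e.g. NPS 14 -> DN 350, NPS 16 -> DN 400.
        return float(nps), int(nps * 25)
    try:
        return OD_TABLE_IN[nps], DN_TABLE[nps]
    except KeyError:
        raise ValueError(f"NPS {nps}: look up the OD in the B36.10M tables")

print(nps_to_od_and_dn(2))    # (2.375, 50)
print(nps_to_od_and_dn(14))   # (14.0, 350)
```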
While these sizes are based on old steel pipe dimensions, other pipe, such as CPVC for heated water, uses sizes, inside and out, based on old copper pipe size standards instead of steel. Many different standards exist for pipe sizes, and their prevalence varies depending on industry and geographical area. The pipe size designation generally includes two numbers: one that indicates the outside (OD) or nominal diameter, and the other that indicates the wall thickness. In the early twentieth century, American pipe was sized by inside diameter. This practice was abandoned to improve compatibility with pipe fittings, which must usually fit the OD of the pipe, but it has had a lasting impact on modern standards around the world. In North America and the UK, pressure piping is usually specified by Nominal Pipe Size (NPS) and schedule (SCH). Pipe sizes are documented by a number of standards, including API 5L, ANSI/ASME B36.10M (Table 1) in the US, and BS 1600 and BS 1387 in the United Kingdom. Typically the pipe wall thickness is the controlled variable, and the Inside Diameter (I.D.) is allowed to vary. The pipe wall thickness has a permitted variance of approximately 12.5 percent. In the rest of Europe pressure piping uses the same pipe IDs and wall thicknesses as Nominal Pipe Size, but labels them with a metric Diameter Nominal (DN) instead of the imperial NPS. For NPS of 14 and larger, the DN is equal to the NPS multiplied by 25 (not 25.4). This is documented by EN 10255 (formerly DIN 2448 and BS 1387) and ISO 65:1981, and it is often called DIN or ISO pipe. Japan has its own set of standard pipe sizes, often called JIS pipe. The Iron pipe size (IPS) is an older system still used by some manufacturers and in legacy drawings and equipment. The IPS number is the same as the NPS number, but the schedules were limited to Standard Wall (STD), Extra Strong (XS), and Double Extra Strong (XXS). STD is identical to SCH 40 for NPS 1/8 to NPS 10, inclusive, and indicates .375" wall thickness for NPS 12 and larger. XS is identical to SCH 80 for NPS 1/8 to NPS 8, inclusive, and indicates .500" wall thickness for NPS 8 and larger. Different definitions exist for XXS; however, it is never the same as SCH 160. XXS is in fact thicker than SCH 160 for NPS 1/8" to 6" inclusive, whereas SCH 160 is thicker than XXS for NPS 8" and larger. Another old system is the Ductile Iron Pipe Size (DIPS), which generally has larger ODs than IPS. Copper plumbing tube for residential plumbing follows an entirely different size system in America, often called Copper Tube Size (CTS); see domestic water system. Its nominal size is neither the inside nor the outside diameter. Plastic tubing, such as PVC and CPVC, for plumbing applications also has different sizing standards. Agricultural applications use PIP sizes, which stands for Plastic Irrigation Pipe. PIP comes in pressure ratings of , , , , and , and is generally available in diameters of . Standards The manufacture and installation of pressure piping is tightly regulated by the ASME "B31" code series, such as B31.1 or B31.3, which have their basis in the ASME Boiler and Pressure Vessel Code (BPVC). This code has the force of law in Canada and the US. Europe and the rest of the world have an equivalent system of codes. Pressure piping is generally pipe that must carry pressures greater than 10 to 25 atmospheres, although definitions vary. To ensure safe operation of the system, the manufacture, storage, welding, testing, etc. of pressure piping must meet stringent quality standards.
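As a worked illustration of the wall-thickness variance noted above: treating the 12.5 percent figure as an under-tolerance, the guaranteed minimum wall is 87.5 percent of nominal. The nominal wall used in the sketch below is an assumed example (a commonly quoted figure for NPS 2 Schedule 40), not a value taken from this article, and should be verified against B36.10M.

```python
# Minimum wall thickness under an assumed 12.5% mill under-tolerance.
# The nominal wall of 0.154 in is an illustrative assumption for
# NPS 2 Schedule 40 pipe - verify against the B36.10M tables.
MILL_TOLERANCE = 0.125          # 12.5% permitted thinning

def min_wall(nominal_wall_in: float) -> float:
    """Thinnest wall a mill may deliver for a given nominal wall."""
    return nominal_wall_in * (1.0 - MILL_TOLERANCE)

print(f"{min_wall(0.154):.4f} in")   # 0.1348 in for a 0.154 in nominal wall
```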
Manufacturing standards for pipes commonly require a test of chemical composition and a series of mechanical strength tests for each heat of pipe. A heat of pipe is all forged from the same cast ingot, and therefore has the same chemical composition. Mechanical tests may be associated with a lot of pipe, which would be all from the same heat and have been through the same heat treatment processes. The manufacturer performs these tests and reports the composition in a mill traceability report and the mechanical tests in a material test report, both of which are referred to by the acronym MTR. Material with these associated test reports is called traceable. For critical applications, third party verification of these tests may be required; in this case an independent lab will produce a certified material test report (CMTR), and the material will be called certified. Some widely used pipe standards or piping classes are: The API range – now ISO 3183. E.g.: API 5L Grade B – now ISO L245, where the number indicates yield strength in MPa ASME SA106 Grade B (Seamless carbon steel pipe for high temperature service) ASTM A312 (Seamless and welded austenitic stainless steel pipe) ASTM C76 (Concrete Pipe) ASTM D3033/3034 (PVC Pipe) ASTM D2239 (Polyethylene Pipe) ISO 14692 (Petroleum and natural gas industries. Glass-reinforced plastics (GRP) piping. Qualification and manufacture) ASTM A36 (Carbon steel pipe for structural or low pressure use) ASTM A795 (Steel pipe specifically for fire sprinkler systems) API 5L was changed in the second half of 2008 to edition 44 from edition 43 to make it identical to ISO 3183. The change created the requirement that ERW pipe pass a hydrogen-induced cracking (HIC) test per NACE TM0284 in order to be used for sour service. ACPA [American Concrete Pipe Association] AWWA [American Water Works Association] AWWA M45 Installation Pipe installation is often more expensive than the material, and a variety of specialized tools, techniques, and parts have been developed to assist this. Pipe is usually delivered to a customer or jobsite as either "sticks" or lengths of pipe (typically , called single random length), or it is prefabricated with elbows, tees and valves into a pipe spool (a pipe spool is a piece of pre-assembled pipe and fittings, usually prepared in a shop so that installation on the construction site can be more efficient). Typically, pipe smaller than is not prefabricated. The pipe spools are usually tagged with a bar code and the ends are capped (plastic) for protection. The pipe and pipe spools are delivered to a warehouse on a large commercial/industrial job, and they may be held indoors or in a gridded laydown yard. The pipe or pipe spool is retrieved, staged, rigged, and then lifted into place. On large process jobs the lift is made using cranes, hoists, and other material lifts. They are typically temporarily supported in the steel structure using beam clamps, straps, and small hoists until the pipe supports are attached or otherwise secured. An example of a tool used to install small plumbing pipe (threaded ends) is the pipe wrench. Small pipe is typically not heavy and can be lifted into place by the installation craft laborer. However, during a plant outage or shutdown, small (small bore) pipe may also be pre-fabricated to expedite installation during the outage. After the pipe is installed it will be tested for leaks.
Before testing, it may need to be cleaned by blowing air or steam through it, or by flushing with a liquid. Pipe supports Pipes are usually either supported from below or hung from above (but may also be supported from the side), using devices called pipe supports. Supports may be as simple as a pipe "shoe", which is akin to half of an I-beam welded to the bottom of the pipe; pipes may be "hung" using a clevis, or with trapeze-type devices called pipe hangers. Pipe supports of any kind may incorporate springs, snubbers, dampers, or combinations of these devices to compensate for thermal expansion, or to provide vibration isolation, shock control, or reduced vibration excitation of the pipe due to earthquake motion. Some dampers are simply fluid dashpots, but other dampers may be active hydraulic devices that have sophisticated systems that act to dampen peak displacements due to externally imposed vibrations or mechanical shocks. The undesired motions may be process derived (such as in a fluidized bed reactor) or from a natural phenomenon such as an earthquake (design basis event or DBE). Pipe hanger assemblies are usually attached with pipe clamps. Possible exposure to high temperatures and heavy loads should be considered when specifying which clamps are needed. Joining Pipes are commonly joined by welding; by using threaded pipe and fittings, sealing the connection with pipe thread compound, polytetrafluoroethylene (PTFE) thread seal tape, oakum, or PTFE string; or by using a mechanical coupling. Process piping is usually joined by welding using a TIG or MIG process. The most common process pipe joint is the butt weld. The ends of pipe to be welded must have a certain weld preparation called an End Weld Prep (EWP), which is typically at an angle of 37.5 degrees to accommodate the filler weld metal. The most common pipe thread in North America is the National Pipe Thread (NPT) or the Dryseal (NPTF) version. Other pipe threads include the British Standard Pipe Thread (BSPT), the garden hose thread (GHT), and the fire hose coupling (NST). Copper pipes are typically joined by soldering, brazing, compression fittings, flaring, or crimping. Plastic pipes may be joined by solvent welding, heat fusion, or elastomeric sealing. If frequent disconnection will be required, gasketed pipe flanges or union fittings provide better reliability than threads. Some thin-walled pipes of ductile material, such as the smaller copper or flexible plastic water pipes found in homes for ice makers and humidifiers, for example, may be joined with compression fittings. Another common method uses a "push-on" gasket style of joint that compresses a gasket into a space formed between the two adjoining pieces. Push-on joints are available on most types of pipe. A pipe joint lubricant must be used in the assembly of the pipe. Under buried conditions, gasket-joint pipes allow for lateral movement due to soil shifting as well as expansion/contraction due to temperature differentials. Plastic MDPE and HDPE gas and water pipes are also often joined with electrofusion fittings. Large above-ground pipe typically uses a flanged joint, which is generally available in ductile iron pipe and some others. It is a gasket style in which the flanges of the adjoining pipes are bolted together, compressing the gasket into a space between them. Mechanical grooved couplings or Victaulic joints are also used where frequent disassembly and assembly are required.
Developed in the 1920s, these mechanical grooved couplings can operate up to working pressures and are available in materials to match the pipe grade. Another type of mechanical coupling is a flareless tube fitting (major brands include Swagelok, Ham-Let, and Parker); this type of compression fitting is typically used on small tubing under in diameter. When pipes join in chambers where other components are needed for the management of the network (such as valves or gauges), dismantling joints are generally used, in order to make mounting/dismounting easier. Fittings and valves Fittings are also used to split or join a number of pipes together, and for other purposes. A broad variety of standardized pipe fittings are available; they are generally broken down into either a tee, an elbow, a branch, a reducer/enlarger, or a wye. Valves control fluid flow and regulate pressure. The piping and plumbing fittings and valves articles discuss them further. Cleaning The inside of pipes can be cleaned with a tube cleaning process if they are contaminated with debris or fouling. This depends on the process that the pipe will be used for and the cleanliness needed for the process. In some cases the pipes are cleaned using a displacement device formally known as a Pipeline Inspection Gauge, or "pig"; alternatively, the pipes or tubes may be chemically flushed using specialized solutions that are pumped through. In some cases, where care has been taken in the manufacture, storage, and installation of pipe and tubing, the lines are blown clean with compressed air or nitrogen. Other uses Pipe is widely used in the fabrication of handrails, guardrails, and railings. Applications Steel pipe Steel pipe (or black iron pipe) was once the most popular choice for supply of water and flammable gases. Steel pipe is still used in many homes and businesses to convey natural gas or propane fuel, and is a popular choice in fire sprinkler systems due to its high heat resistance. In commercial buildings, steel pipe is used to convey heating or cooling water to heat exchangers, air handlers, variable air volume (VAV) devices, or other HVAC equipment. Steel pipe is sometimes joined using threaded connections, where tapered threads (see National Pipe Thread) are cut into the end of the tubing segment, sealant is applied in the form of thread sealing compound or thread seal tape (also known as PTFE or Teflon tape), and it is then threaded into a corresponding threaded fitting using two pipe wrenches. Beyond domestic or light commercial settings, steel pipe is often joined by welding, or by use of mechanical couplings made by companies such as Victaulic or Anvil International (formerly Grinnell) that hold the pipe joint together via a groove pressed, or cut (a rarely used older practice), into the ends of the pipes. Other variations of steel pipe include various stainless steel and chrome alloys. In high-pressure situations these are usually joined by TIG welding. In Canada, with respect to natural gas (NG) and propane (LP gas), black iron pipe (BIP) is commonly used to connect an appliance to the supply. It must, however, be marked (either painted yellow or with yellow banding attached at certain intervals), and certain restrictions apply to which nominal pipe size (NPS) can be put through walls and buildings. With propane in particular, BIP can be run from an exterior tank (or cylinder) provided it is well protected from the weather, and anode-type protection from corrosion is in place when the pipe is to be installed underground.
Copper pipe Copper tubing is most often used for supply of hot and cold water, and as refrigerant line in HVAC systems. There are two basic types of copper tubing, soft copper and rigid copper. Copper tubing is joined using flare connections, compression connections, or solder. Copper offers a high level of resistance to corrosion, but is becoming very costly. Soft copper Soft (or ductile) copper tubing can be bent easily to travel around obstacles in the path of the tubing. While the work hardening of the drawing process used to size the tubing makes the copper hard/rigid, it is carefully annealed to make it soft again; it is therefore more expensive to produce than non-annealed, rigid copper tubing. It can be joined by any of the three methods used for rigid copper, and it is the only type of copper tubing suitable for flare connections. Soft copper is the most popular choice for refrigerant lines in split-system air conditioners and heat pumps. Flare connections Flare connections require that the end of a tubing section be spread outward in a bell shape using a flare tool. A flare nut then compresses this bell-shaped end onto a male fitting. Flare connections are a labor-intensive method of making connections, but are quite reliable over the course of many years. Rigid copper Rigid copper is a popular choice for water lines. It is joined using a sweat, compression, or crimped/pressed connection. Rigid copper, rigid due to the work hardening of the drawing process, cannot be bent and must use elbow fittings to go around corners or around obstacles. If heated and allowed to cool slowly, a process called annealing, rigid copper will become soft and can be bent/formed without cracking. Soldered connections Solder fittings are smooth and easily slip onto the end of a tubing section. Both the male and female ends of the pipe or pipe connectors are cleaned thoroughly, then coated with flux to make sure there is no surface oxide and to ensure that the solder will bond properly with the base metal. The joint is then heated using a torch, and solder is melted into the connection. When the solder cools, it forms a very strong bond which can last for decades. Solder-connected rigid copper is the most popular choice for water supply lines in modern buildings. In situations where many connections must be made at once (such as plumbing of a new building), solder offers much quicker and much less expensive joinery than compression or flare fittings. The term sweating is sometimes used to describe the process of soldering pipes. Compression connections Compression fittings use a soft metal or thermoplastic ring (the compression ring or "ferrule") which is squeezed onto the pipe and into the fitting by a compression nut. The soft metal conforms to the surface of the tubing and the fitting, and creates a seal. Compression connections do not typically have the long life that sweat connections offer, but are advantageous in many cases because they are easy to make using basic tools. A disadvantage of compression connections is that they take longer to make than sweat connections, and they sometimes require retightening over time to stop leaks. Crimped or pressed connections Crimped or pressed connections use special copper fittings which are permanently attached to rigid copper tubing with a powered crimper. The special fittings, manufactured with sealant already inside, slide over the tubing to be connected.
Thousands of pounds-force per square inch of pressure are used to deform the fitting and compress the sealant against the inner copper tubing, creating a watertight seal. Advantages of this method are: A correctly crimped connection should last as long as the tubing. It takes less time to complete than other methods. It is cleaner in both appearance and the materials used to make the connection. No open flame is used during the connection process. Disadvantages are: The fittings used are harder to find and cost significantly more than sweat-type fittings. The fittings are not re-usable. If a design change is required or if a joint is found to be defective or improperly crimped, the already installed fittings must be cut out and discarded. In addition, the cutting required to remove the fitting often will leave insufficient tubing to install the new fitting, so couplers and additional tubing will need to be installed on either side of the replacement fitting. By contrast, with a soldered fitting, a defective joint can simply be re-soldered, or heated and turned if a minor change is required, or heated and removed without requiring any of the tubing to be cut away. This also allows more expensive fittings like valves to be re-used if they are otherwise in good to new condition, something not possible if the fitting is crimped on. The tooling is very expensive. A basic toolkit required to sweat-solder all the copper pipes of a typical single-family residence, including fuel and solder, can be purchased for approximately $200. By contrast, the minimum cost of a basic powered crimping tool starts at around $1800, and can be as high as $4000 for the better brands with a complete set of crimping dies. Aluminium pipe Aluminium is sometimes used due to its low cost, resistance to corrosion and solvents, and its ductility. Aluminium tube is more desirable than steel for the conveyance of flammable solvents, since it cannot create sparks when manipulated. Aluminium tubing can be connected by flare or compression fittings, or it can be welded by the TIG or heliarc processes. Glass pipe Tempered glass pipes are used for specialized applications, such as corrosive liquids, medical or laboratory wastes, or pharmaceutical manufacturing. Connections are generally made using specialized gasket or O-ring fittings. Plastic pipe Plastic pipes are widely used in manufacturing. Plastic pipe fittings include PVC pipe fittings, PP/PPH pipe fitting moulds, PE pipe, and ABS pipe fittings. See also Cast iron pipe Copper tubing Double-walled pipe Ductile iron pipe Galvanized pipe Garden hose HDPE pipe Hollow structural section Hose Hydraulic pipes List of equations in fluid mechanics MS Pipe, MS Tube National Pipe Thread (NPT) Nominal Pipe Size (NPS) Panzergewinde Pipe and tube bender Pipeline transport Pipe support Piping Piping and plumbing fittings Plastic pressure pipe systems Plastic pipework Plumbing Reinforced thermoplastic pipes Sprayed in place pipe Trap (plumbing) Tube Tube beading Victaulic Water pipe References Bibliography External links Irrigation Piping Plumbing
Pipe (fluid conveyance)
Chemistry,Engineering
7,397
11,627
https://en.wikipedia.org/wiki/Faith%20healing
Faith healing is the practice of prayer and gestures (such as laying on of hands) that are believed by some to elicit divine intervention in spiritual and physical healing, especially the Christian practice. Believers assert that the healing of disease and disability can be brought about by religious faith through prayer or other rituals that, according to adherents, can stimulate a divine presence and power. Religious belief in divine intervention does not depend on empirical evidence that faith healing achieves an evidence-based outcome. Virtually all scientists and philosophers dismiss faith healing as pseudoscience. Claims that "a myriad of techniques" such as prayer, divine intervention, or the ministrations of an individual healer can cure illness have been popular throughout history. There have been claims that faith can cure blindness, deafness, cancer, HIV/AIDS, developmental disorders, anemia, arthritis, corns, defective speech, multiple sclerosis, skin rashes, total body paralysis, and various injuries. Recoveries have been attributed to many techniques commonly classified as faith healing. It can involve prayer, a visit to a religious shrine, or simply a strong belief in a supreme being. Many people interpret the Bible, especially the New Testament, as teaching belief in, and the practice of, faith healing. According to a 2004 Newsweek poll, 72 percent of Americans said they believe that praying to God can cure someone, even if science says the person has an incurable disease. Unlike faith healing, advocates of spiritual healing make no attempt to seek divine intervention, instead believing in divine energy. The increased interest in alternative medicine at the end of the 20th century has given rise to a parallel interest among sociologists in the relationship of religion to health. Faith healing can be classified as a spiritual, supernatural, or paranormal topic, and, in some cases, belief in faith healing can be classified as magical thinking. The American Cancer Society states "available scientific evidence does not support claims that faith healing can actually cure physical ailments". "Death, disability, and other unwanted outcomes have occurred when faith healing was elected instead of medical care for serious injuries or illnesses." When parents have practiced faith healing but not medical care, many children who otherwise would have been expected to live have died. Similar results are found in adults. In various belief systems Christianity Overview Regarded as a Christian belief that God heals people through the power of the Holy Spirit, faith healing often involves the laying on of hands. It is also called supernatural healing, divine healing, and miracle healing, among other things. Healing in the Bible is often associated with the ministry of specific individuals, including Elijah, Jesus and Paul. Christian physician Reginald B. Cherry views faith healing as a pathway of healing in which God uses both the natural and the supernatural to heal. Being healed has been described as a privilege of accepting Christ's redemption on the cross. Pentecostal writer Wilfred Graves Jr. views the healing of the body as a physical expression of salvation. The Gospel of Matthew, after describing Jesus exorcising at sunset and healing all of the sick who were brought to him, presents these miracles as a fulfillment of prophecy: "He took up our infirmities and carried our diseases".
Even those Christian writers who believe in faith healing do not all believe that one's faith presently brings about the desired healing. "[Y]our faith does not effect your healing now. When you are healed rests entirely on what the sovereign purposes of the Healer are." Larry Keefauver cautions against allowing enthusiasm for faith healing to stir up false hopes. "Just believing hard enough, long enough or strong enough will not strengthen you or prompt your healing. Doing mental gymnastics to 'hold on to your miracle' will not cause your healing to manifest now." Those who actively lay hands on others and pray with them to be healed are usually aware that healing may not always follow immediately. Proponents of faith healing say it may come later, and it may not come in this life. "The truth is that your healing may manifest in eternity, not in time". New Testament Parts of the four canonical gospels in the New Testament say that Jesus cured physical ailments well outside the capacity of first-century medicine. Jesus' healing acts are considered miraculous and spectacular due to the results being impossible or statistically improbable. One example is the case of "a woman who had had a discharge of blood for twelve years, and who had suffered much under many physicians, and had spent all that she had, and was not better but rather grew worse". After healing her, Jesus tells her "Daughter, your faith has made you well. Go in peace! Be cured from your illness". At least two other times Jesus credited the sufferer's faith as the means of being healed. Jesus endorsed the use of the medical assistance of the time (medicines of oil and wine) when he told the parable of the Good Samaritan (Luke 10:25–37), who "bound up [an injured man's] wounds, pouring on oil and wine" (verse 34) as a physician would. Jesus then told the doubting teacher of the law (who had elicited this parable by his self-justifying question, "And who is my neighbor?" in verse 29) to "go, and do likewise" in loving others with whom he would never ordinarily associate (verse 37). The healing in the gospels is referred to as a "sign" to prove Jesus' divinity and to foster belief in him as the Christ. However, when asked for other types of miracles, Jesus refused some but granted others in consideration of the motive of the request. Some theologians' understanding is that Jesus healed all who were present every single time. Sometimes he determined whether they had faith that he would heal them. Four of the seven miraculous signs performed in the Fourth Gospel that indicated he was sent from God were acts of healing or resurrection. He heals the Capernaum official's son, heals a paralytic by the pool in Bethsaida, heals a man born blind, and raises Lazarus of Bethany. Jesus told his followers to heal the sick and stated that signs such as healing are evidence of faith. Jesus also told his followers to "cure sick people, raise up dead persons, make lepers clean, expel demons. You received free, give free". Jesus sternly ordered many who received healing from him: "Do not tell anyone!" Jesus did not approve of anyone asking for a sign just for the spectacle of it, describing such requests as coming from a "wicked and adulterous generation". The apostle Paul believed healing is one of the special gifts of the Holy Spirit, and that the possibility exists that certain persons may possess this gift to an extraordinarily high degree.
In the New Testament Epistle of James, the faithful are told that to be healed, those who are sick should call upon the elders of the church to pray over [them] and anoint [them] with oil in the name of the Lord. The New Testament says that during Jesus' ministry and after his Resurrection, the apostles healed the sick and cast out demons, made lame men walk, raised the dead and performed other miracles. Apostles were holy men who had direct access to God and could channel his power to help and heal people. For example, Saint Peter healed a disabled man. Jesus used miracles to convince people that he was inaugurating the Messianic Age, as in Mt 12.28. Scholars have described Jesus' miracles as establishing the kingdom during his lifetime. Early Christian church Accounts of, and references to, healing appear in the writings of many Ante-Nicene Fathers, although many of these mentions are very general and do not include specifics. Catholicism The Roman Catholic Church recognizes two "not mutually exclusive" kinds of healing, one justified by science and one justified by faith: healing by human "natural means through the practice of medicine", which emphasizes that the theological virtue of "charity demands that we not neglect natural means of healing people who are ill" and the cardinal virtue of prudence forewarns not "to employ a technique that has no scientific support (or even plausibility)"; and healing by divine grace, "interceded on behalf of the sick through the invocation of the name of the Lord Jesus, asking for healing through the power of the Holy Spirit, whether in the form of the sacramental laying on of hands and anointing with oil or of simple prayers for healing, which often include an appeal to the saints for their aid". In 2000, the Congregation for the Doctrine of the Faith issued "Instruction on prayers for healing", with specific norms about prayer meetings for obtaining healing, which presents the Catholic Church's doctrines of sickness and healing. It accepts "that there may be means of natural healing that have not yet been understood or recognized by science", but it rejects superstitious practices which are neither compatible with Christian teaching nor compatible with scientific evidence. Faith healing is reported by Catholics as the result of intercessory prayer to a saint or to a person with the gift of healing. According to U.S. Catholic magazine, "Even in this skeptical, postmodern, scientific age, miracles really are possible." According to a Newsweek poll, three-fourths of American Catholics say they pray for "miracles" of some sort. According to John Cavadini, when healing is granted, "The miracle is not primarily for the person healed, but for all people, as a sign of God's work in the ultimate healing called 'salvation', or a sign of the kingdom that is coming." On this view, the healed should not regard the miracle as a sign that they are particularly worthy or holy while others are undeserving. The Catholic Church has a special Congregation dedicated to the careful investigation of the validity of alleged miracles attributed to prospective saints. Pope Francis tightened the rules on money and miracles in the canonization process. Since Catholic Christians believe the lives of canonized saints in the Church will reflect Christ's, many have come to expect healing miracles. While the popular conception of a miracle can be wide-ranging, the Catholic Church has a specific definition for the kind of miracle formally recognized in a canonization process.
According to the Catholic Encyclopedia, it is often said that cures at shrines and during Christian pilgrimages are mainly due to psychotherapy: partly to confident trust in Divine providence, and partly to the strong expectancy of cure that comes over suggestible persons at these times and places. Among the best-known accounts by Catholics of faith healings are those attributed to the miraculous intercession of the apparition of the Blessed Virgin Mary known as Our Lady of Lourdes at the Sanctuary of Our Lady of Lourdes in France and the remissions of life-threatening disease claimed by those who have applied for aid to Saint Jude, who is known as the "patron saint of lost causes". Catholic medics have asserted that there have been 67 miracles and 7,000 unexplainable medical cures at Lourdes since 1858. A 1908 book states that these cures were subjected to intense medical scrutiny and were only recognized as authentic spiritual cures after a commission of doctors and scientists, called the Lourdes Medical Bureau, had ruled out any physical mechanism for the patient's recovery. Evangelicalism In some Pentecostal and Charismatic Evangelical churches, a special place is thus reserved for faith healings with laying on of hands during worship services or evangelization campaigns. Faith healing or divine healing is considered to be an inheritance from Jesus, acquired by his death and resurrection. Belief in biblical inerrancy ensures, for adherents, that the miracles and healings described in the Bible are still relevant and may be present in the life of the believer. At the beginning of the 20th century, the new Pentecostal movement drew participants from the Holiness movement and other movements in America that already believed in divine healing. By the 1930s, several faith healers drew large crowds and established worldwide followings. The first Pentecostals in the modern sense appeared in Topeka, Kansas, in a Bible school conducted by Charles Fox Parham, a holiness teacher and former Methodist pastor. Pentecostalism achieved worldwide attention in 1906 through the Azusa Street Revival in Los Angeles led by William Joseph Seymour. Smith Wigglesworth was also a well-known figure in the early part of the 20th century. A former English plumber turned evangelist who lived simply and read nothing but the Bible from the time his wife taught him to read, Wigglesworth traveled around the world preaching about Jesus and performing faith healings. Wigglesworth claimed to raise several people from the dead in Jesus' name in his meetings. In the 1920s and 1930s, Aimee Semple McPherson was a controversial faith healer whose popularity grew during the Great Depression. Subsequently, William M. Branham has been credited as the initiator of the post-World War II healing revivals. The healing revival he began led many to emulate his style and spawned a generation of faith healers. Because of this, Branham has been recognized as the "father of modern faith healers". According to writer and researcher Patsy Sims, "the power of a Branham service and his stage presence remains a legend unparalleled in the history of the Charismatic movement". By the late 1940s, Oral Roberts, who was associated with and promoted by Branham's Voice of Healing magazine, also became well known, and he continued with faith healing until the 1980s. Roberts discounted the "faith healer" label in the late 1950s, stating, "I never was a faith healer and I was never raised that way.
My parents believed very strongly in medical science and we have a doctor who takes care of our children when they get sick. I cannot heal anyone – God does that." A friend of Roberts was Kathryn Kuhlman, another popular faith healer, who gained fame in the 1950s and had a television program on CBS. Also in this era, Jack Coe and A. A. Allen were faith healers who traveled with large tents for large open-air crusades. Oral Roberts's successful use of television as a medium to gain a wider audience led others to follow suit. His former pilot, Kenneth Copeland, started a healing ministry. Pat Robertson, Benny Hinn, and Peter Popoff became well-known televangelists who claimed to heal the sick. Richard Rossi is known for advertising his healing clinics through secular television and radio. Kuhlman influenced Benny Hinn, who adopted some of her techniques and wrote a book about her. Christian Science Christian Science claims that healing is possible through prayer based on an understanding of God and the underlying spiritual perfection of God's creation. The material world as humanly perceived is believed not to be the spiritual reality. Christian Scientists believe that healing through prayer is possible insofar as it succeeds in bringing the spiritual reality of health into human experience. Prayer does not change the spiritual creation but gives a clearer view of it, and the result appears in the human scene as healing: the human picture adjusts to coincide more nearly with the divine reality. Therefore, Christian Scientists do not consider themselves to be faith healers, since faith or belief in Christian Science is not required on the part of the patient, and because they consider healings reliable and provable rather than random. Although there is no hierarchy in Christian Science, practitioners devote full time to prayer for others on a professional basis, and advertise in an online directory published by the church. Christian Scientists sometimes tell their stories of healing at weekly testimony meetings at local Christian Science churches, or publish them in the church's magazines, including The Christian Science Journal, printed monthly since 1883; the Christian Science Sentinel, printed weekly since 1898; and The Herald of Christian Science, a foreign-language magazine beginning with a German edition in 1903 and later expanding to Spanish, French, and Portuguese editions. Christian Science Reading Rooms often have archives of such healing accounts. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS) has had a long history of faith healings. Many members of the LDS Church have told their stories of healing within the LDS publication, the Ensign. The church believes healings come most often as a result of priesthood blessings given by the laying on of hands; however, prayer, often accompanied by fasting, is also thought to bring about healings. Healing is always attributed to God's power. Latter-day Saints believe that the Priesthood of God, held by prophets (such as Moses) and worthy disciples of the Savior, was restored via heavenly messengers to the first prophet of this dispensation, Joseph Smith. According to LDS doctrine, even though members may have the restored priesthood authority to heal in the name of Jesus Christ, all efforts should be made to seek the appropriate medical help. Brigham Young stated this effectively, while also noting that the ultimate outcome is still dependent on the will of God.
Islam A number of healing traditions exist among Muslims. Some healers are particularly focused on diagnosing cases of possession by jinn or demons. Buddhism Chinese-born Australian businessman Jun Hong Lu was a prominent proponent of the "Guan Yin Citta Dharma Door", claiming that practicing the three "golden practices" of reciting texts and mantras, liberation of beings, and making vows lays a solid foundation for improved physical, mental, and psychological well-being, with many followers publicly attesting to having been healed through practice. Scientology Some critics of Scientology have referred to some of its practices as being similar to faith healing, based on claims made by L. Ron Hubbard in Dianetics: The Modern Science of Mental Health and other writings. Scientific investigation Nearly all scientists dismiss faith healing as pseudoscience. Believers assert that faith healing makes no scientific claims and thus should be treated as a matter of faith that is not testable by science. Critics reply that claims of medical cures should be tested scientifically because, although faith in the supernatural is not in itself usually considered to be the purview of science, claims of reproducible effects are nevertheless subject to scientific investigation. Scientists and doctors generally find that faith healing lacks biological plausibility or epistemic warrant, which is one of the criteria used to judge whether clinical research is ethical and financially justified. A Cochrane review of intercessory prayer found "although some of the results of individual studies suggest a positive effect of intercessory prayer, the majority do not". The authors concluded: "We are not convinced that further trials of this intervention should be undertaken and would prefer to see any resources available for such a trial used to investigate other questions in health care". A review in 1954 investigated spiritual healing, therapeutic touch and faith healing. Of the hundred cases reviewed, none revealed that the healer's intervention alone resulted in any improvement or cure of a measurable organic disability. In addition, at least one study has suggested that adult Christian Scientists, who generally use prayer rather than medical care, have a higher death rate than other people of the same age. The Global Medical Research Institute (GMRI) was created in 2012 to start collecting medical records of patients who claim to have received a supernatural healing miracle as a result of Christian Spiritual Healing practices. The organization has a panel of medical doctors who review the patients' records, looking at entries prior to the claimed miracles and entries after the miracle was claimed to have taken place. "The overall goal of GMRI is to promote an empirically grounded understanding of the physiological, emotional, and sociological effects of Christian Spiritual Healing practices". This is accomplished by applying the same rigorous standards used in other forms of medical and scientific research. A 2011 article in the New Scientist magazine cited positive physical results from meditation, positive thinking and spiritual faith. Criticism Skeptics of faith healing offer primarily two explanations for anecdotes of cures or improvements, obviating any need to appeal to the supernatural. The first is post hoc ergo propter hoc, meaning that a genuine improvement or spontaneous remission may have occurred coincident with, but independent of, anything the faith healer or patient did or said. 
These patients would have improved just as well even had they done nothing. The second is the placebo effect, through which a person may experience genuine pain relief and other symptomatic alleviation. In this case, the patient genuinely has been helped by the faith healer or faith-based remedy, not through any mysterious or numinous function, but by the power of their own belief that they would be healed. In both cases the patient may experience a real reduction in symptoms, though in neither case has anything miraculous or inexplicable occurred. Both cases, however, are strictly limited to the body's natural abilities. According to the American Cancer Society: The American Medical Association considers that prayer as therapy should not be a medically reimbursable or deductible expense. Belgian philosopher and skeptic Etienne Vermeersch coined the term Lourdes effect as a criticism of the magical thinking and placebo effect possibilities for the claimed miraculous cures as there are no documented events where a severed arm has been reattached through faith healing at Lourdes. Vermeersch identifies ambiguity and equivocal nature of the miraculous cures as a key feature of miraculous events. Negative impact on public health Reliance on faith healing to the exclusion of other forms of treatment can have a public health impact when it reduces or eliminates access to modern medical techniques. This is evident in both higher mortality rates for children and in reduced life expectancy for adults. Critics have also made note of serious injury that has resulted from falsely labelled "healings", where patients erroneously consider themselves cured and cease or withdraw from treatment. For example, at least six people have died after faith healing by their church and being told they had been healed of HIV and could stop taking their medications. It is the stated position of the AMA that "prayer as therapy should not delay access to traditional medical care". Choosing faith healing while rejecting modern medicine can and does cause people to die needlessly. Christian theological criticism of faith healing Christian theological criticism of faith healing broadly falls into two distinct levels of disagreement. The first is widely termed the "open-but-cautious" view of the miraculous in the church today. This term is deliberately used by Robert L. Saucy in the book Are Miraculous Gifts for Today?. Don Carson is another example of a Christian teacher who has put forward what has been described as an "open-but-cautious" view. In dealing with the claims of Warfield, particularly "Warfield's insistence that miracles ceased", Carson asserts, "But this argument stands up only if such miraculous gifts are theologically tied exclusively to a role of attestation; and that is demonstrably not so." However, while affirming that he does not expect healing to happen today, Carson is critical of aspects of the faith healing movement, "Another issue is that of immense abuses in healing practises.... The most common form of abuse is the view that since all illness is directly or indirectly attributable to the devil and his works, and since Christ by his cross has defeated the devil, and by his Spirit has given us the power to overcome him, healing is the inheritance right of all true Christians who call upon the Lord with genuine faith." The second level of theological disagreement with Christian faith healing goes further. 
Commonly referred to as cessationism, its adherents either claim that faith healing will not happen today at all, or may happen today, but it would be unusual. Richard Gaffin argues for a form of cessationism in an essay alongside Saucy's in the book Are Miraculous Gifts for Today? In his book Perspectives on Pentecost Gaffin states of healing and related gifts that "the conclusion to be drawn is that as listed in 1 Corinthians 12(vv. 9f., 29f.) and encountered throughout the narrative in Acts, these gifts, particularly when exercised regularly by a given individual, are part of the foundational structure of the church... and so have passed out of the life of the church." Gaffin qualifies this, however, by saying "At the same time, however, the sovereign will and power of God today to heal the sick, particularly in response to prayer (see e.g. James 5:14, 15), ought to be acknowledged and insisted on." According to the Catholic apologist Trent Horn, while the Bible teaches believers to pray when they are sick, this is not to be viewed as an exclusion of medical care, citing Sirach 38:9,12-14: Fraud Skeptics of faith healers point to fraudulent practices either in the healings themselves (such as plants in the audience with fake illnesses), or concurrent with the healing work supposedly taking place and claim that faith healing is a quack practice in which the "healers" use well known non-supernatural illusions to exploit credulous people in order to obtain their gratitude, confidence and money. James Randi's The Faith Healers investigates Christian evangelists such as Peter Popoff, who claimed to heal sick people on stage in front of an audience. Popoff pretended to know private details about participants' lives by receiving radio transmissions from his wife who was off-stage and had gathered information from audience members prior to the show. According to this book, many of the leading modern evangelistic healers have engaged in deception and fraud. The book also questioned how faith healers use funds that were sent to them for specific purposes. Physicist Robert L. Park and doctor and consumer advocate Stephen Barrett have called into question the ethics of some exorbitant fees. There have also been legal controversies. For example, in 1955 at a Jack Coe revival service in Miami, Florida, Coe told the parents of a three-year-old boy that he healed their son who had polio. Coe then told the parents to remove the boy's leg braces. However, their son was not cured of polio and removing the braces left the boy in constant pain. As a result, through the efforts of Joseph L. Lewis, Coe was arrested and charged on February 6, 1956, with practicing medicine without a license, a felony in the state of Florida. A Florida Justice of the Peace dismissed the case on grounds that Florida exempts divine healing from the law. Later that year Coe was diagnosed with bulbar polio, and died a few weeks later at Dallas' Parkland Hospital on December 17, 1956. Miracles for sale TV personality Derren Brown produced a show on faith healing entitled Miracles for Sale which arguably exposed the art of faith healing as a scam. In this show, Derren trained a scuba diver trainer picked from the general public to be a faith healer and took him to Texas to successfully deliver a faith healing session to a congregation. United States law The 1974 Child Abuse Prevention and Treatment Act (CAPTA) required states to grant religious exemptions to child neglect and child abuse laws in order to receive federal money. 
The CAPTA amendments of 1996 state: Thirty-one states have child-abuse religious exemptions. These are Alabama, Alaska, California, Colorado, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, Ohio, Oklahoma, Oregon, Pennsylvania, Vermont, Virginia, and Wyoming. In six of these states, Arkansas, Idaho, Iowa, Louisiana, Ohio and Virginia, the exemptions extend to murder and manslaughter. Of these, Idaho is the only state accused of having a large number of deaths due to the legislation in recent times. In February 2015, controversy was sparked in Idaho over a bill believed to further reinforce parental rights to deny their children medical care. Reckless homicide convictions Parents have been convicted of child abuse and felony reckless negligent homicide and found responsible for killing their children when they withheld lifesaving medical care and chose only prayers. See also Anointing of the sick Efficacy of prayer Egregore Energy medicine Folk medicine Self-efficacy Thaumaturgy Witch doctor List of ineffective cancer treatments List of topics characterized as pseudoscience Notes References Bibliography Beyer, Jürgen (2013) "Wunderheilung". In Enzyklopädie des Märchens. Handwörterbuch zur historischen und vergleichenden Erzählforschung, vol. 14, Berlin & Boston: Walter de Gruyter, coll. 1043–1050 External links Alternative medicine Religious practices Pseudoscience Religious terminology Magic (supernatural) Medical controversies Health fraud
Faith healing
Biology
5,860
1,153,848
https://en.wikipedia.org/wiki/Piperine
Piperine, possibly along with its isomer chavicine, is the compound responsible for the pungency of black pepper and long pepper. It has been used in some forms of traditional medicine. Preparation Due to its poor solubility in water, piperine is typically extracted from black pepper by using organic solvents like dichloromethane. The amount of piperine varies from 1–2% in long pepper, to 5–10% in commercial white and black peppers. Piperine can also be prepared by treating the solvent-free residue from a concentrated alcoholic extract of black pepper with a solution of potassium hydroxide to remove resin (said to contain chavicine, an isomer of piperine). The solution is decanted from the insoluble residue and left to stand overnight in alcohol. During this period, the alkaloid slowly crystallizes from the solution. Piperine has been synthesized by the action of piperonoyl chloride on piperidine. Reactions Piperine forms salts only with strong acids. The platinichloride B4·H2PtCl6 forms orange-red needles ("B" denotes one mole of the alkaloid base in this and the following formula). Iodine in potassium iodide added to an alcoholic solution of the base in the presence of a little hydrochloric acid gives a characteristic periodide, B2·HI·I2, crystallizing in steel-blue needles with melting point 145 °C. Piperine can be hydrolyzed by an alkali into piperidine and piperic acid. In light, especially ultraviolet light, piperine is changed into its isomers chavicine, isochavicine and isopiperine, which are tasteless. History Piperine was discovered in 1819 by Hans Christian Ørsted, who isolated it from the fruits of Piper nigrum, the source plant of both black and white pepper. Piperine was also found in Piper longum and Piper officinarum (Miq.) C. DC. (=Piper retrofractum Vahl), two species called "long pepper". See also Piperidine, a cyclic six-membered amine that results from hydrolysis of piperine Piperic acid, the carboxylic acid also derived from hydrolysis of piperine Capsaicin, the active piquant chemical in chili peppers Allyl isothiocyanate, the active piquant chemical in mustard, radishes, horseradish, and wasabi Allicin, the active piquant flavor chemical in raw garlic and onions (see those articles for discussion of other chemicals in them relating to pungency, and eye irritation) Ilepcimide Piperlongumine References CYP1A2 inhibitors CYP3A4 inhibitors Piperidine alkaloids Pungent flavors Monoamine oxidase inhibitors Carboxamides Benzodioxoles Polyenes Enones 1-Piperidinyl compounds Substances discovered in the 19th century
Piperine
Chemistry
618
50,771,553
https://en.wikipedia.org/wiki/David%20Henry%20Solomon
David Henry Solomon (born 19 November 1929 in Adelaide, South Australia) is an Australian polymer chemist. He is best known for his work in developing living radical polymerization techniques and for polymer banknotes. Education Solomon received an Associate of Sydney Technical College (equivalent to a Diploma of Chemistry) in 1950 and went on to complete a Bachelor of Science (BSc (Hons)) in 1952 from the New South Wales University of Technology (now the University of New South Wales), a Master of Science (MSc) from the same university in 1955, and a PhD from the University of New South Wales in 1959 with a thesis entitled Studies on the Chemistry of Carbonyl Compounds. In 1968 he was awarded a DSc from the University of New South Wales for his thesis Studies on the Chemistry of Coating Compounds. He also received an Honorary Doctorate in Applied Science from the University of Melbourne in 2005, one of only seven awarded in the university's history. Career Solomon joined British Australian Lead Manufacturers Pty Ltd (BALM, which later became Dulux Australia Ltd) as a trainee chemist in 1946 at the age of 16. It was here that he developed his lifelong interest in polymers and made the important observation that the current theories on polymers did not match what was actually happening in the industrial processes. Solomon's strong interest in polymer research drew him to join CSIRO as a senior research scientist in the Division of Applied Mineralogy in 1963. In 1970 Solomon transferred to the Division of Applied Chemistry, where he established the Polymer Research Group, before going on to become chief of the Division of Applied Organic Chemistry during a reorganisation in 1974, a position he held for the next 17 years. In 1990 he accepted an invitation to become the ICI Australia – Masson Professor and head of the School of Chemistry at the University of Melbourne. Here he started the Polymer Science Group, his third internationally acclaimed polymer research group. After his "retirement" in 1995, Solomon took up the position of honorary professorial fellow in the Department of Chemical and Biomolecular Engineering at the university, bringing the Polymer Science Group with him; he still acts as its senior advisor. In 2015 he was awarded the title of professor emeritus at the University of Melbourne. Solomon is often referred to as the father of polymer research in Australia, having established three internationally acclaimed polymer research groups: in industry (Dulux, 1960), in Australia's peak scientific research organisation, CSIRO (1970), and at the University of Melbourne (1990). Research achievements Solomon is well known for several of his research achievements. In particular, his work on free radical polymerization revolutionized the field through the development of the first living free-radical polymerization technique: Nitroxide Mediated Polymerization (NMP). He also led the team behind, and was principal inventor of, the world's first polymer banknote. Free Radical Polymerization Solomon's ground-breaking work on free radical polymerization was initiated through observations made in industry, for example anomalies that were not explained by polymerization theory at the time, and the observation that during the production of polymer/mineral composites some batches underwent spontaneous combustion. This led to discoveries that had significant influence on the future directions of radical chemistry. 
It led to the development of Nitroxide Mediated Polymerization (NMP), the first example of a controlled, or living, radical polymerization technique. This research also produced early examples of what was to become known as RAFT, or Reversible addition−fragmentation chain-transfer polymerization. Solomon's work rewrote the theory on free radical polymerization, and he was co-author with Graeme Moad on the definitive reference book: The Chemistry of Radical Polymerization (Moad & Solomon, 2006). Previous theories attempted to explain radical polymerization on the basis of thermodynamic stability controlling structure. Solomon's work showed that kinetics was the major factor in controlling the way polymer chains formed. Polymer banknotes Following a major forgery of Australia's newly introduced $10 notes in 1967, Solomon was invited to a meeting about how to make more secure bank notes. Given his background in polymer science Solomon's idea was to print the notes on a plastic substrate rather than the traditional paper, and incorporate optically variable devices – defined as a device that changes its appearance when something external is done to the note. Solomon went on to lead the research team and was the principal inventor of the world's first polymer banknote, with the first note issued into circulation in 1988: the Australian bicentennial $10. He has chronicled the history of the development of polymer banknotes in The Plastic Banknote: From Concept to Reality, co-authored with Tom Spurling (published in 2014). Selected honours Solomon has been the recipient of numerous prestigious awards throughout his career. A selected list is outlined below. 2016 Companion of the Order of Australia. 2011 Prime Minister's Prize for Science, awarded jointly with Dr Ezio Rizzardo. 2006 Victoria Prize. 2001 Centenary Medal. 1994 Clunies Ross National Science and Technology Award. 1990 Member of the Order of Australia. 1989 Ian William Wark Medal and Lecture. Professional societies Solomon's considerable contributions to science have been recognised by his peers through election to the following Academies: Fellow of the Institution of Chemical Engineers (FIChemE, 2007) Fellow of the Royal Society (FRS, 2004). Foundation Fellow of the Australian Academy of Technological Sciences and Engineering (FTSE, 1976). Fellow of the Australian Academy of Science (FAA, 1975). Fellow of the Royal Australian Chemical Institute (FRACI, 1966) Solomon has always been active in these societies, in particular the Royal Australian Chemical Institute (RACI). In 2001 the RACI established the Solomon Lecture Series in recognition of his contribution to the field and to the RACI. A biennial series presented by an invited leading international polymer researcher, this Lecture Series recognizes the importance of promoting the exchange of ideas and expertise and to expose young scientists to the best in their field internationally. Publications Solomon is co-author of nine books, including an historical account of the development of plastic banknotes (The Plastic Banknote: From Concept to Reality) and several text books (including The Chemistry of Radical Polymerization). He is also co-author of over 250 journal papers and 45 patents. 
References 1929 births Polymer chemistry Living people Australian chemists Members of the Order of Australia Companions of the Order of Australia Fellows of the Royal Society Fellows of the Australian Academy of Science Fellows of the Australian Academy of Technological Sciences and Engineering University of New South Wales alumni
David Henry Solomon
Chemistry,Materials_science,Engineering
1,329
61,660,335
https://en.wikipedia.org/wiki/Discrete%20calculus
Discrete calculus, or the calculus of discrete functions, is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change. Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus. The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment $\Delta x$ of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally, the limit of discrete calculus as $\Delta x \to 0$ is infinitesimal calculus. Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications. Two initial constructions Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function defined at several points of the real line, the difference quotient at a point is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function. Suppose the functions are defined at points separated by an increment $\Delta x > 0$: $a,\ a + \Delta x,\ a + 2\Delta x,\ a + 3\Delta x,\ \ldots$ The "doubling function" may be denoted by $g(x) = 2x$ and the "squaring function" by $f(x) = x^2$. The "difference quotient" is the rate of change of the function over one of the intervals $[x,\ x + \Delta x]$, defined by the formula: $\frac{f(x + \Delta x) - f(x)}{\Delta x}.$ It takes the function $f$ as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function $2x + \Delta x$, as will turn out. 
As a matter of convenience, the new function may be defined at the middle points of the above intervals: $a + \frac{\Delta x}{2},\ a + \Delta x + \frac{\Delta x}{2},\ a + 2\Delta x + \frac{\Delta x}{2},\ \ldots$ As the rate of change is that for the whole interval $[x,\ x + \Delta x]$, any point within it can be used as such a reference or, even better, the whole interval, which makes the difference quotient a $1$-cochain. The most common notation for the difference quotient is: $\frac{\Delta f}{\Delta x} = \frac{f(x + \Delta x) - f(x)}{\Delta x}.$ If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if $p$ is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of $p$ is how the position is changing in time, that is, it is the velocity of the ball. If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as $y = mx + b$, where $x$ is the independent variable, $y$ is the dependent variable, $b$ is the $y$-intercept, and: $m = \frac{\Delta y}{\Delta x}.$ This gives an exact value for the slope of a straight line. If the function is not linear, however, then the change in $y$ divided by the change in $x$ varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let $f$ be a function, and fix a point $a$ in the domain of $f$. $(a, f(a))$ is a point on the graph of the function. If $\Delta x$ is the increment of $a$, then $a + \Delta x$ is the next value of $a$. Therefore, $f(a + \Delta x) - f(a)$ is the increment of $f(a)$. The slope of the line between these two points is $m = \frac{f(a + \Delta x) - f(a)}{\Delta x}.$ So $m$ is the slope of the line between $(a, f(a))$ and $(a + \Delta x,\ f(a + \Delta x))$. Here is a particular example, the difference quotient of the squaring function. Let $f(x) = x^2$ be the squaring function. Then: $\frac{\Delta f}{\Delta x} = \frac{(a + \Delta x)^2 - a^2}{\Delta x} = \frac{2a\,\Delta x + (\Delta x)^2}{\Delta x} = 2a + \Delta x.$ The difference quotient of the difference quotient is called the second difference quotient, and it is defined at $a + \Delta x,\ a + 2\Delta x,\ \ldots$, and so on. Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator. The Riemann sum inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis. A motivating example is the distance traveled in a given time. If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval. When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by $a$ and $b$) is the area of the shaded region. 
So, the interval between $a$ and $b$ is divided into a number of equal segments, the length of each segment represented by the symbol $\Delta x$. For each small segment, we have one value of the function $v$. Call that value $v_i$. Then the area of the rectangle with base $\Delta x$ and height $v_i$ gives the distance (time $\Delta x$ multiplied by speed $v_i$) traveled in that segment. Associated with each segment is the value of the function above it, $v_i$. The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled. Suppose a function $f$ is defined at the mid-points of the intervals of equal length $\Delta x$: $a + \frac{\Delta x}{2},\ a + \Delta x + \frac{\Delta x}{2},\ a + 2\Delta x + \frac{\Delta x}{2},\ \ldots$ Then the Riemann sum from $a$ to $b = a + n\,\Delta x$ in sigma notation is: $\sum_{i=1}^{n} f\left(a + \left(i - \tfrac{1}{2}\right)\Delta x\right)\,\Delta x.$ As this computation is carried out for each $n$, the new function is defined at the points: $a,\ a + \Delta x,\ a + 2\Delta x,\ \ldots$ The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration. The fundamental theorem of calculus: If a function $f$ is defined on a partition of the interval $[a, b]$, with $b = a + n\,\Delta x$, and if $F$ is a function whose difference quotient is $f$, then we have: $\sum_{i=1}^{n} f\left(a + \left(i - \tfrac{1}{2}\right)\Delta x\right)\,\Delta x = F(b) - F(a).$ Furthermore, for every $m = 1, 2, \ldots, n$, we have: $\frac{\Delta}{\Delta x}\left(\sum_{i=1}^{m} f\left(a + \left(i - \tfrac{1}{2}\right)\Delta x\right)\,\Delta x\right) = f\left(a + \left(m - \tfrac{1}{2}\right)\Delta x\right).$ This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences. History The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs. After the limit is taken, however, they are never to be seen again. However, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative. During the 20th century discrete calculus remained interlinked with infinitesimal calculus, especially differential forms, but also began to draw from algebraic topology as both developed. The main contributions come from the following individuals: Henri Poincaré: triangulations (barycentric subdivision, dual triangulation), the Poincaré lemma, the first proof of the general Stokes theorem, and much more; L. E. J. Brouwer: the simplicial approximation theorem; Élie Cartan and Georges de Rham: the notion of differential form, the exterior derivative as a coordinate-independent linear operator, exactness/closedness of forms; Emmy Noether, Heinz Hopf, Leopold Vietoris, and Walther Mayer: modules of chains, the boundary operator, chain complexes; J. W. Alexander, Solomon Lefschetz, Lev Pontryagin, Andrey Kolmogorov, Norman Steenrod, and Eduard Čech: the early cochain notions; Hermann Weyl: the Kirchhoff laws stated in terms of the boundary and the coboundary operators; W. V. D. Hodge: the Hodge star operator, the Hodge decomposition; Samuel Eilenberg, Saunders Mac Lane, Norman Steenrod, and J.H.C. Whitehead: the rigorous development of homology and cohomology theory, including chain and cochain complexes and the cup product; Hassler Whitney: cochains as integrands. The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling. Applications Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled. 
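To make the constructions above concrete, here is a minimal Python sketch of discrete calculus on a uniform grid; every name and numerical value is an illustrative choice, not something prescribed by the article. It builds the difference quotient of the squaring function and then checks the fundamental theorem of discrete calculus by summing that 1-cochain back up:

```python
# A minimal numerical sketch of discrete calculus on a uniform grid.

dx = 0.5                    # the increment (Delta x)
a, n = 1.0, 8               # left endpoint and number of intervals
grid = [a + i * dx for i in range(n + 1)]   # a, a + dx, ..., b
b = grid[-1]

def f(t):
    return t ** 2           # the squaring function

# Difference quotient: one value per interval [x, x + dx] (a 1-cochain).
dq = [(f(x + dx) - f(x)) / dx for x in grid[:-1]]
# For f(x) = x^2 this equals 2x + dx on each interval:
assert all(abs(v - (2 * x + dx)) < 1e-12 for v, x in zip(dq, grid[:-1]))

# Riemann sum of the difference quotient from a to b.
riemann = sum(v * dx for v in dq)

# Fundamental theorem of discrete calculus: the sum telescopes to
# f(b) - f(a) exactly (up to floating-point rounding).
assert abs(riemann - (f(b) - f(a))) < 1e-12
print(riemann, f(b) - f(a))     # 24.0 24.0 for these parameters
```

The identity holds exactly rather than approximately, because the Riemann sum of a difference quotient telescopes; no limit is taken. This exact pairing of rates of change with accumulated change is what the applications of discrete calculus rely on.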
It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: historically stated it expressly uses the term "change of motion" which implies the difference quotient saying The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental because acceleration is the difference quotient of velocity with respect to time or second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use the Riemann sums to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus. Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay). In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling). In engineering, difference equations are used to plot a course of a spacecraft within zero gravity environments, to model heat transfer, diffusion, and wave propagation. The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, to rapidly extract features and detect object; another algorithm that could be used is the summed area table. In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies. In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets. In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures. Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function. Calculus of differences and sums Suppose a function (a -cochain) is defined at points separated by an increment : The difference (or the exterior derivative, or the coboundary operator) of the function is given by: It is defined at each of the above intervals; it is a -cochain. Suppose a -cochain is defined at each of the above intervals. 
Then its sum is a function (a -cochain) defined at each of the points by: These are their properties: Constant rule: If is a constant, then Linearity: if and are constants, Product rule: Fundamental theorem of calculus I: Fundamental theorem of calculus II: The definitions are applied to graphs as follows. If a function (a -cochain) is defined at the nodes of a graph: then its exterior derivative (or the differential) is the difference, i.e., the following function defined on the edges of the graph (-cochain): If is a -cochain, then its integral over a sequence of edges of the graph is the sum of its values over all edges of ("path integral"): These are the properties: Constant rule: If is a constant, then Linearity: if and are constants, Product rule: Fundamental theorem of calculus I: if a -chain consists of the edges , then for any -cochain Fundamental theorem of calculus II: if the graph is a tree, is a -cochain, and a function (-cochain) is defined on the nodes of the graph by where a -chain consists of for some fixed , then See references. Chains of simplices and cubes A simplicial complex is a set of simplices that satisfies the following conditions: 1. Every face of a simplex from is also in . 2. The non-empty intersection of any two simplices is a face of both and . By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as , with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean. Let be a simplicial complex. A simplicial k-chain is a finite formal sum where each ci is an integer and σi is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example, The vector space of k-chains on is written . It has a basis in one-to-one correspondence with the set of k-simplices in . To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices. Let be an oriented k-simplex, viewed as a basis element of . The boundary operator is the linear operator defined by: where the oriented simplex is the th face of , obtained by deleting its th vertex. In , elements of the subgroup are referred to as cycles, and the subgroup is said to consist of boundaries. A direct computation shows that . In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces form a chain complex. Another equivalent statement is that is contained in . A cubical complex is a set composed of points, line segments, squares, cubes, and their n-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset of the form for some . An elementary cube is the finite product of elementary intervals, i.e. where are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube embedded in Euclidean space (for some with ). 
A set is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes. The boundary operator and the chain complex are defined similarly to those for simplicial complexes. More general are cell complexes. A chain complex is a sequence of vector spaces connected by linear operators (called boundary operators) , such that the composition of any two consecutive maps is the zero map. Explicitly, the boundary operators satisfy , or with indices suppressed, . The complex may be written out as follows. A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images). A simplicial map from a simplicial complex to another is a function from the vertex set of to the vertex set of such that the image of each simplex in (viewed as a set of vertices) is a simplex in . It generates a linear map, called a chain map, from the chain complex of to the chain complex of . Explicitly, it is given on -chains by if are all distinct, and otherwise it is set equal to . A chain map between two chain complexes and is a sequence of homomorphisms for each that commutes with the boundary operators on the two chain complexes, so . This is written out in the following commutative diagram: A chain map sends cycles to cycles and boundaries to boundaries. See references. Discrete differential forms: cochains For each vector space Ci in the chain complex we consider its dual space and is its dual linear operator This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex The cochain complex is the dual notion to a chain complex. It consists of a sequence of vector spaces connected by linear operators satisfying . The cochain complex may be written out in a similar fashion to the chain complex. The index in either or is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. The elements of the individual vector spaces of a (co)chain complex are called cochains. The elements in the kernel of are called cocycles (or closed elements), and the elements in the image of are called coboundaries (or exact elements). Right from the definition of the differential, all boundaries are cycles. The Poincaré lemma states that if is an open ball in , any closed -form defined on is exact, for any integer with . When we refer to cochains as discrete (differential) forms, we refer to as the exterior derivative. We also use the calculus notation for the values of the forms: Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval: Stokes' theorem says that the sum of a form over the boundary of some orientable manifold is equal to the sum of its exterior derivative over the whole of , i.e., It is worthwhile to examine the underlying principle by considering an example for dimensions. The essential idea can be understood by the diagram on the left, which shows that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains. 
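The boundary and coboundary operators described above are finite-dimensional linear maps, so their defining identities can be checked directly with a little linear algebra. The following Python sketch is a toy example with its own naming and orientation conventions, not code from any source: it encodes a single oriented triangle, verifies that the boundary of a boundary vanishes, and confirms the discrete Stokes theorem as a matrix-transpose identity.

```python
import numpy as np

# One oriented triangle [v0, v1, v2], with oriented edges
# e0 = [v0, v1], e1 = [v1, v2], e2 = [v0, v2].

# Boundary of the triangle: [v1,v2] - [v0,v2] + [v0,v1] = e1 - e2 + e0.
d2 = np.array([[1], [1], [-1]])        # C_2 -> C_1, rows e0, e1, e2

# Boundary of each edge is head minus tail; rows v0, v1, v2.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])          # C_1 -> C_0, columns e0, e1, e2

# The boundary of a boundary is zero: d1 d2 = 0.
assert not (d1 @ d2).any()

# Cochains are dual to chains, so the coboundary acts by the transpose.
# For a 1-cochain w and the 2-chain c (the triangle itself),
# <dw, c> = <w, boundary of c>: a one-line discrete Stokes theorem.
w = np.array([2.0, -5.0, 7.0])         # arbitrary values on e0, e1, e2
c = np.array([1.0])                    # the triangle with coefficient 1
lhs = (d2.T @ w) @ c                   # sum of dw over the triangle
rhs = w @ (d2 @ c)                     # sum of w over the boundary
assert np.isclose(lhs, rhs)
print(lhs, rhs)                        # -10.0 -10.0
```

Taking the coboundary to be the transpose of the boundary matrix is exactly the duality described above: summing dw over a chain is the same as summing w over that chain's boundary.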
See references. The wedge product of forms In discrete calculus, this is a construction that creates from forms higher order forms: adjoining two cochains of degree and to form a composite cochain of degree . For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension. For simplicial complexes, the wedge product is implemented as the cup product: if is a -cochain and is a -cochain, then where is a -simplex and , is the simplex spanned by into the -simplex whose vertices are indexed by . So, is the -th front face and is the -th back face of , respectively. The coboundary of the cup product of cochains and is given by The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary. The cup product operation satisfies the identity In other words, the corresponding multiplication is graded-commutative. See references. Laplace operator The Laplace operator of a function at a vertex , is (up to a factor) the rate at which the average value of over a cellular neighborhood of deviates from . The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena. The codifferential is an operator defined on -forms by: where is the exterior derivative or differential and is the Hodge star operator. The codifferential is the adjoint of the exterior derivative according to Stokes' theorem: Since the differential satisfies , the codifferential has the corresponding property The Laplace operator is defined by: See references. Related Discrete element method Divided differences Finite difference coefficient Finite difference method Finite element method Finite volume method Numerical differentiation Numerical integration Numerical methods for ordinary differential equations See also Calculus of finite differences Calculus on finite weighted graphs Cellular automaton Discrete differential geometry Discrete Laplace operator Calculus of finite differences, discrete calculus or discrete analysis Discrete Morse theory References Algebraic topology Applied mathematics Calculus Discrete mathematics Finite differences Linear operators in calculus Mathematical analysis Non-Newtonian calculus Numerical analysis Numerical differential equations
Discrete calculus
Mathematics
4,901
74,850,217
https://en.wikipedia.org/wiki/Collective%E2%80%93amoeboid%20transition
The collective–amoeboid transition (CAT) is a process by which collective multicellular groups dissociate into amoeboid single cells following the down-regulation of integrins. CATs contrast with epithelial–mesenchymal transitions (EMTs), which occur following a loss of E-cadherin. Like EMTs, CATs are involved in the invasion of tumor cells into surrounding tissues, with amoeboid movement more likely to occur in soft extracellular matrix (ECM) and mesenchymal movement in stiff ECM. Although cells, once differentiated, typically do not change their migration mode, EMTs and CATs are highly plastic, with cells capable of interconverting between them depending on intracellular regulatory signals and the surrounding ECM. CATs are the least common transition type in invading tumor cells, although they are noted in melanoma explants. See also Collective cell migration Dedifferentiation Invasion (cancer) References Animal developmental biology Cancer research Cell movement Cellular processes Tissue engineering
Collective–amoeboid transition
Chemistry,Engineering,Biology
216
315,578
https://en.wikipedia.org/wiki/Information%20processing%20%28psychology%29
In cognitive psychology, information processing is an approach to the goal of understanding human thinking that treats cognition as essentially computational in nature, with the mind being the software and the brain being the hardware. It arose in the 1940s and 1950s, after World War II. The information processing approach in psychology is closely allied to the computational theory of mind in philosophy; it is also related to cognitivism in psychology and functionalism in philosophy. Two types Information processing may be vertical or horizontal, either of which may be centralized or decentralized (distributed). The horizontally distributed processing approach of the mid-1980s became popular under the name connectionism. The connectionist network is made up of different nodes, and it works by a "priming effect", which happens when a "prime node activates a connected node". But "unlike in semantic networks, it is not a single node that has a specific meaning, but rather the knowledge is represented in a combination of differently activated nodes" (Goldstein, as cited in Sternberg, 2012). Models and theories There are several proposed models or theories that describe the way in which we process information. Every individual has a different information overload point under the same information load, because individuals have different information-processing capacities. Sternberg's triarchic theory of intelligence Sternberg's theory of intelligence is made up of three different components: creative, analytical, and practical abilities. Creativity is the ability to have new, original ideas, and being analytical can help a person decide whether the idea is a good one or not. "Practical abilities are used to implement the ideas and persuade others of their value". At the center of Sternberg's theory is cognition, and with it information processing. In Sternberg's theory, information processing is made up of three different parts: meta components, performance components, and knowledge-acquisition components. These processes move from higher-order executive functions to lower-order functions. Meta components are used for planning and evaluating problems, while performance components follow the orders of the meta components, and the knowledge-acquisition component learns how to solve the problems. This theory in action can be explained by working on an art project. First is a decision about what to draw, then a plan and a sketch. During this process there is simultaneous monitoring of whether the process is producing the desired accomplishment. All these steps fall under the meta component processing, and the performance component is the art. The knowledge-acquisition portion is the learning or improving of drawing skills. Information processing model: the working memory Information processing has been described as "the sciences concerned with gathering, manipulating, storing, retrieving, and classifying recorded information". According to the Atkinson-Shiffrin memory model or multi-store model, for information to be firmly implanted in memory it must pass through three stages of mental processing: sensory memory, short-term memory, and long-term memory. An example of this is the working memory model. This includes the central executive, phonological loop, episodic buffer, visuospatial sketchpad, verbal information, long-term memory, and visual information. The central executive is like the secretary of the brain. It decides what needs attention and how to respond. 
The central executive then leads to three different subsections. The first is phonological storage, subvocal rehearsal, and the phonological loop. These sections work together to understand words, put the information into memory, and then hold the memory. The result is verbal information storage. The next subsection is the visuospatial sketchpad, which works to store visual images. The storage capacity is brief but leads to an understanding of visual stimuli. Finally, there is an episodic buffer. This section is capable of taking information and putting it into long-term memory. It is also able to take information from the phonological loop and visuospatial sketchpad, combining them with long-term memory to make "a unitary episodic representation". In order for these to work, the sensory register takes in information via the five senses: visual, auditory, tactile, olfactory, and taste. These are all present since birth and are able to handle simultaneous processing (e.g., food – taste it, smell it, see it). In general, learning benefits occur when there is a developed process of pattern recognition. The sensory register has a large capacity and its behavioral response is very short (1–3 seconds). Within this model, sensory store and short-term memory or working memory have limited capacity. Sensory store is able to hold very limited amounts of information for very limited amounts of time. This phenomenon is very similar to having a picture taken with a flash. For a few brief moments after the flash goes off, the flash seems to still be there. However, it is soon gone and there is no way to know it was there. Short-term memory holds information for slightly longer periods of time, but still has a limited capacity. According to Linden, "The capacity of STM had initially been estimated at 'seven plus or minus two' items, which fits the observation from neuropsychological testing that the average digit span of healthy adults is about seven. However, it emerged that these numbers of items can only be retained if they are grouped into so-called chunks, using perceptual or conceptual associations between individual stimuli." Its duration is 5–20 seconds before it is out of the subject's mind. This often occurs with the names of people to whom one has just been introduced. Images or information based on meaning are stored here as well, but they decay without rehearsal or repetition of such information. On the other hand, long-term memory has a potentially unlimited capacity and its duration is as good as indefinite. Although sometimes it is difficult to access, it encompasses everything learned until this point in time. One might become forgetful or feel as if the information is on the tip of the tongue. 
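The multi-store flow just described can be caricatured in a few lines of code. The sketch below is a deliberately crude Python illustration, not a scientific model: the capacity figure is the "seven plus or minus two" quoted above, the timing windows are noted only in comments, and every name and rule is an invented simplification.

```python
# Toy sketch of the multi-store flow: sensory register -> short-term
# memory (STM) -> long-term memory (LTM). Per the text, sensory traces
# last roughly 1-3 s and unrehearsed STM about 5-20 s; timing is not
# modeled here, only capacity and ordering.

STM_CAPACITY = 7            # "seven plus or minus two" items or chunks

sensory, stm, ltm = [], [], []

def perceive(item):
    """Everything sensed enters the large-capacity sensory register."""
    sensory.append(item)

def attend():
    """Attended traces move into STM; whatever exceeds capacity is lost."""
    while sensory:
        item = sensory.pop(0)
        if len(stm) < STM_CAPACITY:
            stm.append(item)

def rehearse(item):
    """Rehearsal keeps an item alive in STM and can encode it into LTM."""
    if item in stm and item not in ltm:
        ltm.append(item)

for word in ["cat", "pen", "map", "cup", "key", "jar", "fig", "owl"]:
    perceive(word)
attend()                    # only 7 of the 8 words fit into STM
rehearse(stm[0])            # one rehearsed word reaches LTM
print(len(stm), len(ltm))   # 7 1
```

The only point of the sketch is the ordering of the stores and their capacity limits; real attention, chunking, rehearsal, and decay are far richer than this.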
In the preoperational stage, children learn through imitation and remain unable to take other people's point of view. The concrete operational stage is characterized by the developing ability to use logic and to consider multiple factors to solve a problem. The last stage is the formal operational, in which preadolescents and adolescents begin to understand abstract concepts and to develop the ability to create arguments and counter-arguments. Furthermore, adolescence is characterized by a series of changes in the biological, cognitive, and social realms. In the cognitive area, the brain's prefrontal cortex as well as the limbic system undergo important changes. The prefrontal cortex is the part of the brain that is active when engaged in complicated cognitive activities such as planning, generating goals and strategies, intuitive decision-making, and metacognition (thinking about thinking). This is consistent with Piaget's last stage of formal operations. The prefrontal cortex matures fully between adolescence and early adulthood. The limbic system is the part of the brain that modulates reward sensitivity based on changes in the levels of neurotransmitters (e.g., dopamine) and emotions. In short, cognitive abilities vary according to our development and stages in life. It is at the adult stage that we are able to be better planners, to process and comprehend abstract concepts, and to evaluate risks and benefits more aptly than an adolescent or child can. In computing, information processing broadly refers to the use of algorithms to transform data—the defining activity of computers; indeed, a broad computing professional organization is known as the International Federation for Information Processing (IFIP). It is essentially synonymous with the terms data processing or computation, although with a more general connotation. See also References Bibliography Cognitive psychology Information
Information processing (psychology)
Biology
1,733
11,887,815
https://en.wikipedia.org/wiki/Postreplication%20repair
Postreplication repair is the repair of damage to the DNA that takes place after replication. Some example genes in humans include BRCA2, BRCA1, BLM, and NBS1. Accurate and efficient DNA replication is crucial for the health and survival of all living organisms. Under optimal conditions, the replicative DNA polymerases ε, δ, and α can work in concert to ensure that the genome is replicated efficiently with high accuracy in every cell cycle. However, DNA is constantly challenged by exogenous and endogenous genotoxic threats, including solar ultraviolet (UV) radiation and reactive oxygen species (ROS) generated as a byproduct of cellular metabolism. Damaged DNA can act as a steric block to replicative polymerases, thereby leading to incomplete DNA replication or the formation of secondary DNA strand breaks at the sites of replication stalling. Incomplete DNA synthesis and DNA strand breaks are both potential sources of genomic instability. An arsenal of DNA repair mechanisms exists to repair various forms of damaged DNA and minimize genomic instability. Most DNA repair mechanisms require an intact DNA strand as a template to fix the damaged strand. DNA damage prevents the normal enzymatic synthesis of DNA by the replication fork. At damaged sites in the genome, both prokaryotic and eukaryotic cells utilize a number of postreplication repair (PRR) mechanisms to complete DNA replication. Chemically modified bases can be bypassed by either error-prone or error-free translesion polymerases, or through genetic exchange with the sister chromatid. The replication of DNA with a broken sugar-phosphate backbone is most likely facilitated by the homologous recombination proteins that confer resistance to ionizing radiation. The activity of PRR enzymes is regulated by the SOS response in bacteria and may be controlled by the postreplication checkpoint response in eukaryotes. The elucidation of PRR mechanisms is an active area of molecular biology research, and the terminology is currently in flux. For instance, PRR has recently been referred to as "DNA damage tolerance" to emphasize the instances in which postreplication DNA damage is repaired without removing the original chemical modification to the DNA. While the term PRR has most frequently been used to describe the repair of single-stranded postreplication gaps opposite damaged bases, a broader usage has been suggested. In this case, the term PRR would encompass all processes that facilitate the replication of damaged DNA, including those that repair replication-induced double-strand breaks. Melanoma cells are commonly defective in postreplication repair of DNA damage in the form of cyclobutane pyrimidine dimers, a type of damage caused by ultraviolet radiation. A particular repair process that appears to be defective in melanoma cells is homologous recombinational repair. Defective postreplication repair of cyclobutane pyrimidine dimers can lead to mutations that are the primary driver of melanoma. References DNA repair
Postreplication repair
Biology
616
63,595,069
https://en.wikipedia.org/wiki/Un-24
un-24 is a gene found in fungi such as Neurospora crassa; it encodes the ribonucleoside-diphosphate reductase large chain and is involved in heterokaryon incompatibility. See also Un-25 References Fungus genes
Un-24
Biology
58
32,745,764
https://en.wikipedia.org/wiki/Rogers%E2%80%93Szeg%C5%91%20polynomials
In mathematics, the Rogers–Szegő polynomials are a family of polynomials orthogonal on the unit circle introduced by Szegő, who was inspired by the continuous q-Hermite polynomials studied by Leonard James Rogers. They are given by $h_n(x;q) = \sum_{k=0}^{n} \frac{(q;q)_n}{(q;q)_k (q;q)_{n-k}} x^k$, where $(q;q)_n$ is the descending q-Pochhammer symbol. Furthermore, the $h_n(x;q)$ satisfy (for $n \ge 1$) the recurrence relation $h_{n+1}(x;q) = (1+x)\,h_n(x;q) + (q^n - 1)\,x\,h_{n-1}(x;q)$, with $h_0(x;q) = 1$ and $h_1(x;q) = 1 + x$. References Orthogonal polynomials Q-analogs
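The recurrence above gives a direct way to compute these polynomials symbolically. The following is a minimal sketch in Python with SymPy, assuming only the definition and recurrence stated above; the printed h_2(x;q) = 1 + (1+q)x + x^2 can be checked by hand against the q-binomial definition.

```python
import sympy as sp

x, q = sp.symbols('x q')

def rogers_szego(n):
    """h_n(x; q) via the three-term recurrence
    h_{n+1} = (1 + x) h_n + (q**n - 1) x h_{n-1},
    with h_0 = 1 and h_1 = 1 + x."""
    if n == 0:
        return sp.Integer(1)
    h_prev, h = sp.Integer(1), 1 + x
    for k in range(1, n):
        h_prev, h = h, sp.expand((1 + x) * h + (q**k - 1) * x * h_prev)
    return h

print(rogers_szego(2))  # x**2 + q*x + x + 1
```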
Rogers–Szegő polynomials
Mathematics
85
3,059,400
https://en.wikipedia.org/wiki/CR-39
Poly(allyl diglycol carbonate) (PADC) is a plastic commonly used in the manufacture of eyeglass lenses alongside the material PMMA (polymethyl methacrylate). The monomer is allyl diglycol carbonate (ADC). The term CR-39 technically refers to the ADC monomer, but is more commonly used to refer to the finished plastic. The abbreviation stands for "Columbia Resin #39", which was the 39th formula of a thermosetting plastic developed by the Columbia Resins project in 1940. The first commercial use of CR-39 monomer (ADC) was to help create glass-reinforced plastic fuel tanks for the B-17 bomber aircraft in World War II, reducing the weight and increasing the range of the bomber. After the war, the Armorlite Lens Company in California is credited with manufacturing the first CR-39 eyeglass lenses in 1947. CR-39 plastic has an index of refraction of 1.498 and an Abbe number of 58. CR-39 is now a trade-marked product of PPG Industries. An alternative use includes a purified version that is used to measure ionising radiation such as alpha particles and neutrons. Although CR-39 is a type of polycarbonate, it should not be confused with the general term "polycarbonate", a tough homopolymer usually made from bisphenol A. Synthesis CR-39 is made by polymerization of ADC in presence of diisopropyl peroxydicarbonate (IPP) initiator. The presence of the allyl groups allows the polymer to form cross-links; thus, it is a thermoset resin. The polymerization schedule of ADC monomers using IPP is generally 20 hours long with a maximum temperature of 95 °C. The elevated temperatures can be supplied using a water bath or a forced air oven. Benzoyl peroxide (BPO) is an alternative organic peroxide that may be used to polymerize ADC. Pure benzoyl peroxide is crystalline and less volatile than diisopropyl peroxydicarbonate. Using BPO results in a polymer that has a higher yellowness index, and the peroxide takes longer to dissolve into ADC at room temperature than IPP. Applications Optics CR-39 is transparent in the visible spectrum and is almost completely opaque in the ultraviolet range. It has high abrasion resistance, in fact the highest abrasion/scratch resistance of any uncoated optical plastic. CR-39 is about half the weight of glass with an index of refraction only slightly lower than that of crown glass, and its high Abbe number yields low chromatic aberration, altogether making it an advantageous material for eyeglasses and sunglasses. A wide range of colors can be achieved by dyeing of the surface or the bulk of the material. CR-39 is also resistant to most solvents and other chemicals, gamma radiation, aging, and to material fatigue. It can withstand the small hot sparks from welding, something glass cannot do. It can be used continuously in temperatures up to 100 °C and up to one hour at 130 °C. Radiation detection In the radiation detection application, CR-39 is used as a solid-state nuclear track detector (SSNTD) to detect the presence of ionising radiation. Energetic particles colliding with the polymer structure leave a trail of broken chemical bonds within the CR-39. When immersed in a concentrated alkali solution (typically sodium hydroxide) hydroxide ions attack and break the polymer structure, etching away the bulk of the plastic at a nominally fixed rate. 
However, along the paths of damage left by charged particle interaction, the concentration of radiation damage allows the chemical agent to attack the polymer more rapidly than it does in the bulk, revealing the paths of the charged-particle ion tracks. The resulting etched plastic therefore contains a permanent record of not only the location of the radiation on the plastic but also spectroscopic information about the source. Principally used for the detection of alpha-emitting radionuclides (especially radon gas), the radiation-sensitivity properties of CR-39 are also used for proton and neutron dosimetry and, historically, cosmic ray investigations. The ability of CR-39 to record the location of a radiation source, even at extremely low concentrations, is exploited in autoradiography studies with alpha particles, and for (comparatively cheap) detection of alpha-emitters like uranium. Typically, a thin section of a biological material is fixed against CR-39 and kept frozen for a timescale of months to years in an environment that is shielded as much as possible from possible radiological contaminants. Before etching, photographs are taken of the biological sample with the affixed CR-39 detector, with care taken to ensure that prescribed location marks on the detector are noted. After the etching process, automated or manual 'scanning' of the CR-39 is used to physically locate the ionising radiation recorded, which can then be mapped to the position of the radionuclide within the biological sample. There is no other non-destructive method for accurately identifying the location of trace quantities of radionuclides in biological samples at such low emission levels. See also Corrective lens References Plastics Polycarbonates Optical materials Particle detectors PPG Industries
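As a rough illustration of how etched-track counts are turned into a radon estimate, here is a hedged sketch. The linear-response model is standard for solid-state nuclear track dosimetry, but the calibration factor k is detector-, etch-, and geometry-specific; the value below is an assumed placeholder, not a published constant.

```python
def radon_concentration(track_density_cm2, exposure_hours, k=2.5):
    """Estimate mean radon activity concentration in Bq/m^3 from CR-39
    track density, assuming a linear detector response:
        track_density = k * concentration(kBq/m^3) * time(h)
    k is an illustrative calibration factor (tracks cm^-2 per kBq h m^-3)."""
    kbq_per_m3 = track_density_cm2 / (k * exposure_hours)
    return kbq_per_m3 * 1000.0  # kBq/m^3 -> Bq/m^3

# e.g. 540 tracks/cm^2 after a 90-day (2160 h) exposure -> 100 Bq/m^3
print(f"{radon_concentration(540, 2160):.0f} Bq/m^3")
```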
CR-39
Physics,Technology,Engineering
1,095
12,428,690
https://en.wikipedia.org/wiki/Semi-elliptic%20operator
In mathematics — specifically, in the theory of partial differential equations — a semi-elliptic operator is a partial differential operator satisfying a positivity condition slightly weaker than that of being an elliptic operator. Every elliptic operator is also semi-elliptic, and semi-elliptic operators share many of the nice properties of elliptic operators: for example, much of the same existence and uniqueness theory is applicable, and semi-elliptic Dirichlet problems can be solved using the methods of stochastic analysis. Definition A second-order partial differential operator P defined on an open subset Ω of n-dimensional Euclidean space Rn, acting on suitable functions f by $Pf(x) = \sum_{i,j=1}^{n} a_{ij}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \sum_{i=1}^{n} b_i(x) \frac{\partial f}{\partial x_i}(x) + c(x) f(x)$, is said to be semi-elliptic if all the eigenvalues λi(x), 1 ≤ i ≤ n, of the matrix a(x) = (aij(x)) are non-negative. (By way of contrast, P is said to be elliptic if λi(x) > 0 for all x ∈ Ω and 1 ≤ i ≤ n, and uniformly elliptic if the eigenvalues are uniformly bounded away from zero, uniformly in i and x.) Equivalently, P is semi-elliptic if the matrix a(x) is positive semi-definite for each x ∈ Ω. References (See Section 9) Differential operators Partial differential equations
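The definition reduces, at each point x, to an eigenvalue test on the symmetric coefficient matrix a(x). A small numerical sketch follows (illustrative only; a symbolic check would be needed to verify the condition for all x in Ω, and floating-point eigenvalues may need a tolerance):

```python
import numpy as np

def classify_operator(a):
    """Classify a second-order operator at one point x by the eigenvalues
    of its symmetric coefficient matrix a(x)."""
    eig = np.linalg.eigvalsh(a)
    if np.all(eig > 0):
        return "elliptic"
    if np.all(eig >= 0):
        return "semi-elliptic"
    return "neither"

# The heat operator d/dt - Laplacian, viewed in the variables (t, x),
# has second-order coefficient matrix diag(0, 1): semi-elliptic, not elliptic.
print(classify_operator(np.diag([0.0, 1.0])))  # semi-elliptic
print(classify_operator(np.diag([2.0, 1.0])))  # elliptic
```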
Semi-elliptic operator
Mathematics
267
946,689
https://en.wikipedia.org/wiki/Czech%20hedgehog
The Czech hedgehog is a static anti-tank obstacle defense made of metal angle beams or I-beams (that is, lengths with an L- or I-shaped cross section). It is similar in shape to metal knucklebones, although on a much larger scale. The hedgehog is very effective in keeping light to medium tanks and vehicles from penetrating a line of defense; it maintains its function even when tipped over by a nearby explosion. Although Czech hedgehogs may provide some scant cover for attacking infantry, infantry forces are generally much less effective against fortified defensive positions than mechanized units. The Czechoslovak invention is credited to Major František Kašík. History Origin The Czech hedgehog's name refers to its origin in Czechoslovakia. The hedgehogs were originally used on the Czech–German border by the Czechoslovak border fortifications – a massive but never-completed fortification system that was turned over to Germany in 1938 after the occupation of the Sudetenland as a consequence of the Munich Agreement. The first hedgehogs were built of reinforced concrete, with a shape similar to later metal versions. However, the concrete hedgehogs proved ineffective during tests as they could be substantially damaged by machine-gun fire. Once they were fragmented, the debris provided more cover for enemy infantry than did their metal counterparts. Therefore, only the oldest sections of the Czechoslovak defensive line, built in 1935–1936, were equipped with concrete hedgehogs, and usually only in the second line. World War II The Czech hedgehog was widely used during World War II by the Soviet Union in anti-tank defense. They were produced from any sturdy piece of metal and sometimes wood, like railway sleepers. Czech hedgehogs were especially effective in urban combat, where a single hedgehog could block an entire street. Czech hedgehogs thus became a symbol of "defense at all costs" in the Soviet Union; hence, the memorial to Moscow defenders, built alongside the M-10 highway in 1966, is composed of three giant Czech hedgehogs. Czech hedgehogs were part of the German defenses of the Atlantic Wall. During the invasion of Normandy, the Allies cut up sizable numbers of intact and wrecked hedgehogs and welded them to the front of their M4 Sherman and M5 Stuart tanks. Known as Rhino tanks, these proved very useful for clearing the hedgerows that made up the bocages across Normandy. Cold War Postwar tests conducted by the Czechoslovak army proved the low efficiency of the metal hedgehogs against heavy armored vehicles such as the Soviet ISU-152 and T-54 or German Panther. As many as 40% of attempts at breakthrough were successful; therefore, the army developed new anti-tank obstacles for the border fortifications instituted during the Cold War. Nevertheless, the metal hedgehog was still used as a quick road-block against wheeled vehicles. Russo-Ukrainian War In early 2022, during the Russian invasion of Ukraine, hedgehogs were used in conjunction with concrete barriers and other techniques to thwart Russian forces. The Ukrainian Railways repurposed new tracks to make hundreds of hedgehogs at 33 of its own shops and some other sites. The railroad estimated they had enough material for some 1,800 hedgehogs. The Ukrainian military in Odesa, Kyiv and Lviv also made hedgehogs to be distributed to strategic locations. In Kyiv, hedgehogs from WWII were brought out of a museum and used at a roadblock.
Technical details The hedgehog is not generally anchored to prevent movement, as it can be effective even if rolled by a large explosion. Its effectiveness lies in its dimensions, combined with the fact that a vehicle attempting to drive over it will likely become stuck (and possibly damaged) through rolling on top of the lower bar and lifting its treads (or wheels) off the ground. Industrially manufactured Czech hedgehogs were made of three pieces of metal angle (L 140/140/13 mm, length , weight ; later versions: length , weight ) joined by gusset plates, rivets and bolts, or welded together into a characteristic spatial three-armed cross with each bar at right angles to the other two, this pattern forming the axes of an octahedron. Two arms of the hedgehog were connected in the factory, while the third arm was connected on-site by M20 bolts. The arms were equipped with square "feet" to prevent sinking into the ground, as well as notches for attaching barbed wire. See also Caltrop Cheval de frise, a portable frame covered with many long iron or wooden spikes used in medieval times to deter cavalry. Dragon's teeth (fortification) Makibishi Sudis, an Ancient Roman stake which may have been lashed together to form a similar fortification References External links Engineering barrages Anti-tank obstacles Area denial weapons
Czech hedgehog
Engineering
983
1,548,201
https://en.wikipedia.org/wiki/Polyalkylimide
Polyalkylimide is a polymer whose structure contains no free monomers. It is used in permanent dermal fillers to treat soft tissue deficits such as facial lipoatrophy, gluteal atrophy, acne, and scars. In plastic and reconstructive surgery it is used for building facial volume in the cheeks, chin, jaw, and lips. Reports of infections and migration of polyalkylimide in the face have led Canada to remove it from the market and the manufacturer of Bio-Alcamid to cease production. A class action lawsuit was filed against the company. See also Plastic Surgery References Polymers Plastic surgery filler
Polyalkylimide
Chemistry,Materials_science
138
74,940,609
https://en.wikipedia.org/wiki/Wolfe%20cycle
The Wolfe Cycle is a methanogenic pathway used by archaea; the archaeon takes H2 and CO2 and cycles them through various intermediates to create methane. The Wolfe Cycle is modified in different orders and classes of archaea according to the resource availability and requirements of each species, but it retains the same basic pathway. The pathway begins with the reduction of carbon dioxide to formylmethanofuran. The last step uses heterodisulfide reductase (Hdr) to reduce heterodisulfide into Coenzyme B and Coenzyme M using Fe4S4 clusters. Evidence suggests this last step goes hand-in-hand with the first step, and feeds back into it, creating a cycle. At various points in the Wolfe Cycle, intermediates that are formed are taken out of the cycle to be used in other metabolic processes. Since intermediates are being taken out at various points in the cycle, there is also a replenishing (anaplerotic) reaction that feeds into the Wolfe Cycle to regenerate the intermediates necessary for the cycle to continue. Overall, including the replenishing reaction, the Wolfe Cycle has a total of nine steps, while obligate CO2-reducing methanogens perform additional steps to reduce CO2 to CH3. Discovery In 1971, Robert Stoner Wolfe published a review containing information regarding methanogenesis in M. bryantii. At the time, the only thing known about this process was that Coenzyme M was involved. In addition, methanogenesis was thought to follow a linear pathway. It was not until 1986 that the reduction of CO2 to CH4 was proposed to occur in a cycle, when it was shown that Steps 8 and 1 are coupled. Steps The Wolfe Cycle follows multiple pathways, depending on the microbe. Below are generalized steps in the Wolfe Cycle. References Anaerobic digestion Archaea biology Metabolic pathways
Wolfe cycle
Chemistry,Engineering,Biology
401
44,689,981
https://en.wikipedia.org/wiki/Frenkel%20line
In thermodynamics, the Frenkel line is a proposed boundary on the phase diagram of a supercritical fluid, separating regions of qualitatively different behavior. Fluids on opposite sides of the line have been described as "liquidlike" or "gaslike", and exhibit different behaviors in terms of oscillation, excitation modes, and diffusion. Other proposed similar boundary lines include for example the Fisher-Widom line and the Widom line. Overview Two types of approaches to the behavior of liquids are present in the literature. The most common one is based on a van der Waals model. It treats the liquids as dense structureless gases. Although this approach allows explanation of many principal features of fluids, in particular the liquid-gas phase transition, it fails to explain other important issues such as, for example, the existence in liquids of transverse collective excitations such as phonons. Another approach to fluid properties was proposed by Yakov Frenkel. It is based on the assumption that at moderate temperatures, the particles of liquid behave in a manner similar to a crystal, i.e. the particles demonstrate oscillatory motions. However, while in crystals they oscillate around their nodes, in liquids, after several periods, the particles change their nodes. This approach is based on postulation of some similarity between crystals and liquids, providing insight into many important properties of the latter: transverse collective excitations, large heat capacity, and so on. From the discussion above, one can see that the microscopic behavior of particles of moderate and high temperature fluids is qualitatively different. If one heats a fluid from a temperature close to the melting point to some high temperature, a crossover from the solid-like to the gas-like regime occurs. The line of this crossover was named the Frenkel line, after Yakov Frenkel. Several methods to locate the Frenkel line are proposed in the literature. The exact criterion defining the Frenkel line is the one based on a comparison of characteristic times in fluids. One can define a 'jump time' via τ0 = a²/D, where a is the size of the particle and D is the diffusion coefficient. This is the time necessary for a particle to move a distance comparable to its own size. The second characteristic time corresponds to the shortest period of transverse oscillations of particles within the fluid, τ. When these two time scales are roughly equal, one cannot distinguish between the oscillations of the particles and their jumps to another position. Thus the criterion for the Frenkel line is given by τ ≈ τ0. There exist several approximate criteria to locate the Frenkel line on the pressure-temperature plane. One of these criteria is based on the velocity autocorrelation function (vacf): below the Frenkel line, the vacf demonstrates oscillatory behaviour, while above it, the vacf monotonically decays to zero. The second criterion is based on the fact that at moderate temperatures, liquids can sustain transverse excitations, which disappear upon heating. One further criterion is based on isochoric heat capacity measurements. The isochoric heat capacity per particle of a monatomic liquid near the melting line is close to 3kB (where kB is the Boltzmann constant). The contribution to the heat capacity due to the potential part of transverse excitations is 1kB. Therefore, at the Frenkel line, where transverse excitations vanish, the isochoric heat capacity per particle should be cV = 2kB, a direct prediction from the phonon theory of liquid thermodynamics.
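The time-scale criterion can be made concrete with a back-of-the-envelope comparison. All numbers in this sketch are assumed, order-of-magnitude values for a generic simple fluid, not data for any real substance.

```python
# Frenkel-line criterion tau ~ tau0, with jump time tau0 = a**2 / D.
a = 3e-10     # particle size, m (assumed)
tau = 1e-13   # shortest transverse oscillation period, s (assumed)

def jump_time(D):
    """Time for a particle to diffuse a distance comparable to its own size."""
    return a**2 / D

for D in (1e-9, 9e-7, 1e-5):  # diffusion coefficients, m^2/s
    tau0 = jump_time(D)
    regime = "liquid-like (oscillates, then jumps)" if tau0 > tau \
        else "gas-like (jumps dominate)"
    print(f"D = {D:.0e} m^2/s: tau0 = {tau0:.1e} s -> {regime}")
```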
Crossing the Frenkel line also leads to some structural crossovers in fluids. Currently, Frenkel lines of several idealised liquids, such as Lennard-Jones and soft-sphere fluids, as well as of realistic models such as liquid iron, hydrogen, water, and carbon dioxide, have been reported in the literature. See also Supercritical liquid–gas boundaries References External links Liquids and Supercritical Fluids - University of Salford Statistical mechanics
Frenkel line
Physics
818
1,113,514
https://en.wikipedia.org/wiki/MyoD
MyoD, also known as myoblast determination protein 1, is a protein in animals that plays a major role in regulating muscle differentiation. MyoD, which was discovered in the laboratory of Harold M. Weintraub, belongs to a family of proteins known as myogenic regulatory factors (MRFs). These bHLH (basic helix loop helix) transcription factors act sequentially in myogenic differentiation. Vertebrate MRF family members include MyoD1, Myf5, myogenin, and MRF4 (Myf6). In non-vertebrate animals, a single MyoD protein is typically found. MyoD is one of the earliest markers of myogenic commitment. MyoD is expressed at extremely low and essentially undetectable levels in quiescent satellite cells, but expression of MyoD is activated in response to exercise or muscle tissue damage. The effect of MyoD on satellite cells is dose-dependent; high MyoD expression represses cell renewal, promotes terminal differentiation and can induce apoptosis. Although MyoD marks myoblast commitment, muscle development is not dramatically ablated in mouse mutants lacking the MyoD gene. This is likely due to functional redundancy from Myf5 and/or Mrf4. Nevertheless, the combination of MyoD and Myf5 is vital to the success of myogenesis. History MyoD was cloned by a functional assay for muscle formation reported in Cell in 1987 by Davis, Weintraub, and Lassar. It was first described as a nuclear phosphoprotein in 1988 by Tapscott, Davis, Thayer, Cheng, Weintraub, and Lassar in Science. The researchers expressed the complementary DNA (cDNA) of the murine MyoD protein in different cell lines (fibroblasts and adipoblasts) and found MyoD converted them to myogenic cells. The following year, the same research team performed several tests to determine both the structure and function of the protein, confirming their initial proposal that the protein's active site consisted of the helix-loop-helix region (now referred to as the basic helix-loop-helix) for dimerization, and that a basic site upstream of this bHLH region facilitated DNA binding only once the protein dimerized. MyoD has since been an active area of research, as still relatively little is known concerning many aspects of its function. Function The function of MyoD in development is to commit mesoderm cells to a skeletal myoblast lineage, and then to regulate that continued state. MyoD may also regulate muscle repair. MyoD mRNA levels are also reported to be elevated in aging skeletal muscle. One of the main actions of MyoD is to remove cells from the cell cycle (halt proliferation for terminal cell cycle arrest in differentiated myocytes) by enhancing the transcription of p21 and myogenin. MyoD is inhibited by cyclin dependent kinases (CDKs). CDKs are in turn inhibited by p21. Thus MyoD enhances its own activity in the cell in a feedforward manner. Sustained MyoD expression is necessary for retaining the expression of muscle-related genes. MyoD is also an important effector for the fast-twitch muscle fiber (types IIA, IIX, and IIB) phenotype. Mechanisms MyoD is a transcription factor and can also direct chromatin remodelling through binding to a DNA motif known as the E-box. MyoD is known to have binding interactions with hundreds of muscular gene promoters and to permit myoblast proliferation. While not completely understood, MyoD is now thought to function as a major myogenesis controller in an on/off switch association mediated by KAP1 (KRAB [Krüppel-like associated box]-associated protein 1) phosphorylation.
KAP1 is localized at muscle-related genes in myoblasts along with both MyoD and Mef2 (a myocyte transcription enhancer factor). Here, it serves as a scaffold and recruits the coactivators p300 and LSD1, in addition to several corepressors, which include G9a and the histone deacetylase HDAC1. The consequence of this coactivator/corepressor recruitment is silenced promoter regions of muscle genes. When the kinase MSK1 phosphorylates KAP1, the corepressors previously bound to the scaffold are released, allowing MyoD and Mef2 to activate transcription. Once the "master controller" MyoD has become active, SETDB1 is required to maintain MyoD expression within the cell. Setdb1 appears to be necessary to maintain both MyoD expression and the expression of genes that are specific to muscle tissues, because reduction of Setdb1 expression results in a severe delay of myoblast differentiation and determination. In Setdb1-depleted myoblasts that are treated with exogenous MyoD, myoblastic differentiation is successfully restored. In one model of Setdb1 action on MyoD, Setdb1 represses an inhibitor of MyoD. This unidentified inhibitor likely acts competitively against MyoD during typical cellular proliferation. Evidence for this model is that reduction of Setdb1 results in direct inhibition of myoblast differentiation, which may be caused by the release of the unknown MyoD inhibitor. MyoD has also been shown to function cooperatively with the tumor suppressor gene Retinoblastoma (pRb) to cause cell cycle arrest in terminally differentiated myoblasts. This is done through regulation of the cyclin, Cyclin D1. Cell cycle arrest (in which myoblasts would indicate the conclusion of myogenesis) is dependent on the continuous and stable repression of the D1 cyclin. Both MyoD and pRb are necessary for the repression of cyclin D1, but rather than acting directly on cyclin D1, they act on Fra-1, which lies immediately upstream of cyclin D1. MyoD and pRb are both necessary for repressing Fra-1 (and thus cyclin D1), as neither MyoD nor pRb on its own is sufficient to induce cyclin D1 repression and thus cell cycle arrest. Two conserved MyoD binding sites were discovered in an intronic enhancer of Fra-1. Cooperative action of MyoD and pRb at the Fra-1 intronic enhancer suppresses the enhancer, therefore suppressing cyclin D1 and ultimately resulting in cell cycle arrest for terminally differentiated myoblasts. Wnt signalling can affect MyoD Wnt signalling from adjacent tissues has been shown to induce cells in somites that receive these Wnt signals to express Pax3 and Pax7 in addition to myogenic regulatory factors, including Myf5 and MyoD. Specifically, Wnt3a can directly induce MyoD expression via cis-element interactions with a distal enhancer and Wnt response element. Wnt1 from the dorsal neural tube and Wnt6/Wnt7a from the surface ectoderm have also been implicated in promoting myogenesis in the somite; the latter signals may act primarily through MyoD. In typical adult muscles in a resting condition (absence of physiological stress), the specific Wnt family proteins that are expressed are Wnt5a, Wnt5b, Wnt7a and Wnt4. When a muscle becomes injured (thus requiring regeneration), Wnt5a, Wnt5b, and Wnt7a are increased in expression. As the muscle completes repair, Wnt7b and Wnt3a are increased as well. This patterning of Wnt signalling expression in muscle cell repair induces the differentiation of the progenitor cells, which reduces the number of available satellite cells.
Wnt plays a crucial role in satellite cell regulation and in skeletal muscle aging and regeneration. Wnt1 and Wnt7a are known to activate the expression of Myf5 and MyoD. Wnt4, Wnt5, and Wnt6 function to increase the expression of both regulatory factors, but at a more subtle level. Additionally, MyoD increases Wnt3a when myoblasts undergo differentiation. Whether MyoD is activated by Wnt via direct cis-regulatory targeting or through indirect physiological pathways remains to be elucidated. Coactivators and repressors IFRD1 is a positive cofactor of MyoD, as it cooperates with MyoD at inducing the transcriptional activity of MEF2C (by displacing HDAC4 from MEF2C); moreover, IFRD1 also represses the transcriptional activity of NF-κB, which is known to inhibit MyoD mRNA accumulation. NFATc1 is a transcription factor that regulates fiber type composition, and the fast-to-slow twitch transition resulting from aerobic exercise requires the expression of NFATc1. MyoD is a key transcription factor in fast-twitch fibers that is inhibited by NFATc1 in oxidative fiber types. NFATc1 works to inhibit MyoD via a physical interaction with the MyoD N-terminal activation domain, resulting in inhibited recruitment of the necessary transcriptional coactivator p300. NFATc1 physically disrupts the interaction between MyoD and p300. This establishes the molecular mechanism by which fiber types transition in vivo through exercise, with opposing roles for NFATc1 and MyoD. NFATc1 controls this balance by physical inhibition of MyoD in slow-twitch muscle fiber types. The histone acetyltransferase p300 functions with MyoD in an interaction that is essential for the MyoD-mediated generation of myotubes from fibroblasts. Recruitment of p300 is the rate-limiting process in the conversion of fibroblasts to myotubes. In addition to p300, MyoD is also known to recruit Set7, H3K4me1, H3K27ac, and RNAP II to the enhancers it binds, which allows for the condition-specific activation of muscle genes established by MyoD recruitment. Endogenous p300, though, is necessary for MyoD functioning, acting as an essential coactivator. MyoD binds to the enhancer region in conjunction with a placeholding "putative pioneer factor", which helps to establish and maintain both of them in a specific and inactive conformation. Upon the removal or inactivation of the placeholder protein bound to the enhancer, the recruitment of the additional group of transcription factors that help to positively regulate enhancer activity is permitted, and this allows the MyoD-transcription factor-enhancer complex to assume a transcriptionally active state. Interactions MyoD has been shown to interact with: C-jun, CREB-binding protein, CSRP3, Cyclin-dependent kinase 4, Cyclin-dependent kinase inhibitor 1C, EP300, HDAC1, ID1, ID2, MDFI, MOS, Retinoblastoma protein, Retinoid X receptor alpha, STAT3, and TCF3. References External links Transcription factors Human proteins
MyoD
Chemistry,Biology
2,330
294,993
https://en.wikipedia.org/wiki/Petr%20Beckmann
Petr Beckmann (November 13, 1924 – August 3, 1993) was a professor of electrical engineering and advocate of libertarianism and nuclear power who disputed Albert Einstein's theory of relativity and other accepted theories in modern physics. Biography In 1939, when Beckmann was 14, his family fled their home in Prague, Czechoslovakia to escape the Nazis. From 1942 to 1945, he served in a Czech squadron of the Royal Air Force. He worked as a radar mechanic on the newly invented radar systems that helped Britain win the Battle of the Atlantic. He received a B.Sc. in 1949, a Ph.D. in 1955, and a D.Sc. in 1962, all from Prague's Czech Academy of Sciences in electrical engineering. He defected to the United States in 1963 and became a professor (later, emeritus) of electrical engineering at the University of Colorado. In the United States, he became acquainted with novelist Ayn Rand, a contributing editor to a publication devoted to her ideas, The Intellectual Activist, and a speaker at The Thomas Jefferson School, an intellectual conference of similar purpose. Beckmann was a prolific author; he wrote several electrical engineering textbooks and non-technical works, some 60 scientific papers and eight technical books in all. In 1967 he founded Golem Press, which published most of his books, more than nine titles. The Golem Press books included The Health Hazards of Not Going Nuclear (1976, with an introduction by Edward Teller), which argued in favor of nuclear power during the height of the anti-nuclear movement by making "apples-to-apples" comparisons of the risks of nuclear power with the risks, in the same terms (e.g., deaths per terawatt hour), of the alternative power sources. Beckmann also wrote A History of π, documenting the history of the calculation of π; that book also expresses opposition to Roman culture, Catholicism (and other religions), Nazism, and Communism. Another Golem Press title was Einstein Plus Two. He published his own monthly newsletter, Access to Energy, which since September 1993 has been written by biochemist Arthur B. Robinson. In 1981, he took early retirement with emeritus status, in order to devote himself fully to what he saw as the defense of science, technology and free enterprise, through his newsletter, Access to Energy. Beckmann spoke at the 1990 San Francisco Conference of the International Society for Individual Liberty (ISIL), where he received a standing ovation for his speech in which he attacked "sham environmentalists". Beckmann was also a frequent participant in Usenet debates. In them, he claimed to have debunked Albert Einstein's theory of special relativity in his book Einstein Plus Two, as well as in the journal Galilean Electrodynamics, which he also founded. Books (with coauthor A. Spizzichino) See also Criticism of the theory of relativity List of Soviet and Eastern Bloc defectors References External links Einstein Plus Two Rethinking Relativity, by Tom Bethell (profile of Beckmann and his theories) 20th-century American physicists Engineering educators Engineering writers Mathematics writers American technology writers University of Colorado Boulder faculty American libertarians 1924 births 1993 deaths Czechoslovak defectors Czechoslovak emigrants to the United States 20th-century American non-fiction writers Relativity critics
Petr Beckmann
Physics
689
1,794,051
https://en.wikipedia.org/wiki/Armstrong%27s%20mixture
Armstrong's mixture is a highly shock- and friction-sensitive explosive. Formulations vary, but one consists of 67% potassium chlorate, 27% red phosphorus, 3% sulfur, and 3% calcium carbonate. It is named for Sir William Armstrong, who invented it sometime prior to 1872 for use in explosive shells. Toys Armstrong's mixture can be used as ammunition for toy cap guns. The mixture is suspended in water with some gum arabic or similar binder and deposited in drops, each containing a few milligrams of explosive, to dry between layers of paper backing. The dots explode with some smoke when struck. Armstrong's mixture can be used in impact firecrackers known as cap torpedoes, which explode on impact when the ball (made of clay or papier-mâché) is thrown or (with some types) launched by slingshot. The firecrackers may include gravel with the explosive mixture to ensure enough friction is generated to produce a detonation. Military use With the addition of a grit such as boron carbide (in a modified formulation given as 70% KClO3, 19% red phosphorus, 3% sulfur, 3% chalk, and 5% boron carbide by weight), Armstrong's mixture has been considered for use in firearm primers. This use as a primer for artillery propellants may have been Armstrong's original purpose. It also was seen in various patents for matches, novelty fireworks, and signalling devices. Armstrong's mixture has been used in thrown impact-detonated improvised explosive devices, made simply by loading it into hollow balls. Safety Armstrong's mixture is both very sensitive and very explosive, a dangerous combination that limits its practical use to toy caps. Such toy caps and fireworks typically contain no more than 10 milligrams each, but gram quantities can cause maiming hand injuries. The mixture is likely to explode if mixed dry and is dangerous even when wet. If the pH is not made neutral, phosphoric acids generated by oxidized phosphorus on contact with the water could cause it to deteriorate while slowly drying. Generally, the slurry or paste is loaded into the final casing while still wet; when globe torpedoes were still in commercial production, the filled casings were heat-dried in rotating drums and then coated with water glass to securely protect them from leakage. Simple mixtures of red phosphorus and potassium chlorate can detonate at a wide range of proportions; a 20% phosphorus mixture had 27% of the equivalent power of a like mass of TNT in a laboratory experiment, and the detonation of the 10% and 20% phosphorus mixtures, even in small unconfined samples of 1 gram, was described by the authors of one study as "impressive" and "scary". Pyrotechnician John Donner wrote in 1996 that it "is the most hazardous mixture commonly used in small fireworks." Tenney L. Davis called it "a combination which is the most sensitive, dangerous, and unpredictable of the many with which the pyrotechnist has to deal. Their preparation ought under no conditions to be attempted by an amateur." References Explosives Pyrotechnic compositions
Armstrong's mixture
Chemistry
726
46,566,148
https://en.wikipedia.org/wiki/CXorf67
Uncharacterized protein CXorf67 is a protein that in humans is encoded by the CXorf67 gene. The accession number for the human gene is NM_203407. Aliases include MGC47837 and LOC340602. The gene is located on the positive strand of the X chromosome at Xp11.22. The mRNA is 1939 base pairs long and contains 1 exon and no introns. Expression Expression of CXorf67 in humans is generally low in all tissues. Higher RNA expression has been reported in the testis and placenta, and relatively higher nuclear protein expression has been observed in the placenta, testis and ovarian follicles. Protein The translated human CXorf67 protein is 503 amino acids in length. The protein has a molecular weight of 51.9 kDa and an isoelectric point of 10.432. Interactions Protein interaction of CXorf67 with UBC (polyubiquitin-C) in humans was identified using a two-hybrid screen. Currently no other protein interactions have been identified in humans. Function The function of CXorf67 is currently unknown; however, the fusion of CXorf67 with the MBTD1 gene has been linked to low-grade endometrial stromal sarcoma in humans. Sequence variants of the chromosomal region Xp11.22 are also predicted to confer susceptibility to prostate cancer in humans. References External links Human proteins Uncharacterized proteins Genes on human chromosome X
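Properties such as the 51.9 kDa molecular weight and the 10.432 isoelectric point quoted above are typically computed from the amino acid sequence. A minimal sketch with Biopython's ProtParam module follows; the sequence below is a made-up placeholder, not the real CXorf67 sequence.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder fragment for illustration only; substitute the real
# 503-residue CXorf67 sequence to reproduce the figures in the text.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

analysis = ProteinAnalysis(seq)
print(f"length: {len(seq)} aa")
print(f"molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.3f}")
```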
CXorf67
Biology
326
14,429,247
https://en.wikipedia.org/wiki/RRH
Peropsin, a visual pigment-like receptor, is a protein that in humans is encoded by the RRH gene. Like other animal opsins, it belongs to the G protein-coupled receptors. Even though the first peropsins were discovered in mice and humans as early as 1997, not much is known about them. Photochemistry Like most opsins, peropsins have in their seventh transmembrane domain a lysine corresponding to amino acid position 296 in cattle rhodopsin, which is important for retinal binding and light sensing. In amphioxus, a cephalochordate, a peropsin binds all-trans-retinal in the dark state, instead of 11-cis-retinal as in cattle rhodopsin. Therefore, peropsins have been suggested to be photoisomerases. Tissue localization In mice, a peropsin is localized to the apical microvilli of the retinal pigment epithelium (RPE). There, it regulates the storage or movement of vitamin A from the retina to the RPE. A peropsin is also expressed in keratinocytes of the human skin. In keratinocyte cell culture, it reacts to UV light if retinal is supplied. In chicken, a peropsin is expressed with an RGR-opsin in the pineal gland and the retina. Gene localization and structure The human peropsin gene lies on chromosome 4, band 4q25, and has six introns, like the RGR-opsins. However, only two of these introns are inserted at the same place, which still indicates that peropsins and RGR-opsins are more closely related to each other than to the ciliary and rhabdomeric opsins. This shared gene structure is also reflected in opsin phylogenies, where peropsins and RGR-opsins are in the same group: the chromopsins. Phylogeny The peropsins are restricted to the craniates and the cephalochordates. The craniates are the taxon that contains mammals, including humans. The peropsins are one of the seven subgroups of the chromopsins. The other groups are the RGR-opsins, the retinochromes, the nemopsins, the astropsins, the varropsins, and the gluopsins. The chromopsins are one of three subgroups of the tetraopsins (also known as RGR/Go or Group 4 opsins). The other groups are the neuropsins and the Go-opsins. The tetraopsins are one of the five major groups of the animal opsins (also known as type 2 opsins). The other groups are the ciliary opsins (c-opsins, cilopsins), the rhabdomeric opsins (r-opsins, rhabopsins), the xenopsins, and the nessopsins. Four of these subclades occur in Bilateria (all but the nessopsins). However, the bilaterian clades constitute a paraphyletic taxon without the opsins from the cnidarians. In the phylogeny above, each clade contains sequences from opsins and other G protein-coupled receptors. The number of sequences and two pie charts are shown next to the clade. The first pie chart shows the percentage of a certain amino acid at the position in the sequences corresponding to position 296 in cattle rhodopsin. The amino acids are color-coded. The colors are red for lysine (K), purple for glutamic acid (E), orange for arginine (R), dark and mid-gray for other amino acids, and light gray for sequences that have no data at that position. The second pie chart gives the taxon composition for each clade: green stands for craniates, dark green for cephalochordates, mid green for echinoderms, brown for nematodes, pale pink for annelids, dark blue for arthropods, light blue for mollusks, and purple for cnidarians. The branches to the clades have pie charts, which give support values for the branches. The values are, from right to left, SH-aLRT/aBayes/UFBoot.
The branches are considered supported when SH-aLRT ≥ 80%, aBayes ≥ 0.95, and UFBoot ≥ 95%. If a support value is above its threshold, the pie chart is black; otherwise it is gray. Clinical significance Since RGR-opsin, which like peropsin is expressed in the retinal pigment epithelium, may be associated with retinitis pigmentosa, peropsin was screened for a link with retinitis pigmentosa. However, no link could be established. References G protein-coupled receptors
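The joint support rule quoted above is easy to state as a predicate: a branch counts as supported only when all three measures clear their thresholds. A small sketch (the triples are made-up examples):

```python
def branch_supported(sh_alrt: float, abayes: float, ufboot: float) -> bool:
    """Support rule used in the text: SH-aLRT >= 80%, aBayes >= 0.95,
    and UFBoot >= 95% must all hold simultaneously."""
    return sh_alrt >= 80.0 and abayes >= 0.95 and ufboot >= 95.0

for support in [(95.2, 1.00, 99.0), (78.0, 0.97, 96.0), (88.1, 0.94, 91.0)]:
    verdict = "supported" if branch_supported(*support) else "not supported"
    print(support, "->", verdict)
```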
RRH
Chemistry
1,039
31,434,142
https://en.wikipedia.org/wiki/Numerical%20semigroup
In mathematics, a numerical semigroup is a special kind of a semigroup. Its underlying set is the set of all nonnegative integers except a finite number of integers and the binary operation is the operation of addition of integers. Also, the integer 0 must be an element of the semigroup. For example, while the set {0, 2, 3, 4, 5, 6, ...} is a numerical semigroup, the set {0, 1, 3, 5, 6, ...} is not because 1 is in the set and 1 + 1 = 2 is not in the set. Numerical semigroups are commutative monoids and are also known as numerical monoids. The definition of numerical semigroup is intimately related to the problem of determining nonnegative integers that can be expressed in the form x1n1 + x2 n2 + ... + xr nr for a given set {n1, n2, ..., nr} of positive integers and for arbitrary nonnegative integers x1, x2, ..., xr. This problem had been considered by several mathematicians like Frobenius (1849–1917) and Sylvester (1814–1897) at the end of the 19th century. During the second half of the twentieth century, interest in the study of numerical semigroups resurfaced because of their applications in algebraic geometry. Definition and examples Definition Let N be the set of nonnegative integers. A subset S of N is called a numerical semigroup if the following conditions are satisfied. 0 is an element of S N − S, the complement of S in N, is finite. If x and y are in S then x + y is also in S. There is a simple method to construct numerical semigroups. Let A = {n1, n2, ..., nr} be a nonempty set of positive integers. The set of all integers of the form x1 n1 + x2 n2 + ... + xr nr is the subset of N generated by A and is denoted by 〈 A 〉. The following theorem fully characterizes numerical semigroups. Theorem Let S be the subsemigroup of N generated by A. Then S is a numerical semigroup if and only if gcd (A) = 1. Moreover, every numerical semigroup arises in this way. Examples The following subsets of N are numerical semigroups. 〈 1 〉 = {0, 1, 2, 3, ...} 〈 1, 2 〉 = {0, 1, 2, 3, ...} 〈 2, 3 〉 = {0, 2, 3, 4, 5, 6, ...} Let a be a positive integer. 〈 a, a + 1, a + 2, ... , 2a – 1 〉 = {0, a, a + 1, a + 2, a + 3, ...}. Let b be an odd integer greater than 1. Then 〈 2, b 〉 = {0, 2, 4, . . . , b − 3 , b − 1, b, b + 1, b + 2, b + 3 , ...}. Well-tempered harmonic semigroup H={0,12,19,24,28,31,34,36,38,40,42,43,45,46,47,48,...} Embedding dimension, multiplicity The set A is a set of generators of the numerical semigroup 〈 A 〉. A set of generators of a numerical semigroup is a minimal system of generators if none of its proper subsets generates the numerical semigroup. It is known that every numerical semigroup S has a unique minimal system of generators and also that this minimal system of generators is finite. The cardinality of the minimal set of generators is called the embedding dimension of the numerical semigroup S and is denoted by e(S). The smallest member in the minimal system of generators is called the multiplicity of the numerical semigroup S and is denoted by m(S). Frobenius number and genus There are several notable numbers associated with a numerical semigroup S. The set N − S is called the set of gaps in S and is denoted by G(S). The number of elements in the set of gaps G(S) is called the genus of S (or, the degree of singularity of S) and is denoted by g(S). The greatest element in G(S) is called the Frobenius number of S and is denoted by F(S). The smallest element of S such that all larger integers are likewise elements of S is called the conductor; it is F(S) + 1. 
Examples Let S = 〈 5, 7, 9 〉. Then we have: The set of elements in S : S = {0, 5, 7, 9, 10, 12, 14, ...}. The minimal set of generators of S : {5, 7, 9}. The embedding dimension of S : e(S) = 3. The multiplicity of S : m(S) = 5. The set of gaps in S : G(S) = {1, 2, 3, 4, 6, 8, 11, 13}. The Frobenius number of S is F(S) = 13, and its conductor is 14. The genus of S : g(S) = 8. Numerical semigroups with small Frobenius number or genus Computation of Frobenius number Numerical semigroups with embedding dimension two The following general results were known to Sylvester. Let a and b be positive integers such that gcd (a, b) = 1. Then F(〈 a, b 〉) = (a − 1) (b − 1) − 1 = ab − (a + b). g(〈 a, b 〉) = (a − 1)(b − 1) / 2. Numerical semigroups with embedding dimension three There is no known general formula to compute the Frobenius number of numerical semigroups having embedding dimension three or more. No polynomial formula can be found to compute the Frobenius number or genus of a numerical semigroup with embedding dimension three. Every positive integer is the Frobenius number of some numerical semigroup with embedding dimension three. Rödseth's algorithm The following algorithm, known as Rödseth's algorithm, can be used to compute the Frobenius number of a numerical semigroup S generated by {a1, a2, a3} where a1 < a2 < a3 and gcd ( a1, a2, a3) = 1. Its worst-case complexity is not as good as Greenberg's algorithm but it is much simpler to describe. Let s0 be the unique integer such that a2s0 ≡ a3 mod a1, 0 ≤ s0 < a1. The continued fraction algorithm is applied to the ratio a1/s0: a1 = q1s0 − s1, 0 ≤ s1 < s0, s0 = q2s1 − s2, 0 ≤ s2 < s1, s1 = q3s2 − s3, 0 ≤ s3 < s2, ... sm−1 = qm+1sm, sm+1 = 0, where qi ≥ 2, si ≥ 0 for all i. Let p−1 = 0, p0 = 1, pi+1 = qi+1pi − pi−1 and ri = (sia2 − pia3)/a1. Let v be the unique integer such that rv+1 ≤ 0 < rv, or equivalently, the unique integer such that sv+1/pv+1 ≤ a3/a2 < sv/pv. Then, F(S) = −a1 + a2(sv − 1) + a3(pv+1 − 1) − min{a2sv+1, a3pv}. Special classes of numerical semigroups An irreducible numerical semigroup is a numerical semigroup such that it cannot be written as the intersection of two numerical semigroups properly containing it. A numerical semigroup S is irreducible if and only if S is maximal, with respect to set inclusion, in the collection of all numerical semigroups with Frobenius number F(S). A numerical semigroup S is symmetric if it is irreducible and its Frobenius number F(S) is odd. We say that S is pseudo-symmetric provided that S is irreducible and F(S) is even. Such numerical semigroups have simple characterizations in terms of Frobenius number and genus: A numerical semigroup S is symmetric if and only if g(S) = (F(S) + 1)/2. A numerical semigroup S is pseudo-symmetric if and only if g(S) = (F(S) + 2)/2. See also Frobenius number Special classes of semigroups Semigroup Sylver coinage References Semigroup theory Algebraic structures Number theory
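The invariants in the worked example can be recomputed with a short sieve. This is a naive sketch, not Rödseth's algorithm: it marks every representable integer up to a window that is assumed to be large enough to pass the conductor.

```python
from functools import reduce
from math import gcd

def semigroup_data(gens, window=2000):
    """Gap set G(S), Frobenius number F(S), and genus g(S) of the numerical
    semigroup generated by `gens`; requires gcd of the generators to be 1,
    and `window` must exceed the conductor for the answer to be exact."""
    assert reduce(gcd, gens) == 1, "generators must have overall gcd 1"
    reachable = [False] * window
    reachable[0] = True
    for n in range(window):
        if reachable[n]:
            for g in gens:
                if n + g < window:
                    reachable[n + g] = True
    gaps = [n for n in range(window) if not reachable[n]]
    return gaps, max(gaps, default=-1), len(gaps)

gaps, frobenius, genus = semigroup_data([5, 7, 9])
print(gaps)              # [1, 2, 3, 4, 6, 8, 11, 13]
print(frobenius, genus)  # 13 8
```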
Numerical semigroup
Mathematics
1,895
2,728,609
https://en.wikipedia.org/wiki/Gamma%20Aquilae
Gamma Aquilae, Latinized from γ Aquilae, and formally known as Tarazed, is a star in the constellation of Aquila. It has an apparent visual magnitude of 2.712, making it readily visible to the naked eye at night. Parallax measurements place it at a distance of from the Sun. Properties Gamma Aquilae is a relatively young star with an age of about 270 million years. Nevertheless, it has reached a stage of its evolution where it has consumed the hydrogen at its core and expanded into what is termed a bright giant star, with a stellar classification of K3 II. The star is now burning helium into carbon in its core. After it has finished generating energy through nuclear fusion, Gamma Aquilae will become a white dwarf. The star has an estimated 3.5 times the mass of the Sun and has expanded to 92 times the Sun's radius. It is radiating over times the luminosity of the Sun. An effective temperature of in its outer envelope gives it the orange hue typical of K-type stars. A 1991 catalogue of photometry reported that Gamma Aquilae showed some variation in its brightness, but this has not been confirmed. Emission nebula Gamma Aquilae is located just 7' from the center of an emission nebula, which was first reported in 2023 by Stefan Ziegenbalg. The star is unlikely to be the ionization source of this nebula. Nomenclature γ Aquilae (Latinised to Gamma Aquilae) is the star's Bayer designation. It bore the traditional name Tarazed, which may derive from the Persian شاهين ترازو šāhin tarāzu, "the beam of the scale", referring to an asterism of the Scale formed by Alpha, Beta and Gamma Aquilae. (Persian šāhīn means "royal falcon", "beam", and "pointer", and gave its name (as "falcon") to Beta Aquilae.) In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Tarazed for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names. In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was designated Menkib al Nesr (منكب ألنسر - mankib al-nasr), which was translated into Latin as Humerus Vulturis, meaning 'the eagle's shoulder'. In Chinese astronomy, the asterism whose name means River Drum consists of Gamma Aquilae, Beta Aquilae and Altair; the Chinese name for Gamma Aquilae itself derives from this asterism. In the Chinese folk tale The Cowherd and the Weaver Girl, Gamma Aquilae and Beta Aquilae are the children of Niulang (牛郎, The Cowherd, Altair) and Zhinü (織女, The Princess, Vega). The Koori people of Victoria knew Beta and Gamma Aquilae as the black swan wives of Bunjil (Altair), the wedge-tailed eagle. References External links Tarazed Tarazed HR 7525 Image Gamma Aquilae K-type bright giants Suspected variables Aquila (constellation) Aquilae, Gamma Durchmusterung objects Aquilae, 50 186791 097278 7525 Tarazed
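The luminosity figure for a star like this follows from the Stefan-Boltzmann law, L = 4πR²σT⁴. A rough sketch is below; the 92 R_sun radius is taken from the text, while the effective temperature value is elided in the source, so T_EFF is a placeholder to be replaced with the published figure.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8         # solar radius, m
L_SUN = 3.828e26        # solar luminosity, W

R = 92 * R_SUN
T_EFF = 4100.0          # K; placeholder, the source elides the value

L = 4 * math.pi * R**2 * SIGMA * T_EFF**4
print(f"L ~ {L / L_SUN:.0f} L_sun")  # order of 2000 L_sun for these inputs
```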
Gamma Aquilae
Astronomy
733
50,267,935
https://en.wikipedia.org/wiki/PSNCBAM-1
PSNCBAM-1 is a negative allosteric modulator of the cannabinoid CB1 receptor. See also GAT100 Org 27569 ZCZ-011 References Cannabinoids CB1 receptor negative allosteric modulators 2-Aminopyridines Anilines Ureas 4-Chlorophenyl compounds
PSNCBAM-1
Chemistry
75
28,912,141
https://en.wikipedia.org/wiki/Component%20detection%20algorithm
The component detection algorithm (CODA) is a name for a type of LC-MS and chemometrics software algorithm focused on detecting peaks in noisy chromatograms (TIC), often obtained using the electrospray ionization technique. Implementations of the algorithm differ from one piece of mass spectrometry software to another. Some implementations need clean chromatograms in order to subtract background. References Computational chemistry
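The published CODA method scores each extracted-ion chromatogram with a "mass chromatogram quality" (MCQ) index, essentially an inner product between the length-scaled trace and a smoothed, mean-subtracted version of itself; smooth, peak-bearing traces score near 1, while spiky noise scores lower. The following is a simplified sketch in that spirit, not the exact published formulation.

```python
import numpy as np

def coda_mcq(xic, window=5):
    """Simplified MCQ-style score: dot product of the unit-length
    chromatogram with its smoothed, standardized counterpart."""
    x = np.asarray(xic, dtype=float)
    smooth = np.convolve(x, np.ones(window) / window, mode="same")
    scaled = x / np.linalg.norm(x)
    std = smooth - smooth.mean()
    std /= np.linalg.norm(std)
    return float(np.dot(scaled, std))

t = np.linspace(0.0, 1.0, 200)
peak = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)               # clean peak
noise = np.abs(np.random.default_rng(0).normal(size=200))   # noisy trace
print(coda_mcq(peak), coda_mcq(noise))  # high score vs. lower score
```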
Component detection algorithm
Chemistry
91
3,017,426
https://en.wikipedia.org/wiki/Leray%27s%20theorem
In algebraic topology and algebraic geometry, Leray's theorem (so named after Jean Leray) relates abstract sheaf cohomology with Čech cohomology. Let $\mathcal{F}$ be a sheaf on a topological space $X$ and $\mathcal{U}$ an open cover of $X$. If $\mathcal{F}$ is acyclic on every finite intersection of elements of $\mathcal{U}$ (meaning that $H^k(U_{i_1} \cap \cdots \cap U_{i_n}, \mathcal{F}) = 0$ for all $k > 0$ and all $U_{i_1}, \ldots, U_{i_n} \in \mathcal{U}$), then $\check{H}^q(\mathcal{U}, \mathcal{F}) = H^q(X, \mathcal{F})$, where $\check{H}^q(\mathcal{U}, \mathcal{F})$ is the $q$-th Čech cohomology group of $\mathcal{F}$ with respect to the open cover $\mathcal{U}$. References Bonavero, Laurent. Cohomology of Line Bundles on Toric Varieties, Vanishing Theorems. Lectures 16-17 from "Summer School 2000: Geometry of Toric Varieties." Sheaf theory Theorems in algebraic geometry Theorems in algebraic topology
Leray's theorem
Mathematics
142
35,646,861
https://en.wikipedia.org/wiki/1980s%20in%20science%20and%20technology
This article is a summary of the 1980s in science and technology. Astronomy The Rings of Neptune were first discovered in 1984. The Voyager 2 spacecraft provided images of them in 1989. 4769 Castalia was discovered in 1989. It became the first asteroid to be viewed through radar imaging. The first exoplanet was discovered in 1988, though it was not confirmed until much later. Genetic engineering and biology 1983 Kary Mullis revolutionized molecular biology with his invention of the polymerase chain reaction, which required only a test tube, some reagents, a DNA template, and a source of heat. 1986 April – The first child produced from a gestational surrogacy is born. This is the first time in history that a child has been born to somebody who is not their biological mother. 1989 May 22 – The first gene transfer experiment in humans takes place, leading to full-fledged gene therapy trials by September 1990. The gene responsible for the cystic fibrosis transmembrane conductance regulator was discovered. Mutations of the gene are considered causes of cystic fibrosis. The kākāpō, a bird species of New Zealand, was termed a threatened species. The Department of Conservation started an endangered species recovery plan for the kākāpō in 1989. The K-T extinction event, when dinosaurs became extinct, was shown to be linked to excess iridium in the boundary layer, which implied that the cause was a massive meteor strike. Computer science and networking 1980 Development of ENQUIRE and MS-DOS begins. 1981 MS-DOS debuts. 1982 The first compact discs are sold, which would eventually replace the audiocassette in the 1990s. 1983 Computer "virus" terminology introduced by Fred Cohen. Lotus 1-2-3 spreadsheet software launched. 1984 The Apple Macintosh is released. FidoNet begins. 1985 The first domain names are registered on the Internet. Windows 1.0 debuts. 1986 The TCP/IP-based NSFNET, the forerunner to the Internet, begins construction. 1987 The first popular hypermedia software, HyperCard, is released by Apple. 1988 While working on networking computers at CERN, Tim Berners-Lee begins to discuss the possibility of a hyperlinked information system with his colleagues, an idea he was allowed to implement in 1990 when he created the World Wide Web. Adobe Photoshop graphics editing software debuts, revolutionizing photography and the fashion industry. November 2 – Robert Tappan Morris created the Morris worm, considered the first notable computer worm to be distributed via the Internet. The worm was launched from the Massachusetts Institute of Technology and caused considerable damage. In 1989, its creator became the first person indicted under the Computer Fraud and Abuse Act. December – Europe obtains its first permanent connection to the Internet, by satellite between Princeton University and Stockholm, Sweden. 1989 Lotus Notes software launched. June/July – MCI Mail and CompuServe gateway their email systems to the Internet, instantly allowing hundreds of thousands of their users the ability to email people on the Internet for the first time. The first commercial Internet service providers emerge, with The World STD being the first dial-up Internet service in November. See also History of science and technology List of science and technology articles by continent List of years in science References Science and technology by decade 1980s-related lists 1980s decade overviews
1980s in science and technology
Technology
679
16,855,656
https://en.wikipedia.org/wiki/CACNG1
Voltage-dependent calcium channel gamma-1 subunit is a protein that in humans is encoded by the CACNG1 gene. L-type calcium channels are composed of five subunits. The protein encoded by this gene represents one of these subunits, gamma, and is one of several gamma subunit proteins. This particular gamma subunit is part of skeletal muscle 1,4-dihydropyridine-sensitive calcium channels and is an integral membrane protein that plays a role in excitation-contraction coupling. This gene is a member of the neuronal calcium channel gamma subunit gene subfamily of the PMP-22/EMP/MP20 family and is located in a cluster with two similar gamma subunit-encoding genes. See also Voltage-dependent calcium channel References Further reading External links Ion channels
CACNG1
Chemistry
162
9,833,509
https://en.wikipedia.org/wiki/CX3C%20motif%20chemokine%20receptor%201
CX3C motif chemokine receptor 1 (CX3CR1), also known as the fractalkine receptor or G-protein coupled receptor 13 (GPR13), is a transmembrane protein of the G protein-coupled receptor 1 (GPCR1) family and the only known member of the CX3C chemokine receptor subfamily. As the name suggests, this receptor binds the inflammatory chemokine CX3CL1 (also called neurotactin in mice or fractalkine in humans). This endogenous ligand binds solely to the CX3CR1 receptor. Interaction of CX3CR1 with CX3CL1 can mediate migration, adhesion and retention of leukocytes, because fractalkine exists both as a membrane-anchored protein (mCX3CL1) and as a cleaved soluble molecule (sCX3CL1) produced by proteolysis by metalloproteinases (MMPs). The shed form carries out the typical function of conventional chemokines, chemotaxis, while the membrane-bound protein behaves as an adhesion molecule facilitating diapedesis. Both partners of the CX3CL1-CX3CR1 axis are present on numerous hematopoietic and nonhematopoietic cell types throughout the body. Moreover, their distinct expression depends on the specific tissue and organ, which provides a broad sphere of biological activity. Hence, considering their varied functional activity, they are also linked with multiple neurodegenerative and inflammatory disorders as well as with tumorigenesis. Genetics The coding gene for CX3CR1 is now officially named identically to its protein, the CX3CR1 gene, but may still be referred to by older names such as V28, CCRL1, GPR13, CMKDR1, GPRV28 and CMKBRL1. In humans the gene is located on the short arm of chromosome 3, at position 3p22.2. It is composed of four exons (only one contains the coding region) and three intronic elements. Expression of the genomic sequence is regulated via three promoters. Two missense mutations in the CX3CR1 gene, single nucleotide polymorphism (SNP) variants of the receptor, are responsible for functional change of the protein. The names of these variants are derived from the given substitution and its position: valine to isoleucine (V249I) and threonine to methionine (T280M). Polymorphism of CX3CR1 has been linked to diseases of the cardiovascular system (e.g. atherosclerosis), the nervous system (e.g. Alzheimer's disease, sclerosis) and infections (e.g. systemic candidiasis). Orthologs of the CX3CR1 gene are found among animals, especially in mammals with high functional similarity, namely chimpanzee, dog, cat, mouse and rat. Orthologs are located on chromosome 9qF4 in the mouse genome and on rat chromosome 8 at position 8q32. Expression CX3CR1 is expressed constitutively or in inflammatory response in various cells of the hematopoietic lineage: T lymphocytes, natural killer (NK) cells, dendritic cells, B lymphocytes, mast cells, monocytes, macrophages, neutrophils, microglia, osteoclasts and thrombocytes. Furthermore, this receptor can also be found in nonhematopoietic tissues such as endothelial cells, epithelial cells, myocytes and astrocytes. Considering the abundance of CX3CR1 in the body, it has also been found to be expressed by some types of malignant cells. Function The CX3CR1 receptor is part of the G-protein chemokine receptor family with metabotropic function. Its intracellular signalling cascades modulate cell activity towards a more active state, as in survival, migration and proliferation.
With regard to immune cells during inflammation, the main function of the CX3CL1-CX3CR1 axis in the bloodstream is the recruitment of immune cells by migration through chemotaxis and diapedesis. As part of the inflammatory immune response against pathogens, this role is considered protective. However, as with most immune cells and proteins, CX3CR1 signalling is also associated with the pathophysiology of some inflammatory and autoimmune diseases. Expression of this receptor appears to be associated with lymphocytes. CX3CR1 is also expressed by monocytes and plays a major role in their survival. Communication in blood vessels through the CX3CL1-CX3CR1 axis between endothelial cells and monocytes contributes to formation of extracellular matrix and angiogenesis. It has been shown that CX3CR1 can influence monocytes already in the bone marrow by means of retention and release. Moreover, in the bone marrow, CX3CR1 influences bone remodeling through its role in the differentiation of osteoclasts and osteoblasts. In the nervous system, the CX3CL1/CX3CR1 axis mediates communication between microglia, neuroglia and neurons to regulate microglial activity; hence this axis can play either a neurodegenerative or a neuroprotective role depending on the physiological state. Fractalkine signaling has also recently been discovered to play a developmental role in the migration of microglia in the central nervous system to their synaptic targets, where phagocytosis and synaptic refinement occur. CX3CR1 knockout mice had more synapses on hippocampal neurons than wild-type mice. Structure CX3CR1 is an integral membrane protein of 355 amino acids with a molecular weight of around 40 kDa, consisting of three distinguishable segments: an extracellular, a transmembrane and an intracellular part. As a member of the largest class of the GPCR family, the rhodopsin-like receptors, the intracellular part of the receptor, the C-terminus of the polypeptide and three intracellular loops, is the binding site, with the conserved DRYLAIV motif, for the heterotrimeric G protein. This family is also known as the 7-transmembrane receptors (7-TM) owing to the seven α-helices of the transmembrane protein that span the cell's cytoplasmic membrane. The extracellular side of CX3CR1 consists of the N-terminus of the polypeptide chain and three extracellular loops, forming a binding site for its main ligand CX3CL1, but also for CCL26 (eotaxin-3, which has lower binding affinity compared to fractalkine), immunoglobulins and infectious agents. Signalling cascade Signalling along the CX3CL1-CX3CR1 axis commences with activation of the receptor by agonist binding. This is followed by a conformational change and dissociation of the heterotrimeric G protein complex, which consists of three subunits: α (alpha), β (beta) and γ (gamma). Several important signalling pathways are triggered by the separated parts of the G protein (Gα and Gβγ), such as the PLC/PKC pathway, the PI3K/AKT/NFκB pathway, the Ras/Raf/MEK/ERK (MAPK) pathway (or p38 and JNK) and the CREB pathway. These signalling cascades are responsible for diverse cellular behaviours and regulation, including increased proliferation, survival and cell growth, metabolic regulation, induction of migration, apoptosis resistance and secretion of hormones and inflammatory cytokines. Products of CX3CR1 signalling cascades are important in the immune response of CX3CR1-positive hematopoietic cells.
Clinical significance CX3CR1 and immune cells are strongly connected due to its abundant cell surface expression. Therefore, the clinical significance of CX3CR1 can be found in diseases connected with immunity. CX3CR1 is able to increase the accumulation of immune cells in the affected body part, which can aggravate disease. A few examples: allergies, rheumatoid arthritis, renal diseases, chronic liver disease and Crohn's disease. CX3CR1 is also a coreceptor for HIV-1, and some variations in this gene lead to increased susceptibility to HIV-1 infection and rapid progression to AIDS. Since CX3CR1 plays a major role in the interaction between endothelial cells and immune cells, it can promote the build-up of plaque on artery walls, and it has thus been associated with atherosclerosis. In addition, this may lead to thrombosis, other cardiovascular diseases or even cerebral ischemia. The CX3CL1-CX3CR1 axis can control neurological inflammation through activation of microglia. Its role in brain pathologies can therefore be protective but also detrimental. There are connections between microglia and neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease and neurocognitive HIV-dementia. Moreover, CX3CR1 variants have been described to modify the survival time and the progression rate of patients with amyotrophic lateral sclerosis. Mutations in CX3CR1 are associated with dysplasia of the hip. A homozygous CX3CR1-M280 mutation impairs human monocyte survival and worsens the outcome of human systemic candidiasis. As mentioned before, this receptor and its ligand are important for the metabolism of bone tissue in terms of the differentiation of osteoclasts and osteoblasts. Overactivation of osteoclasts as well as accumulation of other immune cells has been linked to osteoporosis. CX3CR1 and fractalkine also have a meaningful place in many types of cancer (e.g. neuroblastoma, prostate cancer, gastric adenocarcinoma and B cell lymphomas), where the CX3CL1-CX3CR1 axis is a double agent, providing antitumoral effects (stimulating and recruiting immune cells to target the neoplasm) and protumoral effects (stimulating activities in malignant cells such as invasion, proliferation and apoptosis resistance that facilitate metastasis). It therefore has considerable potential as a therapeutic target in cancer. References Further reading External links Cytokines Receptors Integral membrane proteins
CX3C motif chemokine receptor 1
Chemistry
2,268
7,502,798
https://en.wikipedia.org/wiki/Peace%20war%20game
The peace war game is an iterated game originally played in academic groups and by computer simulation for years to study possible strategies of cooperation and aggression. As peacemakers became richer over time it became clear that making war had greater costs than initially anticipated. The only strategy that acquired wealth more rapidly was a "Genghis Khan", a constant aggressor making war continually to gain resources. This led to the development of the "provokable nice guy" strategy, a peacemaker until attacked. Multiple players continue to gain wealth cooperating with each other while bleeding the constant aggressor. The peace war game is a variation of the iterated prisoner's dilemma in which the decisions (Cooperate, Defect) are replaced by (Peace, War). Strategies remain the same, with reciprocal altruism, "Tit for Tat", or "provokable nice guy" as the best deterministic one. This strategy is simply to make peace on the first iteration of the game; after that, the player does what his opponent did on the previous move. A slightly better strategy is "Tit for Tat with forgiveness": when the opponent makes war, on the next move the player sometimes makes peace anyway, with a small probability. This allows an escape from wasting cycles of retribution, a motivation similar to the Rule of Ko in the game of Go. "Tit for Tat with forgiveness" is best when miscommunication is introduced, when one's move is incorrectly reported to the opponent. A typical payoff matrix for two players (A, B) of one iteration of this game, with A's choice naming the row and B's choice naming the column, is:
A \ B      Peace     War
Peace     (2, 2)    (0, 3)
War       (3, 0)    (1, 1)
Here a player's resources have a value of 2, half of which must be spent to wage war. In this case, there exists a Nash equilibrium, a mutually best response for a single iteration, here (War, War), by definition heedless of consequences in later iterations. The optimality of "provokable nice guy" depends on the number of iterations; how many are necessary is likely tied to the payoff matrix and the probabilities of choosing. A subgame perfect version of this strategy is "Contrite Tit-for-Tat", which is to make peace unless one is in "good standing" and one's opponent is not. A player keeps good standing by making peace with opponents in good standing, by making peace while in bad standing, or by making war while in good standing against an opponent who is not. See also Iterated prisoner's dilemma Just war theory Paradox of tolerance Tit for Tat WarGames Notes Dilemmas Moral psychology Non-cooperative games Peace and conflict studies Social psychology Thought experiments
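The dynamics described above are easy to reproduce in simulation. The following Python sketch is a minimal illustration rather than code from the original studies; the payoff values come from the matrix above, and the function names are invented for the example. It plays the provokable nice guy (Tit for Tat) against itself and against a Genghis Khan constant aggressor:

    # Iterated peace war game; payoffs follow the matrix above.
    PAYOFF = {("P", "P"): (2, 2), ("P", "W"): (0, 3),
              ("W", "P"): (3, 0), ("W", "W"): (1, 1)}

    def genghis_khan(opponent_moves):
        return "W"  # constant aggressor: always make war

    def tit_for_tat(opponent_moves):
        # Make peace first; afterwards mirror the opponent's previous move.
        return opponent_moves[-1] if opponent_moves else "P"

    def play(strategy_a, strategy_b, rounds=200):
        seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))   # (400, 400): sustained mutual peace
    print(play(tit_for_tat, genghis_khan))  # (199, 202): one exploited round, then mutual war

The aggressor edges out a single victim, but in a population the provokable nice guys mostly collect the peace payoff from one another while the constant aggressor only ever collects war payoffs, which is why the cooperators accumulate wealth faster overall.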
Peace war game
Mathematics
609
4,444,651
https://en.wikipedia.org/wiki/Evil%20number
In number theory, an evil number is a non-negative integer that has an even number of 1s in its binary expansion. These numbers give the positions of the zero values in the Thue–Morse sequence, and for this reason they have also been called the Thue–Morse set. Non-negative integers that are not evil are called odious numbers. Examples The first evil numbers are: 0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39 ... Equal sums The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums. As 19th-century mathematician Eugène Prouhet showed, the partition into evil and odious numbers of the numbers from 0 to 2^k − 1, for any k, provides a solution to the Prouhet–Tarry–Escott problem of finding sets of numbers whose sums of powers are equal up to the (k − 1)th power. In computer science In computer science, an evil number is said to have even parity. References Integer sequences
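The parity test is immediate in code. The following Python fragment is an illustrative sketch (not drawn from the references) that reproduces the opening terms of the sequence:

    # An evil number has an even count of 1 bits in its binary expansion;
    # equivalently, is_evil(n) is True exactly where the Thue-Morse sequence is 0.
    def is_evil(n: int) -> bool:
        return bin(n).count("1") % 2 == 0

    print([n for n in range(40) if is_evil(n)])
    # [0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39]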
Evil number
Mathematics
243
47,118,705
https://en.wikipedia.org/wiki/Interest%20Flooding%20Attack
An Interest Flooding Attack (IFA) is a denial-of-service attack in an Information-centric network (or Content-Centric Networking (CCN) or Named Data Networking (NDN)). An attacker requests existing or non-existing content in order to overload the distribution infrastructure. This can be implemented by sending Interest packets which are not resolved at all, or not resolved fast enough, and thus lead to malicious CPU or memory consumption. This attack had previously been considered an open problem in ICN, with only heuristic countermeasures available. In 2016, Aubrey Alston and Tamer Refaei of The MITRE Corporation presented an exact solution to this problem which utilizes an in-packet cryptographic mechanism to remove the ability of high-volume Interest traffic to overload the distribution infrastructure of the network. References Denial-of-service attacks Cyberwarfare Computer network security
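The exhaustion mechanism can be made concrete with a toy model. The following Python sketch is purely illustrative, not an NDN implementation; the table capacity and name prefixes are invented. Unsatisfiable Interests accumulate in a router's Pending Interest Table (PIT) because no Data ever arrives to clear them, until legitimate Interests are dropped:

    # Toy model of Pending Interest Table (PIT) exhaustion under an IFA.
    PIT_CAPACITY = 1000
    pit = set()

    def receive_interest(name):
        """Forward an Interest if PIT space remains; otherwise drop it."""
        if len(pit) >= PIT_CAPACITY:
            return "dropped"      # table full: legitimate traffic now suffers
        pit.add(name)             # entry lingers until Data arrives or it times out
        return "forwarded"

    # The attacker floods Interests for non-existent content, which never
    # returns Data and therefore never clears from the table.
    for i in range(PIT_CAPACITY):
        receive_interest(f"/attacker/junk/{i}")

    print(receive_interest("/legitimate/content"))  # "dropped"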
Interest Flooding Attack
Technology,Engineering
184
72,744,451
https://en.wikipedia.org/wiki/Yu-Ju%20Chen
Yu-Ju Chen is a Taiwanese proteomics research scientist, who leads international projects in proteogenomics. Education Yu-Ju Chen received a PhD in physical chemistry at Iowa State University in 1997, under the direction of Cheuk-Yiu Ng. She completed post-doctoral research at Ames Laboratory in 1997, and then in Yuan-Pern Lee's group at National Tsing Hua University in 1999. Career Chen began her career at the Institute of Chemistry of Academia Sinica as an assistant research fellow in 1999. She was the Director of the Institute of Chemistry from 2013 to 2019, and is currently a Distinguished Research Fellow. She is also an adjunct professor at National Taiwan University, National Chiayi University, National Taiwan Ocean University, and National Chung Hsing University. She conducts research in mass spectrometry-based bioinformatics, in relation to understanding diseases such as cancer. Since 2016, Chen has participated in the US Cancer Moonshot Initiative, providing proteogenomics expertise as a representative of Academia Sinica. She is the project investigator for the Taiwan Cancer Moonshot Project, which analyzes multiomic data related to gastric cancer. She participates in the Chromosome-centric Human Proteome Project, and is the group lead for chromosome 4. Chen served as president of the Human Proteome Organization (2021-2022), the Taiwan Proteomics Society, and the Taiwan Society for Mass Spectrometry (2012-2015). She has been a council member of the Asia Oceania Human Proteome Organization since 2019. She currently serves as executive director of the Taiwan Proteomics Society (2021-2023), and the Taiwan Society for Mass Spectrometry. She has served on the editorial boards of the European Journal of Mass Spectrometry, the Journal of Proteome Research, and Frontiers in Analytical Chemistry, and currently serves on the Executive Advisory Board of Proteomics. Awards 2023 Tung-Ho Outstanding Research Award, THS Foundation 2023 16th Taiwan Outstanding Women in Science Award, Wu Chien Shiung Education Foundation & L'ORÉAL Taiwan 2022 Outstanding Research Award, National Science and Technology Council 2021 National Innovation Award 2020 Taiwan Society for Mass Spectrometry Medal 2011 Taiwan Society for Mass Spectrometry Outstanding Scholar Award 2007 Federation of Asian Chemical Societies Distinguished Young Chemists Award 2006 Chinese Chemical Society Outstanding Young Investigator Award References Iowa State University alumni Academic staff of the National Taiwan University Academic staff of the National Chiayi University Academic staff of the National Chung Hsing University Living people Women physical chemists Taiwanese women scientists Year of birth missing (living people)
Yu-Ju Chen
Chemistry
541
44,115,663
https://en.wikipedia.org/wiki/Reduced%20level
In surveying, reduced level (RL) refers to equating elevations of survey points with reference to a common assumed vertical datum. It is the vertical distance between a survey point and the adopted datum surface. Thus, it is considered the base level used as a reference to reckon heights or depths of other places or structures in that area, region or country. The word "reduced" here means "equated" and the word "level" means "elevation". The datum may be a real or imaginary location with a nominated elevation. Datum used The most common and convenient internationally accepted datum is mean sea level, a universal measure based upon a common baseline for the whole world determined by the Earth's gravitational model (see geoid), which gives the standard for measuring the elevation of a place above or below mean sea level. Countries adopt their nearby mean sea levels as datum planes for calculations of reduced levels in their respective jurisdictions. For example, Pakistan takes the sea near Karachi as its datum, while India takes the sea near Mumbai as its datum for the calculation of reduced levels. The term reduced level is commonly abbreviated to 'RL'. National survey departments of each country determine RLs of significantly important locations or points. These points are called permanent benchmarks, and this survey process is known as Great Trigonometrical Surveying (GTS). The permanent benchmarks act as reference points for determining RLs of other locations in a particular country. Instruments The instruments used to determine reduced level include: optical levelling instruments such as the automatic level, Y level, dumpy level, or Cooke's reversible level; a levelling staff; and a tripod stand. RL calculation The RL of a survey point can be determined by two methods: the height of instrument method and the rise and fall method. Significance For drainage of water under gravity a suitable slope is required. Thus, roads are built so that the RLs at their edges are lower than the RL at the mid-span of the road. This ensures proper drainage of water from roads. For construction of buildings, roads, and dams, a horizontal levelled surface is required. So, at construction sites, RLs of different points are obtained. The ground surface is then levelled to the RL, which is obtained by taking the arithmetic mean of the RLs of the different points. References Surveying Civil engineering
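The height of instrument (HI) method reduces to two arithmetic steps: HI equals the known RL of a benchmark plus the backsight reading taken on it, and the RL of any other point equals HI minus the staff reading taken on that point. A short Python sketch with invented readings (in metres):

    # Height of instrument method with illustrative staff readings (metres).
    benchmark_rl = 100.000          # known RL of the benchmark
    backsight = 1.500               # staff reading taken on the benchmark
    hi = benchmark_rl + backsight   # height of instrument = 101.500

    staff_readings = {"A": 1.200, "B": 0.850, "C": 2.100}
    for point, reading in staff_readings.items():
        print(point, round(hi - reading, 3))  # RL = HI - staff reading
    # A 100.3, B 100.65, C 99.4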
Reduced level
Engineering
480
16,610,108
https://en.wikipedia.org/wiki/Ion%20drift%20meter
An ion drift meter is a device used to measure the velocity of individual ions in the area of a spacecraft. This information can then be used to calculate the ion drift in the space surrounding the instrument as well as the strength of an electric field present, provided that the magnetic field strength has been determined using a magnetometer. The device itself works by allowing ions to pass through an opening at the front of the instrument and measuring the currents produced by the impacts of ions in different locations on a grid at the back. The trajectories of the ions can then be determined. Ion drift meters have been used on several spacecraft including the Dynamics Explorer, CHAMP and Ionospheric Connection Explorer. References Measuring instruments
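The field calculation mentioned above follows from the E-cross-B drift relation v = (E × B)/B², which for the component of E perpendicular to B inverts to E = −v × B. A minimal sketch with invented values in SI units:

    import numpy as np

    # Illustrative measurements: ion drift velocity (m/s) and magnetic field (T).
    v = np.array([500.0, 0.0, 0.0])   # from the ion drift meter
    B = np.array([0.0, 0.0, 3.0e-5])  # from the magnetometer

    E = -np.cross(v, B)               # perpendicular electric field (V/m)
    print(E)                          # [0.    0.015 0.   ]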
Ion drift meter
Technology,Engineering
143
32,959,184
https://en.wikipedia.org/wiki/Lactarius%20quercuum
Lactarius quercuum is a member of the large milk-cap genus Lactarius in the order Russulales. Described as new to science in 1963 by American mycologist Rolf Singer, the species is found in Bolivia. See also List of Lactarius species References External links quercuum Fungi described in 1963 Fungi of Bolivia Fungus species
Lactarius quercuum
Biology
75
13,546,580
https://en.wikipedia.org/wiki/List%20of%20Schedule%20II%20controlled%20substances%20%28U.S.%29
This is the list of Schedule II controlled substances in the United States as defined by the Controlled Substances Act. The following findings are required, by section 202 of that Act, for substances to be placed in this schedule: The drug or other substance has a high potential for abuse. The drug or other substance has a currently accepted medical use in treatment in the United States or a currently accepted medical use with severe restrictions. Abuse of the drug or other substances may lead to severe psychological or physical dependence. The complete list of Schedule II substances is as follows. The Administrative Controlled Substances Code Number and Federal Register citation for each substance is included. Drugs See also List of Schedule I controlled substances (U.S.) List of Schedule III controlled substances (U.S.) List of Schedule IV controlled substances (U.S.) List of Schedule V controlled substances (U.S.) Notes References Controlled Substances Act Drug-related lists
List of Schedule II controlled substances (U.S.)
Chemistry
187
4,709,291
https://en.wikipedia.org/wiki/UV%20coating
A UV coating (or more generally a radiation-cured coating) is a surface treatment which is either cured by ultraviolet radiation, or which protects the underlying material from such radiation's harmful effects. They have come to the fore because they are considered environmentally friendly and do not use solvents or produce volatile organic compounds (VOCs) or hazardous air pollutants (HAPs), although some materials used for UV coating, such as PVDF in smart phones and tablets, are known to contain substances harmful to both humans and the environment. UV coatings on pipe and tube UV coatings have been applied to mechanical tubing, safety/water suppression pipe and OCTG/line pipe for many years. The advantages of UV coatings in this application can be summarized as faster, smaller, and cleaner, with no thermal ovens required. Coating and curing occur almost instantly at speeds ranging from 100 feet per minute to over 800 feet per minute, so the faster production speeds provide greater opportunity for return on investment (ROI) for the customer. The resulting UV coating line also has a smaller floor footprint in total length, while the high running speed in feet per minute is likewise considered desirable. The process is cleaner because no volatile organic compounds (VOCs) or hazardous air pollutants (HAPs) are produced. Ultraviolet coatings in printing Ultraviolet-cured coatings can be applied over ink printed on paper and dried by exposure to UV radiation. UV coatings can be formulated up to 100% solids so that they have no volatile component that contributes to pollution. This high solids level also allows the coating to be applied in very thin films. UV coatings can be formulated to a wide variety of gloss ranges. UV coating can be applied via most conventional industrial coating applications as well as by silkscreen and 3D printing. Due to the normally high solids content of UV coating/varnish, the surface of the cured film can be extremely reflective and glossy. 80 lb text and heavier weights of paper can be UV coated; however, cover weights are preferred. UV coating can be applied to spot locations of the paper or by flooding the page. This coating application can deepen the color of the printed area. Drying is virtually instantaneous when exposed to the correct level of UV light, so projects can move quickly into the bindery. A printed page with UV coating applied can be very shiny or flattened to a matte finish. A good example of UV coated paper is photo paper sold for home printing projects. UV coatings that are not fully cured can have a slightly sticky/tacky feel. Ultraviolet coating of glass and plastic Glass and plastic can be coated to diminish the amount of ultraviolet radiation that passes through. Common uses of such coating include eyeglasses and automotive windows. Photographic filters remove ultraviolet to prevent exposure of the film or sensor by invisible light. UV-curable coatings can be used to impart a variety of properties to polymeric surfaces, including glare reduction, wear or scratch resistance, anti-fogging, microbial resistance, and chemical resistance. Computer screens, keyboards, and most other personal electronic devices are treated with some type of UV-curable coating. Coatings are usually applied to plastic substrates via spray, dip, roll, flow and other processes. UV-curable coatings are often specified for plastic parts because the process does not require heat, which can distort the plastic shape.
Ultraviolet coating of wood The industrial wood finisher has essentially three options in types of UV-curable coatings to use: 100% UV, water-reduced UV and solvent-reduced UV. Each type of UV-curable coating can be applied by virtually any method of application. The selected method of application depends on the surface structure/property to be finished, the finish quality desired on that surface, and the production rate that finishing must achieve. Another consideration is recovery: typically, UV-curable coatings are more expensive than conventional-cure coatings, and as such any material that does not get applied to the part needs to be recovered as efficiently as possible. The selection of the UV-curable coating type applied by any method is really a matter of finish build or thickness, the ease of achieving certain finish subtleties (gloss, leveling, etc.), and the ease of use of the coating system. In general, if 100% UV-curable coatings can be used to produce the desired finish quality, it is best to set a course of action to use them. Costs, operating expenses and reporting requirements will be most advantageous with 100% UV-curable coatings. If very thin film builds are desired, less than 100% actives may be necessary and the use of water-reduced UV-curable coatings is preferred. Ultraviolet printing of aluminum beverage cans When the aluminum cans are formed, they are washed and cleaned. A special coating is also applied on the inside of the can. On the printing press up to six different ink rollers supply the colors that coat the printing plates (a process similar to offset lithography). After making contact with the rubber blanket, the can has a complete negative image per color. The process is considered wet-on-wet inking. After going through each color on the rotary belt, the final image is formed and a special coating is applied to each can to protect the can and its colors from wear and tear. The completed cans are sent to the UV oven, which operates at over 100 °F and contains between six and eight 300 watt/inch UV lamps. Both the inside and outside of the can are exposed to the light to ensure proper ink curing. Site-applied UV coatings In recent years, manufacturers have formulated ultraviolet-curable coatings for applications outside of a factory or laboratory environment. This technology was first developed and commercialized by Professional Coatings Inc. (Cabot, AR) for substrates such as wood, concrete, vinyl tile and LVT. Other companies such as Arboritec/UVElite and UVGreenCure have continued the development of new technologies around coating formulation and floor curing machines. Site-applied UV coatings are available in both 100% solid and water-based formulations. They offer the advantage of quick return to service in the case of substrates such as wood, where polyurethanes can take several days before achieving full cure, and longevity in applications such as VCT, where an acrylic finish can be reapplied several times per year and buffed routinely. The coatings are applied as traditional coatings and then cured with an ultraviolet light (generally either a mercury-discharge lamp or an LED-based system) mounted to a rolling chassis or by a handheld unit. See also References Printing materials Coatings
UV coating
Physics,Chemistry
1,359
144,676
https://en.wikipedia.org/wiki/Man-in-the-middle%20attack
In cryptography and computer security, a man-in-the-middle (MITM) attack, or on-path attack, is a cyberattack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other, where in actuality the attacker has inserted themselves between the two user parties. One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. In this scenario, the attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within range of a Wi-Fi access point hosting a network without encryption could insert themselves as a man in the middle. As it aims to circumvent mutual authentication, a MITM attack can succeed only when the attacker impersonates each endpoint sufficiently well to satisfy their expectations. Most cryptographic protocols include some form of endpoint authentication specifically to prevent MITM attacks. For example, TLS can authenticate one or both parties using a mutually trusted certificate authority. Example Suppose Alice wishes to communicate with Bob. Meanwhile, Mallory wishes to intercept the conversation to eavesdrop (breaking confidentiality) with the option to deliver a false message to Bob under the guise of Alice (breaking non-repudiation). Mallory would perform a man-in-the-middle attack as described in the following sequence of events. Alice sends a message to Bob, which is intercepted by Mallory: Alice "Hi Bob, it's Alice. Give me your key." →     Mallory     Bob Mallory relays this message to Bob; Bob cannot tell it is not really from Alice: Alice     Mallory "Hi Bob, it's Alice. Give me your key." →     Bob Bob responds with his encryption key: Alice     Mallory     ← [Bob's key] Bob Mallory replaces Bob's key with her own, and relays this to Alice, claiming that it is Bob's key: Alice     ← [Mallory's key] Mallory     Bob Alice encrypts a message with what she believes to be Bob's key, thinking that only Bob can read it: Alice "Meet me at the bus stop!" [encrypted with Mallory's key] →     Mallory     Bob However, because it was actually encrypted with Mallory's key, Mallory can decrypt it, read it, modify it (if desired), re-encrypt with Bob's key, and forward it to Bob: Alice     Mallory "Meet me at the park!" [encrypted with Bob's key] →     Bob Bob thinks that this message is a secure communication from Alice. This example shows the need for Alice and Bob to have a means to ensure that they are truly each using each other's public keys, and not the public key of an attacker. Otherwise, such attacks are generally possible, in principle, against any message sent using public-key technology. Types of MITM There are several attack types that can fall into the category of MITM. The most notable are: HTTPS Spoofing: The attacker tricks the victim into believing their connection is secure by substituting a fake SSL/TLS certificate. SSL/TLS Stripping: Downgrades HTTPS traffic to HTTP, intercepting and reading unencrypted data. ARP Spoofing: Sends fake ARP messages to associate the attacker’s MAC address with a target IP, intercepting local network traffic. 
DNS Spoofing/Poisoning: Redirects DNS queries to malicious servers, leading victims to fake websites. Session Hijacking: Steals session cookies or tokens to impersonate a legitimate user in an active session. Man-in-the-Browser (MITB): Malware alters browser activity, intercepting or manipulating transactions in real-time. Wi-Fi MITM (Evil Twin Attack): Creates a fake Wi-Fi hotspot to intercept communications from connected devices. Email Hijacking: Intercepts email exchanges to manipulate or steal sensitive information. Replay Attacks: Captures and retransmits valid data to repeat actions or disrupt communication. Fake Certificate Authority (CA): Uses a fraudulent CA to sign fake certificates, tricking victims into trusting malicious connections. Defense and detection MITM attacks can be prevented or detected by two means: authentication and tamper detection. Authentication provides some degree of certainty that a given message has come from a legitimate source. Tamper detection merely shows evidence that a message may have been altered and has broken integrity. Authentication All cryptographic systems that are secure against MITM attacks provide some method of authentication for messages. Most require an exchange of information (such as public keys) in addition to the message over a secure channel. Such protocols, often using key-agreement protocols, have been developed with different security requirements for the secure channel, though some have attempted to remove the requirement for any secure channel at all. A public key infrastructure, such as Transport Layer Security, may harden Transmission Control Protocol against MITM attacks. In such structures, clients and servers exchange certificates which are issued and verified by a trusted third party called a certificate authority (CA). If the original key to authenticate this CA has not been itself the subject of a MITM attack, then the certificates issued by the CA may be used to authenticate the messages sent by the owner of that certificate. Use of mutual authentication, in which both the server and the client validate the other's communication, covers both ends of a MITM attack. If the server or client's identity is not verified or is deemed invalid, the session will end. However, the default behavior of most connections is to only authenticate the server, which means mutual authentication is not always employed and MITM attacks can still occur. Attestments, such as verbal communications of a shared value (as in ZRTP), or recorded attestments such as audio/visual recordings of a public key hash are used to ward off MITM attacks, as visual media is much more difficult and time-consuming to imitate than simple data packet communication. However, these methods require a human in the loop in order to successfully initiate the transaction. HTTP Public Key Pinning (HPKP), sometimes called "certificate pinning", helps prevent a MITM attack in which the certificate authority itself is compromised, by having the server provide a list of "pinned" public key hashes during the first transaction. Subsequent transactions then require that one or more of the keys in the list be used by the server in order to authenticate that transaction. DNSSEC extends the DNS protocol to use signatures to authenticate DNS records, preventing simple MITM attacks from directing a client to a malicious IP address.
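The pinning check described above fits in a few lines. The following Python fragment is illustrative only: it pins the SHA-256 hash of the server's entire DER-encoded certificate (HPKP proper pins the hash of the subject public key info, a detail omitted here for brevity), and the stored fingerprint is a placeholder value:

    import hashlib
    import socket
    import ssl

    # Placeholder: a previously recorded fingerprint of the expected certificate.
    PINNED_SHA256 = "replace-with-the-known-hex-fingerprint"

    def cert_fingerprint(host, port=443):
        """Open a TLS connection and hash the peer's DER-encoded certificate."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    if cert_fingerprint("example.com") != PINNED_SHA256:
        raise ssl.SSLError("fingerprint mismatch: possible man-in-the-middle")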
Tamper detection Latency examination can potentially detect the attack in certain situations, such as long calculations, like hash functions, that stretch into tens of seconds. To detect potential attacks, parties check for discrepancies in response times. For example: Say that two parties normally take a certain amount of time to perform a particular transaction. If one transaction, however, were to take an abnormal length of time to reach the other party, this could be indicative of a third party's presence interfering with the connection and inserting additional latency in the transaction. Quantum cryptography, in theory, provides tamper-evidence for transactions through the no-cloning theorem. Protocols based on quantum cryptography typically authenticate part or all of their classical communication with an unconditionally secure authentication scheme. An example is Wegman–Carter authentication. Forensic analysis Captured network traffic from what is suspected to be an attack can be analyzed in order to determine whether there was an attack and, if so, determine the source of the attack. Important evidence to analyze when performing network forensics on a suspected attack includes: IP address of the server DNS name of the server X.509 certificate of the server Whether the certificate has been self-signed Whether the certificate has been signed by a trusted certificate authority Whether the certificate has been revoked Whether the certificate has been changed recently Whether other clients, elsewhere on the Internet, received the same certificate Notable instances A Stingray phone tracker is a cellular phone surveillance device that mimics a wireless carrier cell tower in order to force all nearby mobile phones and other cellular data devices to connect to it. The tracker relays all communications back and forth between cellular phones and cell towers. In 2011, a security breach of the Dutch certificate authority DigiNotar resulted in the fraudulent issuing of certificates. Subsequently, the fraudulent certificates were used to perform MITM attacks. In 2013, Nokia's Xpress Browser was revealed to be decrypting HTTPS traffic on Nokia's proxy servers, giving the company clear text access to its customers' encrypted browser traffic. Nokia responded by saying that the content was not stored permanently, and that the company had organizational and technical measures to prevent access to private information. In 2017, Equifax withdrew its mobile phone apps following concern about MITM vulnerabilities. Bluetooth, a wireless communication protocol, has also been susceptible to man-in-the-middle attacks due to its wireless transmission of data. Other notable real-life implementations include the following: DSniff – the first public implementation of MITM attacks against SSL and SSHv1 Fiddler2 – HTTP(S) diagnostic tool NSA impersonation of Google Superfish malware Forcepoint Content Gateway – used to perform inspection of SSL traffic at the proxy Comcast – uses MITM attacks to inject JavaScript code to 3rd party web pages, showing their own ads and messages on top of the pages 2015 Kazakhstan man-in-the-middle attack See also ARP spoofing – a technique by which an attacker sends Address Resolution Protocol messages onto a local area network Aspidistra transmitter – a British radio transmitter used for World War II "intrusion" operations, an early MITM attack. Babington Plot – the plot against Elizabeth I of England, where Francis Walsingham intercepted the correspondence.
Computer security – the design of secure computer systems. Cookiemonster attack – a man-in-the-middle exploit. Cryptanalysis – the art of deciphering encrypted messages with incomplete knowledge of how they were encrypted. Digital signature – a cryptographic guarantee of the authenticity of a text, usually the result of a calculation only the author is expected to be able to perform. Evil maid attack – attack used against full disk encryption systems Interlock protocol – a specific protocol to circumvent a MITM attack when the keys may have been compromised. Key management – how to manage cryptographic keys, including generation, exchange and storage. Key-agreement protocol – a cryptographic protocol for establishing a key in which both parties can have confidence. Man-in-the-browser – a type of web browser MITM Man-on-the-side attack – a similar attack, giving only regular access to a communication channel. Mutual authentication – how communicating parties establish confidence in one another's identities. Password-authenticated key agreement – a protocol for establishing a key using a password. Quantum cryptography – the use of quantum mechanics to provide security in cryptography. Secure channel – a way of communicating resistant to interception and tampering. Terrapin attack – a downgrade attack on the SSH protocol that requires an adversary with a man-in-the-middle position. Notes References External links Finding Hidden Threats by Decrypting SSL (PDF). SANS Institute. Cryptographic attacks Computer network security
Man-in-the-middle attack
Technology,Engineering
2,394
25,729,981
https://en.wikipedia.org/wiki/Tylopilus%20plumbeoviolaceus
Tylopilus plumbeoviolaceus (formerly Boletus plumbeoviolaceus), commonly known as the violet-grey bolete, is a fungus of the bolete family. First described in 1936, the mushroom has a disjunct distribution, occurring in eastern North America and Korea. The fruit bodies of the fungus are violet when young, but fade into a chocolate brown color when mature. They are solid and relatively large: cap diameter up to , with a white pore surface that later turns pink, and a white mycelium at the base of the stem. The mushroom is inedible. A number of natural products have been identified from the fruit bodies, including unique chemical derivatives of ergosterol, a fungal sterol. Taxonomy The species was first named in 1936 as Boletus felleus forma plumbeoviolaceus by American mycologist Walter H. Snell and one of his graduate students, Esther A. Dick, based on specimens found in the Black Rock Forest near Cornwall, New York. Regarding his decision to use the taxonomic rank forma, Snell wrote: The writer hesitates to multiply the number of forms (formae) and varieties with distinctive names, because of the ease with which one develops the habit of interpreting slight variations as definite taxonomic units... the word "form" is used instead of "variety" as making no commitment as to the actual status of the variable segregate under consideration, until further information is available. The first collections made of the mushroom were of young, immature specimens, from which the authors were unable to obtain spores for examination. It was not until a few years later that they found mature fruit bodies, which revealed that the rosy color of the pore surface took some time to develop. They concluded that this and other differences in physical characteristics, as well as differences in spore size, were enough to justify it being a species distinct from B. felleus, so in 1941 they raised the taxon to species status with the name Boletus plumbeoviolaceus. Noted Agaricales taxonomist Rolf Singer later transferred the taxon to Tylopilus in 1947, a genus characterized by a spore print that is pink, or wine red (vinaceous), rather than brown as in Boletus. The specific name "plumbeoviolaceus" is coined from the Latin adjectives plumbeus ("leaden" or "lead-colored") and violaceus ("purple"). The mushroom is commonly known as the "violet-grey bolete". Description The cap of the fruit body is in diameter, initially convex in shape but becoming centrally depressed, with a broadly arched and rounded margin. Young specimens are rather hard and firm, and the cap has a finely velvet-textured surface that soon wears off to become smooth. The color of the fruit body is violet when young, but dulls as it ages, becoming a dull violet-purplish-gray, then eventually chocolate-brown at maturity. The flesh is solid, white, and does not change color when cut or bruised. The taste is bitter, and the odor is not distinctive. Mycologist David Arora calls the mushroom "beautiful, but bitter-tasting". The tubes on the underside of the cap are deep, two or three per millimeter, depressed at the stem (resulting in an adnate attachment). The color of the pore surface is initially white, and it remains so for a while before turning a rosy color at maturity. The stem is long and thick, enlarged at the base, and sometimes bulbous. The surface is slightly reticulate at the top, and smooth on the lower part of the stem. Its color is buff to light brown, often with darker brown bruises or stains, and it has whitish mycelium at the base.
The flesh of the stem is white, and it does not change color when cut or bruised. Microscopic characteristics Collected in deposit, as with a spore print, the spores of T. plumbeoviolaceus appear to be a light pink to flesh color. When viewed with a light microscope, they are elliptical, with smooth walls and dimensions of 9.1–12.3 by 3.4–4.5 μm. The basidia (cellular structures that produce the spores) are club-shaped, and measure about 26 by 6.5 μm. The cuticle of the cap (the pileipellis) is made of a tangle of smooth-walled, narrow, brownish hyphae. When stained in potassium hydroxide, the hyphal contents tend to form beads, while staining in Melzer's reagent causes the pigment to form globules. Cystidia are common in the hymenial tissue; they are swollen at the base and narrow at the apex (lageniform), measuring 30–40 μm long by 7–9 μm thick. Clamp connections are absent in the hyphae. Edibility T. plumbeoviolaceus is considered inedible, and has a strongly bitter taste. The presence of a bitter bolete may spoil a meal, as the bitter taste does not disappear with cooking. Similar species There are few other species that might be confused with Tylopilus plumbeoviolaceus; according to one source, it "is one of the most remarkable and easily identified boletes in the USA." Tylopilus violatinctus, found under both hardwoods and conifers and known from New York to Mississippi, has an appearance similar to T. plumbeoviolaceus. It can be distinguished by a paler, lilac-colored cap that, in older specimens, is discolored rusty purple along the edge of the cap. Its spores are 7–10 by 3–4 μm. Tylopilus violatinctus was not described until 1998, so some older literature may confuse the two similar species. Young specimens of Tylopilus rubrobrunneus have a purplish cap, but unlike T. plumbeoviolaceus, their stems are never purple. The species Tylopilus microsporus, known only from China, is characterized by a pale violet to violet cap, a paler purple to purplish-brown stem, and flesh-colored to pale purplish-red pores. In addition to its different distribution, it can be distinguished from T. plumbeoviolaceus by its smaller spores. Another similar Asian species, T. obscureviolaceus, is only known from the Yaeyama Islands in southwestern Japan. It differs from T. plumbeoviolaceus in having a cap that does not fade in color to grayish or brownish when mature, shorter spores (6–7.2 by 3.3–4 μm), and other microscopic characteristics. Habitat, distribution, and ecology Tylopilus plumbeoviolaceus is a mycorrhizal species, and the bulk of the fungus lives underground, associating in a mutualistic relationship with the roots of various tree species. The fruit bodies are found growing singly, scattered or clustered together during mid-summer to autumn in deciduous forests, often under beech or oak trees; however, it sometimes occurs in mixed hardwood-conifer forests under hemlock. A preference for sandy soil has been noted in one source. In North America, the mushroom can be found east of the Rocky Mountains, ranging from Canada to Mexico. The species has also been collected in North Korea. Fruit bodies can serve as a food source for fungus-feeding Drosophila flies. Bioactive compounds Two derivatives of ergosterol have been isolated from the fruit bodies of T. plumbeoviolaceus: tylopiol A (3β-hydroxy-8α,9α-oxido-8,9-secoergosta-7,9(11),22-triene) and tylopiol B (3β-hydroxy-8α,9α-oxido-8,9-secoergosta-7,22-dien-12-one). These sterols are unique to this species.
Additionally, the compounds ergosta-7,22-dien-3β-ol, uridine, allitol, ergosterol, ergosterol 5α,8α-peroxide, ergothioneine, adenosine, and uracil have been identified from the mushrooms. See also List of North American boletes References External links plumbeoviolaceus Inedible fungi Fungi described in 1936 Fungi of Asia Fungi of North America Fungus species
Tylopilus plumbeoviolaceus
Biology
1,788
49,720,751
https://en.wikipedia.org/wiki/List%20of%20fellows%20of%20the%20International%20Society%20for%20Computational%20Biology
This page lists people elected ISCB Fellow by the International Society for Computational Biology (ISCB). Class of 2009 David Haussler David Lipman Webb Miller David Sankoff Temple F. Smith Janet Thornton Michael Waterman Class of 2010 Russ Altman Lawrence Hunter Eugene Myers Chris Sander Gary Stormo Alfonso Valencia Class of 2011 Michael Ashburner Philip E. Bourne Søren Brunak Richard Durbin Class of 2012 Bonnie Berger Peter Karp Jill Mesirov Pavel Pevzner Ron Shamir Martin Vingron Gunnar von Heijne Class of 2013 Pierre Baldi David Eisenberg Minoru Kanehisa Satoru Miyano Ruth Nussinov Steven Salzberg Class of 2014 Amos Bairoch Ewan Birney Nir Friedman Robert Gentleman Andrej Sali Class of 2015 Rolf Apweiler Cyrus Chothia Julio Collado-Vides Mark Gerstein Des Higgins Thomas Lengauer Michael Levitt Burkhard Rost Class of 2016 Helen M. Berman Steven E. Brenner Dan Gusfield Barry Honig Janet Kelso Michal Linial Christine Orengo Aviv Regev Lincoln Stein Sarah Teichmann Anna Tramontano Shoshana Wodak Haim Wolfson Class of 2017 Alex Bateman Andrea Califano Daphne Koller Anders Krogh William S. Noble Lior Pachter Olga Troyanskaya Tandy Warnow Class of 2018 Patricia Babbitt Terry Gaasterland Hanah Margalit Yves Moreau Bernard Moret William Pearson Mona Singh Mike Steel Class of 2019 Vineet Bafna Eleazar Eskin Xiaole Shirley Liu Marie-France Sagot Class of 2020 Serafim Batzoglou Judith Blake Mark Borodovsky Rita Casadio Paul Flicek Osamu Gotoh Rafael Irizarry Laxmi Parida Katherine Pollard Ben Raphael Zhiping Weng Xuegong Zhang Class of 2021 Atul Butte A. Keith Dunker Eran Halperin Wolfgang Huber Sorin Istrail Christina Leslie Ming Li Núria López Bigas Dana Pe'er Teresa Przytycka Eytan Ruppin Gustavo Stolovitzky Class of 2022 Barbara Bryant Sean Eddy Mikhail Gelfand Takashi Gojobori Trey Ideker David Tudor Jones Fran Lewitter Jun Liu Debora Marks Mihai Pop Reinhard Schneider Class of 2023 Bissan Al-Lazikani Ana Conesa Lenore Cowen Arne Elofsson Oliver Kohlbacher Heng Li Luay Nakhleh Francis Ouellette Shoba Ranganathan Russell Schwartz Roded Sharan Fabian Theis Cathy Wu Jinbo Xu Jinghui Zhang Class of 2024 Teresa Attwood Niko Beerenwinkel Peer Bork Barbara Engelhardt Tao Jiang Carl Kingsford Eugene Koonin Doron Lancet Philippe Lemey Scott Markel Peter Park Natasa Przulj Torsten Schwede Michael Sternberg Fengzhu Sun Mihaela Zavolan References
List of fellows of the International Society for Computational Biology
Biology
608
15,963,716
https://en.wikipedia.org/wiki/Pipecolic%20acid
Pipecolic acid (piperidine-2-carboxylic acid) is an organic compound with the formula HNC5H9CO2H. It is a carboxylic acid derivative of piperidine and, as such, an amino acid, although not one encoded genetically. Like many other α-amino acids, pipecolic acid is chiral, although the S-stereoisomer is more common. It is a colorless solid. Its biosynthesis starts from lysine. CRYM, a taxon-specific protein that also binds thyroid hormones, is involved in the pipecolic acid pathway. Medicine It accumulates in pipecolic acidemia. Elevation of pipecolic acid can be associated with some forms of epilepsy, such as pyridoxine-dependent epilepsy. Occurrence and reactions Like most amino acids, pipecolic acid is a chelating agent. One complex is Cu(HNC5H9CO2)2(H2O)2. Pipecolic acid was identified in the Murchison meteorite. It also occurs in the leaves of the genus Myroxylon, a tree from South America. See also Bupivacaine Efrapeptin References Alpha-Amino acids 2-Piperidinyl compounds Secondary amino acids
Pipecolic acid
Chemistry,Biology
272
18,676,294
https://en.wikipedia.org/wiki/A-349821
A-349,821 is a potent and selective histamine H3 receptor antagonist (or possibly an inverse agonist). It has nootropic effects in animal studies, although there do not appear to be any plans for clinical development at present and it is currently only used in laboratory research. See also H3 receptor antagonist References Benzamides H3 receptor antagonists 4-Morpholinyl compounds Nootropics 4-Hydroxybiphenyl ethers Pyrrolidines
A-349821
Chemistry
106
803,380
https://en.wikipedia.org/wiki/Valeri%20Barsukov
Valeri Leonidovich Barsukov (March 14, 1928 – July 22, 1992) was a Soviet geologist. He worked in comparative planetology and the geochemistry of space. He was director of the V. I. Vernadsky Institute of Geochemistry from 1976 to 1992. In 1987 he received the V. I. Vernadsky Gold Medal for his work. A crater on Mars was named after him. External links References 1928 births 1992 deaths Communist Party of the Soviet Union members Full Members of the Russian Academy of Sciences Full Members of the USSR Academy of Sciences Recipients of the Order of Friendship of Peoples Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Recipients of the USSR State Prize Russian geochemists 20th-century Russian geologists Soviet geochemists Soviet geologists Burials at Novodevichy Cemetery
Valeri Barsukov
Chemistry
174
9,590,201
https://en.wikipedia.org/wiki/Valve%20actuator
A valve actuator is the mechanism for opening and closing a valve. Manually operated valves require someone in attendance to adjust them using a direct or geared mechanism attached to the valve stem. Power-operated actuators, using gas pressure, hydraulic pressure or electricity, allow a valve to be adjusted remotely, or allow rapid operation of large valves. Power-operated valve actuators may be the final elements of an automatic control loop which automatically regulates some flow, level or other process. Actuators may serve only to open and close the valve, or may allow intermediate positioning; some valve actuators include switches or other ways to remotely indicate the position of the valve. Used for the automation of industrial valves, actuators can be found in all kinds of process plants. They are used in waste water treatment plants, power plants, refineries, mining and nuclear processes, food factories, and pipelines. Valve actuators play a major part in automating process control. The valves to be automated vary both in design and dimension. The diameters of the valves range from one-tenth of an inch to several feet. Types The common types of actuators are: manual, pneumatic, hydraulic, electric and spring. Manual A manual actuator employs levers, gears, or wheels to move the valve stem with a certain action. Manual actuators are powered by hand. Manual actuators are inexpensive, typically self-contained, and easy to operate. However, some large valves are impossible to operate manually and some valves may be located in remote, toxic, or hostile environments that prevent manual operation in some conditions. As a safety consideration, certain situations may require the valve to close more quickly than manual actuators allow. Pneumatic Air (or other gas) pressure is the power source for pneumatic valve actuators. They are used on linear or quarter-turn valves. Air pressure acts on a piston or bellows diaphragm creating linear force on a valve stem. Alternatively, a quarter-turn vane-type actuator produces torque to provide rotary motion to operate a quarter-turn valve. A pneumatic actuator may be arranged to be spring-closed or spring-opened, with air pressure overcoming the spring to provide movement. A "double acting" actuator uses air applied to different inlets to move the valve in the opening or closing direction. A central compressed air system can provide the clean, dry, compressed air needed for pneumatic actuators. In some types, for example, regulators for compressed gas, the supply pressure is provided from the process gas stream, with waste gas either vented to the air or dumped into lower-pressure process piping. Hydraulic Hydraulic actuators convert fluid pressure into motion. Similar to pneumatic actuators, they are used on linear or quarter-turn valves. Fluid pressure acting on a piston provides linear thrust for gate or globe valves. A quarter-turn actuator produces torque to provide rotary motion to operate a quarter-turn valve. Most types of hydraulic actuators can be supplied with fail-safe features to close or open a valve under emergency circumstances. Hydraulic pressure can be supplied by a self-contained hydraulic pressure pump. In some applications, such as water pumping stations, the process fluid can provide hydraulic pressure, although the actuators must use materials compatible with the fluid. Electric Electric actuators use an electric motor to provide the torque to operate a valve. They are quiet, non-toxic and energy efficient.
However, electricity must be available, which is not always the case, they can also operate on batteries. Spring Spring-based actuators hold back a spring. Once any anomaly is detected, or power is lost, the spring is released, operating the valve. They can only operate once, without resetting, and so are used for one-use purposes such as emergencies. They have the advantage that they do not require a powerful electric supply to move the valve, so they can operate from restricted battery power, or automatically when all power has been lost. Actuator movement A linear actuator opens and closes valves that can be operated via linear force, the type sometimes called a "rising stem" valve. These types of valves include globe valves, rising stem ball valves, control valves and gate valves. The two main types of linear actuators are diaphragm and piston. Diaphragm actuators are made out of a round piece of rubber and squeezed around its edges between two side of a cylinder or chamber that allows air pressure to enter either side pushing the piece of rubber one direction or the other. A rod is connected to the center of the diaphragm so that it moves as the pressure is applied. The rod is then connected to a valve stem which allows the valve to experience the linear motion thereby opening or closing. A diaphragm actuator is useful if the supply pressure is moderate and the valve travel and thrust required are low. Piston actuators use a piston which moves along the length of a cylinder. The piston rod conveys the force on the piston to the valve stem. Piston actuators allow higher pressures, longer travel ranges, and higher thrust forces than diaphragm actuators. A spring is used to provide defined behavior in the case of loss of power. This is important in safety related incidents and is sometimes the driving factor in specifications. An example of loss of power is when the air compressor (the main source of compressed air that provides the fluid for the actuator to move) shuts down. If there is a spring inside of the actuator, it will force the valve open or closed and will keep it in that position while power is restored. An actuator may be specified "fail open" or "fail close" to describe its behavior. In the case of an electric actuator, losing power will keep the valve stationary unless there is a backup power supply. A typical representative of the valves to be automated is a plug-type control valve. Just like the plug in the bathtub is pressed into the drain, the plug is pressed into the plug seat by a stroke movement. The pressure of the medium acts upon the plug while the thrust unit has to provide the same amount of thrust to be able to hold and move the plug against this pressure. Features of an electric actuator Motor (1) Robust asynchronous three-phase AC motors are mostly used as the driving force, for some applications also single-phase AC or DC motors are used. These motors are specially adapted for valve automation as they provide higher torques from standstill than comparable conventional motors, a necessary requirement to unseat sticky valves. The actuators are expected to operate under extreme ambient conditions, however they are generally not used for continuous operation since the motor heat buildup can be excessive. Limit and torque sensors (2) The limit switches signal when an end position has been reached. The torque switching measures the torque present in the valve. When exceeding a set limit, this is signaled in the same way. 
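A 4–20 mA position signal maps the closed-to-open travel linearly onto the current span, with a "live zero" at 4 mA so that a broken wire (0 mA) can be distinguished from a genuinely closed valve. A minimal sketch of the scaling, assuming a plain linear transmitter; the 3.6 mA fault threshold follows the common NAMUR NE 43 convention.

    def position_percent(current_ma):
        """Convert a 4-20 mA position transmitter signal to percent open."""
        if current_ma < 3.6:  # below the live zero: treat as signal loss
            raise ValueError("wire break or transmitter fault")
        return (current_ma - 4.0) / 16.0 * 100.0

    print(position_percent(4.0))   # 0.0   -> fully closed
    print(position_percent(12.0))  # 50.0  -> mid travel
    print(position_percent(20.0))  # 100.0 -> fully open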
Gearing (3)

A worm gearing is often used to reduce the high output speed of the electric motor. This allows a high reduction ratio within a single gear stage, at the cost of a low back-driving efficiency, which is desirable for actuators: it makes the gearing self-locking, i.e. it prevents accidental and undesired changes of the valve position caused by forces acting back through the valve's closing element.

Valve attachment (4)

The valve attachment consists of two elements. The first is the flange used to firmly connect the actuator to the counterpart on the valve side; the higher the torque to be transmitted, the larger the flange required. The second is the output drive type used to transmit the torque or the thrust from the actuator to the valve shaft. Just as there is a multitude of valves, there is also a multitude of valve attachments. Dimensions and design of the valve mounting flange and valve attachments are stipulated in the standards EN ISO 5210 for multi-turn actuators and EN ISO 5211 for part-turn actuators. The design of valve attachments for linear actuators is generally based on DIN 3358.

Manual operation (5)

In their basic version, most electric actuators are equipped with a handwheel for operating the actuator during commissioning or power failure. The handwheel does not move during motor operation. The electronic torque-limiting switches are not functional during manual operation, so mechanical torque-limiting devices are commonly used to prevent torque overload while the actuator is operated by hand.

Actuator controls (6)

Both actuator signals and operation commands from the DCS (distributed control system) are processed within the actuator controls. This task can in principle be assumed by external controls, e.g. a PLC, but modern actuators include integral controls which process signals locally without any delay. The controls also include the switchgear required to control the electric motor: either reversing contactors or thyristors, which, being electronic components, are not subject to mechanical wear. The controls use the switchgear to switch the electric motor on or off depending on the signals or commands present. Another task of the actuator controls is to provide the DCS with feedback signals, e.g. when a valve end position is reached.

Electrical connection (7)

The supply cables of the motor and the signal cables for transmitting commands to the actuator and sending feedback signals on the actuator status are wired to the electrical connection. It can be designed as a separately sealed terminal compartment or as a plug/socket connector. For maintenance purposes, the wiring should be easy to disconnect and reconnect.

Fieldbus connection (8)

Fieldbus technology is increasingly used for data transmission in process automation applications. Electric actuators can therefore be equipped with all common fieldbus interfaces used in process automation. Special connections are required for the fieldbus data cables.

Functions

Automatic switching off in the end positions

After receiving an operation command, the actuator moves the valve in the OPEN or CLOSE direction. When an end position is reached, an automatic switch-off procedure is started, and two fundamentally different switch-off mechanisms can be used. In the first, the controls switch off the actuator as soon as a set tripping point has been reached; this is called limit seating. However, there are valve types for which the closing element has to be pressed into the end position with a defined force or torque to ensure that the valve seals tightly; this is called torque seating. Here the controls are programmed to switch off the actuator when the set torque limit is exceeded, while the end position itself is signalled by a limit switch.
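The difference between the two switch-off mechanisms can be pictured as a small piece of control logic: with limit seating, travel ends when the position (limit) switch trips, while with torque seating the motor keeps pressing until the torque switch trips, and the limit switch merely confirms that the end position was reached. The toy Python model below is a deliberately simplified sketch of this decision logic, not real actuator firmware; the percentages and the obstruction mechanism are invented for illustration.

    def close_valve(seating, start_position, obstruction_at=None):
        """Toy model of the closing travel of a motor-driven valve.

        seating        -- "limit" (stop on position) or "torque" (stop on torque)
        start_position -- stem position in percent open (100 = open, 0 = seated)
        obstruction_at -- position at which a trapped object trips the torque switch
        """
        position = start_position
        while position > 0:
            if obstruction_at is not None and position == obstruction_at:
                # Torque trip away from the end position: overload, not seating.
                return "fault: excessive torque at %d%% open" % position
            position -= 1  # one motor step toward CLOSED
        if seating == "limit":
            return "closed: switched off by limit switch"
        # Torque seating: the plug is pressed into the seat until the set
        # torque is reached; the limit switch then signals the end position.
        return "closed: switched off by torque switch, end position confirmed"

    print(close_valve("limit", 100))                      # limit seating
    print(close_valve("torque", 100))                     # torque seating
    print(close_valve("torque", 100, obstruction_at=40))  # obstruction fault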
Safety functions

The torque switching is not only used for torque seating in the end positions; it also serves as overload protection over the whole travel and protects the valve against excessive torque. If excessive torque acts upon the closing element in an intermediate position, e.g. due to a trapped object, the torque switching trips when the set tripping torque is reached. In this situation the end position is not signalled by the limit switch, so the controls can distinguish between normal torque-switch tripping in one of the end positions and switching off in an intermediate position due to excessive torque.

Temperature sensors are required to protect the motor against overheating; in some applications, the increase of the motor current is also monitored. Thermoswitches or PTC thermistors embedded in the motor windings reliably fulfil this task: they trip when the temperature limit is exceeded, and the controls then switch off the motor.

Process control functions

Due to increasing decentralisation in automation technology and the introduction of microprocessors, more and more functions have been transferred from the DCS to the field devices, and the data volume to be transmitted has been reduced accordingly, in particular by the introduction of fieldbus technology. Electric actuators, whose functions have been considerably expanded, are part of this development. The simplest example is position control. Modern positioners are equipped with self-adaptation, i.e. the positioning behaviour is monitored and continuously optimised via the controller parameters. Some electric actuators are now equipped with fully-fledged process controllers (PID controllers). Especially for remote installations, e.g. controlling the flow to an elevated tank, the actuator can then assume the tasks of a PLC which would otherwise have to be installed in addition.
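Such an integrated process controller is conceptually an ordinary feedback loop: it compares a measured process value (for example, the flow into the elevated tank) with the setpoint and corrects the valve position accordingly. The sketch below is a bare-bones discrete PID loop with invented gains and a deliberately trivial process model, intended only to illustrate the principle.

    def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=1.0):
        """One update of a discrete PID controller.
        state is (integral, previous_error); returns (output, new_state)."""
        error = setpoint - measured
        integral = state[0] + error * dt
        derivative = (error - state[1]) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, (integral, error)

    # Toy closed loop: assume flow is simply proportional to valve opening.
    state, valve = (0.0, 0.0), 20.0
    for _ in range(200):
        flow = 0.8 * valve                      # invented process model
        correction, state = pid_step(50.0, flow, state)
        valve = max(0.0, min(100.0, valve + 0.1 * correction))
    # After enough toy steps the flow settles near the 50-unit setpoint.
    print("valve at %.1f%% open, flow %.1f" % (valve, 0.8 * valve))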
Diagnosis

Modern actuators have extensive diagnostic functions which can help identify the cause of a failure. They also log operating data; studying the logged data allows the operation to be optimised by changing parameters and the wear of both actuator and valve to be reduced.

Duty types

Open-close duty
If a valve is used as a shut-off valve, it will be either open or closed, and intermediate positions are not held...

Positioning duty
Defined intermediate positions are approached for setting a static flow through a pipeline. The same running time limits as in open-close duty apply.

Modulating duty
The most distinctive feature of a closed-loop application is that changing conditions require frequent adjustment of the actuator, for example to set a certain flow rate. Sensitive closed-loop applications require adjustments at intervals of a few seconds. The demands on the actuator are therefore higher than in open-close or positioning duty: the actuator design must withstand the high number of starts without any deterioration in control accuracy.

Service conditions

Actuators are specified for the desired life and reliability under a given set of application service conditions. In addition to the static and dynamic load and response time required for the valve, the actuator must withstand the temperature range, corrosion environment and other conditions of the specific application. Valve actuator applications are often safety related, so plant operators put high demands on the reliability of the devices: failure of an actuator may cause accidents in process-controlled plants, and toxic substances may leak into the environment. Process-control plants are often operated for several decades, which justifies the high demands put on the lifetime of the devices. For these reasons, actuators are generally designed with high enclosure protection, and manufacturers put a lot of work and knowledge into corrosion protection.

Enclosure protection

The enclosure protection types are defined according to the IP codes of EN 60529. The basic versions of most electric actuators are designed to the second-highest enclosure protection, IP67. This means they are protected against the ingress of dust and against water during immersion (30 minutes at a maximum head of water of 1 m). Most actuator manufacturers also supply devices to enclosure protection IP68, which provides protection against submersion up to a maximum head of water of 6 m.

Ambient temperatures

In Siberia, temperatures down to −60 °C may occur, while in technical process plants +100 °C may be exceeded. Using the proper lubricant is crucial for full operation under these conditions. Greases which may be used at room temperature can become too stiff at low temperatures for the actuator to overcome the resistance within the device; at high temperatures, they can liquefy and lose their lubricating power. When sizing the actuator, the ambient temperature and the selection of the correct lubricant are therefore of major importance.

Explosion protection

Actuators are used in applications where potentially explosive atmospheres may occur, including refineries, pipelines, oil and gas exploration, and mining. Where a potentially explosive gas-air or dust-air mixture may be present, the actuator must not act as an ignition source: hot surfaces on the actuator as well as ignition sparks created by the actuator have to be avoided. This can be achieved by a flameproof enclosure, in which the housing is designed to prevent an explosion inside the housing from igniting the atmosphere outside it. Actuators designed for these applications, being explosion-proof devices, have to be qualified by a test authority (notified body). Explosion protection is not standardized worldwide: within the European Union, the ATEX directive 94/9/EC applies; in the US, the NEC (with approval by FM); and in Canada, the CEC (with approval by the CSA). Explosion-proof actuators have to meet the design requirements of these directives and regulations.

Additional uses

Small electric actuators can be used in a wide variety of assembly, packaging and testing applications. Such actuators can be linear, rotary, or a combination of the two, and can be combined to perform work in three dimensions. They are often used to replace pneumatic cylinders.

References

Actuators
Fluid technology
Valve actuator
Physics,Chemistry,Engineering
3,449
59,524,587
https://en.wikipedia.org/wiki/Applied%20Spectral%20Imaging
Applied Spectral Imaging or ASI is a multinational biomedical company that develops and manufactures microscopy imaging and digital analysis tools for hospitals, service laboratories and research centers. The company provides cytogenetic, pathology, and research laboratories with bright-field, fluorescence and spectral imaging for clinical applications. Test slides can be scanned, captured, archived, reviewed on the screen, analyzed with computer-assisted algorithms, and reported. ASI system platforms automate the workflow to reduce human error in the identification and classification of chromosomal disorders, genome instability, and various oncological malignancies, among other diseases.

History

Founded in 1993, ASI initially focused on spectral imaging devices for the research community. In 2002, ASI made a strategic move to expand into the clinical cytogenetics market and introduced its CytoLabView system for karyotyping and FISH imaging. In 2005, ASI launched its automated scanning system to increase throughput for case analysis, compensating for higher sample volumes and helping laboratories cope with a shortage of laboratory technicians and other professionals. As the demand for diagnostics increased, ASI focused on providing faster imaging and analysis to improve turn-around time for patient results. Scanning automation and algorithms enabled laboratory technologists to spend more time on results and analysis rather than manual labor.

In 2011, ASI launched a proprietary software platform named GenASIs, which automates the formerly manual diagnostic process. Physicians, medical scientists and laboratory technicians use the digital platform to manage slide visualization and computer-assisted analysis. Through algorithms, tissue, suspension cells and chromosomes are analyzed for aberrations, cell classification, tumor proportion score, and similar measures. ASI's high-throughput tray loader, introduced the same year, automates the sample handling and scanning process. In 2017, ASI introduced PathFusion and HiPath Pro, the company's full pathology imaging suite for H&E, IHC, and FISH visualization and analysis, including tissue matching and whole slide imaging.

FDA Clearances

ASI has a wide FDA-cleared portfolio. Its products and Quality System (QS) are compliant with IVD medical device standards and regulations.

2001: FDA clearance for the BandView product
2005: FDA clearance for the FISHView product
2007: FDA clearance for the SpotScan application for CEP XY
2010: FDA clearance for the SpotScan application for HER2/neu
2011: FDA clearance for the SpotScan application for UroVysion
2013: FDA clearance for the SpotScan application for ALK
2015: FDA clearance for the HiPath system for the IHC family HER2, ER, PR and Ki67

Patents

ASI patents cover methods and instrumentation for general fields in the life sciences. Some of the claims are specific to a particular type of hardware; others have a more general scope and refer to the application rather than the instrument. Some of the original patents relate to spectral imaging systems based on interferometry and other spectral imaging instrumentation.

Functionalities

The functionalities that Applied Spectral Imaging provides laboratories and hospitals include automated slide scanning, an applications interface, whole slide imaging, scoring and analysis, sharing capabilities for team review and final sign-off, database management, secure archiving of reports, connectivity to the LIS (laboratory information system), and standardized testing.
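As a generic illustration of what computer-assisted quantitative scoring involves, the sketch below computes a tumor proportion score (TPS), i.e. the percentage of viable tumor cells that stain positive, from hypothetical cell counts. This is the textbook definition of the score, not ASI's proprietary algorithm, and the counts are invented.

    def tumor_proportion_score(positive_tumor_cells, total_viable_tumor_cells):
        """TPS = positive viable tumor cells / total viable tumor cells * 100."""
        if total_viable_tumor_cells == 0:
            raise ValueError("no viable tumor cells counted")
        return 100.0 * positive_tumor_cells / total_viable_tumor_cells

    # Invented counts, as might come from an automated cell-classification step.
    print(round(tumor_proportion_score(430, 1200), 1))  # 35.8 percent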
Clinical applications

ASI's clinical applications for laboratories include the scoring of chromosome analysis and karyotyping, fluorescent karyotyping, spectral karyotyping, karyotyping of multiple species, scanning and detection of metaphases and interphases, FISH review and analysis, matching of tissue FISH with H&E/IHC, brightfield whole slide imaging, IHC quantitative scoring, cytokinesis-blocked micronucleus scoring, region-of-interest annotation and measurement, tissue matching and FISH imaging, analysis and documentation of membrane IHC stains, analysis and documentation of nuclear IHC stains, chromosome comparison modules, whole slide image viewing, enhancement and documentation, data case management, and network connectivity of multiple systems in a network.

Products

ASI HiPath Pro - Brightfield imaging analysis system for a variety of histopathology needs, including IHC scoring and whole slide imaging of H&E and IHC samples.
ASI PathFusion - Bridges the gap between brightfield pathology and FISH. Combines whole slide imaging, computational tissue FISH and digital tissue matching of FISH with haematoxylin and eosin (H&E) or immunohistochemistry (IHC) samples.
ASI HiBand - Digital chromosome analysis for counting, indexing and karyotyping.
ASI HiFISH - Computational FISH diagnostics for classification, scanning and imaging analysis.
ASI CytoPower - Complete chromosome analysis, karyotyping and FISH cell classification platform.
ASI Rainbow - Analysis and multicolor imaging solution for fluorescence and brightfield samples.

References

External links

Companies based in Carlsbad, California
Companies established in 1993
Bioinformatics companies
Multinational companies headquartered in the United States
Biotechnology companies established in 1993
Biomedical engineering
Biological engineering
Medical technology companies of the United States
Medical imaging
Applied Spectral Imaging
Engineering,Biology
1,036
19,619,372
https://en.wikipedia.org/wiki/Maurocalcine
Maurocalcine (MCa) is a peptide toxin, 33 amino acid residues in length, isolated from the venom of the scorpion Scorpio maurus palmatus, which belongs to the family Chactidae; it was first characterized in 2000. The toxin is present in the venom in such small amounts that it could not be isolated directly for analysis, so it was chemically synthesized by the solid-phase technique in order to characterize it fully. It shares 82% sequence identity with imperatoxin A (IpTx A), a scorpion toxin from the venom of Pandinus imperator. IpTx A acts by modifying the activity of the type 1 ryanodine receptor (RyR1) of skeletal muscle. The ryanodine receptor controls the intracellular Ca2+ permeability of various cell types and is central to excitation–contraction coupling in muscle tissues. The synthesized toxin, sMCa, is active on RyR1 and binds to a site different from that of ryanodine itself.

Structural components

MCa folds into the inhibitor cystine knot motif. The structure consists of a compact disulfide-bonded core with the following three pairs: Cys3–Cys17, Cys10–Cys21, and Cys16–Cys32. Another important feature of MCa is its dipole moment, which arises because one face of the molecule is rich in basic residues (Lys19, Lys20, Lys22, Arg23, Arg24, and Arg33) without any acidic residue, while the opposite face contains four acidic residues (Asp2, Glu12, Asp15, and Glu29). This dipole moment is proposed to help the peptide cross the membrane. The only element of regular secondary structure is a double-stranded antiparallel β-sheet comprising residues 20–23 and 30–33.

Membrane permeability

Evidence suggests that MCa can cross a membrane. First, MCa has biological activity consistent with the direct activation of RyR1 when added to the extracellular medium. Second, MCa contains a stretch of positively charged amino acid residues that is reminiscent of the protein transduction domains (PTDs) found in proteins known to cross the membrane. MCa is therefore suggested to be a cell-penetrating peptide (CPP). CPPs commonly contain many basic residues oriented toward the same face of the molecule. This structural feature allows CPPs to cross biological membranes in a receptor- or transporter-independent manner through a mechanism called translocation. MCa resembles known CPP sequences in several respects: it is a small peptide, it has a net positive charge, it enters many cell types, it enters efficiently and at low concentration, its translocation is a fast and energy-independent process, and it can carry a cargo molecule. MCa is unusual in that it can enter cells against its concentration gradient, and it enters the cell far more rapidly than it exits. Also, the disulfide linkage of MCa, which makes it more rigid than other CPPs, implies that the transduction mechanism underlying MCa cell penetration does not rely on extensive peptide unfolding.

Mutagenesis findings

To look more closely at the basic surface that allows the peptide to cross the membrane, mutagenesis was performed at different positions, substituting a charged amino acid with a neutral one. The specific mutations were K8A, K19A, K20A, K22A, R23A and R24A, and the effects of MCa and its mutants were observed on RyR1 incorporated into artificial lipid bilayers and on elementary calcium release events (ECRE) in rat and frog skeletal muscle fibers. If the continuity of the basic surface is essential, the corresponding mutations should evoke parallel changes in affinity.
However, the farther away the mutation was placed, in the 3D structure, from the critical Arg24 residue, the less the average length and frequency of ECRE were decreased. This reveals that the effect of mutating basic amino acids to neutral ones cannot be attributed solely to the change in the net electrical charge of the peptide, since mutations that were distant from the cluster but produced the same change in net electrical charge had relatively minor effects.

Potential medical applications

MCa was coupled to streptavidin, which has a significantly higher mass than MCa itself. This demonstrates that MCa can also carry large molecules into cells, similar to other CPPs. The toxin complex efficiently penetrated various cell types without requiring metabolic energy or involving an endocytosis mechanism. MCa's ability to act as a molecular carrier and to cross cell membranes rapidly (within 1–2 minutes) makes it the first demonstrated example of a scorpion toxin that translocates into cells. This could prove useful for drugs that cannot normally cross a biological membrane: such drugs could be coupled to MCa and carried across the membrane. Cell-penetrating peptides have recently been used for their ability to deliver non-permeant compounds into cells. Doxorubicin, a common cancer therapeutic, has been covalently coupled to an analogue of maurocalcine and tested on the drug-sensitive or drug-resistant cell lines MCF7 and MDA-MB 231.

References

Proteins
Scorpion toxins
Ion channel toxins
Maurocalcine
Chemistry
1,071
13,480,873
https://en.wikipedia.org/wiki/Digital%20prototyping
Digital Prototyping gives conceptual design, engineering, manufacturing, and sales and marketing departments the ability to virtually explore a complete product before it's built. Industrial designers, manufacturers, and engineers use Digital Prototyping to design, iterate, optimize, validate, and visualize their products digitally throughout the product development process. Innovative digital prototypes can be created via CAutoD through intelligent and near-optimal iterations, meeting multiple design objectives (such as maximised output, energy efficiency, highest speed and cost-effectiveness), identifying multiple figures of merit, and reducing development gearing and time-to-market. Marketers also use Digital Prototyping to create photorealistic renderings and animations of products prior to manufacturing. Companies often adopt Digital Prototyping with the goal of improving communication between product development stakeholders, getting products to market faster, and facilitating product innovation.

Digital Prototyping goes beyond simply creating product designs in 3D. It gives product development teams a way to assess the operation of moving parts, to determine whether or not the product will fail, and to see how the various product components interact with subsystems, whether pneumatic or electric. By simulating and validating the real-world performance of a product design digitally, manufacturers often can reduce the number of physical prototypes they need to create before a product can be manufactured, reducing the cost and time needed for physical prototyping.

Many companies use Digital Prototyping in place of, or as a complement to, physical prototyping. Digital Prototyping changes the traditional product development cycle from design > build > test > fix to design > analyze > test > build. Instead of needing to build multiple physical prototypes and then testing them to see if they'll work, companies can conduct testing digitally throughout the process, reducing the number of physical prototypes needed to validate the design.

Studies show that by using Digital Prototyping to catch design problems up front, manufacturers experience fewer change orders downstream. Because the geometry in digital prototypes is highly accurate, companies can check interferences to avoid assembly issues that generate change orders in the testing and manufacturing phases of development. Companies can also perform simulations in early stages of the product development cycle, so they avoid failure modes during testing or manufacturing phases. With a Digital Prototyping approach, companies can digitally test a broader range of their product's performance. They can also test design iterations quickly to assess whether they're over- or under-designing components. Research from the Aberdeen Group shows that manufacturers that use Digital Prototyping build half the number of physical prototypes as the average manufacturer, get to market 58 days faster than average, and experience 48 percent lower prototyping costs.

History of Digital Prototyping

The concept of Digital Prototyping has been around for over a decade, particularly since software companies such as Autodesk, PTC, Siemens PLM (formerly UGS), and Dassault began offering computer-aided design (CAD) software capable of creating accurate 3D models. It may even be argued that the product lifecycle management (PLM) approach was the harbinger of Digital Prototyping.
PLM is an integrated, information-driven approach to a product's lifecycle, from development to disposal. A major aspect of PLM is coordinating and managing product data among all software, suppliers, and team members involved in the product's lifecycle. Companies use a collection of software tools and methods to integrate people, data, and processes to support singular steps in the product's lifecycle or to manage the product's lifecycle from beginning to end. PLM often includes product visualization to facilitate collaboration and understanding among the internal and external teams that participate in some aspect of a product's lifecycle.

While Digital Prototyping has long been a goal for manufacturing companies, it's only recently that it has become a reality for small-to-midsize manufacturers that cannot afford to implement complex and expensive PLM solutions.

Digital Prototyping and PLM

Large manufacturing companies rely on PLM to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. PLM is a fully integrated approach to product development that requires investments in application software, implementation, and integration with enterprise resource planning (ERP) systems, as well as end-user training and a sophisticated IT staff to manage the technology. PLM solutions are highly customized and complex to implement, often requiring a complete replacement of existing technology. Because of the high expense and IT expertise required to purchase, deploy, and run a PLM solution, many small-to-midsized manufacturers cannot implement PLM. Digital Prototyping is a viable alternative to PLM for these small-to-midsized manufacturers.

Like PLM, Digital Prototyping seeks to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. However, unlike PLM, Digital Prototyping does not support the entire product development process from conception to disposal, but rather focuses on the design-to-manufacture portion of the process. The realm of Digital Prototyping ends when the digital product and the engineering bill of materials are complete. Digital Prototyping aims to resolve many of the same issues as PLM without involving a highly customized, all-encompassing software deployment. With Digital Prototyping, a company may choose to address one need at a time, making the approach more pervasive as its business grows.

Other differences between Digital Prototyping and PLM include:

Digital Prototyping involves fewer participants than PLM.
Digital Prototyping has a less complex process for collecting, managing, and sharing data.
Manufacturers can keep product development activities separate from operations management with Digital Prototyping.
Digital Prototyping solutions don't need to be integrated with ERP (but can be), customer relationship management (CRM), or project and portfolio management (PPM) software.

Digital Prototyping Workflow

A Digital Prototyping workflow involves using a single digital model throughout the design process to bridge the gaps that typically exist between workgroups such as industrial design, engineering, manufacturing, sales, and marketing.
Product development can be broken into the following general phases at most manufacturing companies:

Conceptual Design
Engineering
Manufacturing
Customer Involvement
Marketing Communications

Conceptual Design

The conceptual design phase involves taking customer input or market requirements and data to create a product design. In a Digital Prototyping workflow, designers work digitally, from the very first sketch, throughout the conceptual design phase. They capture their designs digitally, and then share that data with the engineering team using a common file format. The industrial design data is then incorporated into the digital prototype to ensure technical feasibility. In a Digital Prototyping workflow, designers and their teams review digital design data via high-quality digital imagery or renderings to make informed product design decisions. Designers may create and visualize several iterations of a design, changing things like materials or color schemes, before a concept is finalized.

Engineering

During the engineering phase of the Digital Prototyping workflow, engineers create the product's 3D model (the digital prototype), integrating design data developed during the conceptual design phase. Teams also add electrical systems design data to the digital prototype while it's being developed, and evaluate how different systems interact. At this stage of the workflow, all data related to the product's development is fully integrated into the digital prototype. Working with mechanical, electrical, and industrial design data, companies engineer every last product detail in the engineering phase of the workflow. At this point, the digital prototype is a fully realistic digital model of the complete product.

Engineers test and validate the digital prototype throughout their design process to make the best possible design decisions and avoid costly mistakes. Using the digital prototype, engineers can:

Perform integrated calculations, and stress, deflection, and motion simulations to validate designs
Test how moving parts will work and interact
Evaluate different solutions to motion problems
Test how the design functions under real-world constraints
Conduct stress analysis to analyze material selection and displacement
Verify the strength of a part

By incorporating integrated calculations, stress, deflection, and motion simulations into the Digital Prototyping workflow, companies can speed development cycles by minimizing physical prototyping phases. By implementing a digital prototype of a partially or fully automated vehicle and its sensor suite into a dynamic co-simulation of traffic flow and vehicle dynamics, a novel toolchain methodology comprising virtual testing becomes available to the automotive industry for the development of automated driving functions. Also during the engineering phase of the Digital Prototyping workflow, engineers create the documentation required by the production team.

Manufacturing

In a Digital Prototyping workflow, manufacturing teams are involved early in the design process. This input helps engineers and manufacturing experts work together on the digital prototype throughout the design process to ensure that the product can be produced cost-effectively. Manufacturing teams can see the product exactly as it's intended, and provide input on manufacturability.
Companies can perform molding simulations on digital prototypes for plastic parts and injection molds to test the manufacturability of their designs, identifying potential manufacturing defects before they cut mold tooling. Digital Prototyping also enables product teams to share detailed assembly instructions digitally with manufacturing teams. While paper assembly drawings can be confusing, 3D visualizations of digital prototypes are unambiguous. This early and clear collaboration between manufacturing and engineering teams helps minimize manufacturing problems on the shop floor. Finally, manufacturers can use Digital Prototyping to visualize and simulate factory-floor layouts and production lines. They can check for interferences to detect potential issues such as space constraints and equipment collisions.

Customer Involvement

Customers are involved throughout the Digital Prototyping workflow. Rather than waiting for a physical prototype to be complete, companies that use Digital Prototyping bring customers into the product development process early. They show customers realistic renderings and animations of the product's digital prototype so they'll know what the product looks like and how it will function. This early customer involvement helps companies get sign-off up front, so they don't waste time designing, engineering, and manufacturing a product that doesn't fulfill the customer's expectations.

Marketing

Using 3D CAD data from the digital prototype, companies can create realistic visualizations, renderings, and animations to market products in print, on the web, in catalogues, or in television commercials. Without needing to produce expensive physical prototypes and conduct photo shoots, companies can create virtual photography and cinematography nearly indistinguishable from reality. One aspect of this is creating the illumination environment for the subject, an area of new development.

Realistic visualizations help not only marketing communications but the sales process as well. Companies can respond to requests for proposals and bid on projects without building physical prototypes, using visualizations to show the potential customer what the end product will be like. In addition, visualizations can help companies bid more accurately by making it more likely that everyone has the same expectations about the end product. Companies can also use visualizations to facilitate the review process once they've secured the business. Reviewers can interact with digital prototypes in realistic environments, allowing for the validation of design decisions early in the product development process.

Connecting Data and Teams

To support a Digital Prototyping workflow, companies use data management tools to coordinate all teams at every stage in the workflow, streamline design revisions, automate release processes for digital prototypes, and manage engineering bills of materials. These data management tools connect all workgroups to critical Digital Prototyping data.

Digital Prototyping and Sustainability

Companies increasingly use Digital Prototyping to understand sustainability factors in new product designs, and to help meet customer requirements for sustainable products and processes. They minimize material use by assessing multiple design scenarios to determine the optimal amount and type of material required to meet product specifications. In addition, by reducing the number of physical prototypes required, manufacturers can trim down their material waste.
Digital Prototyping can also help companies reduce the carbon footprint of their products. For example, WinWinD, a company that creates innovative wind turbines, uses Digital Prototyping to optimize the energy production of wind-power turbines for varying wind conditions. Furthermore, the rich product data supplied by Digital Prototyping can help companies demonstrate conformance with the growing number of product-related environmental regulations and voluntary sustainability standards.

References

Prototyping
Prototypes
Digital prototyping
Technology
2,553
232,495
https://en.wikipedia.org/wiki/Motivation
Motivation is an internal state that propels individuals to engage in goal-directed behavior. It is often understood as a force that explains why people or animals initiate, continue, or terminate a certain behavior at a particular time. It is a complex phenomenon and its precise definition is disputed. It contrasts with amotivation, which is a state of apathy or listlessness. Motivation is studied in fields like psychology, neuroscience, motivation science, and philosophy.

Motivational states are characterized by their direction, intensity, and persistence. The direction of a motivational state is shaped by the goal it aims to achieve. Intensity is the strength of the state and affects whether the state is translated into action and how much effort is employed. Persistence refers to how long an individual is willing to engage in an activity. Motivation is often divided into two phases: in the first phase, the individual establishes a goal, while in the second phase, they attempt to reach this goal.

Many types of motivation are discussed in the academic literature. Intrinsic motivation comes from internal factors like enjoyment and curiosity; it contrasts with extrinsic motivation, which is driven by external factors like obtaining rewards and avoiding punishment. For conscious motivation, the individual is aware of the motive driving the behavior, which is not the case for unconscious motivation. Other types include: rational and irrational motivation; biological and cognitive motivation; short-term and long-term motivation; and egoistic and altruistic motivation.

Theories of motivation are conceptual frameworks that seek to explain motivational phenomena. Content theories aim to describe which internal factors motivate people and which goals they commonly follow. Examples are the hierarchy of needs, the two-factor theory, and the learned needs theory. They contrast with process theories, which discuss the cognitive, emotional, and decision-making processes that underlie human motivation, like expectancy theory, equity theory, goal-setting theory, self-determination theory, and reinforcement theory.

Motivation is relevant to many fields. It affects educational success, work performance, athletic success, and economic behavior. It is further pertinent in the fields of personal development, health, and criminal law.

Definition, measurement, and semantic field

Motivation is often understood as an internal state or force that propels individuals to engage and persist in goal-directed behavior. Motivational states explain why people or animals initiate, continue, or terminate a certain behavior at a particular time. Motivational states are characterized by the goal they aim for, as well as the intensity and duration of the effort devoted to the goal. Motivational states have different degrees of strength: if a state has a high degree, it is more likely to influence behavior than if it has a low degree. Motivation contrasts with amotivation, which is a lack of interest in a certain activity or a resistance to it. In a slightly different sense, the word "motivation" can also refer to the act of motivating someone and to a reason or goal for doing something. It comes from the Latin term movere (to move).

The traditional discipline studying motivation is psychology. It investigates how motivation arises, which factors influence it, and what effects it has. Motivation science is a more recent field of inquiry focused on an integrative approach that tries to link insights from different subdisciplines.
Neurology is interested in the underlying neurological mechanisms, such as the involved brain areas and neurotransmitters. Philosophy aims to clarify the nature of motivation and understand its relation to other concepts.

Motivation is not directly observable but has to be inferred from other characteristics, and there are different ways to do so and measure it. The most common approach is to rely on self-reports and use questionnaires. They can include direct questions like "how motivated are you?" but may also inquire about additional factors in relation to the goals, feelings, and effort invested in a particular activity. Another approach is based on external observation of the individual. This can concern studying behavioral changes but may also include additional methods like measuring brain activity and skin conductance.

Academic definitions

Many academic definitions of motivation have been proposed but there is little consensus on its precise characterization. This is partly because motivation is a complex phenomenon with many aspects, and different definitions often focus on different aspects.

Some definitions emphasize internal factors. This can involve psychological aspects in relation to desires and volitions or physiological aspects regarding physical needs. For example, John Dewey and Abraham Maslow use a psychological perspective to understand motivation as a form of desire while Jackson Beatty and Charles Ransom Gallistel see it as a physical process akin to hunger and thirst.

Some definitions stress the continuity between human and animal motivation, but others draw a clear distinction between the two. This is often emphasized by the idea that human agents act for reasons and are not mechanistically driven to follow their strongest impulse. A closely related disagreement concerns the role of awareness and rationality. Definitions emphasizing this aspect understand motivation as a mostly conscious process of rationally considering the most appropriate behavior. Another perspective emphasizes the multitude of unconscious and subconscious factors responsible.

Other definitions characterize motivation as a form of arousal that provides energy to direct and maintain behavior. For instance, K. B. Madsen sees motivation as "the 'driving force' behind behavior" while Elliot S. Valenstein and Roderick Wong emphasize that motivation leads to goal-oriented behavior that is interested in consequences. The role of goals in motivation is sometimes paired with the claim that it leads to flexible behavior in contrast to blind reflexes or fixed stimulus-response patterns. This is based on the idea that individuals use means to bring about the goal and are flexible in regard to what means they employ. According to this view, the feeding behavior of rats is based on motivation since they can learn to traverse through complicated mazes to satisfy their hunger, which is not the case for the stimulus-bound feeding behavior of flies.

Some psychologists define motivation as a temporary and reversible process. For example, Robert A. Hinde and John Alcock see it as a transitory state that affects responsiveness to stimuli. This approach makes it possible to contrast motivation with phenomena like learning which bring about permanent behavioral changes.

Another approach is to provide a very broad characterization to cover many different aspects of motivation. This often results in very long definitions that include many of the factors listed above.
The multitude of definitions and the lack of consensus have prompted some theorists, like psychologists B. N. Bunnell and Donald A. Dewsbury, to doubt that the concept of motivation is theoretically useful and to see it instead as a mere hypothetical construct.

Semantic field

The term "motivation" is closely related to the term "motive" and the two terms are often used as synonyms. However, some theorists distinguish their precise meanings as technical terms. For example, psychologist Andrea Fuchs understands motivation as the "sum of separate motives". According to psychologist Ruth Kanfer, motives are stable dispositional tendencies that contrast with the dynamic nature of motivation as a fluctuating internal state.

Motivation is closely related to ability, effort, and action. An ability is a power to perform an action, like the ability to walk or to write. Individuals can have abilities without exercising them. They are more likely to be motivated to do something if they have the ability to do it, but having an ability is not a requirement and it is possible to be motivated while lacking the corresponding ability. Effort is the physical and mental energy invested when exercising an ability. It depends on motivation, and high motivation is associated with high effort. The quality of the resulting performance depends on the ability, effort, and motivation. Motivation to perform an action can be present even if the action is not executed. This is the case, for instance, if there is a stronger motivation to engage in a different action at the same time.

Components and stages

Motivation is a complex phenomenon that is often analyzed in terms of different components and stages. Components are aspects that different motivational states have in common. Often-discussed components are direction, intensity, and persistence. Stages or phases are temporal parts of how motivation unfolds over time, like the initial goal-setting stage in contrast to the following goal-striving stage.

A closely related issue concerns the different types of mental phenomena that are responsible for motivation, like desires, beliefs, and rational deliberation. Some theorists hold that a desire to do something is an essential part of all motivational states. This view is based on the idea that the desire to do something justifies the effort to engage in this activity. However, this view is not generally accepted and it has been suggested that at least in some cases, actions are motivated by other mental phenomena, like beliefs or rational deliberation. For example, a person may be motivated to undergo a painful root canal treatment because they conclude that it is a necessary thing to do even though they do not actively desire it.

Components

Motivation is sometimes discussed in terms of three main components: direction, intensity, and persistence. Direction refers to the goal people choose. It is the objective in which they decide to invest their energy. For example, if one roommate decides to go to the movies while the other visits a party, they both have motivation but their motivational states differ in regard to the direction they pursue. The pursued objective often forms part of a hierarchy of means-end relationships. This implies that several steps or lower-level goals may have to be fulfilled to reach a higher-level goal. For example, to achieve the higher-level goal of writing a complete article, one needs to realize different lower-level goals, like writing different sections of the article.
Some goals are specific, like reducing one's weight by 3 kg, while others are non-specific, like losing as much weight as possible. Specific goals often affect motivation and performance positively by making it easier to plan and track progress.

The goal belongs to the individual's motivational reason and explains why they favor an action and engage in it. Motivational reasons contrast with normative reasons, which are facts that determine what should be done or why a course of action is objectively good. Motivational reasons can be in tune with normative reasons but this is not always the case. For example, if a cake is poisoned then this is a normative reason for the host not to offer it to their guests. But if they are not aware of the poison then politeness may be their motivating reason to offer it.

The intensity of motivation corresponds to how much energy someone is willing to invest into a particular task. For instance, two athletes engaging in the same drill have the same direction but differ concerning the motivational intensity if one gives their best while the other only puts in minimal effort. Some theorists use the term "effort" rather than "intensity" for this component. The strength of a motivational state also affects whether it is translated into action. One theory states that different motivational states compete with each other and that only the behavior with the highest net force of motivation is put into action. However, it is controversial whether this is always true. For example, it has been suggested that in cases of rational deliberation, it may be possible to act against one's strongest motive. Another problem is that this view may lead to a form of determinism that denies the existence of free will.

Persistence is the long-term component of motivation and refers to how long an individual engages in an activity. A high level of motivational persistence manifests itself in a sustained dedication over time. The motivational persistence in relation to the chosen goal contrasts with flexibility on the level of the means: individuals may adjust their approach and try different strategies on the level of the means to reach a pursued end. This way, individuals can adapt to changes in the physical and social environment that affect the effectiveness of previously chosen means.

The components of motivation can be understood in analogy to the allocation of limited resources: direction, intensity, and persistence determine where to allocate energy, how much of it, and for how long. For effective action, it is usually relevant to have the right form of motivation on all three levels: to pursue an appropriate goal with the required intensity and persistence.

Stages

The process of motivation is commonly divided into two stages: goal-setting and goal-striving. Goal-setting is the phase in which the direction of motivation is determined. It involves considering the reasons for and against different courses of action and then committing oneself to a goal one aims to achieve. The goal-setting process by itself does not ensure that the plan is carried out. This happens in the goal-striving stage, in which the individual tries to implement the plan. It starts with the initiation of the action and includes putting in effort and trying different strategies to succeed. Various difficulties can arise in this phase.
The individual has to muster the initiative to get started with the goal-directed behavior and stay committed even when faced with obstacles, without giving in to distractions. They also need to ensure that the chosen means are effective and that they do not overexert themselves.

Goal-setting and goal-striving are usually understood as distinct stages but they can be intertwined in various ways. Depending on the performance during the striving phase, the individual may adjust their goal. For example, if the performance is worse than expected, they may lower their goals. This can go hand in hand with adjusting the effort invested in the activity. Emotional states affect how goals are set and which goals are prioritized. Positive emotions are associated with optimism about the value of a goal and create a tendency to seek positive outcomes. Negative emotions are associated with a more pessimistic outlook and tend to lead to the avoidance of bad outcomes.

Some theorists have suggested further phases. For example, psychologist Barry J. Zimmerman includes an additional self-reflection phase after the performance. A further approach is to distinguish two parts of the planning: the first part consists in choosing a goal while the second part is about planning how to realize this goal.

Types

Many different types of motivation are discussed in the academic literature. They differ from each other based on the underlying mechanisms responsible for their manifestation, what goals are pursued, what temporal horizon they encompass, and who is intended to benefit.

Intrinsic and extrinsic

The distinction between intrinsic and extrinsic motivation is based on the source or origin of the motivation. Intrinsic motivation comes from within the individual, who engages in an activity out of enjoyment, curiosity, or a sense of fulfillment. It occurs when people pursue an activity for its own sake. It can be due to affective factors, when the person engages in the behavior because it feels good, or cognitive factors, when they see it as something good or meaningful. An example of intrinsic motivation is a person who plays basketball during lunch break only because they enjoy it. Extrinsic motivation arises from external factors, such as rewards, punishments, or recognition from others. This occurs when people engage in an activity because they are interested in the effects or the outcome of the activity rather than in the activity itself. For instance, if a student does their homework because they are afraid of being punished by their parents then extrinsic motivation is responsible.

Intrinsic motivation is often more highly regarded than extrinsic motivation. It is associated with genuine passion, creativity, a sense of purpose, and personal autonomy. It also tends to come with stronger commitment and persistence. Intrinsic motivation is a key factor in cognitive, social, and physical development. The degree of intrinsic motivation is affected by various conditions, including a sense of autonomy and positive feedback from others. In the field of education, intrinsic motivation tends to result in high-quality learning. However, there are also certain advantages to extrinsic motivation: it can provide people with motivation to engage in useful or necessary tasks which they do not naturally find interesting or enjoyable. Some theorists understand the difference between intrinsic and extrinsic motivation as a spectrum rather than a clear dichotomy.
This is linked to the idea that the more autonomous an activity is, the more it is associated with intrinsic motivation. A behavior can be motivated only by intrinsic motives, only by extrinsic motives, or by a combination of both. In the latter case, there are both internal and external reasons why the person engages in the behavior. If both are present, they may work against each other. For example, the presence of a strong extrinsic motivation, like a high monetary reward, can decrease intrinsic motivation. Because of this, the individual may be less likely to further engage in the activity if it no longer results in an external reward. However, this is not always the case, and under the right circumstances the combined effects of intrinsic and extrinsic motivation lead to higher performance.

Conscious and unconscious

Conscious motivation involves motives of which the person is aware. It includes the explicit recognition of goals and underlying values. Conscious motivation is associated with the formulation of a goal and a plan to realize it as well as its controlled step-by-step execution. Some theorists emphasize the role of the self in this process as the entity that plans, initiates, regulates, and evaluates behavior. An example of conscious motivation is a person in a clothing store who states that they want to buy a shirt and then goes on to buy one.

Unconscious motivation involves motives of which the person is not aware. It can be guided by deep-rooted beliefs, desires, and feelings operating beneath the level of consciousness. Examples include the unacknowledged influences of past experiences, unresolved conflicts, hidden fears, and defense mechanisms. These influences can affect decisions, impact behavior, and shape habits. An example of unconscious motivation is a scientist who believes that their research effort is a pure expression of their altruistic desire to benefit science while their true motive is an unacknowledged need for fame. External circumstances can also impact the motivation underlying unconscious behavior. An example is the effect of priming, in which an earlier stimulus influences the response to a later stimulus without the person's awareness of this influence. Unconscious motivation is a central topic in Sigmund Freud's psychoanalysis.

Early theories of motivation often assumed that conscious motivation is the primary form of motivation. However, this view has been challenged in the subsequent literature and there is no academic consensus on the relative extent of their influence.

Rational and irrational

Closely related to the contrast between conscious and unconscious motivation is the distinction between rational and irrational motivation. A motivational state is rational if it is based on a good reason. This implies that the motive of the behavior explains why the person should engage in the behavior. In this case, the person has an insight into why the behavior is considered valuable. For example, if a person saves a drowning child because they value the child's life then their motivation is rational. Rational motivation contrasts with irrational motivation, in which the person has no good reason that explains the behavior. In this case, the person lacks a clear understanding of the deeper source of motivation and in what sense the behavior is in tune with their values. This can be the case for impulsive behavior, for example, when a person spontaneously acts out of anger without reflecting on the consequences of their actions.
Rational and irrational motivation play a key role in the field of economics. In order to predict the behavior of economic actors, it is often assumed that they act rationally. In this field, rational behavior is understood as behavior that is in tune with self-interest while irrational behavior goes against self-interest. For example, based on the assumption that it is in the self-interest of firms to maximize profit, actions that lead to that outcome are considered rational while actions that impede profit maximization are considered irrational. However, when understood in a wider sense, rational motivation is a broader term that also includes behavior motivated by a desire to benefit others as a form of rational altruism. Biological and cognitive Biological motivation concerns motives that arise due to physiological needs. Examples are hunger, thirst, sex, and the need for sleep. They are also referred to as primary, physiological, or organic motives. Biological motivation is associated with states of arousal and emotional changes. Its source lies in innate mechanisms that govern stimulus-response patterns. Cognitive motivation concerns motives that arise from the psychological level. They include affiliation, competition, personal interests, and self-actualization as well as desires for perfection, justice, beauty, and truth. They are also called secondary, psychological, social, or personal motives. They are often seen as a higher or more refined form of motivation. The processing and interpretation of information play a key role in cognitive motivation. Cognitively motivated behavior is not an innate reflex but a flexible response to the available information that is based on past experiences and expected outcomes. It is associated with the explicit formulation of desired outcomes and engagement in goal-directed behavior to realize these outcomes. Some theories of human motivation see biological causes as the source of all motivation. They tend to conceptualize human behavior in analogy to animal behavior. Other theories allow for both biological and cognitive motivation and some put their main emphasis on cognitive motivation. Short-term and long-term Short-term and long-term motivation differ in regard to the temporal horizon and the duration of the underlying motivational mechanism. Short-term motivation is focused on achieving rewards immediately or in the near future. It is associated with impulsive behavior. It is a transient and fluctuating phenomenon that may arise and subside spontaneously. Long-term motivation involves a sustained commitment to goals in a more distant future. It encompasses a willingness to invest time and effort over an extended period before the intended goal is reached. It is often a more deliberative process that requires goal-setting and planning. Both short-term and long-term motivation are relevant to achieving one's goals. For example, short-term motivation is central when responding to urgent problems while long-term motivation is a key factor in pursuing far-reaching objectives. However, they sometimes conflict with each other by supporting opposing courses of action. An example is a married person who is tempted to have a one-night stand. In this case, there may be a clash between the short-term motivation to seek immediate physical gratification and the long-term motivation to preserve and nurture a successful marriage built on trust and commitment. 
Another example is the long-term motivation to stay healthy in contrast to the short-term motivation to smoke a cigarette. Egoistic and altruistic The difference between egoistic and altruistic motivation concerns who is intended to benefit from the anticipated course of action. Egoistic motivation is driven by self-interest: the person is acting for their own benefit or to fulfill their own needs and desires. This self-interest can take various forms, including immediate pleasure, career advancement, financial rewards, and gaining respect from others. Altruistic motivation is marked by selfless intentions and involves a genuine concern for the well-being of others. It is associated with the desire to assist and help others in a non-transactional manner without the goal of obtaining personal gain or rewards in return. According to the controversial thesis of psychological egoism, there is no altruistic motivation: all motivation is egoistic. Proponents of this view hold that even apparently altruistic behavior is caused by egoistic motives. For example, they may claim that people feel good about helping other people and that their egoistic desire to feel good is the true internal motivation behind the externally altruistic behavior. Many religions emphasize the importance of altruistic motivation as a component of religious practice. For example, Christianity sees selfless love and compassion as a way of realizing God's will and bringing about a better world. Buddhists emphasize the practice of loving-kindness toward all sentient beings as a means to eliminate suffering. Others Many other types of motivation are discussed in the academic literature. Moral motivation is closely related to altruistic motivation. Its motive is to act in tune with moral judgments and it can be characterized as the willingness to "do the right thing". The desire to visit a sick friend to keep a promise is an example of moral motivation. It can conflict with other forms of motivation, like the desire to go to the movies instead. An influential debate in moral philosophy centers around the question of whether moral judgments can directly provide moral motivation, as internalists claim. Externalists provide an alternative explanation by holding that additional mental states, like desires or emotions, are needed. Externalists hold that these additional states do not always accompany moral judgments, meaning that it would be possible to have moral judgments without a moral motivation to follow them. Certain forms of psychopathy and brain damage can inhibit moral motivation. Self-determination theorists, such as Edward Deci and Richard Ryan, distinguish between autonomous and controlled motivation. Autonomous motivation is associated with acting according to one's free will or doing something because one wants to do it. In the case of controlled motivation, the person feels pressured into doing something by external forces. A related contrast is between push and pull motivation. Push motivation arises from unfulfilled internal needs and aims at satisfying them. For example, hunger may push an individual to find something to eat. Pull motivation arises from an external goal and aims at achieving this goal, like the motivation to get a university degree. Achievement motivation is the desire to overcome obstacles and strive for excellence. Its goal is to do things well and become better even in the absence of tangible external rewards. It is closely related to the fear of failure. 
An example of achievement motivation in sports is a person who challenges stronger opponents in an attempt to get better. Human motivation is sometimes contrasted with animal motivation. The field of animal motivation examines the reasons and mechanisms underlying animal behavior. It belongs to psychology and zoology. It gives specific emphasis to the interplay of external stimulation and internal states. It further considers how an animal benefits from a certain behavior as an individual and in terms of evolution. There are important overlaps between the fields of animal and human motivation. Studies on animal motivation tend to focus more on the role of external stimuli and instinctive responses while the role of free decisions and delayed gratification has a more prominent place when discussing human motivation. Amotivation and akrasia Motivation contrasts with amotivation (also known as avolition) which is an absence of interest. Individuals in the state of amotivation feel apathy or lack the willingness to engage in a particular behavior. For instance, amotivated children at school remain passive in class, do not engage in classroom activities, and fail to follow teacher instructions. Amotivation can be a significant barrier to productivity, goal attainment, and overall well-being. It can be caused by factors like unrealistic expectations, helplessness, feelings of incompetence, and the inability to see how one's actions affect outcomes. In the field of Christian spirituality, the terms acedia and accidie are often used to describe a form of amotivation or listlessness associated with a failure to engage in spiritual practices. Amotivation is usually a temporary state. The term amotivational syndrome refers to a more permanent and wide-reaching condition. It involves apathy and lack of activity in relation to a broad range of activities and is associated with incoherence, inability to concentrate, and memory disturbance. The term disorders of diminished motivation covers a wide range of related phenomena, including abulia, akinetic mutism, and other motivation-related neurological disorders. Amotivation is closely related to akrasia. A person in the state of akrasia believes that they should perform a certain action but cannot motivate themselves to do it. This means that there is an internal conflict between what a person believes they should do and what they actually do. The cause of akrasia is sometimes that a person gives in to temptations and is not able to resist them. For this reason, akrasia is also referred to as weakness of the will. An addict who compulsively consumes drugs even though they know that it is not in their best self-interest is an example of akrasia. Akrasia contrasts with enkrasia, which is a state where a person's motivation aligns with their beliefs. Theories Theories of motivation are frameworks or sets of principles that aim to explain motivational phenomena. They seek to understand how motivation arises and what causes and effects it has as well as the goals that commonly motivate people. This way, they provide explanations of why an individual engages in one behavior rather than another, how much effort they invest, and how long they continue to strive toward a given goal. Important debates in the academic literature concern to what extent motivation is innate or based on genetically determined instincts rather than learned through previous experience. 
A closely related issue is whether motivational processes are mechanistic and run automatically or have a more complex nature involving cognitive processes and active decision-making. Another discussion revolves around the topic of whether the primary sources of motivation are internal needs rather than external goals. A common distinction among theories of motivation is between content theories and process theories. Content theories attempt to identify and describe the internal factors that motivate people, such as different types of needs, drives, and desires. They examine which goals motivate people. Influential content theories are Maslow's hierarchy of needs, Frederick Herzberg's two-factor theory, and David McClelland's learned needs theory. Process theories discuss the cognitive, emotional, and decision-making processes that underlie human motivation. They examine how people select goals and the means to achieve them. Major process theories are expectancy theory, equity theory, goal-setting theory, self-determination theory, and reinforcement theory. Another way to classify theories of motivation focuses on the role of inborn physiological processes in contrast to cognitive processes and distinguishes between biological, psychological, and biopsychosocial theories. Major content theories Maslow holds that humans have different kinds of needs and that those needs are responsible for motivation. According to him, they form a hierarchy of needs that is composed of lower and higher needs. Lower needs belong to the physiological level and are characterized as deficiency needs since they indicate some form of lack. Examples are the desire for food, water, and shelter. Higher needs belong to the psychological level and are associated with the potential to grow as a person. Examples are self-esteem in the form of a positive self-image and personal development by actualizing one's unique talents and abilities. Two key principles of Maslow's theory are the progression principle and the deficit principle. They state that lower needs have to be fulfilled before higher needs become activated. This means that higher needs, like esteem and self-actualization, are unable to provide full motivation while lower needs, like food and shelter, remain unfulfilled. An influential extension of Maslow's hierarchy of needs was proposed by Clayton Alderfer in the form of his ERG theory. Herzberg's two-factor theory also analyzes motivation in terms of lower and higher needs. Herzberg applies it specifically to the workplace and distinguishes between lower-level hygiene factors and higher-level motivators. Hygiene factors are associated with the work environment and conditions. Examples include company policies, supervision, salary, and job security. They are essential to prevent job dissatisfaction and associated negative behavior, such as frequent absence or decreased effort. Motivators are more directly related to the work itself. They include the nature of the work and the associated responsibility as well as recognition and personal and professional growth opportunities. They are responsible for job satisfaction as well as increased commitment and creativity. This theory implies, for example, that increasing salary and job security may not be sufficient to fully motivate workers if their higher needs are not met. McClelland's learned needs theory states that individuals have three primary needs: affiliation, power, and achievement. The need for affiliation is a desire to form social connections with others. 
The need for power is a longing to exert control over one's surroundings and wield influence over others. The need for achievement relates to a yearning to establish ambitious objectives and to receive positive feedback on one's performance. McClelland holds that these needs are present in everyone but that their exact form, strength, and expression is shaped by cultural influences and the individual's experiences. For example, affiliation-oriented individuals are primarily motivated by establishing and maintaining social relations while achievement-oriented individuals are inclined to set challenging goals and strive for personal excellence. Collectivist cultures tend to place more emphasis on the need for affiliation, in contrast to the focus on the need for achievement in individualist cultures. Major process theories Expectancy theory states that whether a person is motivated to perform a certain behavior depends on the expected results of this behavior: the more positive the expected results are, the higher the motivation to engage in that behavior. Expectancy theorists understand the expected results in terms of three factors: expectancy, instrumentality, and valence. Expectancy concerns the relation between effort and performance. If the expectancy of a behavior is high then the person believes that their efforts will likely result in successful performance. Instrumentality concerns the relation between performance and outcomes. If the instrumentality of a performance is high then the person believes that it will likely result in the intended outcomes. Valence is the degree to which the outcomes are attractive to the person. These three components affect each other in a multiplicative way, meaning that high motivation is only present if all of them are high. In this case, the person believes it likely that they will perform well, that the performance will lead to the expected result, and that the result has a high value. Equity theory sees fairness as a key aspect of motivation. According to it, people are interested in the proportion between effort and reward: they judge how much energy one has to invest and how good the outcome is. Equity theory states that individuals assess fairness by comparing their own ratio of effort and reward to the ratio of others. A key idea of equity theory is that people are motivated to reduce perceived inequity. This is especially the case if they feel that they receive fewer rewards than others. For example, if an employee has the impression that they work longer than their co-workers while receiving the same salary, this may motivate them to ask for a raise. Goal-setting theory holds that having clearly defined goals is one of the key factors of motivation. It states that effective goals are specific and challenging. A goal is specific if it involves a clear objective, such as a quantifiable target one intends to reach rather than just trying to do one's best. A goal is challenging if it is achievable but hard to reach. Two additional factors identified by goal-setting theorists are goal commitment and self-efficacy. Commitment is a person's dedication to achieving a goal and includes an unwillingness to abandon or change the goal when meeting resistance. To have self-efficacy means to believe in oneself and in one's ability to succeed. This belief can help people persevere through obstacles and remain motivated to reach challenging goals. 
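The multiplicative rule of expectancy theory described above lends itself to a short numerical sketch. The following Python snippet is illustrative only: the 0-to-1 scale and the sample factor values are simplifying assumptions made for this example, not part of the theory's canonical formulation (which, for instance, allows valence to be negative).

```python
def motivational_force(expectancy: float, instrumentality: float, valence: float) -> float:
    """Expectancy-theory product: motivation = expectancy * instrumentality * valence.

    All three factors are placed here on a simplified 0-to-1 scale; this scale
    (and the sample values below) are illustrative assumptions.
    """
    for name, value in (("expectancy", expectancy),
                        ("instrumentality", instrumentality),
                        ("valence", valence)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {value}")
    # Multiplicative combination: a zero in any factor collapses motivation to zero.
    return expectancy * instrumentality * valence

# Hypothetical values: strong effort-performance belief (0.9), moderate
# performance-outcome belief (0.6), and a highly valued outcome (0.8).
print(round(motivational_force(0.9, 0.6, 0.8), 3))  # 0.432
print(motivational_force(0.9, 0.0, 0.8))            # 0.0: no performance-outcome link
```

The product form captures the theory's claim that high motivation requires all three components at once: if any one of them drops to zero, overall motivation vanishes no matter how large the others are.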
According to self-determination theory, the main factors influencing motivation are autonomy, competence, and connection. People act autonomously if they decide themselves what to do rather than following orders. This tends to increase motivation since humans usually prefer to act in accordance with their wishes, values, and goals without being coerced by external forces. If a person is competent at a certain task then they tend to feel good about the work itself and its results. Lack of competence can decrease motivation by leading to frustration if one's efforts fail to succeed. Connection is another factor identified by self-determination theorists and concerns the social environment. Motivation tends to be reinforced for activities in which a person can positively relate to others, receives approval, and can reach out for help. Reinforcement theory is based on behaviorism and explains motivation in relation to positive and negative outcomes of previous behavior. It uses the principle of operant conditioning, which states that behavior followed by positive consequences is more likely to be repeated, while behavior followed by negative consequences is less likely to be repeated. This theory predicts, for example, that if an aggressive behavior of a child is rewarded then this will reinforce the child's motivation for aggressive behavior in the future. In various fields Neurology In neurology, motivation is studied from a physiological perspective by examining the brain processes and brain areas involved in motivational phenomena. Neurology uses data from both humans and animals, which it obtains through a variety of methods, including the use of functional magnetic resonance imaging and positron emission tomography. It investigates regular motivational processes, pathological cases, and the effect of possible treatments. It is a complex discipline that relies on insights from fields like clinical, experimental, and comparative psychology. Neurologists understand motivation as a multifaceted phenomenon that integrates and processes signals to make complex decisions and coordinate actions. Motivation is influenced by the organism's physiological state, like stress, information about the environment, and personal history, like past experiences with this environment. All this information is integrated to perform a cost–benefit analysis, which considers the time, effort, and discomfort associated with pursuing a goal as well as positive outcomes, like fulfilling one's needs or escaping harm. This form of reward prediction is associated with several brain areas, like the orbitofrontal cortex, the anterior cingulate, and the basolateral amygdala. The dopamine system plays a key role in learning which positive and negative outcomes are associated with a specific behavior and how certain signals, like environmental cues, are related to specific goals. Through these associations, motivation can automatically arise when the signals are present. For example, if a person associates having a certain type of food with a specific time of day then they may automatically feel motivated to eat this food when the time arrives. Education Motivation plays a key role in education since it affects the students' engagement with the studied topic and shapes their learning experience and academic success. Motivated students are more likely to participate in classroom activities and persevere through challenges. 
One of the responsibilities of educators and educational institutions is to establish a learning environment that fosters and sustains students' motivation to ensure effective learning. Educational research is particularly interested in understanding the different effects that intrinsic and extrinsic motivation have on the learning process. In the case of intrinsic motivation, students are interested in the subject and the learning experience itself. Students driven by extrinsic motivation seek external rewards, like good grades or peer recognition. Intrinsic motivation is often seen as the preferred type of motivation since it is associated with more in-depth learning, better memory retention, and long-term commitment. Extrinsic motivation in the form of rewards and recognition also plays a key role in the learning process. However, it can conflict with intrinsic motivation in some cases and may then hinder creativity. Various factors influence student motivation. It is usually beneficial to have an organized classroom with few distractions. The learning material should be neither too easy, which threatens to bore students, nor too difficult, which can lead to frustration. The behavior of the teacher also has a significant impact on student motivation, for example, in regard to how the material is presented, the feedback they provide on assignments, and the interpersonal relation they build with the students. Teachers who are patient and supportive can encourage interaction by interpreting mistakes as learning opportunities. Work Work motivation is an often-studied topic in the fields of organization studies and organizational behavior. They aim to understand human motivation in the context of organizations and investigate its role in work and work-related activities including human resource management, employee selection, training, and managerial practices. Motivation plays a key role in the workplace on various levels. It impacts how employees feel about their work, their level of determination, commitment, and overall job satisfaction. It also affects employee performance and overall business success. Lack of motivation can lead to decreased productivity due to complacency, disinterest, and absenteeism. It can also manifest in the form of occupational burnout. Various factors influence work motivation. They include the personal needs and expectations of the employees, the characteristics of the tasks they perform, and whether the work conditions are perceived as fair and just. Another key aspect is how managers communicate and provide feedback.  Understanding and managing employee motivation is essential for managers to ensure effective leadership, employee performance, and business success. Cultural differences can have a significant impact on how to motivate workers. For example, workers from economically advanced countries may respond better to higher-order goals like self-actualization while the fulfillment of more basic needs tends to be more central for workers from less economically developed countries. There are different approaches to increasing employee motivation. Some focus on material benefits, like high salary, health care, stock ownership plans, profit-sharing, and company cars. Others aim to make changes to the design of the job itself. For example, overly simplified and segmented jobs tend to result in decreased productivity and lower employee morale. The dynamics of motivation differ between paid work and volunteer work. 
Intrinsic motivation plays a larger role for volunteers, with key motivators being self-esteem, the desire to help others, career advancement, and self-improvement. Sport Motivation is a fundamental aspect of sports. It affects how consistently athletes train, how much effort they are willing to invest, and how well they persevere through challenges. Proper motivation is an influential factor for athletic success. It concerns both the long-term motivation needed to sustain progress and commitment over an extended period and the short-term motivation required to mobilize as much energy as possible for a high performance on the day of competition. It is the responsibility of coaches not just to advise and instruct athletes on training plans and strategies but also to motivate them to put in the required effort and give their best. There are different coaching styles, and the right approach may depend on the personalities of the coach, the athlete, and the group as well as the general athletic situation. Some styles focus on realizing a particular goal while others concentrate on teaching, following certain principles, or building a positive interpersonal relationship. Criminal law The motive of a crime is a key aspect in criminal law. It refers to the reasons that the accused had for committing a crime. Motives are often used as evidence to demonstrate why the accused might have committed the crime and how they would benefit from it. The absence of a motive can be used as evidence to put the accused's involvement in the crime into doubt. For example, financial gain is a motive to commit a crime from which the perpetrator would financially benefit, like embezzlement. As a technical term, motive is distinguished from intent. Intent is the mental state of the defendant and belongs to mens rea. A motive is a reason that tempts a person to form an intent. Unlike intent, motive is usually not an essential element of a crime: it plays various roles in investigative considerations but is normally not required to establish the defendant's guilt. In a different sense, motivation also plays a role in justifying why convicted offenders should be punished. According to the deterrence theory of law, one key aspect of punishment for law violation is to motivate both the convicted individual and potential future wrongdoers to not engage in similar criminal behavior. Others Motivation is a central factor in implementing and maintaining lifestyle changes in the fields of personal development and health. Personal development is a process of self-improvement aimed at enhancing one's skills, knowledge, talents, and overall well-being. It is realized through practices that promote growth and improve different areas in one's life. Motivation is pivotal in engaging in these practices. It is especially relevant to ensure long-term commitment and to follow through with one's plans. For example, health-related lifestyle changes may at times require high willpower and self-control to implement meaningful adjustments while resisting impulses and bad habits. This is the case when trying to resist urges to smoke, consume alcohol, and eat fattening food. Motivation plays a key role in economics since it is what drives individuals and organizations to make economic decisions and engage in economic activities. It affects diverse processes involving consumer behavior, labor supply, and investment decisions. 
For example, rational choice theory, a fundamental theory in economics, postulates that individuals are motivated by self-interest and aim to maximize their utility, which guides economic behavior like consumption choices. In video games, player motivation is what drives people to play a game and engage with its contents. Player motivation often revolves around completing certain objectives, like solving a puzzle, beating an enemy, or exploring the game world. It concerns both smaller objectives within a part of the game as well as finishing the game as a whole. Understanding different types of player motivation helps game designers make their games immersive and appealing to a wide audience. Motivation is also relevant in the field of politics. This is true specifically for democracies to ensure active engagement, participation, and voting. See also 3C-model Amotivational syndrome Effects of hormones on sexual motivation Employee engagement Enthusiasm Frustration Happiness at work Health action process approach Hedonic motivation Humanistic psychology I-Change Model Incentives Learned industriousness Motivation crowding theory Nucleus accumbens Positive education Positive psychology in the workplace Regulatory focus theory Rubicon model (psychology) Striatum Work engagement References Notes Citations Sources Behavior Cognition Psychology
Motivation
Biology
9,231
24,200,321
https://en.wikipedia.org/wiki/Cognitive%20inertia
Cognitive inertia is the tendency for a particular orientation in how an individual thinks about an issue, belief, or strategy to resist change. Clinical and neuroscientific literature often defines it as a lack of motivation to generate distinct cognitive processes needed to attend to a problem or issue. The physics term inertia emphasizes the rigidity and resistance to change in the method of cognitive processing that has been used for a significant amount of time. Commonly confused with belief perseverance, cognitive inertia is the perseverance of how one interprets information, not the perseverance of the belief itself. Cognitive inertia has been causally implicated in disregarding impending threats to one's health or environment, enduring political values, and deficits in task switching. Interest in the phenomenon was primarily taken up by economic and industrial psychologists to explain resistance to change in brand loyalty, group brainstorming, and business strategies. In the clinical setting, cognitive inertia has been used as a diagnostic tool for neurodegenerative diseases, depression, and anxiety. Critics have stated that the term oversimplifies resistant thought processes and have suggested a more integrative approach that involves motivation, emotion, and developmental factors. History and methods Early history The idea of cognitive inertia has its roots in philosophical epistemology. Early allusions to a reduction of cognitive inertia can be found in the Socratic dialogues written by Plato. Socrates builds his argument by using the detractor's beliefs as the premises of his conclusions. In doing so, Socrates reveals the detractor's fallacy of thought, inducing the detractor to change their mind or face the reality that their thought processes are contradictory. Ways to combat persistence of cognitive style are also seen in Aristotle's syllogistic method, which employs the logical consistency of the premises to convince an individual of the conclusion's validity. At the beginning of the twentieth century, two of the earliest experimental psychologists, Müller and Pilzecker, defined perseveration of thought to be "the tendency of ideas, after once having entered consciousness, to rise freely again in consciousness". Müller described perseveration by illustrating his own inability to inhibit old cognitive strategies with a syllable-switching task, while his wife easily switched from one strategy to the next. One of the earliest personality researchers, W. Lankes, more broadly defined perseveration as "being confined to the cognitive side" and possibly "counteracted by strong will". These early ideas of perseveration were the precursor to how the term cognitive inertia would be used to study certain symptoms in patients with neurodegenerative disorders, rumination, and depression. Cognitive psychology Originally proposed by William J. McGuire in 1960, the theory of cognitive inertia was built upon emergent theories in social psychology and cognitive psychology that centered around cognitive consistency, including Fritz Heider's balance theory and Leon Festinger's cognitive dissonance. McGuire used the term cognitive inertia to account for an initial resistance to changing how an idea was processed after new information that conflicted with the idea had been acquired. In McGuire's initial study involving cognitive inertia, participants gave their opinions of how probable they believed various topics to be. 
A week later, they returned to read messages related to the topics they had given their opinions on. The messages were presented as factual and were targeted to change the participants' belief in how probable the topics were. Immediately after reading the messages, and one week later, the participants were again assessed on how probable they believed the topics to be. Discomforted by the inconsistency between the related information from the messages and their initial ratings on the topics, McGuire believed the participants would be motivated to shift their probability ratings to be more consistent with the factual messages. However, the participants' opinions did not immediately shift toward the information presented in the messages. Instead, a shift towards consistency of thought on the information from the messages and topics grew stronger as time passed, often referred to as "seepage" of information. The lack of change was reasoned to be due to persistence in the individual's existing thought processes which inhibited their ability to re-evaluate their initial opinion properly, or as McGuire called it, cognitive inertia. Probabilistic model Although cognitive inertia was related to many of the consistency theories at the time of its conception, McGuire used a unique method of probability theory and logic to support his hypotheses on change and persistence in cognition. Utilizing a syllogistic framework, McGuire proposed that if three issues (a, b and c) were so interrelated that an individual's opinion were in complete support of issues a and b then it would follow that their opinion on issue c would be supported as a logical conclusion. Furthermore, McGuire proposed that if an individual's belief in the probability (p) of the supporting issues (a or b) was changed, then not only would the explicitly stated issue (c) change, but a related implicit issue (d) could be changed as well. More formally:

\[ p(c) = p(a \wedge b) + p\bigl(c \mid \lnot(a \wedge b)\bigr)\, p\bigl(\lnot(a \wedge b)\bigr) \]

This formula was used by McGuire to show that the effect of a persuasive message on a related, but unmentioned, topic (d) took time to sink in. The assumption was that topic d was predicated on issues a and b, similar to issue c, so if the individual agreed with issue c then so too should they agree with issue d. However, in McGuire's initial study, immediate measurement on issue d, after agreement on issues a, b and c, had only shifted half the amount that would be expected to be logically consistent. Follow-up a week later showed that the opinion on issue d had shifted enough to be logically consistent with issues a, b, and c, which not only supported the theory of cognitive consistency but also demonstrated the initial hurdle of cognitive inertia. The model was based on probability to account for the idea that individuals do not necessarily assume every issue is 100% likely to happen, but instead there is a likelihood of an issue occurring and the individual's opinion on that likelihood will rest on the likelihood of other interrelated issues. Examples Public health Historical Group (cognitive) inertia, how a subset of individuals view and process an issue, can have detrimental effects on how emergent and existing issues are handled. In an effort to describe the almost lackadaisical attitude from a large majority of U.S. citizens toward the insurgence of the Spanish flu in 1918, historian Tom Dicke has proposed that cognitive inertia explains why many individuals did not take the flu seriously. At the time, most U.S. citizens were familiar with the seasonal flu. 
They viewed it as an irritation that was often easy to treat, infected few, and passed quickly with few complications and rarely a death. However, this way of thinking was detrimental when applied to the Spanish flu: its quick spread and virulent form demanded preparation, prevention, and treatment, but these did not come until it was much too late, and it became one of the deadliest pandemics in history. Contemporary In the more modern period, there is an emerging position that anthropogenic climate change denial is a kind of cognitive inertia. Despite the evidence provided by scientific discovery, there are still those – including nations – who deny its incidence in favor of existing patterns of development. Geography To better understand how individuals store and integrate new knowledge with existing knowledge, Friedman and Brown tested participants on where they believed countries and cities to be located latitudinally and then, after giving them the correct information, tested them again on different cities and countries. The majority of participants were able to use the correct information to update their cognitive understanding of geographical locations and place the new locations closer to their correct latitudinal location, which supported the idea that new knowledge affects not only the direct information but also related information. However, there was a small effect of cognitive inertia, as some areas were unaffected by the correct information, which the researchers suggested was due to a lack of linkage between the corrected information and the new locations presented. Group membership Politics The persistence of political group membership and ideology is suggested to be due to the inertia of how the individual has perceived the grouping of ideas over time. The individual may accept that something counter to their perspective is true, but it may not be enough to tip the balance of how they process the entirety of the subject. Governmental organizations can often be resistant or glacially slow to change along with social and technological transformation. Even when evidence of malfunction is clear, institutional inertia can persist. Political scientist Francis Fukuyama has asserted that humans imbue the rules they enact and follow with intrinsic value, especially in the larger societal institutions that create order and stability. Despite rapid social change and increasing institutional problems, the value placed on an institution and its rules can mask how well an institution is functioning as well as how that institution could be improved. The inability to change an institutional mindset is reflected in the theory of punctuated equilibrium: long periods of deleterious governmental policies punctuated by moments of civil unrest. After decades of economic decline, the United Kingdom's referendum to leave the EU was seen as an example of dramatic movement after a long period of governmental inertia. Interpersonal roles The unwavering views of the roles people play in our lives have been suggested as a form of cognitive inertia. When asked how they would feel about a classmate marrying their mother or father, many students said they could not view their classmate as a step-parent. Some students went so far as to say that the hypothetical relationship felt like incest. Role inertia has also been implicated in marriage and the likelihood of divorce. Research on couples who cohabit together before marriage shows they are more likely to get divorced than those who do not. 
The effect is most often seen in a subset of couples who cohabit without first being transparent about future expectations of marriage. Over time, cognitive role inertia takes over, and the couple marries without fully processing the decision, often with one or both of the partners not fully committed to the idea. The lack of deliberative processing of existing problems and levels of commitment in the relationship can lead to increased stress, arguments, dissatisfaction, and divorce. In business Cognitive inertia is regularly referenced in business and management to refer to consumers' continued use of products, a lack of novel ideas in group brainstorming sessions, and lack of change in competitive strategies. Brand loyalty Gaining and retaining new customers is essential to whether a business succeeds early on. To assess a service, product, or likelihood of customer retention, many companies will invite their customers to complete satisfaction surveys immediately after purchasing a product or service. However, unless the satisfaction survey is completed immediately after the point of purchase, the customer response is often based on an existing mindset about the company, not the actual quality of the experience. Unless the product or service is extremely negative or positive, cognitive inertia related to how the customer feels about the company will not be inhibited, even when the product or service is substandard. These satisfaction surveys can therefore lack the information businesses need to improve a service or product that will allow them to survive against the competition. Brainstorming Cognitive inertia plays a role in why a lack of ideas is generated during group brainstorming sessions. Individuals in a group will often follow an idea trajectory, in which they continue to narrow in on ideas based on the very first idea proposed in the brainstorming session. This idea trajectory inhibits the creation of the new ideas central to the group's initial formation. In an effort to combat cognitive inertia in group brainstorming, researchers had business students use either a single-dialogue or a multiple-dialogue approach to brainstorming. In the single-dialogue version, the business students all listed their ideas and created a dialogue around the list, whereas in the multi-dialogue version, ideas were placed in subgroups that individuals could choose to enter and talk about and then freely move to another subgroup. The multi-dialogue approach was able to combat cognitive inertia by allowing different ideas to be generated in sub-groups simultaneously, and each time an individual switched to a different sub-group, they had to change how they were processing the ideas, which led to more novel and high-quality ideas. Competitive strategies Adapting cognitive strategies to changing business climates is often integral to whether a business succeeds or fails during economic stress. In the late 1980s in the UK, real estate agents' cognitive competitive strategies did not shift with signs of an increasingly depressed real estate market, despite their ability to acknowledge the signs of decline. This cognitive inertia at the individual and corporate level has been proposed as a reason why companies do not adopt new strategies to combat decline in their business or take advantage of new potential. General Mills' continued operation of mills long after they were no longer necessary is an example of companies refusing to change the mindset of how they should operate. 
More famously, cognitive inertia in upper management at Polaroid was proposed as one of the main contributing factors to the company's outdated competitive strategy. Management strongly held that consumers wanted high-quality physical copies of their photos, which was where the company would make its money. Despite Polaroid's extensive research and development into the digital market, their inability to refocus their strategy on hardware sales instead of film eventually led to their collapse. Scenario planning has been one suggestion to combat cognitive inertia when making strategic decisions to improve business. Individuals develop different strategies and outline how each scenario could play out, considering the different directions it could take. Scenario planning allows diverse ideas to be heard and the breadth of each scenario to be explored, which can help combat reliance on existing methods and the assumption that alternatives are unrealistic. Management In a recent review of company archetypes that lead to corporate failure, Habersang, Küberling, Reihlen, and Seckler defined "the laggard" as one who rests on the laurels of the company, believing past success and recognition will shield them from failure. Instead of adapting to changes in the market, "the laggard" assumes that the same strategies that won the company success in the past will do the same in the future. This lag in changing how they think about the company can lead to rigidity in company identity (as with Polaroid), conflict over adapting when sales plummet, and resource rigidity. In the case of Kodak, instead of reallocating money to a new product or service strategy, the company cut production costs and imitated competitors, both of which led to poorer-quality products and eventually bankruptcy. A review of 27 firms integrating the use of big data analytics found cognitive inertia to hamper widespread implementation, with managers from sectors that did not focus on digital technology seeing the change as unnecessary and cost-prohibitive. Managers with high cognitive flexibility who can change the type of cognitive processing based on the situation at hand are often the most successful in solving novel problems and keeping up with changing circumstances. Interestingly, shifts in mental models (disrupting cognitive inertia) during a company crisis frequently occur at the lower group level, with leaders coming to a consensus with the rest of the workforce in how to process and deal with the crisis, instead of vice versa. It is proposed that leaders can be blinded by their authority and too easily disregard those at the front line of the problem, causing them to reject remunerative ideas. Applications Therapy An inability to change how one thinks about a situation has been implicated as one of the causes of depression. Rumination, or the perseverance of negative thoughts, is often correlated with the severity of depression and anxiety. Individuals with high levels of rumination test low on scales of cognitive flexibility and have trouble shifting how they think about a problem or issue even when presented with facts that counter their thinking process. In a review paper that outlined strategies effective for combating depression, the Socratic method was suggested as a way to overcome cognitive inertia. By presenting the patient's incoherent beliefs close together and evaluating with the patient their thought processes behind those beliefs, the therapist is able to help them understand things from a different perspective. 
Clinical diagnostics In nosological literature relating to the symptom or disorder of apathy, clinicians have used cognitive inertia as one of the three main criteria for diagnosis. The description of cognitive inertia differs from its use in cognitive and industrial psychology in that lack of motivation plays a key role. As a clinical diagnostic criterion, Thant and Yager described it as "impaired abilities to elaborate and sustain goals and plans of actions, to shift mental sets, and to use working memory". This definition of apathy is frequently applied to the onset of apathy due to neurodegenerative disorders such as Alzheimer's and Parkinson's disease but has also been applied to individuals who have gone through extreme trauma or abuse. Neural anatomy and correlates Cortical Cognitive inertia has been linked to decreased use of executive function, primarily in the prefrontal cortex, which aids in the flexibility of cognitive processes when switching tasks. Delayed responses on the implicit associations task (IAT) and the Stroop task have been related to an inability to combat cognitive inertia, as participants struggle to switch from one cognitive rule to the next to get the questions right. In one study, participants were primed with pictures that motivated achievement before taking part in an electronic brainstorming session in order to combat cognitive inertia. In the achievement-primed condition, subjects produced more novel, high-quality ideas and made greater use of right frontal cortical areas related to decision-making and creativity. Cognitive inertia is a critical dimension of clinical apathy, described as a lack of motivation to elaborate plans for goal-directed behavior or automated processing. Parkinson's patients whose apathy was measured using the cognitive inertia dimension showed less executive function control than Parkinson's patients without apathy, possibly suggesting more damage to the frontal cortex. Additionally, more damage to the basal ganglia in Parkinson's, Huntington's, and other neurodegenerative disorders has been found in patients exhibiting cognitive inertia in relation to apathy when compared to those who do not exhibit apathy. Patients with lesions to the dorsolateral prefrontal cortex have shown reduced motivation to change cognitive strategies and how they view situations, similar to individuals who experience apathy and cognitive inertia after severe or long-term trauma. Functional connectivity Nursing home patients who have dementia have been found to have larger reductions in functional brain connectivity, primarily in the corpus callosum, which is important for communication between the hemispheres. Cognitive inertia in neurodegenerative patients has also been associated with a decrease in the connection of the dorsolateral prefrontal cortex and posterior parietal area with subcortical areas, including the anterior cingulate cortex and basal ganglia. Both findings are suggested to decrease motivation to change one's thought processes or create new goal-directed behavior. Alternative theories Some researchers have rejected the cognitive perspective on cognitive inertia and suggest a more holistic approach that considers the motivations, emotions, and attitudes that fortify the existing frame of reference. Alternative paradigms Motivated reasoning The theory of motivated reasoning proposes that reasoning is driven by the individual's motivation to think a certain way, often to avoid thinking negatively about oneself. 
The individual's own cognitive and emotional biases are commonly used to justify a thought, belief, or behavior. Unlike cognitive inertia, where an individual's orientation in processing information remains unchanged either because new information is not fully absorbed or because it is blocked by a cognitive bias, motivated reasoning may change the orientation or keep it the same depending on whether that orientation benefits the individual. In an extensive online study assessing the role of cognitive inertia, participant opinions were acquired after two readings about various political issues. The participants gave their opinions after the first reading and were then assigned a second reading with new information that either confirmed or disconfirmed their initial opinion; the majority of participants' opinions did not change. When asked about the information in the second reading, those who did not change their opinion evaluated the information that supported their initial opinion as stronger than the information that disconfirmed it. The persistence in how the participants viewed the incoming information was based on their motivation to be correct in their initial opinion, not the persistence of an existing cognitive perspective. Socio-cognitive inflexibility From a social psychology perspective, individuals continually shape beliefs and attitudes about the world based on interaction with others. What information the individual attends to is based on prior experience and knowledge of the world. Cognitive inertia is seen not just as a malfunction in updating how information is being processed but as a case in which assumptions about the world and how it works impede cognitive flexibility. The persistence of the idea of the nuclear family has been proposed as a form of socio-cognitive inertia. Despite the changing trends in family structure, including multi-generational, single-parent, blended, and same-sex parent families, the normative idea of a family has centered around the mid-twentieth century idea of a nuclear family (i.e., mother, father, and children). Various social influences are proposed to maintain the inertia of this viewpoint, including media portrayals, the persistence of working-class gender roles, unchanged domestic roles despite working mothers, and familial pressure to conform. The phenomenon of cognitive inertia in brainstorming groups has been argued to be due to other psychological effects, such as fear of disagreeing with an authority figure in the group, fear of new ideas being rejected, and a minority of group members accounting for the majority of the speech. Internet-based brainstorming groups have been found to produce more high-quality ideas because they overcome the problem of speaking up and the fear of idea rejection. See also References Cognitive psychology Heuristics Management Behavioral economics
Cognitive inertia
Biology
4,524
5,793,598
https://en.wikipedia.org/wiki/Chebyshev%20function
In mathematics, the Chebyshev function is either a scalarising function (Tchebycheff function) or one of two related functions. The first Chebyshev function \(\vartheta(x)\) or \(\theta(x)\) is given by

\[ \vartheta(x) = \sum_{p \le x} \ln p, \]

where \(\ln\) denotes the natural logarithm, with the sum extending over all prime numbers \(p\) that are less than or equal to \(x\). The second Chebyshev function \(\psi(x)\) is defined similarly, with the sum extending over all prime powers not exceeding \(x\):

\[ \psi(x) = \sum_{p^k \le x} \ln p = \sum_{n \le x} \Lambda(n), \]

where \(\Lambda\) is the von Mangoldt function. The Chebyshev functions, especially the second one \(\psi(x)\), are often used in proofs related to prime numbers, because it is typically simpler to work with them than with the prime-counting function \(\pi(x)\) (see the exact formula below). Both Chebyshev functions are asymptotic to \(x\), a statement equivalent to the prime number theorem. The Tchebycheff function, Chebyshev utility function, or weighted Tchebycheff scalarizing function is used when one has several functions \(f_i(x)\) to be minimized and one wants to "scalarize" them to a single function:

\[ f_{\mathrm{Tchb}}(x, w) = \max_i w_i f_i(x). \]

By minimizing this function for different values of the weight vector \(w\), one obtains every point on a Pareto front, even in the nonconvex parts. Often the functions to be minimized are not \(f_i\) but \(|f_i - z_i^*|\) for some scalars \(z_i^*\). Then

\[ f_{\mathrm{Tchb}}(x, w) = \max_i w_i \, |f_i(x) - z_i^*|. \]

All three functions are named in honour of Pafnuty Chebyshev. Relationships The second Chebyshev function can be seen to be related to the first by writing it as

\[ \psi(x) = \sum_{p \le x} k \ln p, \]

where \(k\) is the unique integer such that \(p^k \le x\) and \(x < p^{k+1}\). The values of \(k\) are given in the OEIS. A more direct relationship is given by

\[ \psi(x) = \sum_{n=1}^{\infty} \vartheta\bigl(x^{1/n}\bigr). \]

This last sum has only a finite number of non-vanishing terms, as

\[ \vartheta\bigl(x^{1/n}\bigr) = 0 \quad \text{for } n > \log_2 x. \]

The second Chebyshev function is the logarithm of the least common multiple of the integers from 1 to \(\lfloor x \rfloor\):

\[ \operatorname{lcm}(1, 2, \ldots, \lfloor x \rfloor) = e^{\psi(x)}. \]

Values of \(\operatorname{lcm}(1, \ldots, n)\) for the integer variable \(n\) are given in the OEIS. Relationships between ψ(x)/x and ϑ(x)/x The following theorem relates the two quotients \(\psi(x)/x\) and \(\vartheta(x)/x\). Theorem: For \(x > 0\), we have

\[ 0 \le \frac{\psi(x)}{x} - \frac{\vartheta(x)}{x} \le \frac{(\ln x)^2}{2 \sqrt{x} \, \ln 2}. \]

This inequality implies that

\[ \lim_{x \to \infty} \left( \frac{\psi(x)}{x} - \frac{\vartheta(x)}{x} \right) = 0. \]

In other words, if one of \(\psi(x)/x\) or \(\vartheta(x)/x\) tends to a limit then so does the other, and the two limits are equal. Proof: Since \(\psi(x) = \sum_{n \le \log_2 x} \vartheta(x^{1/n})\), we find that

\[ \psi(x) - \vartheta(x) = \sum_{2 \le n \le \log_2 x} \vartheta\bigl(x^{1/n}\bigr). \]

But from the definition of \(\vartheta\) we have the trivial inequality \(\vartheta(x) \le x \ln x\), so

\[ \psi(x) - \vartheta(x) \le \sum_{2 \le n \le \log_2 x} \sqrt{x} \, \ln \sqrt{x} \le (\log_2 x) \, \frac{\sqrt{x} \, \ln x}{2} = \frac{\sqrt{x} \, (\ln x)^2}{2 \ln 2}. \]

Lastly, divide by \(x\) to obtain the inequality in the theorem. Asymptotics and bounds The following bounds are known for the Chebyshev functions (in these formulas \(p_k\) is the \(k\)th prime number; \(p_1 = 2\), \(p_2 = 3\), etc.):

\[ \vartheta(p_k) \ge k \left( \ln k + \ln\ln k - 1 + \frac{\ln\ln k - 2.050735}{\ln k} \right) \quad \text{for } k \ge 10^{11}, \]
\[ \vartheta(p_k) \le k \left( \ln k + \ln\ln k - 1 + \frac{\ln\ln k - 2}{\ln k} \right) \quad \text{for } k \ge 198, \]
\[ |\vartheta(x) - x| \le 0.006788 \, \frac{x}{\ln x} \quad \text{for } x \ge 10\,544\,111, \]
\[ |\psi(x) - x| \le 0.006409 \, \frac{x}{\ln x} \quad \text{for } x \ge e^{22}, \]
\[ 0.9999 \sqrt{x} < \psi(x) - \vartheta(x) < 1.00007 \sqrt{x} + 1.78 \sqrt[3]{x} \quad \text{for } x \ge 121. \]

Furthermore, under the Riemann hypothesis,

\[ |\vartheta(x) - x| = O\bigl(x^{1/2 + \varepsilon}\bigr), \qquad |\psi(x) - x| = O\bigl(x^{1/2 + \varepsilon}\bigr) \]

for any \(\varepsilon > 0\). Upper bounds exist for both \(\vartheta(x)\) and \(\psi(x)\) such that

\[ \vartheta(x) < 1.000028 x, \qquad \psi(x) < 1.03883 x \]

for any \(x > 0\). An explanation of the constant 1.03883 is given in the OEIS. The exact formula In 1895, Hans Carl Friedrich von Mangoldt proved an explicit expression for \(\psi(x)\) as a sum over the nontrivial zeros of the Riemann zeta function:

\[ \psi_0(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \ln 2\pi - \tfrac{1}{2} \ln\bigl(1 - x^{-2}\bigr). \]

(The numerical value of \(\ln 2\pi\) is \(1.83787\ldots\).) Here \(\rho\) runs over the nontrivial zeros of the zeta function, and \(\psi_0\) is the same as \(\psi\), except that at its jump discontinuities (the prime powers) it takes the value halfway between the values to the left and the right:

\[ \psi_0(x) = \frac{1}{2} \left( \sum_{n \le x} \Lambda(n) + \sum_{n < x} \Lambda(n) \right). \]

From the Taylor series for the logarithm, the last term in the explicit formula can be understood as a summation of \(x^{\omega}/\omega\) over the trivial zeros of the zeta function, \(\omega = -2, -4, -6, \ldots\), i.e.

\[ \sum_{k=1}^{\infty} \frac{x^{-2k}}{-2k} = \tfrac{1}{2} \ln\bigl(1 - x^{-2}\bigr). \]

Similarly, the first term, \(x = x^1/1\), corresponds to the simple pole of the zeta function at 1. It being a pole rather than a zero accounts for the opposite sign of the term. 
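The definitions and the identity \(\psi(x) = \sum_n \vartheta(x^{1/n})\) above lend themselves to a quick numerical check. The sketch below is a minimal illustration rather than an efficient implementation: it assumes Python 3.9+ (for math.lcm), uses naive trial-division primality testing, and the names theta and psi were simply chosen here to mirror the notation above.

```python
import math
from functools import reduce

def is_prime(n: int) -> bool:
    """Naive trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def theta(x: float) -> float:
    """First Chebyshev function: sum of ln p over primes p <= x."""
    return sum(math.log(p) for p in range(2, int(x) + 1) if is_prime(p))

def psi(x: float) -> float:
    """Second Chebyshev function: sum of ln p over prime powers p^k <= x."""
    total = 0.0
    for p in range(2, int(x) + 1):
        if is_prime(p):
            pk = p
            while pk <= x:
                total += math.log(p)
                pk *= p
    return total

x = 100
# psi(x) is the logarithm of lcm(1, 2, ..., floor(x)); math.lcm needs Python 3.9+.
lcm_log = math.log(reduce(math.lcm, range(1, x + 1)))
print(abs(psi(x) - lcm_log) < 1e-9)  # True

# psi(x) equals the finite sum of theta(x^(1/n)) for n = 1 .. floor(log2(x)).
finite_sum = sum(theta(x ** (1.0 / n)) for n in range(1, int(math.log2(x)) + 1))
print(abs(psi(x) - finite_sum) < 1e-9)  # True
```

Both printed checks come out True for x = 100, confirming numerically that \(\psi(x) = \ln \operatorname{lcm}(1, \ldots, \lfloor x \rfloor)\) and that the finite sum over \(\vartheta(x^{1/n})\) reproduces \(\psi(x)\).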
Properties A theorem due to Erhard Schmidt states that, for some explicit positive constant \(K\), there are infinitely many natural numbers \(x\) such that

\[ \psi(x) - x < -K \sqrt{x} \]

and infinitely many natural numbers \(x\) such that

\[ \psi(x) - x > K \sqrt{x}. \]

In little-\(o\) notation, one may write the above as

\[ \psi(x) - x \ne o\bigl(\sqrt{x}\bigr). \]

Hardy and Littlewood prove the stronger result, that

\[ \psi(x) - x \ne o\bigl(\sqrt{x} \, \ln\ln\ln x\bigr). \]

Relation to primorials The first Chebyshev function is the logarithm of the primorial of \(x\), denoted \(x\#\):

\[ \vartheta(x) = \sum_{p \le x} \ln p = \ln \prod_{p \le x} p = \ln\bigl(x\#\bigr). \]

This proves that the primorial \(x\#\) is asymptotically equal to \(e^{(1 + o(1))x}\), where "\(o\)" is the little-\(o\) notation (see big \(O\) notation), and together with the prime number theorem establishes the asymptotic behavior of \(p_n\#\). Relation to the prime-counting function The Chebyshev function can be related to the prime-counting function as follows. Define

\[ \Pi(x) = \sum_{n \le x} \frac{\Lambda(n)}{\ln n}. \]

Then

\[ \Pi(x) = \sum_{n \le x} \Lambda(n) \int_n^x \frac{dt}{t \ln^2 t} + \frac{1}{\ln x} \sum_{n \le x} \Lambda(n) = \int_2^x \frac{\psi(t) \, dt}{t \ln^2 t} + \frac{\psi(x)}{\ln x}. \]

The transition from \(\Pi\) to the prime-counting function, \(\pi\), is made through the equation

\[ \Pi(x) = \pi(x) + \tfrac{1}{2} \pi\bigl(x^{1/2}\bigr) + \tfrac{1}{3} \pi\bigl(x^{1/3}\bigr) + \cdots. \]

Certainly \(\pi(x) \le x\), so for the sake of approximation, this last relation can be recast in the form

\[ \pi(x) = \Pi(x) + O\bigl(\sqrt{x}\bigr). \]

The Riemann hypothesis The Riemann hypothesis states that all nontrivial zeros of the zeta function have real part \(\tfrac{1}{2}\). In this case, \(|x^{\rho}| = \sqrt{x}\), and it can be shown that

\[ \sum_{\rho} \frac{x^{\rho}}{\rho} = O\bigl(\sqrt{x} \, \ln^2 x\bigr). \]

By the above, this implies

\[ \pi(x) = \operatorname{li}(x) + O\bigl(\sqrt{x} \, \ln x\bigr). \]

Smoothing function The smoothing function is defined as

\[ \psi_1(x) = \int_0^x \psi(t) \, dt. \]

Obviously

\[ \psi_1(x) \sim \frac{x^2}{2}. \]

Notes Pierre Dusart, "Estimates of some functions over primes without R.H.". Pierre Dusart, "Sharper bounds for ψ, ϑ, π, p_k", Rapport de recherche no. 1998-06, Université de Limoges. An abbreviated version appeared as "The kth prime is greater than k(ln k + ln ln k − 1) for k ≥ 2", Mathematics of Computation, Vol. 68, No. 225 (1999), pp. 411–415. Erhard Schmidt, "Über die Anzahl der Primzahlen unter gegebener Grenze", Mathematische Annalen, 57 (1903), pp. 195–204. G. H. Hardy and J. E. Littlewood, "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes", Acta Mathematica, 41 (1916), pp. 119–196. Davenport, Harold (2000). Multiplicative Number Theory. Springer. p. 104. References External links Riemann's Explicit Formula, with images and movies Arithmetic functions
Chebyshev function
Mathematics
1,178
8,834,063
https://en.wikipedia.org/wiki/Cyphernetics
Cyphernetics Corporation was a commercial timesharing company founded in March 1969 and based in Ann Arbor, Michigan. The company had sales offices in most major American cities and many international locations, providing communications and technical support for clients. As was the case with a number of commercial timesharing operators in the 1970s, Cyphernetics utilized DECsystem-10 computer systems from Digital Equipment Corporation. Cyphernetics developed many products that were well ahead of their time and whose concepts live on in many of the most important PC applications today. Cyphernetics had an email system (called UTI:MEMO) in the early 1970s, as well as word processing (Cyphertext), spreadsheets (Cyphertab), project management, and time series data storage and analysis (TSAM). Despite the comparatively weak CPUs, very limited memory and storage, and slow communications networks of the time, equivalents of most modern PC applications were usable on the timesharing network at 300 to 1200 baud, running on a processor far less powerful than today's desktop PCs and shared by over 50 simultaneous users. Cyphernetics was purchased by Automatic Data Processing in 1975 and renamed ADP Network Services. References 1969 establishments in Michigan 1975 disestablishments in Michigan 1975 mergers and acquisitions ADP (company) American companies established in 1969 American companies established in 1975 Companies based in Ann Arbor, Michigan Computer companies established in 1969 Computer companies established in 1975 Defunct computer companies of the United States Defunct computer hardware companies Software companies established in 1969 Software companies established in 1975 Time-sharing companies
Cyphernetics
Technology
331
4,668,382
https://en.wikipedia.org/wiki/Lapatinib
Lapatinib (INN), used in the form of lapatinib ditosylate (USAN) (trade names Tykerb and Tyverb, marketed by Novartis), is an orally active drug for breast cancer and other solid tumours. It is a dual tyrosine kinase inhibitor which interrupts the HER2/neu and epidermal growth factor receptor (EGFR) pathways. It is used in combination therapy for HER2-positive breast cancer, that is, for the treatment of patients with advanced or metastatic breast cancer whose tumors overexpress HER2 (ErbB2). Status In March 2007, the U.S. Food and Drug Administration (FDA) approved lapatinib in combination therapy for breast cancer patients already using capecitabine (Xeloda). In January 2010, Tykerb received accelerated approval for the treatment of postmenopausal women with hormone receptor-positive metastatic breast cancer that overexpresses the HER2 receptor and for whom hormonal therapy is indicated (in combination with letrozole). Pharmaceutical company GlaxoSmithKline (GSK) markets the drug under the proprietary names Tykerb (mostly U.S.) and Tyverb (mostly Europe and Russia). The drug currently has approval for sale and clinical use in the US, Australia, Bahrain, Kuwait, Venezuela, Brazil, New Zealand, South Korea, Switzerland, Japan, Jordan, the European Union, Lebanon, India and Pakistan. In August 2013, India's Intellectual Property Appellate Board revoked the patent for Glaxo's Tykerb, citing its derivative status, while at the same time upholding the original patent granted for lapatinib. The drug lapatinib ditosylate is classified as S/NM, that is, a synthetic compound showing competitive inhibition of a naturally derived or nature-inspired substrate. Mode of action Biochemistry Lapatinib inhibits the tyrosine kinase activity associated with two oncogenes, EGFR (epidermal growth factor receptor) and HER2/neu (human EGFR type 2). Overexpression of HER2/neu can be responsible for certain types of high-risk breast cancers in women. Like sorafenib, lapatinib is a protein kinase inhibitor shown to decrease tumor-causing breast cancer stem cells. Lapatinib inhibits receptor signal processes by binding to the ATP-binding pocket of the EGFR/HER2 protein kinase domain, preventing self-phosphorylation and the subsequent activation of the signal mechanism (see Receptor tyrosine kinase#Signal transduction). Clinical application Breast cancer Lapatinib is used as a treatment for breast cancer in treatment-naïve ER+/EGFR+/HER2+ patients and in patients who have HER2-positive advanced breast cancer that has progressed after previous treatment with other chemotherapeutic agents, such as anthracyclines, taxane-derived drugs, or trastuzumab (Herceptin). A 2006 GSK-supported randomized clinical trial in women with breast cancer previously treated with those agents (an anthracycline, a taxane and trastuzumab) demonstrated that administering lapatinib in combination with capecitabine delayed the time of further cancer growth compared to regimens using capecitabine alone. The study also reported that the risk of disease progression was reduced by 51%, and that the combination therapy was not associated with increases in toxic side effects. The outcome of this study resulted in a somewhat complex and rather specific initial indication for lapatinib: use only in combination with capecitabine for HER2-positive breast cancer in women whose cancer has progressed following previous chemotherapy with an anthracycline, a taxane and trastuzumab.
Early clinical trials suggested that high-dose intermittent lapatinib might have better efficacy with manageable toxicity in the treatment of HER2-overexpressing breast cancers. Adverse effects Like many small-molecule tyrosine kinase inhibitors, lapatinib is regarded as well tolerated. The most common side effects reported are diarrhea, fatigue, nausea and rashes. Of note, lapatinib-related rash is associated with improved outcome. In clinical studies, elevated liver enzymes have been reported. QT prolongation has been observed with the use of lapatinib ditosylate, but there are no reports of torsades de pointes. Caution is advised in patients with hypokalaemia, hypomagnesaemia, or congenital long QT syndrome, or with coadministration of medicines known to cause QT prolongation. In combination with capecitabine, reversible decreases in left ventricular function are common (2%). Ongoing trials in gastric cancer A Phase III study designed to assess lapatinib in combination with chemotherapy for advanced HER2-positive gastric cancer failed in 2013 to meet the primary endpoint of improved overall survival (OS) against chemotherapy alone. The trial did not discover new safety signals; the median OS for patients in the lapatinib-plus-chemotherapy group was 12.2 months against 10.5 months for patients in the placebo-plus-chemotherapy group. Secondary endpoints of the randomized, double-blinded study were progression-free survival (PFS), response rate and duration of response. Median PFS was 6 months, the response rate was 53% and the duration of response was 7.3 months in the investigational combination-chemotherapy group, compared to a median PFS of 5.4 months, a response rate of 39% and a duration of response of 5.6 months for patients in the chemotherapy-alone group. Diarrhoea, vomiting, anemia, dehydration and nausea were serious adverse events (SAEs) reported in over 2% of patients in the investigational combination-chemotherapy group, while vomiting was the most common SAE noted in the chemotherapy group. References External links Amines Anilines Aromatic amines Chloroarenes Furans 3-Fluorophenyl compounds Phenol ethers Quinazolines Receptor tyrosine kinase inhibitors Sulfones Drugs developed by GSK plc Drugs developed by Novartis Antineoplastic drugs
Lapatinib
Chemistry
1,295
12,679,770
https://en.wikipedia.org/wiki/Jay%20U.%20Gunter
June U. Gunter (January 15, 1911 – November 14, 1994), better known as Jay U. Gunter or J. U. Gunter, was an American pathologist and amateur astronomer. Life and professional career Gunter was born in Sanford, North Carolina. In 1931 he graduated from the University of North Carolina at Chapel Hill, and he continued his medical education there and at the Jefferson Medical College in Philadelphia, receiving his degree in 1938. He spent the Second World War in the Medical Corps of the United States Navy. From 1947 he worked as a pathologist and Director of Laboratories at Watts Hospital in Durham, North Carolina. He was also a visiting Professor of Pathology at the University of North Carolina School of Medicine. Amateur astronomy In 1976 Gunter retired and devoted the rest of his life to amateur astronomy. His main field of study and observation was asteroids. He founded, and for more than 15 years published, the popular magazine Tonight's Asteroids, a bimonthly periodical, distributed free, containing finding charts and news from the world of asteroid studies. It was widely acknowledged for bringing the attention of many amateur astronomers to asteroid observation. In 1980 the main-belt asteroid 2136 Jugta was named in his honour (the name being an acronym of the first letters of his and his magazine's names). In 1983 he received the Amateur Achievement Award of the Astronomical Society of the Pacific and in 1989 the Caroline Herschel Award of the Western Amateur Astronomer Society. References 1911 births 1994 deaths Amateur astronomers 20th-century American astronomers American pathologists United States Navy officers United States Navy personnel of World War II People from Sanford, North Carolina 20th-century American physicians
Jay U. Gunter
Astronomy
337
69,105,066
https://en.wikipedia.org/wiki/Problem%20of%20the%20Nile
The problem of the Nile is a mathematical problem related to equal partitions of measures. The problem was first presented by Ronald Fisher in 1936–1938. It is presented by Dubins and Spanier in the following words: "Each year, the Nile would flood, thereby irrigating or perhaps devastating parts of the agricultural land of a predynastic Egyptian village. The value of different portions of the land would depend upon the height of the flood. In question was the possibility of giving to each of the k residents a piece of land whose value would be 1/k of the total land value, no matter what the height of the flood." Formally, for each height h, there is a nonatomic measure v_h on the land, which represents the land values when the height of the Nile is h; the goal is to partition the land into pieces X_1, …, X_k with v_h(X_i) = v_h(land)/k for every height h and every resident i. In general, there can be infinitely many different heights, and hence infinitely many different measures. William Feller showed in 1938 that a solution for the general case might not exist. When the number of different heights (= measures) is finite, a solution always exists. This was first noted by Jerzy Neyman in 1946, and proved as a corollary of the Dubins–Spanier theorems in 1961. The problem in this case is called the exact division or consensus division problem. Related problems A related problem is the problem of similar regions studied by Neyman and Pearson. Here, instead of partitioning the land into k subsets, one only looks for a single subset whose value for each measure v_h is r times the total value (where r is a given constant in [0,1]). From an existence perspective, the problem is equivalent to the problem of the Nile, as noted by Georges Darmois. However, they differ in the number of required cuts. The optimal number of required cuts for any r is described in the Stromquist–Woodall theorem. References Fair division
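For the finite case, one concrete construction is easy to state when every measure has a density that is piecewise constant over a common set of intervals: cut each interval into k equal-length pieces and give the i-th piece of every interval to resident i; each resident then receives exactly 1/k of every measure. The Python sketch below illustrates this special case only (names and data layout are our own; the general finite-measure theorem needs no such density assumption):

```python
from fractions import Fraction

def equal_partition(intervals, k):
    """Cut each base interval into k equal pieces; resident i gets piece i.

    intervals: disjoint (start, end) pairs covering the land.
    Returns allocation[i] = list of (start, end) pieces for resident i.
    """
    allocation = [[] for _ in range(k)]
    for start, end in intervals:
        width = Fraction(end - start, k)
        for i in range(k):
            allocation[i].append((start + i * width, start + (i + 1) * width))
    return allocation

def value(pieces, density, intervals):
    """Value of `pieces` under one measure with piecewise-constant density."""
    total = Fraction(0)
    for s, e in pieces:
        for j, (a, b) in enumerate(intervals):
            if a <= s and e <= b:              # piece lies in base interval j
                total += Fraction(density[j]) * (e - s)
                break
    return total

intervals = [(0, 3), (3, 5)]
densities = [[2, 1], [1, 4]]                   # one density list per flood height
k = 3
alloc = equal_partition(intervals, k)
for d in densities:
    total = value(list(intervals), d, intervals)
    assert all(value(p, d, intervals) == total / k for p in alloc)
print("every resident gets exactly 1/k under every flood height")
```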
Problem of the Nile
Mathematics
392
34,391,429
https://en.wikipedia.org/wiki/IEEE%201905
IEEE 1905.1 is an IEEE standard which defines a network enabler for home networking supporting both wireless and wireline technologies: IEEE 802.11 (marketed under the Wi-Fi trademark), IEEE 1901 (HomePlug, HD-PLC) power-line networking, IEEE 802.3 Ethernet and Multimedia over Coax (MoCA). The IEEE P1905.1 working group had its first meeting in December 2010 to begin development of convergence digital home network specifications. Around 30 organizations participated in the group and achieved approval of the draft P1905.1 standard in January 2013, with final approval and publication by IEEE-SA in April 2013. The IEEE 1905.1 Standard Working Group is sponsored by the IEEE power-line communication standards committee (PLCSC). From about 2013 to 2015, a program called nVoy certified related products. It is not to be confused with the Pogo Mobile and nVoy device of the same name, nor with various networked devices named Envoy. Vendors (such as Qualcomm and Broadcom) endorsed the certification regime. Consumer-level lists of features and benefits of IEEE 1905 are also the responsibility of nVoy certifiers. Description The standard includes setup, configuration and operation of home networking devices using heterogeneous technologies. Using multiple interface types (Ethernet, Wi-Fi, Powerline and MoCA) enables better coverage for both mobile and fixed devices. Standardizing the use of multiple networking technologies to transmit data to a single device in a transparent manner enables powerful use cases in home networks: Increase the capacity by load balancing different streams over different links. Increase robustness of transmissions by switching streams from one link to another in case of link degradation. Better integrate consumer appliances with limited network connectivity (power line only) and high-end network devices (typically Ethernet only) into a common network accessible via 802.11ac and .11n for appliance control and media streaming purposes. Unify device certification under one regime for all major networking protocols (nVoy - see below). Generally reduce the number of different devices required and permit storage, processing and user interface functions to migrate to purpose-specific peripherals on a 2 to 5 gigabit networked "bus" or backbone. For service providers and carriers Service providers seek to address growth in network traffic resulting from more devices in more rooms and high-bandwidth, latency-straining trends such as IPTV, video on demand, multi-room DVR and device-to-device media shifting. 1905.1 upgrades the network to a backbone to improve existing deployments (for instance, ending streaming delays from in-home devices) and to enable new whole-home products and services. Some example features/benefits include: Self-install Common setup procedures for adding devices to a network simplify network setup for consumers; reduce call volumes and truck rolls. Advanced diagnostics The network monitors itself to maintain reliable operation; simplifies troubleshooting. Aggregated throughput Single devices aggregate throughput from multiple interfaces to ensure sufficient performance and coverage for video applications. Fallback/failover Optimizes the hybrid network by opening alternative routes when a link is down or congested, which increases reliability on the customer's network. Load balancing Limits network congestion by enabling a hybrid network to intelligently distribute streams over different paths.
Multiple simultaneous streams The network utilizes multiple media simultaneously, enabling multiple streams to exceed the maximum throughput of a single medium. Where dual link aggregation is supported (typically between gigabit Ethernet wired connections), simultaneous streaming can be even faster (e.g. between router- or network-attached storage devices and high-bandwidth displays such as ultra-high-definition televisions), making these devices far less troublesome to support in-home. For consumers and retailers Integration of wired and wireless products enables consumers to easily self-install networking equipment capable of significantly improving capacity and coverage in their home network, which improves end-user satisfaction and reduces product returns. Some specific benefits of 1905.1 networking to the retailer and end user include: Ability to upgrade some components of a home network with ensured interoperability with legacy equipment. Simplifies network setup and security authentication with consistent password procedures and push-button security configuration. Increases performance and coverage of home networks, which increases the network's capacity to support a greater overall number of devices in the home. Technical overview 1905.1 devices run an abstraction layer (AL) hiding the diversity of media access control technologies. This sub-layer exchanges Control Message Data Units (CMDUs) with 1905.1 neighbors. The CMDUs are communicated directly over Layer 2 of the different supported technologies, without the need for an IP stack. The standard does not require any changes to the specifications of the underlying technologies. This abstraction layer provides a unique EUI-48 address to identify a 1905.1 device. This unique address is useful to keep a persistent address when multiple interfaces are available, and it facilitates seamless switching of traffic between interfaces. The standard does not define a loop-prevention or forwarding protocol. A 1905.1 device is compatible with existing IEEE 802.1 bridging protocols. The management of a 1905.1 device is simplified by the use of a unified Abstraction Layer Management Entity (ALME) and a data model accessible with CWMP (Broadband Forum TR-069). Architecture The architecture designed for the abstraction layer is based on two 1905.1 service access points accessible to upper layers: a 1905.1 MAC SAP and a 1905.1 ALME SAP. The ALME is a unique management entity supporting different media-dependent management entities and a flow-based forwarding table. A 1905.1 protocol is used between ALMEs to distribute different types of management information, such as topology and link metrics. A 1905.1 Control Message Data Unit frame consists of an 8-octet header and a variable-length list of type–length–value (TLV) data elements, which is easily extensible for future use. Vendor-specific CMDUs are supported via message type 0x0004. Each TLV carries a type, a length and a value; vendor-specific TLVs are supported via TLV type 11. The EtherType value assigned to 1905.1 CMDUs is 0x893a. Features Some of the features of IEEE 1905.1 are listed below. Topology 1905.1 provides a tool to get a global view of the network topology regardless of the technologies running in the home/office network.
The abstraction layer generates different topology messages to build this protocol's topology: Discovery (message type 0x0000) to detect direct 1905.1 neighbors; Notification (message type 0x0001) to inform network devices about a topology change; Query/response (message types 0x0002 and 0x0003) to get the topology database of another 1905.1 device. The group address used for discovery and notification messages is 01:80:c2:00:00:13. To detect a non-1905.1 bridge connected between two 1905.1 devices, the abstraction layer also generates an LLDP message with the nearest-bridge address (01:80:c2:00:00:0e), which is not propagated by 802.1D bridges. Topology information collected by a 1905.1 device is stored in a data model accessible remotely via the TR-069 protocol. Link metrics The 1905.1 ALME provides a mechanism to obtain a list of metrics for links connecting two 1905.1 devices: packet errors; transmitted packets; MAC throughput capacity (expressed in Mbit/s); link availability (expressed as a proportion of time the link is idle); PHY rate. A 1905.1 device can also request link metrics from another 1905.1 device by generating a Link Metric Query message (message type 0x0005). The requested device will respond with a Link Metric Response message (message type 0x0006). Forwarding rules The 1905.1 ALME provides a list of primitives to manage forwarding rules per flow (Get, Set, Modify and Remove). This feature may be used to distribute the different flows dynamically over the different technologies. To classify the flows, a set or subset of the following elements can be used: MAC destination address, MAC source address, EtherType, VLAN ID, and priority code point. When setting a forwarding rule for a unicast destination, only one outgoing interface may be specified. Security setup The goal of 1905.1 security setup is to allow a new 1905.1 device to join the network with a unified security procedure, even if the device has multiple interfaces running different encryption methods. Three unified security setup procedures are defined: 1905.1 Push Button; 1905.1 User Configured Passphrase/Key (optional); 1905.1 Near Field Communication Network Key (optional). The push-button method requires the user to press one button on a new (i.e. not yet in-network) 1905.1 device and one button on any 1905.1 device already in the network. It is not necessary for the user to know which technology is used by the new device to join the network, or which device will process the pairing and admission of this new device into the network. Two 1905.1 messages are used for the push-button method: Push Button Event Notification (message type 0x000B) and Push Button Join Notification (message type 0x000C). These messages are sent to all 1905.1 devices in the network. If the user-configured passphrase/key is used, the user needs to type/remember only one sequence of US-ASCII characters (between 8 and 63), and the ALME will derive different security passwords for the different technologies through the SHA-256 function. If the NFC network key is used, the user needs to touch the new 1905.1 device with an NFC-equipped smartphone already a member of the 1905.1 network. AP auto-configuration This feature is used to exchange Wi-Fi Simple Configuration messages over an authenticated 1905.1 link. Using this protocol, a 1905.1 AP enrollee can retrieve configuration parameters (like the SSID) from a 1905.1 AP registrar.
Thus AP auto-configuration is used to simplify the setup of a home network consisting of multiple APs, eliminating the need for the user to manually configure each AP (only a single configuration, of the AP registrar, is required). A specific 1905.1 CMDU frame (message type 0x0009) is used to transport WPS messages. If an AP enrollee is dual-band (2.4 GHz and 5 GHz) capable, the auto-configuration procedure may be executed twice. Implementation Qualcomm Atheros products implementing 1905.1 are named Hy-Fi (for Hybrid Fidelity). In January 2012, the HomePlug Powerline Alliance announced support for IEEE 1905.1 certification. The consumer certification program named nVoy was announced in June 2013, and the first certified chips that "support the new nVoy HomePlug Certification for IEEE 1905.1 compliance" were announced at that time. Consumer-level products were expected by year-end 2013 but were delayed until the 2014 consumer shows. As of December 2013 there were no nVoy-certified consumer products; small-network-focused review sites had no products to review. Chipsets Broadcom BCM60500 and BCM60333 SoCs are claimed by the vendor to be nVoy/1905-compliant. Compatible line drivers were available, e.g. from Microsemi. Qualcomm Atheros offers a variety of Hy-Fi reference designs based on various combinations of Qualcomm VIVE™ 11ac and Qualcomm XSPAN™ 11n wireless LAN, Qualcomm AMP™ power line and Ethernet technologies. MStar Semiconductor indicated its support of nVoy/1905 in its HomePlug AV power line communication solutions. References IEEE standards
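To make the frame layout described in the technical overview concrete, here is a minimal Python sketch that serializes a vendor-specific CMDU carrying one vendor-specific TLV. The 8-octet header size, the EtherType 0x893a, message type 0x0004 and TLV type 11 come from the text above; the ordering and widths of the header and TLV fields are illustrative assumptions, not the normative encoding, which is defined only in the published standard.

```python
import struct

ETHERTYPE_1905 = 0x893A        # EtherType assigned to 1905.1 CMDUs
MSG_VENDOR_SPECIFIC = 0x0004   # vendor-specific CMDU message type
TLV_VENDOR_SPECIFIC = 11       # vendor-specific TLV type

def tlv(tlv_type, value):
    """Serialize one TLV; assumed layout: 1-octet type, 2-octet length, value."""
    return struct.pack("!BH", tlv_type, len(value)) + value

def cmdu(message_type, message_id, tlvs):
    """Serialize a CMDU: an 8-octet header followed by a list of TLVs.

    The header field layout below (version, reserved, type, id, fragment id,
    flags) is assumed for illustration only.
    """
    header = struct.pack("!BBHHBB", 0, 0, message_type, message_id, 0, 0x80)
    assert len(header) == 8    # the standard specifies an 8-octet header
    return header + b"".join(tlvs)

frame = cmdu(MSG_VENDOR_SPECIFIC, 1, [tlv(TLV_VENDOR_SPECIFIC, b"demo")])
print(frame.hex())
```

The per-flow forwarding rules can be modelled in the same spirit: a rule matches on any subset of destination MAC, source MAC, EtherType, VLAN ID and priority code point, with unset fields acting as wildcards, and names exactly one outgoing interface for a unicast destination, e.g. `rule = {"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan_id": 10, "out": "wifi0"}` (a hypothetical representation, not an ALME primitive).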
IEEE 1905
Technology
2,426
9,930,858
https://en.wikipedia.org/wiki/Sealed%20road
A sealed road is a road whose surface has been permanently sealed by the use of one of several pavement treatments, often of composite construction. In some countries, such as Australia and New Zealand, this surface is generically referred to as "seal". Surface treatments used on sealed roads include: Asphalt concrete Chipseal Tarmac Bitumen See also Road surface References Road construction Pavements Road infrastructure
Sealed road
Engineering
79
6,466,838
https://en.wikipedia.org/wiki/Immanant
In mathematics, the immanant of a matrix was defined by Dudley E. Littlewood and Archibald Read Richardson as a generalisation of the concepts of determinant and permanent. Let $\lambda = (\lambda_1, \lambda_2, \ldots)$ be a partition of an integer $n$ and let $\chi_\lambda$ be the corresponding irreducible representation-theoretic character of the symmetric group $S_n$. The immanant of an $n \times n$ matrix $A = (a_{ij})$ associated with the character $\chi_\lambda$ is defined as the expression
$$\operatorname{Imm}_\lambda(A) = \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}.$$
Examples The determinant is a special case of the immanant, where $\chi_\lambda$ is the alternating character $\operatorname{sgn}$ of $S_n$, defined by the parity of a permutation. The permanent is the case where $\chi_\lambda$ is the trivial character, which is identically equal to 1. For example, for $3 \times 3$ matrices, there are three irreducible representations of $S_3$, as shown in the character table (columns are the conjugacy classes of the identity, the transpositions and the 3-cycles):
$\chi_1$ (trivial): 1, 1, 1
$\chi_2$ (sign): 1, −1, 1
$\chi_3$ (standard): 2, 0, −1
As stated above, $\chi_1$ produces the permanent and $\chi_2$ produces the determinant, but $\chi_3$ produces the operation that maps as follows:
$$A \mapsto 2a_{11}a_{22}a_{33} - a_{12}a_{23}a_{31} - a_{13}a_{21}a_{32}.$$
Properties The immanant shares several properties with the determinant and permanent. In particular, the immanant is multilinear in the rows and columns of the matrix, and the immanant is invariant under simultaneous permutations of the rows and columns by the same element of the symmetric group. Littlewood and Richardson studied the relation of the immanant to Schur functions in the representation theory of the symmetric group. The necessary and sufficient conditions for the immanant of a Gram matrix to be zero are given by Gamas's theorem. References Linear algebra Matrix theory Permutations
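For small matrices the defining sum can be evaluated directly by iterating over all n! permutations. A minimal Python sketch (function names are our own; the cost is O(n!·n), so this is only practical for small n):

```python
from itertools import permutations

def sign(sigma):
    """Parity of a permutation given as a tuple of images of 0..n-1."""
    sgn, seen = 1, set()
    for start in range(len(sigma)):
        if start in seen:
            continue
        j, length = start, 0
        while j not in seen:           # walk one cycle of the permutation
            seen.add(j)
            j = sigma[j]
            length += 1
        if length % 2 == 0:            # an even-length cycle flips the sign
            sgn = -sgn
    return sgn

def immanant(A, chi):
    """Immanant of square matrix A for the character function chi."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += chi(sigma) * prod
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, sign))                # alternating character: the determinant
print(immanant(A, lambda s: 1))         # trivial character: the permanent
print(immanant(A, lambda s: sum(s[i] == i for i in range(len(s))) - 1))
```

The third call uses χ₃(σ) = fix(σ) − 1, the standard character of S₃, which equals 2 on the identity, 0 on transpositions and −1 on 3-cycles; it therefore evaluates exactly the expression 2a₁₁a₂₂a₃₃ − a₁₂a₂₃a₃₁ − a₁₃a₂₁a₃₂ given above.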
Immanant
Mathematics
299
30,716,698
https://en.wikipedia.org/wiki/Pierre%20Gabriel
Pierre Gabriel (1 August 1933 – 24 November 2015), also known as Peter Gabriel, was a French mathematician at the University of Strasbourg (1962–1970), the University of Bonn (1970–1974) and the University of Zürich (1974–1998) who worked on category theory, algebraic groups, and the representation theory of algebras. He was elected a corresponding member of the French Academy of Sciences in November 1986. His most famous result is Gabriel's theorem, which provides a classification of all quivers of finite type. References External links Personal Web Page 1933 births 2015 deaths 20th-century French mathematicians 21st-century French mathematicians Algebraists University of Paris alumni Academic staff of the University of Zurich Members of the French Academy of Sciences People from Bitche French expatriates in Germany French expatriates in Switzerland
Pierre Gabriel
Mathematics
163
45,168,169
https://en.wikipedia.org/wiki/Gliese%20908
Gliese 908 is a red dwarf star located in the constellation Pisces, 19.3 light-years from Earth. It is a BY Draconis variable star with the variable star designation BR Piscium. Its apparent magnitude varies between 8.93 and 9.03 as a result of starspots and varying chromospheric activity. The variability of Gliese 908 was confirmed in 1994, although no period could be detected in its brightness changes. It was entered into the General Catalogue of Variable Stars in 1997. Gliese 908 is a cool main-sequence star, a red dwarf, with a spectral class of M1V Fe-1; the Fe-1 suffix indicates a noticeable deficiency in heavy elements. References Pisces (constellation) M-type main-sequence stars 0908 Piscium, BR 117473 J23491255+0224037 BD+01 4774
Gliese 908
Astronomy
195
12,207,828
https://en.wikipedia.org/wiki/American%20Tower
American Tower Corporation (also referred to as American Tower or ATC) is an American real estate investment trust which owns, develops and operates wireless and broadcast communications infrastructure in several countries. It is headquartered in Boston, Massachusetts, and was ranked 373rd on the Fortune 500 in 2023. The company owns 224,502 communications sites, including 42,905 sites in the U.S. and Canada, 77,647 sites in Asia-Pacific, 31,241 sites in Europe, 24,229 in Africa, and 48,480 sites in Latin America. History The company was formed in 1995 as a unit of American Radio Systems. In 1998, American Radio Systems merged with CBS Corporation and completed the corporate spin-off of American Tower. The first CEO of American Tower was Steven B. Dodge, who remained in the position until resigning in 2004. Following the merger, American Tower began international expansion by establishing operations in Mexico in 1998 and Brazil in 1999. Around 2000, the company began purchasing numerous AT&T Long Lines microwave telephone relay towers. Upon acquisition of these sites from the now-defunct AT&T Communications, Inc., American Tower began repurposing the towers for use as cell towers, leasing antenna space to various American cell phone providers and private industries. Most of the former AT&T Long Lines sites then had their horn antennas removed, either by helicopter or by crane, to make room for more antennas. Because AT&T's Long Lines program was decommissioned in the 1980s and the company no longer had any use for the towers themselves, American Tower now owns most of these tower structures across the entire continental United States, totaling 42,965 in 2022. In 2004 James D. Taiclet was named CEO and held the title until 2020. In 2005, American Tower acquired SpectraSite Communications, expanding its global portfolio to over 22,000 owned communications sites, including over 21,000 wireless towers, 400 broadcast towers and 100 in-building DAS (distributed antenna system) sites. The merger further established American Tower's position as one of the largest tower owners and operators in North America. Between 2007 and 2012, the company expanded internationally with operations in India, Peru, Chile, Colombia, South Africa, Ghana, and Uganda. In 2013, the company acquired Global Tower Partners for $4.8 billion. This acquisition added sites to the U.S. portfolio and added operations in Costa Rica and Panama. In 2020, Tom Bartlett was named President and CEO after Taiclet left to become the CEO of Lockheed Martin. In 2021, the company agreed to acquire the European and Latin American tower divisions of Telxius from parent company Telefonica, comprising approximately 31,000 communications sites, for $9.6 billion. The acquired sites were located in Spain, Germany, Argentina, Brazil, Chile, and Peru. Later in 2021, American Tower acquired CoreSite for $10.4 billion, adding a footprint of carrier-neutral data center facilities in the U.S. to its holdings and strengthening the company's position in 5G. Tom Bartlett retired from his positions as President, Chief Executive Officer, and director of the Board of Directors, effective February 1, 2024. He was succeeded by Steven Vondran.
References External links Companies based in Boston Companies listed on the New York Stock Exchange Real estate companies established in 1995 Telecommunications companies established in 1995 Telecommunications companies of the United States Real estate investment trusts of the United States Mobile telecommunications
American Tower
Technology
705
566,680
https://en.wikipedia.org/wiki/Artificial%20Intelligence%3A%20A%20Modern%20Approach
Artificial Intelligence: A Modern Approach (AIMA) is a university textbook on artificial intelligence (AI), written by Stuart J. Russell and Peter Norvig. It was first published in 1995, and the fourth edition of the book was released on 28 April 2020. AIMA has been called "the most popular artificial intelligence textbook in the world", and is considered the standard text in the field of AI. As of 2023, it was being used at over 1500 universities worldwide, and it has over 59,000 citations on Google Scholar. AIMA is intended for an undergraduate audience but can also be used for graduate-level studies with the suggestion of adding some of the primary sources listed in the extensive bibliography. Content AIMA gives detailed information about the working of algorithms in AI. The book's chapters span from classical AI topics like search algorithms, first-order logic, propositional logic and probabilistic reasoning to advanced topics such as multi-agent systems, constraint satisfaction problems, optimization problems, artificial neural networks, deep learning, reinforcement learning, and computer vision. Code The authors provide a GitHub repository with implementations of various exercises and algorithms from the book in different programming languages. Programs in the book are presented in pseudocode, with implementations in Java, Python, Lisp, JavaScript, and Scala available online. Editions The first and last editions of AIMA were published in 1995 and 2020, respectively, with four editions published in total (1995, 2003, 2009, 2020). The following is a list of the US print editions. For other editions, the publishing date and the colors of the cover can vary. 1st edition: published in 1995 with red cover 2nd edition: published in 2003 with green cover 3rd edition: published in 2009 with blue cover 4th edition: published in 2020 with purple cover Various editions have been translated from the original English into several languages, including at least Chinese, French, German, Hungarian, Italian, Romanian, Russian, and Serbian. However, the latest (4th) edition is available only in English, French, and Croatian. References External links 1995 non-fiction books 2003 non-fiction books 2009 non-fiction books 2020 non-fiction books Artificial intelligence textbooks Cognitive science literature English-language non-fiction books Robotics books Prentice Hall books
Artificial Intelligence: A Modern Approach
Technology
457
55,564,347
https://en.wikipedia.org/wiki/Plus%20Tate
Plus Tate is a network of visual arts organisations in the United Kingdom, led by the Tate gallery in London. Plus Tate was launched by Jeremy Hunt MP in 2012, initially with 18 partners. Sixteen new institutions were added to the network in 2015, increasing the size of the network to 35 members, as announced by Nicholas Serota. Plus Tate member institutions were visited by more than 3.5 million people annually, employed around 500 staff, and had a combined annual turnover of around £33 million. References External links Plus Tate website 2012 establishments in the United Kingdom Organizations established in 2012 Non-profit organisations based in the United Kingdom Art and design organizations Tate galleries
Plus Tate
Engineering
131
69,645,938
https://en.wikipedia.org/wiki/Frank%20Morton%20%28plant%20breeder%29
Frank H. Morton (born 1955 in Fayette County, West Virginia) is an organic farmer, gardener, plant breeder and seedsman known for creating dozens of varieties of lettuce. With his wife Karen, he founded the company Wild Garden Seed in 1994, and he was a founding member of the Open Source Seed Initiative in 2012. Biography Morton's first foray into gardening was brought about by his father, a coal miner who grew prizewinning delphiniums as a hobby, and his mother, who inspired the 5-year-old Frank to try planting the seeds from the watermelons he ate in an (ultimately unsuccessful) attempt to get more watermelon. Growing up in West Virginia, he moved to Oregon in the 1970s to attend Lewis & Clark College, graduating with a BS in Child Psychology in 1978. Morton began growing lettuce commercially in the early 1980s, letting some of the crop go to seed and planting those seeds the next year. Cross-pollination between two varieties led to the formation of a novel hybrid with a combination of characteristics from the parent plants. Planting the seeds from that lettuce resulted in a number of different plants with a wide variety of features, from which 23 new varieties of lettuce were eventually selected. More recently, Morton has been selecting lettuce varieties for resistance to common diseases such as downy mildew. Famous cultivars Morton has bred at least 99 types of lettuce, and his company, Wild Garden Seed, offered seed for 114 lettuce varieties in 2016. On August 10, 2015, 'Outredgeous', a red romaine lettuce bred by Morton in the 1990s, became the first plant variety to be planted, harvested and eaten entirely in space, as a part of Expedition 44 to the International Space Station. In addition to lettuce, Morton has also grown and bred other types of vegetables, including beets, peppers, kale, and quinoa. References Further reading 1955 births Living people People from Fayette County, West Virginia Plant breeding Farmers from West Virginia 21st-century American farmers Lewis & Clark College alumni
Frank Morton (plant breeder)
Chemistry
435
24,202,070
https://en.wikipedia.org/wiki/C10H12O3
The molecular formula C10H12O3 (molar mass: 180.2 g/mol; exact mass: 180.0786438 u) may refer to: Anisyl acetate Canolol, a phenolic compound found in canola oil Carvonic acid Coniferyl alcohol Isopropylparaben Isopropyl salicylate Propylparaben Thujaplicinol
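Both quoted masses can be recomputed from per-element values. A short Python sketch (the atomic weights and monoisotopic masses hard-coded below are standard values for C, H and O):

```python
weights = {"C": 12.011, "H": 1.008, "O": 15.999}          # atomic weights, g/mol
mono    = {"C": 12.0, "H": 1.00782503, "O": 15.99491462}  # monoisotopic masses, u
formula = {"C": 10, "H": 12, "O": 3}                      # C10H12O3

molar = sum(weights[el] * n for el, n in formula.items())
exact = sum(mono[el] * n for el, n in formula.items())
print(f"molar mass = {molar:.2f} g/mol")   # -> 180.20
print(f"exact mass = {exact:.6f} u")       # -> 180.078644
```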
C10H12O3
Chemistry
104
66,488,949
https://en.wikipedia.org/wiki/Emma%20Allen-Vercoe
Emma Allen-Vercoe is a British-Canadian molecular biologist who is a Professor and Canada Research Chair at the University of Guelph. Her research considers the gut microbiome and microbial therapeutics, and she has studied pathogens such as Escherichia coli. Early life and education Allen-Vercoe was an undergraduate student at the Veterinary Laboratories Agency. She moved to the Health Protection Agency for her graduate studies, where she worked under the supervision of Martin Woodward. Here she studied Salmonella enterica and the processes by which enteric pathogens cause disease. She was a postdoctoral researcher at the Health Protection Agency. During her doctorate, she studied Mycobacterium tuberculosis and Campylobacter jejuni. Research and career In 2001, Allen-Vercoe moved to Canada, where she joined the University of Calgary and worked on Escherichia coli. In 2004, she was awarded a Canadian Association of Gastroenterology Fellow-to-Faculty Transition Award. She moved to the University of Guelph in 2007. Her research considers the gut microbiome. She worked with the biotechnology company Infors to create a bioreactor that can maintain biological samples in specific anaerobic atmospheres whilst her research team studies the constituent microbes. Allen-Vercoe isolates bacteria from human stool samples, places them in the so-called robo-gut and monitors their behaviour under precise conditions. For example, the robo-gut (or mechanical colon) can recreate environments that allow particular genes and bacteria to thrive, which allows Allen-Vercoe to study the microbes associated with certain medical conditions. Allen-Vercoe has identified the general bacteria that exist in all microbiomes, as well as monitoring the microbiome's metabolomics. She has worked on microbial therapeutics to treat various diseases, including Clostridioides difficile infection and cancer. Allen-Vercoe launched NuBiyota in 2013, a biotechnology company that looks to grow microbes in a controlled environment. She was awarded a Tier 1 Canada Research Chair in 2019, which allowed her to study the influence of the gut microbiome on health and disease. Selected publications References Women veterinary scientists Veterinary scientists Molecular biologists Canada Research Chairs Academic staff of the University of Guelph Academic staff of the University of Calgary Living people Year of birth missing (living people)
Emma Allen-Vercoe
Chemistry
495
17,604,272
https://en.wikipedia.org/wiki/Cerrosafe
Cerrosafe is a fusible alloy with a low melting point. It is a non-eutectic mixture consisting of 42.5% bismuth, 37.7% lead, 11.3% tin, and 8.5% cadmium that melts between 158 and 190 °F (70 and 88 °C). It is useful for making reference castings whose dimensions can be correlated to those of the mold or other template due to its well-known thermal expansion properties during cooling. The alloy contracts during the first 30 minutes, allowing easy removal from a mold, then expands during the next 30 minutes to return to the exact original size. It then continues expanding at a known rate for 200 hours, allowing conversion of measurements of the casting back to those of the mold. Similar metals References External links Examples of chamber casts using low-temp metal Fusible alloys Bismuth alloys Cadmium alloys Lead alloys Tin alloys
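The time-dependent growth means a chamber-cast measurement must be corrected for when it was taken. A toy Python sketch of that correction (the growth rate below is a placeholder, not a published figure; use the alloy manufacturer's growth table for real measurements):

```python
def mold_dimension(measured, hours_since_cast, growth_per_hour=1.0e-5):
    """Convert a casting measurement back to the original mold dimension.

    growth_per_hour is a HYPOTHETICAL linear expansion rate, used only to
    illustrate the arithmetic. At the one-hour mark the casting matches the
    mold exactly, and expansion stops after roughly 200 hours, so only the
    time between those points is corrected for.
    """
    hours = max(0.0, min(hours_since_cast, 200.0) - 1.0)
    return measured / (1.0 + growth_per_hour * hours)

print(mold_dimension(11.4805, 48))  # a cast measured two days after pouring
```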
Cerrosafe
Chemistry,Materials_science
179
23,965,361
https://en.wikipedia.org/wiki/Gating%20%28telecommunication%29
In telecommunication, the term gating has the following meanings: The process of selecting only those portions of a wave between specified time intervals or between specified amplitude limits. The controlling of signals by means of combinational logic elements. A process in which a predetermined set of conditions, when established, permits a second process to occur. Telecommunications engineering Signal processing
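The first two senses are easy to sketch in a few lines of Python (a toy illustration, with function names of our own choosing):

```python
def time_gate(samples, t_start, t_end):
    """Select only the portion of a sampled wave between two time indices;
    everything outside the gate is zeroed."""
    return [s if t_start <= t < t_end else 0 for t, s in enumerate(samples)]

def logic_gate(signal, enable):
    """Combinational gating: each signal sample passes only while the
    corresponding enable input is true (an AND gate per sample)."""
    return [s and e for s, e in zip(signal, enable)]

print(time_gate([3, 5, 2, 7, 1], 1, 4))        # -> [0, 5, 2, 7, 0]
print(logic_gate([1, 1, 0, 1], [1, 0, 1, 1]))  # -> [1, 0, 0, 1]
```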
Gating (telecommunication)
Technology,Engineering
73
33,225,754
https://en.wikipedia.org/wiki/%282%2C4%2C6-Trimethylphenyl%29gold
(2,4,6-Trimethylphenyl)gold is a member of a special group of compounds in which an aryl carbon atom acts as a bridge between two gold atoms. The compound is formed in a reaction between Au(CO)Cl and the Grignard reagent mesitylmagnesium bromide. It crystallizes as a cyclic pentamer. References Gold(I) compounds Organogold compounds Aromatic compounds
(2,4,6-Trimethylphenyl)gold
Chemistry
95
5,733,602
https://en.wikipedia.org/wiki/Fayette%20County%20Reservoir
Fayette County Reservoir is a power station cooling reservoir on Cedar Creek in the Colorado River basin, 3 miles west of Fayetteville, Texas and 10 miles east of La Grange, Texas. The reservoir was created in 1978 when a dam was built on the creek to provide a cooling pond for the Fayette Power Project, which provides electrical generation to Fayette County and surrounding areas. The dam, lake, and power plant are managed by the Lower Colorado River Authority. There is very little vegetation compared to what can usually be found in fisheries, and some invasive plant species are present. The lake is open to the public for recreational activities, including boating, fishing, camping, and hiking. Fayette County Reservoir is also known as Lake Fayette. Description Fayette County Reservoir is located within the Post Oak Savannah ecoregion in Texas. Habitat in the littoral zone is mainly natural and rocky shoreline. The water level is supplied and maintained by the Colorado River. Its function as a power plant cooling reservoir increases water temperatures throughout the lake. Water clarity is considered normal, and nutrient levels are high. The dam itself is composed of compacted soil, approximately 96 ft high and 15,259 ft long. History Construction of the dam for Fayette County Reservoir impounded the upstream section of Cedar Creek, turning it into a recharge area where it was previously a groundwater discharge area. Most of the vegetation in the area was removed during construction. Lake Fayette occupies the former site of a small town called Biegel, which was famous for its pickles. Outlines of houses may be visible on fish finders. The Biegel-December House, the Legler Log House, and the Gentner-Kroll-Polasek Farmstead were historic structures present in the construction area, and were relocated prior to construction. Other historic and archaeological sites were present on the construction site, and 25 sites were investigated by the Texas Archaeological Survey. No other structures or artifacts were removed. Fish and plant populations Fayette County Reservoir has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Fayette County Reservoir include catfish, largemouth bass, and sunfish. Largemouth bass is considered the most popular sport fishing species in this reservoir, and fish are typically abundant and active between February and June. The channel catfish is another important sport fishing species, and its populations are being closely monitored as a result of decreasing abundance. The most abundant prey species in the reservoir are bluegill, gizzard shad, and threadfin shad. The level of overall coverage by aquatic vegetation in Fayette County Reservoir is lower than what is typical for fisheries. A 2012 survey of aquatic vegetation found an invasive plant species, Eurasian watermilfoil, but more recent surveys indicate that it is no longer present, and coverage by other invasive species is low. Recreational uses Boating, fishing, and camping are popular recreational uses of the lake. There are boat ramps and piers available to visitors, as well as access to shoreline for fishing. Fishing tournaments are held annually, notably for largemouth bass. There is also a playground, a 3-mile trail connecting Oak Thicket Park and Park Prairie Park, and a nature trail in Oak Thicket Park.
References External links Fayette County Reservoir - Texas Parks & Wildlife Lower Colorado River Authority Biegel, Texas Underwater Ghost Town Fishing Regulations for Fayette County Fayette County Protected areas of Fayette County, Texas Lower Colorado River Authority Bodies of water of Fayette County, Texas Cooling ponds
Fayette County Reservoir
Chemistry,Environmental_science
716
75,475,574
https://en.wikipedia.org/wiki/Elma%20Parsamyan
Elma Parsamian is a Soviet and Armenian astrophysicist and astronomer. She works at the Byurakan Observatory, where she serves as the Principal Research Associate of the scientific group. Early life She was born in Yerevan, Armenia, on December 23, 1929. After moving to Moscow with her father, she studied at Moscow School N213 from 1938 to 1941. During her school years she developed a fascination with astronomy and decided to become an astronomer. From 1949 to 1954, Elma Parsamian studied at the Astronomy Department of the Physical and Mathematical Faculty at Yerevan State University, where she graduated with a specialization in astrophysics. Career She joined the staff of the Byurakan Astrophysical Observatory (BAO) and has remained there. In 1961, she earned her Ph.D. degree in Physical-Mathematical Sciences, and she became a Doctor of Physical-Mathematical Sciences in 1963. Elma Parsamian achieved professorship in 1989, and in 2000 she was elected a corresponding member of the National Academy of Sciences of Armenia. Her main research fields include variable and non-stable stars, galactic nebulae and archaeoastronomical studies. Recognition For Valorous Work (1971) Anania Shirakatsi medal (2003) Honorary Diploma of NAS RA, ArAS/BAO Prize for Services in Astronomy (2009) References Year of birth missing (living people) Living people Soviet astronomers Soviet astrophysicists Armenian astronomers Armenian astrophysicists Women astronomers Women astrophysicists Yerevan State University alumni Soviet expatriates in Mexico Armenian expatriates in Mexico
Elma Parsamyan
Astronomy
322
415,871
https://en.wikipedia.org/wiki/List%20of%20mascots
This is a list of mascots. A mascot is any person, animal, or object thought to bring luck, or anything used to represent a group with a common public identity, such as a school, professional sports team, society, military unit, or brand name. College See: List of U.S. college mascots, which lists the names of college mascots Computing See: List of computing mascots Corporate Sports Olympics Paralympics FIFA World Cup UEFA European Championship Major League Baseball National Basketball Association National Football League National Hockey League Women's National Basketball Association Association Football Other Freedom Frog - mascot of Intervention Helpline, an Alaska counseling nonprofit organization Senhor Testiculo - Brazilian mascot of the Associação de Assistência às Pessoas Zé Gotinha - Brazilian mascot created to promote vaccination campaigns against the polio virus See also List of Australian mascots List of breakfast cereal advertising characters List of computing mascots List of national animals List of Olympic mascots List of Paralympic mascots List of U.S. college mascots List of video game mascots Mascot Hall of Fame Military mascot References Mascots
List of mascots
Mathematics
239
382,614
https://en.wikipedia.org/wiki/Eircell
Eircell was an Irish mobile cellular network provider which was established in 1984, with operations commencing in 1986. Its access code was 088 for the original analogue TACS system and 087 for the later GSM system. Following the abolition of the Department of Posts and Telegraphs, Eircell fell under the remit of Telecom Éireann (later Eircom), which today is known as Eir. The Eircell brand became defunct in 2002 following its acquisition by Vodafone. From 2001, Eircell underwent a major branding exercise following its acquisition by the Vodafone group in December 2000. The main branding decision was to associate a shade of deep purple with the company. When Vodafone rebranded with its trademark shade of red, the company commented that "red is the new purple". The company was known as Eircell-Vodafone for some time, as the rebranding process took nine months in total. History Early stages of Eircell In the late 1980s, early adopters of the service numbered in their hundreds rather than thousands and paid handsomely for the phones that were available at the time through a network of independent retailers. The price for mobile phones ranged from IR£1500 to IR£2000, and 'subscribers' were typically politicians or wealthy businessmen. The market-leading phone manufacturers in the early years of the Irish market were Nokia and Motorola. Popularity of the company rises In response to negative publicity about security compromises on the TACS system during the early 1990s, Eircell introduced Ireland's first encrypted cellular phone, a Kokusai, which retailed in the region of IR£1400. Sales were poor, partly because Eircell was not in the business of selling phones and partly because switching between encrypted and unencrypted operation was 'messy'. As phone prices dropped and the network rolled out to more of Ireland, sales took off, reaching a milestone of 100,000 subscribers by 1995, and in 1997 Eircell launched Ireland's first prepaid mobile phone service, called 'Ready To Go'. A year later, it launched its GSM 900 version (access code 087), which quickly took hold as users rapidly switched over to the new digital technology. See also Communications in Ireland Vodafone Ireland References External links Official site (Vodafone, Ireland) The Telegraph - December 2000 - Vodafone's acquisition of Eircell Business2000.ie - Eircell Case Study Mobile telecommunications networks
Eircell
Technology
524
25,522,876
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20February%207%2C%202092
An annular solar eclipse will occur at the Moon's descending node of orbit on Thursday, February 7, 2092, with a magnitude of 0.984. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Occurring about five and a half days after perigee (on February 2, 2092, at 9:00 UTC), the Moon's apparent diameter will be larger than average. The path of annularity will be visible from parts of Panama, Colombia, Venezuela, Guyana, the Canary Islands, Morocco, Algeria, and Tunisia. A partial solar eclipse will also be visible for parts of North America, Central America, the Caribbean, northern South America, West Africa, Northwest Africa, and Western Europe. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2092 An annular solar eclipse on February 7. A penumbral lunar eclipse on February 23. A penumbral lunar eclipse on July 19. An annular solar eclipse on August 3. A penumbral lunar eclipse on August 17. Metonic Preceded by: Solar eclipse of April 21, 2088 Followed by: Solar eclipse of November 27, 2095 Tzolkinex Preceded by: Solar eclipse of December 27, 2084 Followed by: Solar eclipse of March 21, 2099 Half-Saros Preceded by: Lunar eclipse of February 2, 2083 Followed by: Lunar eclipse of February 14, 2101 Tritos Preceded by: Solar eclipse of March 10, 2081 Followed by: Solar eclipse of January 8, 2103 Solar Saros 132 Preceded by: Solar eclipse of January 27, 2074 Followed by: Solar eclipse of February 18, 2110 Inex Preceded by: Solar eclipse of February 28, 2063 Followed by: Solar eclipse of January 19, 2121 Triad Preceded by: Solar eclipse of April 8, 2005 Followed by: Solar eclipse of December 9, 2178 Solar eclipses of 2091–2094 Saros 132 Metonic series Tritos series Inex series Notes References 2092 in science
Solar eclipse of February 7, 2092
Astronomy
642
38,225,167
https://en.wikipedia.org/wiki/Digital%20Retro
Digital Retro: The Evolution and Design of the Personal Computer is a coffee table book about the history of home computers and personal computers. It was written by Gordon Laing, a former editor of Personal Computer World magazine, and covers the period from 1975 to 1988 (the era before the widespread adoption of PC compatibility). Its contents cover home computers, along with some business models and video game consoles, but hardware such as minicomputers and mainframes is excluded. In writing the book, Laing's research included finding and interviewing some of those who worked on the featured hardware and founded the companies. The hardware itself was borrowed from private collections and computer museums, with more than thirty machines coming from the Museum of Computing in Swindon. Contents Topics covered include the choice of video chips and how designers of sound chips later proceeded to make synthesisers. A number of British computers "that most Americans have probably never encountered in person" are included, such as the Acorn Atom and Grundy NewBrain. Almost forty computers are included in total. Reception It has been described as a "beautifully illustrated", "well written" book which "drips detail", with the author being noted as a "perfectionist". The photographs depict "external views of each machine from several angles"; there are internal photographs in a few cases. Some omissions were noted by Mike Magee in The Inquirer. Writing in The Register, Lance Davis commented on the importance of such books, stating "... history isn't just about dead people who wore crowns." References External links Coffee table books
Digital Retro
Technology
326