id: int64 (values 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values 3 to 51.8k)
990,657
https://en.wikipedia.org/wiki/Service%20data%20unit
In Open Systems Interconnection (OSI) terminology, a service data unit (SDU) is a unit of data that has been passed down from an OSI layer or sublayer to a lower layer. This unit of data (SDU) has not yet been encapsulated into a protocol data unit (PDU) by the lower layer. The SDU is then encapsulated into the lower layer's PDU, and the process continues until it reaches the PHY, physical, or lowest layer of the OSI stack. The SDU can also be thought of as a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user. SDU and PDU The SDU differs from a PDU in that the PDU specifies the data that will be sent to the peer protocol layer at the receiving end, as opposed to being sent to a lower layer. The SDU accepted by any given layer (n) from layer (n+1) above is a PDU of the layer (n+1) above. In effect, the SDU is the 'payload' of a given PDU. Layer (n) may add headers or trailers, or both, to the SDU, and may do other kinds of reformatting, recoding, splitting or transformations on the data, forming one or more layer (n) PDUs. The added headers or trailers and other possible changes are part of the process that makes it possible to get data from a source to a destination. Layer (n) may also generate additional layer (n) PDUs. Each unit of data that layer (n) gives to layer (n-1) below is in turn handed down as a layer (n-1) SDU. When the PDU of layer (n+1), plus any metadata layer (n) would add, would exceed the maximum size a layer (n) PDU can be (called layer (n)'s maximum transmission unit), the SDU must be split into multiple payloads for layer (n), a process known as fragmentation. MAC SDU MAC SDUs, or MSDUs, are data units passed down to the Media Access Control (MAC) sublayer from the layer above; the corresponding MAC PDUs (MPDUs) are the data units exchanged between peer MAC entities at the same OSI layer. When MAC PDUs are larger than MAC SDUs, a single MAC PDU can carry several MAC SDUs; this is packet aggregation. When MAC PDUs are smaller than MAC SDUs, a single MAC SDU is split across several MAC PDUs; this is packet segmentation. See also Federal Standard 1037C References Telecommunications standards
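The fragmentation rule just described (split an SDU whenever the resulting PDU would exceed the lower layer's maximum transmission unit) can be illustrated with a short sketch. This is a minimal, hypothetical example rather than any standard's algorithm; the function name, the fixed 8-byte header size, and the MTU value are assumptions made only for illustration.

```python
def fragment_sdu(sdu: bytes, mtu: int, header_size: int = 8) -> list[bytes]:
    """Split an (n+1)-layer PDU, received as a layer-n SDU, into layer-n payloads.

    Each layer-n PDU is assumed to consist of a fixed-size header plus a payload,
    and the whole PDU must not exceed the layer's MTU.
    """
    max_payload = mtu - header_size          # room left for SDU bytes in each PDU
    if max_payload <= 0:
        raise ValueError("MTU too small to carry any payload")
    # Slice the SDU into payload-sized chunks; each chunk becomes one PDU payload.
    return [sdu[i:i + max_payload] for i in range(0, len(sdu), max_payload)]


# Example: a 3000-byte SDU over a link with a 1500-byte MTU yields 3 fragments
# (1492 + 1492 + 16 payload bytes).
fragments = fragment_sdu(b"\x00" * 3000, mtu=1500)
print([len(f) for f in fragments])
```

In a real stack each fragment would then be prefixed with its own layer (n) header (sequence numbers, flags, and so on) before transmission.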
Service data unit
Technology
566
66,315
https://en.wikipedia.org/wiki/Solid-state%20chemistry
Solid-state chemistry, also sometimes referred to as materials chemistry, is the study of the synthesis, structure, and properties of solid phase materials. It therefore has a strong overlap with solid-state physics, mineralogy, crystallography, ceramics, metallurgy, thermodynamics, materials science and electronics with a focus on the synthesis of novel materials and their characterization. A diverse range of synthetic techniques, such as the ceramic method and chemical vapour deposition, are used to prepare solid-state materials. Solids can be classified as crystalline or amorphous on the basis of the nature of the order present in the arrangement of their constituent particles. Their elemental compositions, microstructures, and physical properties can be characterized through a variety of analytical methods. History Because of its direct relevance to products of commerce, solid state inorganic chemistry has been strongly driven by technology. Progress in the field has often been fueled by the demands of industry, sometimes in collaboration with academia. Applications discovered in the 20th century include zeolite and platinum-based catalysts for petroleum processing in the 1950s, high-purity silicon as a core component of microelectronic devices in the 1960s, and “high temperature” superconductivity in the 1980s. The invention of X-ray crystallography in the early 1900s by William Lawrence Bragg was an enabling innovation. Our understanding of how reactions proceed at the atomic level in the solid state was advanced considerably by Carl Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry. Because of his contributions, he has sometimes been referred to as the father of solid state chemistry. Synthetic methods Given the diversity of solid-state compounds, an equally diverse array of methods is used for their preparation. Synthesis can range from high-temperature methods, like the ceramic method, to gas methods, like chemical vapour deposition. Often, the methods are chosen to prevent defect formation or to produce high-purity products. High-temperature methods Ceramic method The ceramic method is one of the most common synthesis techniques. The synthesis occurs entirely in the solid state. The reactants are ground together, formed into a pellet using a pellet press and hydraulic press, and heated at high temperatures. When the temperature of the reactants is sufficiently high, the ions at the grain boundaries react to form the desired phases. Generally, ceramic methods give polycrystalline powders, but not single crystals. The reactants are ground together using a mortar and pestle, a ResonantAcoustic mixer, or a ball mill, which decreases the particle size and increases the surface area of the reactants. If mixing is not sufficient, techniques such as co-precipitation and sol-gel can be used. A chemist forms pellets from the ground reactants and places the pellets into containers for heating. The choice of container depends on the precursors, the reaction temperature and the expected product. For example, metal oxides are typically synthesized in silica or alumina containers. A tube furnace heats the pellet; tube furnaces are available with maximum temperatures of up to 2800 °C. Molten flux synthesis Molten flux synthesis can be an efficient method for obtaining single crystals. In this method, the starting reagents are combined with a flux, an inert material with a melting point lower than that of the starting materials. The flux serves as a solvent.
After the reaction, the excess flux can be washed away using an appropriate solvent, or it can be heated again to remove the flux by sublimation if it is a volatile compound. The choice of crucible material plays an important role in molten flux synthesis: the crucible should not react with the flux or the starting reagents. If any of the materials are volatile, it is recommended to conduct the reaction in a sealed ampule. If the target phase is sensitive to oxygen, a carbon-coated fused silica tube or a carbon crucible inside a fused silica tube is often used, which prevents direct contact between the tube wall and the reagents. Chemical vapour transport Chemical vapour transport results in very pure materials. The reaction typically occurs in a sealed ampoule. A transporting agent, added to the sealed ampoule, produces a volatile intermediate species from the solid reactant. For metal oxides, the transporting agent is usually Cl2 or HCl. The ampoule has a temperature gradient, and, as the gaseous reactant travels along the gradient, it eventually deposits as a crystal. An example of an industrially-used chemical vapour transport reaction is the Mond process, which involves heating impure nickel in a stream of carbon monoxide to produce pure nickel. Low-temperature methods Intercalation method Intercalation synthesis is the insertion of molecules or ions between layers of a solid. The layered solid has weak intermolecular bonds holding its layers together. The process occurs via diffusion. Intercalation is further driven by ion exchange, acid-base reactions or electrochemical reactions. The intercalation method was first used in China with the discovery of porcelain. Also, graphene is produced by the intercalation method, and this method is the principle behind lithium-ion batteries. Solution methods It is possible to use solvents to prepare solids by precipitation or by evaporation. At times, the solvent is used hydrothermally, that is, under pressure at temperatures higher than its normal boiling point. A variation on this theme is the use of flux methods, which use a salt with a relatively low melting point as the solvent. Gas methods Many solids react vigorously with gas species like chlorine, iodine, and oxygen. Other solids form adducts with gases such as CO or ethylene. Such reactions are conducted in open-ended tubes through which the gases are passed. Also, these reactions can take place inside a measuring device such as a TGA. In that case, stoichiometric information can be obtained during the reaction, which helps identify the products. Chemical vapour deposition Chemical vapour deposition is a method widely used for the preparation of coatings and semiconductors from molecular precursors. A carrier gas transports the gaseous precursors to the material for coating. Characterization This is the process in which a material's chemical composition, structure, and physical properties are determined using a variety of analytical techniques. New phases Synthetic methodology and characterization often go hand in hand in the sense that not one but a series of reaction mixtures are prepared and subjected to heat treatment. Stoichiometry, a numerical relationship between the quantities of reactant and product, is typically varied systematically. It is important to find which stoichiometries will lead to new solid compounds or solid solutions between known ones.
A prime method to characterize the reaction products is powder diffraction, because many solid-state reactions will produce polycrystalline molds or powders. Powder diffraction aids in the identification of known phases in the mixture. If a pattern is found that is not known in the diffraction data libraries, an attempt can be made to index the pattern. The characterization of a material's properties is typically easier for a product with crystalline structures. Compositions and structures Once the unit cell of a new phase is known, the next step is to establish the stoichiometry of the phase. This can be done in several ways. Sometimes the composition of the original mixture will give a clue, under the circumstances that only a product with a single powder pattern is found or a phase of a certain composition is made by analogy to known material, but this is rare. Often, considerable effort in refining the synthetic procedures is required to obtain a pure sample of the new material. If it is possible to separate the product from the rest of the reaction mixture, elemental analysis methods such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can be used. The detection of scattered and transmitted electrons from the surface of the sample provides information about the surface topography and composition of the material. Energy dispersive X-ray spectroscopy (EDX) is a technique that uses electron beam excitation. Exciting the inner shell of an atom with incident electrons emits characteristic X-rays with an energy specific to each element. The peak energies can identify the chemical composition of a sample, including the distribution and concentration of its elements. Similar to EDX, X-ray diffraction analysis (XRD) involves the generation of characteristic X-rays upon interaction with the sample. The intensity of diffracted rays scattered at different angles is used to analyze the physical properties of a material, such as phase composition and crystallographic structure. These techniques can also be coupled to achieve a better effect. For example, SEM is a useful complement to EDX: because of its focused electron beam, it produces a high-magnification image that provides information on the surface topography. Once the area of interest has been identified, EDX can be used to determine the elements present in that specific spot. Selected area electron diffraction can be coupled with TEM or SEM to investigate the level of crystallinity and the lattice parameters of a sample. X-ray diffraction is also used because of its imaging capabilities and speed of data generation. Establishing the composition and stoichiometry of a new phase often requires revisiting and refining the preparative procedures, and this is linked to the question of which phases are stable at what composition and what stoichiometry, in other words, what the phase diagram looks like. Important tools in establishing this are thermal analysis techniques like DSC or DTA and, increasingly, due to the advent of synchrotrons, temperature-dependent powder diffraction. Increased knowledge of the phase relations often leads to further refinement in synthetic procedures in an iterative way. New phases are thus characterized by their melting points and their stoichiometric domains. The latter is important for the many solids that are non-stoichiometric compounds. The cell parameters obtained from XRD are particularly helpful to characterize the homogeneity ranges of the latter.
Local structure In contrast to the large-scale structures of crystals, the local structure describes the interaction of the nearest neighbouring atoms. Methods of nuclear spectroscopy use specific nuclei to probe the electric and magnetic fields around the nucleus. For example, electric field gradients are very sensitive to small changes caused by lattice expansion or compression (thermal or pressure-induced), phase changes, or local defects. Common methods are Mössbauer spectroscopy and perturbed angular correlation. Optical properties The optical properties of metallic materials arise from the collective excitation of conduction electrons. The coherent oscillations of electrons under electromagnetic radiation, along with the associated oscillations of the electromagnetic field, are called surface plasmon resonances. The excitation wavelength and frequency of the plasmon resonances provide information on the particle's size, shape, composition, and local optical environment. Non-metallic materials and semiconductors can be characterized by their band structure, which contains a band gap that represents the minimum energy difference between the top of the valence band and the bottom of the conduction band. The band gap can be determined using ultraviolet-visible spectroscopy to predict the photochemical properties of the semiconductors. Further characterization In many cases, new solid compounds are further characterized by a variety of techniques that straddle the fine line that separates solid-state chemistry from solid-state physics. See Characterisation in materials science for additional information. References External links Sadoway, Donald. 3.091SC Introduction to Solid State Chemistry, Fall 2010. (Massachusetts Institute of Technology: MIT OpenCourseWare) Materials science
Solid-state chemistry
Physics,Chemistry,Materials_science,Engineering
2,358
3,104,727
https://en.wikipedia.org/wiki/Phoning%20home
In computing, phoning home is a term often used to refer to the behavior of security systems that report network location, username, or other such data to another computer. Phoning home may be useful for the proprietor in tracking a missing or stolen computer. In this way, it is frequently performed by mobile computers at corporations. It typically involves a software agent which is difficult to detect or remove. However, phoning home can also be malicious, as in surreptitious communication between end-user applications or hardware and its manufacturers or developers. The traffic may be encrypted to make it difficult or impractical for the end user to determine what data are being transmitted. The Stuxnet attack on Iran's nuclear facilities was facilitated by phone-home technology, as reported by The New York Times. Legally phoning home Some uses for the practice are legal in some countries. For example, phoning home could be for access restriction, such as transmitting an authorization key. This was done with the Adobe Creative Suite: Each time one of the programs is opened, it phones home with the serial number. If the serial number is already in use, or a fake, then the program will present the user with the option of entering the correct serial number. If the user refuses, the next time the program loads, it will operate in trial mode until a valid serial number has been entered. However, the method can be thwarted by either disabling the internet connection when starting the program or adding a firewall or Hosts file rule to prevent the program from communicating with the verification server. Phoning home could also be for marketing purposes, such as the "Sony BMG rootkit", which transmits a hash of the currently playing CD back to Sony, or a digital video recorder (DVR) reporting on viewing habits. High-end computing systems such as mainframes have been able to phone home for many years, to alert the manufacturer of hardware problems with the mainframes or disk storage subsystems (this enables repair or maintenance to be performed quickly and even proactively under the maintenance contract). Similarly, high-volume copy machines have long been equipped with phone-home capabilities, both for billing and for preventative/predictive maintenance purposes. In research computing, phoning home can track the daily usage of open source academic software. This is used to develop logs for the purposes of justification in grant proposals to support the ongoing funding of such projects. Aside from malicious activity, phoning home may also be done to track computer assets—especially mobile computers. One of the most well-known software applications that leverage phoning home for tracking is Absolute Software's CompuTrace. This software employs an agent which calls into an Absolute-managed server on regular intervals with information companies or the police can use to locate a missing computer. More uses Other than phoning the home (website) of the applications' authors, applications can allow their documents to do the same thing, thus allowing the documents' authors to trigger (essentially anonymous) tracking by setting up a connection that is intended to be logged. Such behavior, for example, caused v7.0.5 of Adobe Reader to add an interactive notification whenever a PDF file tries phoning home to its author. HTML e-mail messages can easily implement a form of "phoning home". 
Images and other files required by the e-mail body may generate extra requests to a remote web server before they can be viewed. The IP address of the user's own computer is sent to the webserver (an unavoidable process if a reply is required), and further details embedded in request URLs can further identify the user by e-mail address, marketing campaign, etc. Such extra page resources have been referred to as "web bugs" and they can also be used to track off-line viewing and other uses of ordinary web pages. So as to prevent the activation of these requests, many e-mail clients do not load images or other web resources when HTML e-mails are first viewed, giving users the option to load the images only if the e-mail is from a trusted source. Maliciously phoning home There are many malware applications that can "phone home" to gather and store information about a person's machine. For example, the Pushdo Trojan shows the new complexity of modern malware applications and the phoning-home capabilities of these systems. Pushdo has 421 executables available to be sent to an infected Windows client. Surveillance cameras Foscam have been reported by security researcher Brian Krebs to secretly phone home to the manufacturer. See also Digital rights management (DRM) Product activation Spyware Internet of Things Telemetry References Computer network security Spyware Internet privacy
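As a rough illustration of the web-bug mechanism described above, the sketch below composes an HTML e-mail containing a one-pixel tracking image. It is a hypothetical example: the tracker domain, campaign name, and recipient address are invented, and real mailers differ in the details.

```python
from email.message import EmailMessage
from urllib.parse import quote

def build_tracked_email(recipient: str, campaign: str) -> EmailMessage:
    """Compose an HTML e-mail containing a 1x1 'web bug' image.

    When an e-mail client downloads the image, the remote server logs the
    request, revealing the viewer's IP address and, via the query string,
    which recipient and campaign the message belonged to.
    """
    pixel_url = (
        "https://tracker.example.com/pixel.gif"
        f"?user={quote(recipient)}&c={quote(campaign)}"
    )
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = "Newsletter"
    msg.set_content("Plain-text fallback without tracking.")
    msg.add_alternative(
        f'<p>Hello!</p><img src="{pixel_url}" width="1" height="1" alt="">',
        subtype="html",
    )
    return msg

message = build_tracked_email("alice@example.org", campaign="spring-2024")
print(message["Subject"], "->", message["To"])
```

Blocking remote images by default, as many clients do, prevents exactly this request from being made until the user opts in.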
Phoning home
Engineering
971
4,145,476
https://en.wikipedia.org/wiki/Doxefazepam
Doxefazepam (marketed under the brand name Doxans) is a benzodiazepine medication. It possesses anxiolytic, anticonvulsant, sedative and skeletal muscle relaxant properties. It is used therapeutically as a hypnotic. According to Babbini and colleagues in 1975, this derivative of flurazepam was between 2 and 4 times more potent than the latter while at the same time being half as toxic in laboratory animals. It was patented in 1972 and came into medical use in 1984. Side effects Section 5.5 of the article Doxefazepam in volume 66 of the World Health Organization's (WHO) and International Agency for Research on Cancer's (IARC) IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, an article describing the carcinogenic/toxic effects of doxefazepam on humans and experimental animals, states that there is "inadequate evidence in humans for the carcinogenicity of doxefazepam" and "limited evidence in experimental animals for the carcinogenicity of doxefazepam," and concludes that the overall evaluation of the substance's carcinogenicity to humans is "not classifiable." See also Benzodiazepine References External links Inchem.org - Doxefazepam IARC Monographs - Doxefazepam Primary alcohols Lactims Benzodiazepines Chloroarenes 2-Fluorophenyl compounds Hypnotics Lactams
Doxefazepam
Biology
321
14,848,284
https://en.wikipedia.org/wiki/Table%20of%20volume%20of%20distribution%20for%20drugs
This is a table of volume of distribution (Vd) for various medications. For comparison, drugs with a Vd of less than 0.2 L/kg body weight are mainly distributed in blood plasma, those with a Vd of 0.2-0.7 L/kg mostly in the extracellular fluid, and those with a Vd of more than 0.7 L/kg are distributed throughout total body water. References & footnotes Volume of distribution for drugs, Table Pharmacokinetics
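As a minimal illustration of the rule of thumb above, the sketch below classifies a drug by its Vd. The thresholds come from the text; the function name and the example values are assumptions made only for the example.

```python
def distribution_compartment(vd_l_per_kg: float) -> str:
    """Rough compartment classification from volume of distribution (L/kg body weight)."""
    if vd_l_per_kg < 0.2:
        return "mainly blood plasma"
    elif vd_l_per_kg <= 0.7:
        return "mostly extracellular fluid"
    else:
        return "throughout total body water"


# Hypothetical example values, not taken from the table itself.
for name, vd in [("drug A", 0.14), ("drug B", 0.3), ("drug C", 2.0)]:
    print(name, "->", distribution_compartment(vd))
```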
Table of volume of distribution for drugs
Chemistry
89
15,593,568
https://en.wikipedia.org/wiki/Email%20storm
An email storm (also called a reply all storm or sometimes reply allpocalypse) is a sudden spike of "reply all" messages on an email distribution list, usually caused by a controversial or misdirected message. Such storms can start when even one member of the distribution list replies to the entire list at the same time in response to an instigating message. When other members respond, pleading for the cessation of messages, asking to be removed from the list, or adding vitriol to the discussion this triggers a chain reaction of email messages. The sheer load of traffic generated by these storms can render the email servers inoperative, similar to a distributed denial-of-service attack. Some email viruses also have the capacity to create email storms by sending copies of themselves to an infected user's contacts, including distribution lists, infecting the contacts in turn. Examples On 31 March 1987, Jordan Hubbard, using rwall, intended to message every machine at UC Berkeley, but the message was sent to every machine on the Internet listed in /etc/hosts. This message was not an email. On 3 October 2007, an email storm was generated at the U.S. Department of Homeland Security, causing more than 2.2 million messages to be sent and exposing the names of hundreds of security professionals. In early 2009, U.S. State Department employees were warned they could face disciplinary action for taking part in a massive email storm that "nearly knocked out one of the State Department's main electronic communications systems". In November 2012, New York University experienced a reply-all email storm with 39,979 subscribed addresses affected due to an older listserv-based mailing list. On 18 September 2013, a Cisco employee sent an email to a "sep_training1" mailing list containing 23,570 members requesting that an online training be performed. The resulting storm of more than four million reply emails, many of which were requests to unsubscribe and facepalm images, generated over 375 GB of network traffic and an estimated $600,000 of lost productivity. The following month on 23 October 2013, a nearly identical email storm occurred when an employee sent a message to a Cisco group containing 34,562 members. The thread was flooded with "remove me from the list", "me too", "please don't reply-all", and even a pizza recipe. On 18 March 2014, a Capgemini employee sent an internal mail to an erroneously generated mail group containing 47,212 members in 15 countries. This was followed by a subsequent wave of over 500 reply-alls requesting removal from the list, asking for people to stop replying along with jokes in various languages. It lasted approximately 6 hours, involved more than 21 million emails, and generated an estimated 1.5 TB of traffic. On 8 October 2014, an email storm of over 3,000 messages, including both spam and student comments, reached University College London's 26,000 students. Dubbed "Bellogate", the email chain was started by a prank email sent from an anonymous user pretending to be the provost. On 26 August 2015, Thomson Reuters, a media and information firm, experienced a "reply all" email storm reaching out to over 33,000 employees. Seven hours later, the original email resulted in nearly 23 million emails. The storm was initiated by an employee located in the Philippines requesting his phone to be re-activated. Employees from all over the globe took to social media trending the hashtag #ReutersReplyAllGate. 
On 2 October 2015, Atos, a European IT services corporation, experienced a "reply all" email storm. In about one hour, 379 emails were sent to an email distribution list with 91,053 employees, leading to more than 34.5 million emails. The storm was initiated by an employee located in India, requesting a password reset for a machine. On 14 November 2016, at least 840,000 employees of the United Kingdom's National Health Service (out of a total of 1.2 million employees) were sent a 'test e-mail' by a Croydon-based IT contractor, resulting in an estimated total of 186 million e-mails generated during the reply-all storm. On 7 December 2018, the Utah state government experienced an email storm originating in a holiday potluck invite that was mistakenly sent to 25,000 state employees, nearly the entire state workforce. Utah Lieutenant Governor Spencer Cox called it "an emergency". On 24 January 2019, GitHub notifications caused a large number of emails at Microsoft. There is a GitHub group called @Microsoft/everyone that the notifications were sent to. To make things worse, replying to the notifications automatically resubscribed the user. On 28 May 2019, an employee at the United States House of Representatives sent out a message to an email group called "Work Place Rights 2019". The group contained every single House employee's contact. The email replies lasted over two hours. On 3 June 2022, a user made a pull request to a GitHub repository belonging to the Epic Games organization, tagging several of the organization's teams. Notifications were delivered to members of the tagged teams, sending emails to around 400,000 members of the tagged "EpicGames/developers" team in the process. Furthermore, some individuals received an additional 150 notifications as a result of the ensuing comments submitted in response to the request. Epic Games uses GitHub to distribute source code for its Unreal Engine game engine and grants access to the private repositories by adding users to the "EpicGames/developers" team, accounting for its unusually large number of members compared to other GitHub organizations. On 13 December 2022, a medical student at the Mayo Clinic Alix School of Medicine in Arizona sent an email to several large distribution lists which included employees of Mayo Clinic Arizona for an academic project. Over 3,000 individuals received the email. On 8 September 2023, an emergency drill held in the United States Senate led to an email storm when users who were asked to give their location used "reply all" to the entire Senate. On 9 November 2024, a misconfigured email list at Miami University resulted in students receiving hundreds of emails from others asking to be removed from the list, alongside other memes. See also Etiquette in technology Information overload Blind carbon copy References Email Internet terminology
Email storm
Technology
1,326
7,318,182
https://en.wikipedia.org/wiki/Superordinate%20goals
In social psychology, superordinate goals are goals that are worth completing but require two or more social groups to cooperatively achieve. The idea was proposed by social psychologist Muzafer Sherif in his experiments on intergroup relations, run in the 1940s and 1950s, as a way of reducing conflict between competing groups. Sherif's idea was to downplay the two separate group identities and encourage the two groups to think of themselves as one larger, superordinate group. This approach has been applied in many contexts to reduce intergroup conflict, including in classrooms and business organizations. However, it has also been critiqued by other social psychologists who have proposed competing theories of intergroup conflict, such as contact theory and social categorization theory. In the context of goal-setting theory, the concept is seen in terms of three goal levels. These are classified as subordinate, intermediate and superordinate. An organization's superordinate goals are expressed through its Vision and Mission Statement and support strategic alignment of activities (subordinate and intermediate goals) with the overall purpose (superordinate goals). Origin Superordinate goals were first described and proposed as a solution to intergroup conflict by social psychologist Muzafer Sherif. He studied conflict by creating a boys' summer camp for his Robbers Cave experiments. Sherif assigned the participating campers to two separate groups, the blue and red groups. The boys had separate games and activities, lived in different cabins, ate at different tables, and only spent time with their own group. Sherif then introduced competition between the groups, setting up athletic contests between them. This created conflict between the two groups of boys that developed into hostile attitudes towards the other group, pranking, name-calling, shows of group pride, negative stereotyping, and even occasionally physical violence. In order to reduce the conflict between the two groups of boys, Sherif had first attempted to have both groups spend time together non-competitively. He had also encouraged them to mix and eat meals and play games with boys from the other group. However, the groups remained hostile toward each other. He had also tried to unite both groups against a common enemy, an outside summer camp, in an early version of the experiment. However, this was deemed an inadequate solution as this simply created a new conflict between the new group and the common enemy. Sherif then introduced superordinate goals as a possible solution to the conflict. These were goals that were important to the summer camp but could only be achieved with both groups working together, such as obtaining water during a water shortage or procuring a film that both groups wanted to see but did not have enough money for. Sherif found that these goals encouraged cooperation between the boys, which reduced conflict between the groups, increased positive beliefs about boys from the other group, and increased cross-group friendships. Background Superordinate goals are most often discussed in the context of realistic conflict theory, which proposes that most intergroup conflicts stem from a fight over scarce resources, especially in situations that are seen as zero-sum. Under realistic conflict theory, prejudice and discrimination are functional, because groups are tools used to achieve goals, including obtaining scarce resources that would be difficult to get as an individual. 
In this case, groups see other groups with similar goals as threats and therefore perceive them negatively. Groups that are both competing for the same limited resource are said to have a negative interdependence. On the other hand, there are groups that benefit from working together on goals that are not zero-sum. In this case, these groups are said to have a positive interdependence. In order to remove competition between different factions under realistic group conflict theory, it is necessary to have non-zero sum goals that create a positive interdependence within groups rather than a negative interdependence. Superordinate goals can create positive interdependence if they are seen as desirable by both groups but are not achievable by each faction independently. Psychological Mechanisms Work in social psychology suggests that superordinate goals differ from single group goals in that they make the larger group identity more salient and increase positive beliefs about everyone in the larger superordinate group. Cooperation and Interdependence Superordinate goals differ from smaller group goals in that they cannot be achieved by a single small group, and thus force multiple groups to work together, encouraging cooperation and penalizing competition. This encourages each group to consider the other group positively rather than negatively, as the other group is instrumental to achieving the common goal. This fosters a sense of positive interdependence rather than negative interdependence. Superordinate Goals and Identity In addition to increasing positive interdependence, having two groups work together on a single superordinate goal makes the larger group identity more salient. In effect, superordinate goals make it more likely that both groups will consider themselves as part of a larger superordinate group that has a common goal rather than two independent groups who are in conflict with each other. In the case of Sherif's summer camp, both groups of boys, the red and the blue, thought of themselves simply as campers when they were working together, rather than as part of the blue or red groups. Ingroups Having both groups consider themselves part of one larger superordinate group is valuable to the reduction of discrimination, because evaluation of members in one's own group tends to be more positive than evaluation of members outside of one's group. However, the two groups do not need to lose their individual identities in order to become part of the superordinate group. In fact, superordinate goals work best to reduce intergroup conflict when both groups consider themselves subgroups that have a shared identity and a common fate. This allows both groups to keep the positive aspects of their individual identities while also keeping salient everything that the two subgroups have in common. Rebuttal of Contact Theory Sherif's work on superordinate goals is widely seen as a rebuttal of contact theory, which states that prejudice and discrimination between groups widely exists due to a lack of contact between them. This lack of contact causes both sides to develop misconceptions about those who they do not know and to act on those misconceptions in discriminatory ways. However, Sherif's work showed that contact between groups is not enough to eliminate prejudice and discrimination. If groups are competing for the same limited resources, increasing contact between the groups will not convince the groups to see each other more positively. 
Instead, they will continue to discriminate, as the boys in Sherif's summer camps did. This is especially true when the groups are of unequal status and one group can control the resources and power. Caveats and Critiques Longevity The effects of superordinate goals have not always been shown to last beyond the completion of such goals. In Sherif's study, the separate group identities did not dissolve until the end of the camp. The two groups of boys had less hostility toward each other but still identified with their own groups rather than the larger superordinate identity. Zero-Sum Goals In some cases, there are no superordinate goals that can bring together two separate groups. If there really are zero-sum goals that put groups in competition with each other, groups will remain separate and will stereotype each other and discriminate against each other. In some cases, simply the perception that goals are zero-sum, whether they are or not, can increase prejudice. Therefore, not only is there a need for non-zero-sum goals, but they must be perceived as such. Complementarity Superordinate goals are not as effective when both groups are performing similar or the same roles within the group to achieve the goal. If this is the case, both groups may see the other as infringing on their work or getting in the way. It is considered to be more effective to have members of each group playing complementary roles in the achievement of the goal, although the evidence to support this idea is mixed. Absence of Trust or Inequality of Power Some also argue that with an absence of trust, the prospect of working together to achieve a mutual goal may not serve to bring groups to a superordinate identity. In some cases, when there are inequalities of power or a lack of trust among groups, the idea that they must work together and foster trust and positive interdependence may backfire and lead to more discrimination rather than less. Competing Theories Social categorization theory and social identity theory differ from realistic group conflict theory in that they suggest that people do not only belong to groups to gain material advantage. Therefore, these theories propose other ways of improving intergroup social relations. Social Categorization Theory Social categorization theory proposes that people naturally categorize themselves and others into groups, even when there is no motive to do so. Supporting this idea is Tajfel's minimal group paradigm, which has shown there is discrimination among groups created in a laboratory that have no history, future, interaction, or motivation. Social categorization suggests that intergroup competition may be a feature of this tendency to categorize and may arise without zero-sum goals. Under Tajfel's paradigm, people will go as far as hurting their own group in order to harm the other group even more. Thus, superordinate goals may not solve all forms of discrimination. Social Identity Theory Social identity theory proposes that not only do people naturally categorize themselves and others, but they derive part of their own identities from being a part of a social group. Being part of a social group is a source of positive self-esteem and motivates individuals to think of their own group as better than other groups. Under social identity theory, superordinate goals are only useful insofar as they make salient the superordinate identity. It is the superordinate identity that is important for reducing intergroup conflict, and not the goals themselves. 
If the superordinate identity can be made salient without the use of goals, then the goals themselves are not instrumental to reducing conflict. Applications Superordinate goals have been applied to multiple types of situations in order to reduce conflict between groups. Jigsaw Classroom Elliot Aronson applied the idea of superordinate goals in Austin, Texas during the integration of the Austin public schools. Aronson used group projects in elementary school classrooms as a way to get white and black children to work together and reduce discrimination. Aronson had teachers assign projects that could only be completed if everyone in the group participated, and had the teachers give group grades. Having children work together and rely on each other for grades fostered positive interdependence and increased liking among the black and white children as well as decreased bullying and discrimination. Additionally, it increased the performance of all the children. Business Organizations and Negotiations Blake and Mouton applied superordinate goals to conflicts in business organizations. They specify that in a business context, the superordinate goals must be attractive to both parties in the organization or negotiation setting. If both parties are not interested in pursuing the goal or believe that they are better off without it, then the superordinate goal will not help to reduce conflict between the groups. Blake and Mouton also suggest that superordinate goals will often be a consequence of their intergroup problem-solving model. Israeli-Palestinian Conflict Herbert Kelman applied superordinate goals to the Israeli-Palestinian conflict to improve relations between members of the two groups. He created problem-solving workshops where Israelis and Palestinians were encouraged to solve together the problems given to them as well as to interact in a positive atmosphere. These workshops often focused on specific problems, such as tourism, economic development, or trade, which allowed both groups to find practical, positive solutions to these problems and improve relations between the groups. Interracial Basketball Teams McClendon and Eitzen studied interracial basketball teams in the 1970s and found that interracial basketball teams where the interdependence of black and white team members was high and the team had a high winning percentage had lower instances of anti-black attitudes among white players and higher preference for integration. However, teams that did not have high interdependence among black and white teammates or high winning percentages did not show reduced prejudice. Additionally, black members of the winning teams did not show more positive attitudes towards their white teammates than the losing teams. References Motivation
Superordinate goals
Biology
2,566
372,242
https://en.wikipedia.org/wiki/Radical%20of%20a%20ring
In ring theory, a branch of mathematics, a radical of a ring is an ideal of "not-good" elements of the ring. The first example of a radical was the nilradical. In the next few years several other radicals were discovered, of which the most important example is the Jacobson radical. The general theory of radicals was later defined independently by several authors. Definitions In the theory of radicals, rings are usually assumed to be associative, but need not be commutative and need not have a multiplicative identity. In particular, every ideal in a ring is also a ring. A radical class (also called radical property or just radical) is a class σ of rings, possibly without multiplicative identities, such that: (1) the homomorphic image of a ring in σ is also in σ; (2) every ring R contains an ideal S(R) in σ that contains every other ideal of R that is in σ; (3) S(R/S(R)) = 0. The ideal S(R) is called the radical, or σ-radical, of R. The study of such radicals is called torsion theory. For any class δ of rings, there is a smallest radical class Lδ containing it, called the lower radical of δ. The operator L is called the lower radical operator. A class of rings is called regular if every non-zero ideal of a ring in the class has a non-zero image in the class. For every regular class δ of rings, there is a largest radical class Uδ, called the upper radical of δ, having zero intersection with δ. The operator U is called the upper radical operator. A class of rings is called hereditary if every ideal of a ring in the class also belongs to the class. Examples The Jacobson radical Let R be any ring, not necessarily commutative. The Jacobson radical of R is the intersection of the annihilators of all simple right R-modules. There are several equivalent characterizations of the Jacobson radical, such as: J(R) is the intersection of the regular maximal right (or left) ideals of R. J(R) is the intersection of all the right (or left) primitive ideals of R. J(R) is the maximal right (or left) quasi-regular right (resp. left) ideal of R. As with the nilradical, we can extend this definition to arbitrary two-sided ideals I by defining J(I) to be the preimage of J(R/I) under the projection map R → R/I. If R is commutative, the Jacobson radical always contains the nilradical. If the ring R is a finitely generated Z-algebra, then the nilradical is equal to the Jacobson radical, and more generally: the radical of any ideal I will always be equal to the intersection of all the maximal ideals of R that contain I. This says that R is a Jacobson ring. The Baer radical The Baer radical of a ring is the intersection of the prime ideals of the ring R. Equivalently, it is the smallest semiprime ideal in R. The Baer radical is the lower radical of the class of nilpotent rings. It is also called the "lower nilradical" (denoted Nil∗R), the "prime radical", or the "Baer–McCoy radical". Every element of the Baer radical is nilpotent, so it is a nil ideal. For commutative rings, this is just the nilradical and closely follows the definition of the radical of an ideal. The upper nil radical or Köthe radical The sum of the nil ideals of a ring R is the upper nilradical Nil*R or Köthe radical and is the unique largest nil ideal of R. Köthe's conjecture asks whether any left nil ideal is in the nilradical. Singular radical An element r of a (possibly non-commutative) ring is called left singular if it annihilates an essential left ideal; that is, r is left singular if Ir = 0 for some essential left ideal I.
The set of left singular elements of a ring R is a two-sided ideal, called the left singular ideal, denoted here Z(R). The ideal N of R such that N/Z(R) is the left singular ideal of R/Z(R) is called the singular radical or the Goldie torsion of R. The singular radical contains the prime radical (the nilradical in the case of commutative rings) but may properly contain it, even in the commutative case. However, the singular radical of a Noetherian ring is always nilpotent. The Levitzki radical The Levitzki radical is defined as the largest locally nilpotent ideal, analogous to the Hirsch–Plotkin radical in the theory of groups. If the ring is Noetherian, then the Levitzki radical is itself a nilpotent ideal, and so is the unique largest left, right, or two-sided nilpotent ideal. The Brown–McCoy radical The Brown–McCoy radical (called the strong radical in the theory of Banach algebras) can be defined in any of the following ways: the intersection of the maximal two-sided ideals; the intersection of all maximal modular ideals; the upper radical of the class of all simple rings with multiplicative identity. The Brown–McCoy radical is studied in much greater generality than associative rings with 1. The von Neumann regular radical A von Neumann regular ring is a ring A (possibly non-commutative, without multiplicative identity) such that for every a there is some b with a = aba. The von Neumann regular rings form a radical class. It contains every matrix ring over a division algebra, but contains no nil rings. The Artinian radical The Artinian radical is usually defined for two-sided Noetherian rings as the sum of all right ideals that are Artinian modules. The definition is left-right symmetric, and indeed produces a two-sided ideal of the ring. This radical is important in the study of Noetherian rings. See also Related uses of radical that are not radicals of rings: Radical of a module Kaplansky radical Radical of a bilinear form References Further reading Ideals (ring theory) Ring theory
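As a compact restatement of the definitions given earlier in this article, the radical class axioms and the annihilator characterization of the Jacobson radical can be written as follows. This is only a summary of statements already made in the text, using conventional notation.

```latex
% A class \sigma of (possibly non-unital) rings is a radical class if:
% (1) homomorphic images stay in \sigma,
% (2) every ring has a largest ideal lying in \sigma,
% (3) the corresponding quotient has no nonzero \sigma-ideal.
\begin{align*}
&\text{(1)}\quad R \in \sigma,\ R \twoheadrightarrow R' \;\Longrightarrow\; R' \in \sigma\\
&\text{(2)}\quad \text{every ring } R \text{ has an ideal } S(R) \in \sigma
   \text{ containing every ideal of } R \text{ that lies in } \sigma\\
&\text{(3)}\quad S\bigl(R/S(R)\bigr) = 0\\[1ex]
&\text{Jacobson radical:}\quad
J(R) \;=\; \bigcap_{M \text{ simple right } R\text{-module}} \operatorname{Ann}_R(M)
      \;=\; \bigcap \{\text{regular maximal right ideals of } R\}
\end{align*}
```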
Radical of a ring
Mathematics
1,314
70,901,258
https://en.wikipedia.org/wiki/EA-1763
EA-1763, O-PPVX, V1 or propyl S-2-diisopropylaminoethylmethylphosphonothiolate is a military-grade neurotoxic organophosphonate nerve agent related to VX, being the propyl analogue of VX. It is part of the V-series. Chemical characteristics Little information about EA-1763's physicochemical properties has been reported. V1 is a more viscous and less dense liquid than VX. It is colorless, odorless and tasteless in its pure form. When impure or in the crude form, it has a characteristic viscous amber color, giving it an appearance similar to motor oil. The appearance of the impure form varies between several shades of amber, from a viscous liquid of a transparent pale yellow color to a pasty liquid of a semi-transparent and cloudy dirty amber color. The smell varies from engine oil to an offensive brew of organosulfur compounds and organoamines. Its larger alkane chain pushes its melting point above that of VX. The estimated solubility of V1 in water is 4 times lower than that of VX (6.8 g/L of water at 25 °C). V1 has high solubility in organic solvents and other non-polar compounds. The stability of V1 is roughly the same as that of VX in either medium, and its lower water solubility and lower volatility can slow down its environmental degradation. A vapor pressure at least 3 times lower than that of VX is speculated. The longer alkane chain tends to stabilize the induction of electrons from P to O, making P less electrophilic. It is expected that the persistence of V1 is slightly higher than that of VX, since the hydrolysis rate of ethyl paraoxon is 1.6 times higher than that of n-propyl paraoxon in a neutral medium. The lower volatility and minimal persistence difference make VX preferable to V1. Preparation It is prepared by the same route as VX, using propanol instead of ethanol. References V-series nerve agents Phosphonothioates Chemical weapons of the United States Diisopropylamino compounds
EA-1763
Chemistry
468
53,810,329
https://en.wikipedia.org/wiki/NGC%20430
NGC 430 is an elliptical galaxy of type E: located in the constellation Cetus. It was discovered on October 1, 1785 by William Herschel. It was described by Dreyer as "faint, very small, round, very suddenly brighter middle similar to star." References External links 0430 17851001 Cetus Elliptical galaxies 004376
NGC 430
Astronomy
72
16,655,417
https://en.wikipedia.org/wiki/The%20Nautical%20Almanac
The Nautical Almanac has been the familiar name for a series of official British almanacs published under various titles since the first issue of The Nautical Almanac and Astronomical Ephemeris, for 1767: this was the first nautical almanac to contain data dedicated to the convenient determination of longitude at sea. It was originally published from the Royal Greenwich Observatory in England. A detailed account of how the publication was produced in its earliest years has been published by the National Maritime Museum. Since 1958 (with the issue for the year 1960), His Majesty's Nautical Almanac Office and the US Naval Observatory have jointly published a unified Nautical Almanac, for use by the navies of both countries. Publication history The changing names and contents of related titles in the series are summarised as follows. (The issue years mentioned below are those for which the data in the relevant issue were calculated—and the issues were in practice published in advance of the year for which they were calculated, at different periods of history, anything from 1 to 5 years in advance). (For many years, official nautical almanacs and astronomical ephemerides in the UK and the USA had a linked history, and they became merged in both titles and contents in 1981.) In the UK, the official publications have been: 1767–1959 For 1767–1959, The Nautical Almanac and Astronomical Ephemeris contained both astro-navigational and general astronomical data (this complete publication was often referred to, for short, especially in the earlier years, as just The Nautical Almanac). From 1832, responsibility for publication was transferred to His Majesty's Nautical Almanac Office. The main distinctive feature of the inaugural issue for 1767 was the tabulation of lunar distances as a tool to facilitate the determination of longitude at sea from observations of the Moon. Within a few years, the publishers of almanacs of other countries began to adopt the practice of tabulating lunar distances. Lunar distances continued to be published in the UK official almanacs until 1906, by which time their use had declined in practice. For some time thereafter, in the issues for the years 1907–1919, examples of how to calculate them were given instead. Time: The issues for 1767 to 1833 gave their ephemeris tabulations in terms of Greenwich apparent (not mean) time. This was on the grounds that an important class of user was the 'Mariner', and that 'apparent Time' was "the same which he will obtain by the Altitudes of the Sun or Stars in the Manner hereafter prescribed". Mean time at Greenwich (i.e. mean solar time) was adopted as from the issue for 1834 and continued to 1959. Until the issue for 1924, the time argument for Greenwich Mean Time was counted from 0h starting at Greenwich mean noon (on the civil day with the same number), and starting with the issue for 1925 the commencement point of the time argument was changed so that 0h became midnight at the beginning of the civil day with the relevant number, to coincide for the future with the civil reckoning. During parts of the period 1767–1959, separate subsidiary titles dedicated to navigation were also published: For 1896–1913: Part 1 of the Nautical Almanac and Astronomical Ephemeris (containing the astro-navigational data) was also published separately as The Nautical Almanac & Astronomical Ephemeris, Part 1. For 1914–1951: the former Part 1 (after redesign) was renamed The Nautical Almanac Abridged for the Use of Seamen. 
For 1952–1959: after further redesign, it was again renamed, as The Abridged Nautical Almanac (and renamed yet again for 1960 onwards as simply The Nautical Almanac). 1960–1980 From the issues for 1960, the official titles were redesigned and unified (as to content) between the UK and USA, under the titles (in UK) The Astronomical Ephemeris and (separately) The Nautical Almanac. Time: A major change introduced with the 1960 issue of The Astronomical Ephemeris was the use of ephemeris time in place of mean solar time for the major ephemeris tabulations. But the Nautical Almanac, now continuing as a separate publication addressed largely to navigators, continued to give tabulations based on mean solar time (UT). 1981 to date For 1981 to date, the official titles have been unified in UK and USA (as to title as well as (redesigned) content): The Astronomical Almanac and The Nautical Almanac. The British Nautical Almanac in the United States In the US, an official (and initially separate) series of ephemeris publications began with the issue for 1855 as The American Ephemeris and Nautical Almanac; but before that, the British Nautical Almanac was commonly used on American ships and in the United States – sometimes in the form of an independently printed American 'impression' instead. Modern alternative data sources Almanac data is now also available online from the US Naval Observatory. References Bibliography Mary Croarken (2002 September). Journal of Maritime Research (Greenwich: National Maritime Museum). External links HM Nautical Almanac Office: Publications Online catalogue to copies of the Nautical Almanac held as part of the Royal Greenwich Observatory Archives at Cambridge University Library Nautical Almanac on Internet Archive for 1922, 1861, 1820, 1773 etc. Essay about The Nautical Almanac, by Sophie Waring, with digitised original documents relating to its creation. 1767 establishments in Great Britain British non-fiction books United States Naval Observatory Astronomical almanacs
The Nautical Almanac
Astronomy
1,134
2,878,186
https://en.wikipedia.org/wiki/Delta%20Arae
Delta Arae, Latinized from δ Arae, is the Bayer designation for a double star in the southern constellation Ara. It has an apparent visual magnitude of 3.62 and is visible to the naked eye. Based upon an annual parallax of 16.48 mas, it is about 198 light-years (61 parsecs) distant from the Earth. Delta Arae is a massive B-type main-sequence star with a stellar classification of B8 Vn. The 'n' suffix indicates the absorption lines are spread out broadly because the star is spinning rapidly. It has a projected rotational velocity of 255 km/s, resulting in an equatorial bulge with a radius 13% larger than the polar radius. It has a magnitude 9.5 companion G-type main sequence star that may form a binary star system with Delta Arae. There is a 12th magnitude optical companion located 47.4 arcseconds away along a position angle of 313°. Etymology In traditional Chinese astronomy, Delta Arae was known as the third star of the asterism to which it belongs. Allen erroneously called both Delta and Zeta Arae "Tseen Yin" (天陰). He probably confused the constellation "Ara" with "Ari", as 天陰 is actually in Aries. See also Ara (Chinese astronomy) Aries (Chinese astronomy) References External links HR 6500 AEEA (Activities of Exhibition and Education in Astronomy) 天文教育資訊網 2006 年 7 月 1 日 Double stars Arae, Delta 158094 085727 6500 Ara (constellation) B-type main-sequence stars Durchmusterung objects G-type main-sequence stars
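The distance follows directly from the quoted parallax via the standard relation d = 1/p; a quick check of the figure used above:

```latex
d \;=\; \frac{1}{p} \;=\; \frac{1\,\text{arcsec}}{0.01648\,\text{arcsec}}\ \text{pc}
\;\approx\; 60.7\ \text{pc} \;\approx\; 198\ \text{light-years}
```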
Delta Arae
Astronomy
342
29,941,454
https://en.wikipedia.org/wiki/Control%20of%20chromosome%20duplication
In cell biology, eukaryotes possess a regulatory system that ensures that DNA replication occurs only once per cell cycle. A key feature of the DNA replication mechanism in eukaryotes is that it is designed to replicate relatively large genomes rapidly and with high fidelity. Replication is initiated at multiple origins of replication on multiple chromosomes simultaneously so that the duration of S phase is not limited by the total amount of DNA. This flexibility in genome size comes at a cost: there has to be a high-fidelity control system that coordinates multiple replication origins so that they are activated only once during each S phase. If this were not the case, daughter cells might inherit an excessive amount of any DNA sequence, which could lead to many harmful effects. The replication origin Replication in eukaryotes begins at replication origins, where complexes of initiator proteins bind and unwind the helix. In eukaryotes, it is still unclear what exact combinations of DNA sequence, chromatin structure, and other factors define these sites. The relative contribution of these factors varies between organisms. Yeast origins are defined primarily by DNA sequence motifs, while origin locations in other organisms seem to be defined by local chromatin structure. Yeast Origins in budding yeast are defined by the autonomously replicating sequence (ARS), a short stretch of DNA (100-200 bp) that can initiate replication when transferred to any sequence of DNA. The ARS contains several specific sequence elements. One of these is the A element (ACS), an 11 bp consensus sequence rich in adenines and thymines that is essential for initiation. Single base-pair mutations in the ACS can abolish initiation activity. The ORC, a component of the initiation complex, binds the ACS in vivo throughout the cell cycle, and in vitro in an ATP dependent manner. When a few of these sequences are deleted, DNA is still copied from other intact origins, but when many are deleted, chromosome replication slows down dramatically. Still, presence of an ACS sequence is not sufficient to identify an origin of replication. Only about 30% of ACS sequences present in the genome are the sites of initiation activity. Origins in fission yeast contain long stretches of DNA rich in thymines and adenines that are important for origin function, but do not exhibit strong sequence similarity. Animals In animals, no highly conserved sequence elements have been found to direct origin activity, and it has proved difficult to identify common features of replication origins. At some loci, initiation occurs within small, relatively definable stretches of DNA, while at others, larger initiation zones of 10–50 kb seem to direct origin activity. At the sequence level, AT rich elements and CpG islands have been found at origins, but their importance or role is not yet clear. At the level of DNA structure, bent DNA and loop formation have been identified as origin features. Features identified at the chromatin level include nucleosome free regions, histone acetylation and DNAse sensitive sites. The pre-replication complex Before DNA replication can start, the pre-replicative complex assembles at origins to load helicase onto DNA. The complex assembles in late mitosis and early G1. Assembly of these pre-replicative complexes (pre-RCs) is regulated in a manner that coordinates DNA replication with the cell cycle. 
Components of the pre-RC The ORC The ORC is a six-subunit complex that binds DNA and provides a site on the chromosome where additional replication factors can assemble. It was identified in S. cerevisiae by its ability to bind the conserved A and B1 elements of yeast origins. It is a conserved feature of the replication system in eukaryotes. Studies in Drosophila showed that recessive lethal mutations in multiple Drosophila ORC subunits reduce the amount of BrdU (a marker of active replication) incorporated. Studies in Xenopus extracts show that immuno-depletion of ORC subunits inhibits DNA replication of Xenopus sperm nuclei. In some organisms, the ORC appears to associate with chromatin throughout the cell cycle, but in others it dissociates at specific stages of the cell cycle. Cdc6 and Cdt1 Cdc6 and Cdt1 assemble on the ORC and recruit the Mcm proteins. Homologs for these two S. cerevisiae proteins have been found in all eukaryotes. Studies have shown that these proteins are necessary for DNA replication. Mutations in S. pombe cdt1 blocked DNA replication. The Mcm Complex Mcm2-7 form a six-subunit complex that is thought to have helicase activity. Deletion of any single subunit of the complex has a lethal phenotype in yeast. Studies in Xenopus revealed that the Mcm2-7 complex is a critical component of the DNA replication machinery. Inactivation of temperature-sensitive mutants of any of the Mcm proteins in S. cerevisiae caused DNA replication to halt if inactivation occurred during S phase, and prevented initiation of replication if inactivation occurred earlier. Although biochemical data support the hypothesis that the Mcm complex is a helicase, helicase activity was not detected in all species, and some studies suggest that some of the Mcm subunits act together as the helicase, while other subunits act as inhibitors of this activity. If this is true, activation of the Mcm complex probably involves rearrangement of the subunits. Regulation of pre-RC complex assembly A two-step mechanism ensures that DNA is replicated only once per cycle. Assembly of the pre-RC complex (licensing) is limited to late mitosis and early G1 because it can occur only when CDK activity is low, and APC activity is high. Origin firing occurs only in S phase, when the APC is inactivated, and CDKs are activated. Yeast In budding yeast, CDK is the key regulator of pre-RC assembly. Evidence for this is that inactivation of CDKs in cells arrested in G2/M or in S phase drives reassembly of pre-RCs. CDK acts by inhibiting the individual components of the pre-RC. CDK phosphorylates Cdc6 to mark it for degradation by the SCF in late G1 and early S phase. CDK also induces export of Mcm complexes and Cdt1 from the nucleus. Evidence that CDKs regulate the localization of Mcm2-7 is that inactivation of CDKs in nocodazole-arrested cells induced accumulation of Mcm2-7 in the nucleus. Cdt1 is also exported because it binds to the Mcm complex. In Mcm-depleted cells, Cdt1 did not accumulate in the nucleus. Conversely, when a nuclear localization signal was attached to Mcm7, Mcm2-7 and Cdt1 were always found in the nucleus. Export of Mcm from the nucleus prevents loading of new Mcm complexes but does not affect the complexes that have already been loaded onto the DNA. CDK also phosphorylates ORC proteins. It has been suggested that phosphorylation affects the ability of the ORC to bind other components of the pre-RC. 
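The CDK/APC gating described in this section can be condensed into a small truth-table style sketch. This is only a conceptual toy model of the stated logic (licensing needs low CDK and high APC activity; firing needs the opposite); the boolean flags and function names are ours, not from the article:

```python
# Toy model of the two-step control of replication licensing and origin firing.
# "cdk_high" / "apc_high" are illustrative boolean stand-ins for activity levels.

def licensing_allowed(cdk_high: bool, apc_high: bool) -> bool:
    # pre-RC assembly (licensing): only when CDK activity is low and APC activity is high
    return (not cdk_high) and apc_high

def firing_allowed(cdk_high: bool, apc_high: bool) -> bool:
    # origin firing: only when CDKs are active and the APC is inactivated
    return cdk_high and (not apc_high)

for phase, cdk, apc in [("late M / early G1", False, True), ("S phase", True, False)]:
    print(f"{phase:18s}  license={licensing_allowed(cdk, apc)}  fire={firing_allowed(cdk, apc)}")

# No combination of the two flags permits licensing and firing at the same time,
# which is the sense in which the two-step mechanism limits each origin to a
# single initiation event per cycle.
```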
For substantial re-replication of DNA to occur, the regulation of all three components (Cdc6, Mcm2-7 and the ORC) would have to be overcome. Having multiple mechanisms to prevent re-replication is beneficial because the regulatory network continues to function even if one of the components fails. Animals Geminin is an important inhibitor of pre-RC assembly in metazoan cells. Geminin was identified in a screen for APC/C substrates in Xenopus. Studies have shown that Geminin prevents pre-RC assembly by binding to Cdt1 and preventing its association with the pre-RC. Since geminin is degraded by the APC/C, pre-RC assembly can proceed only when APC/C activity is high, which occurs in G1. The importance of CDKs in preventing re-licensing in metazoan cells is still unclear. Some studies have shown that under some conditions, CDKs can also promote licensing. In G0 mammalian cells, APC-mediated degradation of Cdc6 prevents licensing. However, when the cells transition into a proliferative state, CDK phosphorylates Cdc6 to stabilize it, allowing it to accumulate and bind to origins before licensing inhibitors such as geminin accumulate. Activation of replication origins While pre-RC complexes mark potential sites for origin activation, further proteins and complexes must assemble at these sites to activate replication (origin firing). The following events must occur in order to activate the origin: the DNA helix has to open, the helicase must be activated, and DNA polymerases and the rest of the replicative machinery have to load onto the DNA. These events depend on the assembly of several proteins to form the pre-initiation complex at the replication origins loaded with pre-replicative complexes. Assembly of the pre-initiation complex depends on the activities of S-Cdks and the protein kinase Cdc7. The pre-initiation complex activates the Mcm helicase and recruits DNA polymerase. When the cell commits to a new cell cycle, after passing through the Start checkpoint, G1 and G1/S cyclin CDK complexes are activated. These activate the expression of the replicative machinery and of S-Cdk cyclin complexes. S-Cdks and G1/S Cdks act to activate replication origins. At the same time, S-Cdks suppress formation of new pre-RCs during S phase, G2 and early M, when S cyclin levels remain high. Cdc7 is activated in late G1 and is required throughout S phase for origin firing. Mutations in this protein in budding yeast, and in its homolog in fission yeast, block initiation of replication. Cdc7 is highly conserved – related proteins have been identified in frogs and humans. DNA replication is inhibited when Cdc7 homologs are inhibited with antibodies in frog or human cells. It is not known whether CDKs and Cdc7 just regulate protein assembly at origins, or whether they directly activate components of the pre-initiation complex. Role of CDK In S. cerevisiae, the S cyclins Clb5 and Clb6 play an important role in initiating replication. In frog embryos, cyclin E-Cdk2 is primarily responsible for activating origins. Removal of cyclin E with antibodies blocks replication. Cyclin E-Cdk2 is also important in Drosophila. Levels of cyclin E rise during S phase and activate Cdk2. Role of Cdc7 Cdc7 levels remain relatively constant throughout the cell cycle, but its activity varies. Its activity is low in G1, increases in late G1, and remains high till late mitosis. Dbf4 is the key regulator of Cdc7 activity – association of Cdc7 with Dbf4 activates its kinase activity. 
In a similar manner to cyclin levels, Dbf4 levels fluctuate throughout the cell cycle. In vitro biochemical studies have shown that Cdc7-Dbf4 phosphorylates individual components of the Mcm complex. It also seems to be involved in the recruitment of Cdc45 to chromatin at the time of initiation. In Xenopus eggs, Cdc45 has been shown to interact with DNA polymerase α, and in yeast, mutations in Cdc45 prevent assembly of DNA pol α at origins, suggesting that Cdc45 recruits DNA pol α to chromatin in a Cdc7/Dbf4-dependent manner. References DNA replication
Control of chromosome duplication
Biology
2,401
63,763
https://en.wikipedia.org/wiki/Solved%20game
A solved game is a game whose outcome (win, lose or draw) can be correctly predicted from any position, assuming that both players play perfectly. This concept is usually applied to abstract strategy games, and especially to games with full information and no element of chance; solving such a game may use combinatorial game theory or computer assistance. Overview A two-player game can be solved on several levels: Ultra-weak solution Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides . This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any details of the perfect play. Weak solution Provide an algorithm that secures a win for one player, or a draw for either, against any possible play by the opponent, from the beginning of the game. Strong solution Provide an algorithm that can produce perfect play for both players from any position, even if imperfect play has already occurred on one or both sides. Despite their name, many game theorists believe that "ultra-weak" proofs are the deepest, most interesting and valuable. "Ultra-weak" proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized. By contrast, "strong" proofs often proceed by brute force—using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs are not as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win. Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively nothing more. As a simple example of a strong solution, the game of tic-tac-toe is easily solvable as a draw for both players with perfect play (a result manually determinable). Games like nim also admit a rigorous analysis using combinatorial game theory. Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g., Maharajah and the Sepoys). An ultra-weak solution (e.g., Chomp or Hex on a sufficiently large board) generally does not affect playability. Perfect play In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Perfect play for a game is known when the game is solved. Based on the rules of a game, every possible final position can be evaluated (as a win, loss or draw). By backward reasoning, one can recursively evaluate a non-final position as identical to the position that is one move away and best valued for the player whose move it is. 
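The backward-reasoning idea just described translates directly into a recursive minimax search. The sketch below strongly solves tic-tac-toe in this way, returning the game value of any position under perfect play; the board encoding and function names are illustrative choices, not from the article:

```python
from functools import lru_cache

# A board is a 9-character string, '.' for empty, read row by row.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board: str):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, to_move: str) -> int:
    """Game-theoretic value for X (+1 win, 0 draw, -1 loss) with perfect play."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0                                   # full board, no winner: draw
    other = 'O' if to_move == 'X' else 'X'
    children = [value(board[:i] + to_move + board[i+1:], other)
                for i, cell in enumerate(board) if cell == '.']
    return max(children) if to_move == 'X' else min(children)

print(value('.' * 9, 'X'))   # 0: tic-tac-toe is a draw under perfect play
```

Because the recursion evaluates every reachable position rather than just the opening, choosing a move that attains this value from any given position yields the kind of perfect play a strong solution requires.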
Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position would be a transition between positions that are equally evaluated. As an example, a perfect player in a drawn position would always get a draw or win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result. Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for rock paper scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome. Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this. Solved games Awari (a game of the Mancala family) The variant of Oware allowing game ending "grand slams" was strongly solved by Henri Bal and John Romein at the Vrije Universiteit in Amsterdam, Netherlands (2002). Either player can force the game into a draw. Chopsticks Strongly solved. If two players both play perfectly, the game will go on indefinitely. Connect Four Solved first by James D. Allen on October 1, 1988, and independently by Victor Allis on October 16, 1988. The first player can force a win. Strongly solved by John Tromp's 8-ply database (Feb 4, 1995). Weakly solved for all boardsizes where width+height is at most 15 (as well as 8×8 in late 2015) (Feb 18, 2006). Solved for all boardsizes where width+height equals 16 on May 22, 2024. Free gomoku Solved by Victor Allis (1993). The first player can force a win without opening rules. Ghost Solved by Alan Frank using the Official Scrabble Players Dictionary in 1987. Hexapawn 3×3 variant solved as a win for black, several other larger variants also solved. Kalah Most variants solved by Geoffrey Irving, Jeroen Donkers and Jos Uiterwijk (2000) except Kalah (6/6). The (6/6) variant was solved by Anders Carstensen (2011). Strong first-player advantage was proven in most cases. L game Easily solvable. Either player can force the game into a draw. Maharajah and the Sepoys This asymmetrical game is a win for the sepoys player with correct play. Nim Strongly solved. Nine men's morris Solved by Ralph Gasser (1993). Either player can force the game into a draw. Order and Chaos Order (First player) wins. Ohvalhu Weakly solved by humans, but proven by computers. (Dakon is, however, not identical to Ohvalhu, the game which actually had been observed by de Voogt) Pangki Strongly solved by Jason Doucette (2001). The game is a draw. There are only two unique first moves if you discard mirrored positions. One forces the draw, and the other gives the opponent a forced win in 15 moves. Pentago Strongly solved by Geoffrey Irving with use of a supercomputer at NERSC. The first player wins. Quarto Solved by Luc Goossens (1998). Two perfect players will always draw. 
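Nim, listed above as strongly solved, has a particularly compact solution (Bouton's theorem): under normal play the side to move wins exactly when the bitwise XOR of the heap sizes is non-zero, and a perfect move restores that XOR to zero. A short sketch of that rule (function names are illustrative):

```python
from functools import reduce
from operator import xor

def nim_player_to_move_wins(heaps: list[int]) -> bool:
    """Normal-play Nim: the player to move can force a win iff the nim-sum is non-zero."""
    return reduce(xor, heaps, 0) != 0

def nim_perfect_move(heaps: list[int]):
    """Return (heap_index, new_size) for a winning move, or None from a lost position."""
    nim_sum = reduce(xor, heaps, 0)
    if nim_sum == 0:
        return None                      # every move leaves a winning position for the opponent
    for i, h in enumerate(heaps):
        target = h ^ nim_sum             # the heap size that makes the overall XOR zero
        if target < h:
            return i, target
    return None                          # not reached when nim_sum != 0

print(nim_player_to_move_wins([3, 4, 5]))   # True
print(nim_perfect_move([3, 4, 5]))          # (0, 1): reduce the 3-heap to 1, nim-sum becomes 0
```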
Renju-like game without opening rules involved Claimed to be solved by János Wagner and István Virág (2001). A first-player win. Teeko Solved by Guy Steele (1998). Depending on the variant either a first-player win or a draw. Three men's morris Trivially solvable. Either player can force the game into a draw. Three musketeers Strongly solved by Johannes Laire in 2009, and weakly solved by Ali Elabridi in 2017. It is a win for the blue pieces (Cardinal Richelieu's men, or, the enemy). Tic-tac-toe Trivially strongly solvable because of the small game tree. The game is a draw if no mistakes are made, with no mistake possible on the opening move. Wythoff's game Strongly solved by W. A. Wythoff in 1907. Weak-solves English draughts (checkers) This 8×8 variant of draughts was weakly solved on April 29, 2007, by the team of Jonathan Schaeffer. From the standard starting position, both players can guarantee a draw with perfect play. Checkers has a search space of 5×10²⁰ possible game positions. The number of calculations involved was 10¹⁴, and these were done over a period of 18 years. The process involved as many as 200 desktop computers at its peak, later falling to around 50. Fanorona Weakly solved by Maarten Schadd. The game is a draw. Losing chess Weakly solved in 2016 as a win for White beginning with 1. e3. Othello (Reversi) Weakly solved in 2023 by Hiroki Takizawa, a researcher at Preferred Networks. From the standard starting position on an 8×8 board, perfect play by both players will result in a draw. Othello is the largest game solved to date, with a search space of 10²⁸ possible game positions. Pentominoes Weakly solved by H. K. Orman. It is a win for the first player. Qubic Weakly solved by Oren Patashnik (1980) and Victor Allis. The first player wins. Sim Weakly solved: win for the second player. Lambs and tigers Weakly solved by Yew Jin Lim (2007). The game is a draw. Partially solved games Chess Fully solving chess remains elusive, and it is speculated that the complexity of the game may preclude it ever being solved. Through retrograde computer analysis, endgame tablebases (strong solutions) have been found for all three- to seven-piece endgames, counting the two kings as pieces. Some variants of chess on a smaller board with reduced numbers of pieces have been solved. Some other popular variants have also been solved; for example, a weak solution to Maharajah and the Sepoys is an easily memorable series of moves that guarantees victory to the "sepoys" player. Go The 5×5 board was weakly solved for all opening moves in 2002. The 7×7 board was weakly solved in 2015. Humans usually play on a 19×19 board, which is over 145 orders of magnitude more complex than 7×7. Hex A strategy-stealing argument (as used by John Nash) shows that all square board sizes cannot be lost by the first player. Combined with a proof of the impossibility of a draw, this shows that the game is a first player win (so it is ultra-weak solved). On particular board sizes, more is known: it is strongly solved by several computers for board sizes up to 6×6. Weak solutions are known for board sizes 7×7 (using a swapping strategy), 8×8, and 9×9; in the 8×8 case, a weak solution is known for all opening moves. Strongly solving Hex on an N×N board is unlikely as the problem has been shown to be PSPACE-complete. If Hex is played on an N×(N + 1) board then the player who has the shorter distance to connect can always win by a simple pairing strategy, even with the disadvantage of playing second. 
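As a quick sanity check on the checkers figures quoted in this section, 10¹⁴ calculations spread over 18 years corresponds to a sustained rate on the order of 10⁵ calculations per second; the conversion below is plain arithmetic on the quoted numbers, not additional data from the article:

```python
calculations = 1e14
seconds = 18 * 365.25 * 24 * 3600              # 18 years is roughly 5.7e8 seconds
print(f"{calculations / seconds:,.0f} calculations per second, sustained")  # about 176,000

# For scale, Othello's quoted search space (1e28 positions) is roughly
# 2e7 times larger than the 5e20 positions quoted for checkers.
print(1e28 / 5e20)                              # 2e+07
```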
International draughts All endgame positions with two through seven pieces were solved, as well as positions with 4×4 and 5×3 pieces where each side had one king or fewer, positions with five men versus four men, positions with five men versus three men and one king, and positions with four men and one king versus four men. The endgame positions were solved in 2007 by Ed Gilbert of the United States. Computer analysis showed that it was highly likely to end in a draw if both players played perfectly. m,n,k-game It is trivial to show that the second player can never win; see strategy-stealing argument. Almost all cases have been solved weakly for k ≤ 4. Some results are known for k = 5. The games are drawn for k ≥ 8. See also Computer chess Computer Go Computer Othello Game complexity God's algorithm Zermelo's theorem (game theory) References Further reading Allis, Beating the World Champion? The state-of-the-art in computer game playing. in New Approaches to Board Games Research. External links Computational Complexity of Games and Puzzles by David Eppstein. GamesCrafters solving two-person games with perfect information and no chance Mathematical games Abstract strategy games Combinatorial game theory
Solved game
Mathematics
2,610
43,611,767
https://en.wikipedia.org/wiki/Pacifastin
Pacifastin is a family of serine proteinase inhibitors found in arthropods. Pacifastin inhibits the serine peptidases trypsin and chymotrypsin. All pacifastin members that have been characterized at the molecular level are precursor peptides composed of an N-terminal signal sequence followed by a precursor domain and a variable number of inhibitor domains. Each of these inhibitor domains carries a six-cysteine motif – see below. The first family members to be identified, HI, LMCI-1 (PMP-D2) and LMCI-2 (PMP-C), were isolated from Locusta migratoria migratoria (migratory locust). A further five members, SGPI-1 to 5, were then isolated from Schistocerca gregaria (desert locust), and a heterodimeric serine protease inhibitor was isolated from the haemolymph of Pacifastacus leniusculus (Signal crayfish), and named pacifastin. Function Peptide proteinase inhibitors are in many cases synthesised as part of a larger precursor protein, referred to as a propeptide or zymogen, which remains inactive until the precursor domain is cleaved off in the lysosome, the precursor domain preventing access of the substrate to the active site until necessary. Proteinase inhibitors destined for secretion have an additional N-terminal signal-peptide domain which will be cleaved by a signal-peptidase. Removal of these one or two N-terminal domains, either by interaction with a second peptidase or by autocatalytic cleavage, will activate the zymogen. Very little is known about the endogenous function of pacifastin-like inhibitors except that they may play roles in arthropod immunity and in regulation of the physiological processes involved in insect reproduction. Structure The inhibitor unit of pacifastin carries a conserved pattern of six cysteine residues (Cys1 – Xaa9–12 – Cys2 – Asn – Xaa – Cys3 – Xaa – Cys4 – Xaa2–3 – Gly – Xaa3–6 – Cys5 – Thr – Xaa3 – Cys6). Detailed analysis of the 3-D structure shows that these six residues form three disulfide bridges (Cys1–4, Cys2–6, Cys3–5), giving members of the pacifastin family a typical fold and remarkable stability. Pacifastin is a 155 kDa protein composed of two covalently linked subunits, which are separately encoded. The heavy chain of pacifastin (105 kDa) is related to transferrins as it carries three transferrin lobes, two of which seem to be active in iron binding. A number of the transferrin family members are also serine peptidases, and belong to MEROPS peptidase family S60 (INTERPRO). The light chain of pacifastin (44 kDa) is the proteinase inhibitory subunit, and consists of up to nine cysteine-rich inhibitory domains that are homologous to each other. The locust inhibitors share a conserved array of six residues with the pacifastin light chain. The structure of members of this family reveals that they consist of a triple-stranded antiparallel beta-sheet connected by three disulphide bridges. This family of serine protease inhibitors belongs to MEROPS inhibitor family I19, clan IW. They inhibit chymotrypsin, a peptidase belonging to the S1 family (INTERPRO). References Protein families
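The six-cysteine consensus given above maps directly onto a sequence pattern, which makes it easy to scan protein sequences for candidate pacifastin-like inhibitor domains. The sketch below is only an illustration of that mapping; the regular expression encodes the motif as quoted, and the example sequence is made up:

```python
import re

# Pacifastin consensus, as quoted above (Xaa = any residue):
# Cys-Xaa(9-12)-Cys-Asn-Xaa-Cys-Xaa-Cys-Xaa(2-3)-Gly-Xaa(3-6)-Cys-Thr-Xaa(3)-Cys
PACIFASTIN_MOTIF = re.compile(r"C.{9,12}CN.C.C.{2,3}G.{3,6}CT.{3}C")

def find_motifs(sequence: str):
    """Return (start, end, matched_segment) for each motif hit in a protein sequence."""
    return [(m.start(), m.end(), m.group()) for m in PACIFASTIN_MOTIF.finditer(sequence)]

# A made-up sequence containing one conforming stretch, purely to exercise the pattern.
toy = "MKAVLLA" "C" "AAAAAAAAA" "CN" "A" "C" "A" "C" "AA" "G" "AAA" "CT" "AAA" "C" "KK"
print(find_motifs(toy))   # one hit spanning the constructed motif
```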
Pacifastin
Biology
783
698,830
https://en.wikipedia.org/wiki/Street%20light
A street light, light pole, lamp pole, lamppost, streetlamp, light standard, or lamp standard is a raised source of light on the edge of a road or path. Similar lights may be found on a railway platform. When urban electric power distribution became ubiquitous in developed countries in the 20th century, lights for urban streets followed, or sometimes led. Many lamps have light-sensitive photocells that activate the lamp automatically when needed, at times when there is little-to-no ambient light, such as at dusk, dawn, or the onset of dark weather conditions. This function in older lighting systems could be performed with the aid of a solar dial. Many street light systems are being connected underground instead of wiring from one utility post to another. Street lights are an important source of public security lighting intended to reduce crime. History Preindustrial era Early lamps were used in the Ancient Greek and Ancient Roman civilizations, where light primarily served the purpose of security, to both protect the wanderer from tripping on the path over something and keep potential robbers at bay. At that time, oil lamps were used predominantly, as they provided a long-lasting and moderate flame. A slave responsible for lighting the oil lamps in front of Roman villas was called a . In the words of Edwin Heathcote, "Romans illuminated the streets with oil lamps, and cities from Baghdad to Cordoba were similarly lit when most of Europe was living in what it is now rather unfashionable to call the Dark Ages but which were, from the point of view of street lighting, exactly that." So-called "link boys" escorted people from one place to another through the murky, winding streets of medieval towns. Before incandescent lamps, candle lighting was employed in cities. The earliest lamps required that a lamplighter tour the town at dusk, lighting each of the lamps. According to some sources, illumination was ordered in London in 1417 by Sir Henry Barton, Mayor of London though there is no firm evidence of this. Public street lighting was first developed in the 16th century, and accelerated following the invention of lanterns with glass windows by Edmund Heming in London and Jan van der Heyden in Amsterdam, which greatly improved the quantity of light. In 1588 the Parisian Parliament decreed that a torch be installed and lit at each intersection, and in 1594 the police changed this to lanterns. Still, in the mid 17th century it was a common practice for travelers to hire a lantern-bearer if they had to move at night through the dark, winding streets. King Louis XIV authorized sweeping reforms in Paris in 1667, which included the installation and maintenance of lights on streets and at intersections, as well as stiff penalties for vandalizing or stealing the fixtures. Paris had more than 2,700 streetlights by the end of the 17th century, and twice as many by 1730. Under this system, streets were lit with lanterns suspended apart on a cord over the middle of the street at a height of ; as an English visitor enthused in 1698, 'The streets are lit all winter and even during the full moon!' In London, public street lighting was implemented around the end of the 17th century; a diarist wrote in 1712 that 'All the way, quite through Hyde Park to the Queen's Palace at Kensington, lanterns were placed for illuminating the roads on dark nights.' A much-improved oil lantern, called a , was introduced in 1745 and improved in subsequent years. 
The light shed from these réverbères was considerably brighter, enough that some people complained of glare. These lamps were attached to the top of lampposts; by 1817, there were 4,694 lamps on the Paris streets. During the French Revolution (1789–1799), the revolutionaries found that the lampposts were a convenient place to hang aristocrats and other opponents. Gas lamp lighting The first widespread system of street lighting used piped coal gas as fuel. Stephen Hales was the first person who procured a flammable fluid from the actual distillation of coal in 1726 and John Clayton, in 1735, called gas the "spirit" of coal and discovered its flammability by accident. William Murdoch (sometimes spelled "Murdock") was the first to use this gas for the practical application of lighting. In the early 1790s, while overseeing the use of his company's steam engines in tin mining in Cornwall, Murdoch began experimenting with various types of gas, finally settling on coal-gas as the most effective. He first lit his own house in Redruth, Cornwall in 1792. In 1798, he used gas to light the main building of the Soho Foundry and in 1802 lit the outside in a public display of gas lighting, the lights astonishing the local population. The first public street lighting with gas was demonstrated in Pall Mall, London on 4 June 1807 by Frederick Albert Winsor. In 1811, Engineer Samuel Clegg designed and built what is now considered the oldest extant gasworks in the world. Gas was used to light the worsted mill in the village of Dolphinholme in North Lancashire. The remains of the works, including a chimney and gas plant, have been put on the National Heritage List for England. Clegg's installation saved the building's owners the cost of up to 1,500 candles every night. It also lit the mill owner's house and the street of millworkers' houses in Dolphinholme. In 1812, Parliament granted a charter to the London and Westminster Gas Light and Coke Company, and the first gas company in the world came into being. Less than two years later, on 31 December 1813, the Westminster Bridge was lit by gas. Following this success, gas lighting spread outside London, both within Britain and abroad. The first place outside London in England to have gas lighting, was Preston, Lancashire in 1816, where Joseph Dunn's Preston Gaslight Company introduced a new, brighter gas lighting. Another early adopter was the city of Baltimore, where the gaslights were first demonstrated at Rembrandt Peale's Museum in 1816, and Peale's Gas Light Company of Baltimore provided the first gas streetlights in the United States. In the 1860s, streetlights were started in the Southern Hemisphere in New Zealand. Kerosene streetlamps were invented by Polish pharmacist Ignacy Łukasiewicz in the city of Lemberg (Austrian Empire), in 1853. His kerosene lamps were later widely used in Bucharest, Paris, and other European cities. He went on to open the world's first mine in 1854 and the world's first kerosene refinery in 1856 in Jasło, Poland. In Paris, public street lighting was first installed on a covered shopping street, the Passage des Panoramas, in 1817, private interior gas lighting having been previously demonstrated in a house on the rue Saint-Dominique seventeen years prior. The first gas lamps on the main streets of Paris appeared in January 1829 on the place du Carrousel and the Rue de Rivoli, then on the rue de la Paix, place Vendôme, and rue de Castiglione. 
By 1857, the Grands Boulevards were all lit with gas; a Parisian writer enthused in August 1857: "That which most enchants the Parisians is the new lighting by gas of the boulevards...From the church of the Madeleine all the way to rue Montmartre, these two rows of lamps, shining with a clarity white and pure, have a marvelous effect." The gaslights installed on the boulevards and city monuments in the 19th century gave the city the nickname "The City of Light." Oil-gas appeared in the field as a rival of coal-gas. In 1815, John Taylor patented an apparatus for the decomposition of "oil" and other animal substances. Public attention was attracted to "oil-gas" by the display of the patent apparatus at Apothecary's Hall, by Taylor & Martineau. Farola fernandina Farola fernandina is a traditional design of gas streetlight which remains popular in Spain. Essentially, it is a neoclassical French style of gas lamp dating from the late 18th century. It may be either a wall-bracket or standard lamp. The standard base is cast metal with an escutcheon bearing two intertwined letters 'F', the Royal cypher of King Ferdinand VII of Spain and commemorates the date of the birth of his daughter, the Infanta Luisa Fernanda, Duchess of Montpensier. Arc lamps The first electric street lighting employed arc lamps, initially the "electric candle", "Jablotchkoff candle", or "Yablochkov candle", developed by Russian Pavel Yablochkov in 1875. This was a carbon arc lamp employing alternating current, which ensured that both electrodes were consumed at equal rates. In 1876, the common council of the city of Los Angeles ordered four arc lights installed in various places in the fledgling town for street lighting. On 30 May 1878, the first electric streetlights in Paris were installed on the avenue de l'Opera and the Place de l'Étoile, around the Arc de Triomphe, to celebrate the opening of the Paris Universal Exposition. In 1881, to coincide with the Paris International Exposition of Electricity, streetlights were installed on the major boulevards. The first streets in London lit with the electrical arc lamp were by the Holborn Viaduct and the Thames Embankment in 1878. More than 4,000 were in use by 1881, though by then an improved differential arc lamp had been developed by Friedrich von Hefner-Alteneck of Siemens & Halske. The United States was quick in adopting arc lighting, and by 1890 over 130,000 were in operation in the US, commonly installed in exceptionally tall moonlight towers. Arc lights had two major disadvantages. First, they emit an intense and harsh light which, although useful at industrial sites like dockyards, was discomforting in ordinary city streets. Second, they are maintenance-intensive, as carbon electrodes burn away swiftly. With the development of cheap, reliable and bright incandescent light bulbs at the end of the 19th century, arc lights passed out of use for street lighting, but remained in industrial use longer. Incandescent lighting The first street to be lit by an incandescent lightbulb was Mosley Street, in Newcastle. The street was lit for one night by Joseph Swan's incandescent lamp on 3 February 1879. Consequently, Newcastle has the first city street in the world to be lit by electric lighting. The first city in the United States to successfully demonstrate electric lighting was Cleveland, Ohio, with 12 electric lights around the Public Square road system on 29 April 1879. 
Wabash, Indiana, lit 4 Brush arc lamps with 3,000 candlepower each, suspended over their courthouse on 2 February 1880, making the town square "as light as midday". Kimberley, Cape Colony (modern South Africa), was the first city in the Southern Hemisphere and in Africa to have electric streetlights – with 16 first lit on 2 September 1882. The system was only the second in the world, after that of Philadelphia, to be powered municipally. In Central America, San Jose, Costa Rica, lit 25 lamps powered by a hydroelectric plant on 9 August 1884. Nuremberg was the first city in Germany to have electric public lighting on 7 June 1882, followed by Berlin on 20 September 1882 (Potsdamer Platz only). Temesvár (Timișoara in present-day Romania) was the first city in the Austro-Hungarian Monarchy to have electric public lighting, on 12 November 1884; 731 lamps were used. On 9 December 1882, Brisbane, Queensland, Australia was introduced to electricity by having a demonstration of 8 arc lights, erected along Queen Street Mall. The power to supply these arc lights was taken from a 10 hp Crompton DC generator driven by a Robey steam engine in a small foundry in Adelaide Street occupied by J. W. Sutton and Co. In 1884, Walhalla, Victoria, had two lamps installed on the main street by the Long Tunnel (Gold) Mining Company. In 1886, the isolated mining town of Waratah in Tasmania was the first to have an extensive system of electrically powered street lighting installed. In 1888, the New South Wales town of Tamworth installed a large system illuminating a significant portion of the city, with over 13 km of streets lit by 52 incandescent lights and 3 arc lights. Powered by a municipal power company, this system gave Tamworth the title of "First City of Light" in Australia. On 10 December 1885, Härnösand became the first town in Sweden with electric street lighting, following the Gådeå power station being taken into use. Later developments Incandescent lamps were primarily used for street lighting until the advent of high-intensity gas-discharge lamps. They were often operated in high-voltage series circuits. Series circuits were popular since their higher voltage produced more light per watt consumed. Furthermore, before the invention of photoelectric controls, a single switch or clock could control all the lights in an entire district. To avoid having the entire system go dark if a single lamp burned out, each streetlamp was equipped with a device that ensured that the circuit would remain intact. Early series streetlights were equipped with isolation transformers that would allow current to pass across the transformer whether the bulb worked or not. Later, the film cutout was invented. This was a small disk of insulating film that separated two contacts connected to the two wires leading to the lamp. If the lamp failed (an open circuit), the current through the string became zero, causing the voltage of the circuit (thousands of volts) to be imposed across the insulating film, penetrating it (see Ohm's law). In this way, the failed lamp was bypassed and power was restored to the rest of the district. The streetlight circuit contained an automatic current regulator, preventing the current from increasing as lamps burned out, preserving the life of the remaining lamps. When the failed lamp was replaced, a new piece of film was installed, once again separating the contacts in the cutout. 
This system was recognizable by the large porcelain insulator separating the lamp and reflector from the mounting arm. This was necessary because the two contacts in the lamp's base may have operated at several thousand volts above ground. Modern lights Today, street lighting commonly uses high-intensity discharge lamps. Low-pressure sodium (LPS) lamps became commonplace after World War II for their low power consumption and long life. Late in the 20th century, high-pressure sodium (HPS) lamps were preferred, taking further the same virtues. Such lamps provide the greatest amount of photopic illumination for the least consumption of electricity. Two national standards now allow for variation in illuminance when using lamps of different spectra. In Australia, HPS lamp performance needs to be reduced by a minimum value of 75%. In the UK, illuminances are reduced with higher values S/P ratio. New street lighting technologies, such as LED or induction lights, emit a white light that provides high levels of scotopic lumens. It is a commonly accepted practice to justify and implement a lower luminance level for roadway lighting based on increased scotopic lumens provided by white light. However, this practice fails to provide the context needed to apply laboratory-based visual performance testing to the real world. Critical factors such as visual adaptation are left out of this practice of lowering luminance levels, leading to reduced visual performance. Additionally, there have been no formal specifications written around Photopic/Scotopic adjustments for different types of light sources, causing many municipalities and street departments to hold back on implementation of these new technologies until the standards are updated. Eastbourne in East Sussex, UK is currently undergoing a project to see 6000 of its streetlights converted to LED and will be closely followed by Hastings in early 2014. Many UK councils are undergoing mass-replacement schemes to LED, and though streetlights are being removed along many long stretches of UK motorways (as they are not needed and cause light pollution), LEDs are preferred in areas where lighting installations are necessary. Milan, Italy, is the first major city to have entirely switched to LED lighting. In North America, the city of Mississauga, Canada was one of the first and largest LED conversion projects, with over 46,000 lights converted to LED technology between 2012 and 2014. It is also one of the first cities in North America to use Smart City technology to control the lights. DimOnOff, a company based in Quebec City, was chosen as a Smart City partner for this project. In the United States, the city of Ann Arbor, Michigan was the first metropolitan area to fully implement LED street lighting in 2006. Since then, sodium-vapor lamps were slowly being replaced by LED lamps. Photovoltaic-powered LED luminaires are gaining wider acceptance. Preliminary field tests show that some LED luminaires are energy-efficient and perform well in testing environments. In 2007, the Civil Twilight Collective created a variant of the conventional LED streetlight, namely the Lunar-resonant streetlight. These lights increase or decrease the intensity of the streetlight according to the lunar light. This streetlight design thus reduces energy consumption as well as light pollution. Measurement Two very similar measurement systems were created to bridge the scotopic and photopic luminous efficiency functions, creating a Unified System of Photometry. 
These mesopic visual performance models are conducted in laboratory conditions in which the viewer is not exposed to higher levels of luminance than the level being tested for. Further research is needed to bring additional factors into these models such as visual adaptation and the biological mechanics of rod cells before these models are able to accurately predict visual performance in real world conditions. The current understanding of visual adaptation and rod cell mechanics suggests that any benefits from rod-mediated scotopic vision are difficult, if not impossible, to achieve in real world conditions under the presence of high luminance light sources. Outdoor Site-Lighting Performance (OSP) is a method for predicting and measuring three different aspects of light pollution: glow, trespass and glare. Using this method, lighting specifiers can quantify the performance of existing and planned lighting designs and applications to minimize excessive or obtrusive light leaving the boundaries of a property. Advantages Major advantages of street lighting include prevention of automobile accidents and increase in safety. Studies have shown that darkness results in numerous crashes and fatalities, especially those involving pedestrians; pedestrian fatalities are 3 to 6.75 times more likely in the dark than in daylight. At least in the 1980s and 1990s, when automobile crashes were far more common, street lighting was found to reduce pedestrian crashes by approximately 50%. Furthermore, in the 1970s, lighted intersections and highway interchanges tended to have fewer crashes than unlighted intersections and interchanges. Some say lighting reduces crime, as many would expect. However, others say any correlation (let alone causation) is not found in the data. Towns, cities, and villages can use the unique locations provided by lampposts to hang decorative or commemorative banners. Many communities in the US use lampposts as a tool for fundraising via lamppost banner sponsorship programs first designed by a US-based lamppost banner manufacturer. Disadvantages The major criticisms of street lighting are that it can actually cause accidents if misused, and cause light pollution. Health and safety There are three optical phenomena that need to be recognized in streetlight installations. The loss of night vision because of the accommodation reflex of drivers' eyes is the greatest danger. As drivers emerge from an unlighted area into a pool of light from a streetlight their pupils quickly constrict to adjust to the brighter light, but as they leave the pool of light the dilation of their pupils to adjust to the dimmer light is much slower, so they are driving with impaired vision. Additionally, research has shown that pupil reflexes are more pronounced and post-light recovery takes longer after exposure to blue light compared to red light. This poses a continually increasing risk as blue-rich light sources become more common in roadway lighting and vehicle headlights. As a person gets older, the eye's recovery speed gets slower, so driving time and distance under impaired vision increases. The loss of night vision due to visual adaptation of retinal cells to a higher luminance level provided by streetlights. Once an individual leaves the illuminated area their retinal cells require adaptation time before they are sensitive to objects and motion under the lower luminance levels of unlit areas. Oncoming headlights are more visible against a black background than a grey one. 
The contrast creates greater awareness of the oncoming vehicle. Stray voltage is also a concern in many cities. Stray voltage can accidentally electrify lampposts and has the potential to injure or kill anyone who comes into contact with the post. There are also physical dangers to the posts of streetlamps, other than children climbing them for recreational purposes. Streetlight stanchions (lampposts) pose a collision risk to motorists and pedestrians, particularly those affected by poor eyesight or under the influence of alcohol. This can be reduced by designing them to break away when hit (known as frangible, collapsible, or passively safe supports), protecting them by guardrails, or marking the lower portions to increase their visibility. High winds or accumulated metal fatigue also occasionally topple streetlights. Light pollution Astronomy Light pollution can hide the stars and interfere with astronomy. In settings near astronomical telescopes and observatories, low-pressure sodium lamps may be used. These lamps are advantageous over other lamps such as mercury and metal halide lamps because low-pressure sodium lamps emit lower intensity, monochromatic light. Observatories can filter the sodium wavelength out of their observations and virtually eliminate the interference from nearby urban lighting. Full cutoff streetlights also reduce light pollution by reducing the amount of light that is directed at the sky, which also improves the luminous efficiency of the light. Ecosystems Streetlights can impact biodiversity and ecosystems—for instance, disrupting the migration of some nocturnally migrating bird species. In the Netherlands, Philips found that birds can get disoriented by the red wavelengths in street lighting, and in response developed alternative lighting that emits only in the green and blue wavelengths of the visible spectrum. The lamps were installed on Ameland in a small-scale test. If successful, the technology could be used on ships and offshore installations to avoid luring birds towards the open sea at night. Bats can be negatively impacted by streetlights, with evidence showing that red light can be the least harmful. As a result, some areas have installed red LED streetlights to minimise disruption to bats. A study published in Science Advances reported that streetlights in southern England had detrimental impacts on local insect populations. Streetlights can also impact plant growth and the number of insects that depend on plants for food. Energy consumption As of 2017, globally 70% of all electricity was generated by burning fossil fuels, a source of air pollution and greenhouse gases, and there are approximately 300 million streetlights worldwide drawing on that electricity. Cities are exploring more efficient energy use, reducing streetlight power consumption by dimming lights during off-peak hours and switching to LED streetlights which illuminate a smaller area to a lower level of luminance. Many councils are using a part-night lighting scheme to turn off lighting at quieter times of night, typically midnight to 5:30 AM. There have, however, been questions about the impact on crime rates. Typical collector road lighting in New York State costs $6,400/mile/year for high pressure sodium at 8.5 kW/mile or $4,000 for light-emitting diode luminaires at 5.4 kW/mile. Improvements can be made by optimising directionality and shape, however. 
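To put the New York figures just quoted in perspective, the sketch below converts the per-mile power ratings into annual energy and the implied savings. The 4,100 operating hours per year are an assumed, roughly dusk-to-dawn figure and not a number from the article; the kW/mile and $/mile/year values are the ones quoted above:

```python
OPERATING_HOURS = 4100                  # assumed hours of operation per year (dusk to dawn)

hps_kw_per_mile, hps_cost_per_mile = 8.5, 6400    # high-pressure sodium, as quoted
led_kw_per_mile, led_cost_per_mile = 5.4, 4000    # LED luminaires, as quoted

hps_kwh = hps_kw_per_mile * OPERATING_HOURS       # about 34,850 kWh per mile per year
led_kwh = led_kw_per_mile * OPERATING_HOURS       # about 22,140 kWh per mile per year

# The energy saving depends only on the power ratio, so the assumed hours cancel out.
print(f"energy saving: {1 - led_kwh / hps_kwh:.0%}")                      # about 36%
print(f"cost saving:   {1 - led_cost_per_mile / hps_cost_per_mile:.0%}")  # about 38%
```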
Transitioning to wide angle lights enabled the doubling of distance between streetlights in Flanders from 45 m to 90 m, cutting annual street lighting electricity expenditures to €9 million for the 2,150 km long network that was retrofitted, corresponding to ca. €4,186/km. Street light control systems A number of street light control systems have been developed to control and reduce energy consumption of a town's public lighting system. These range from controlling a circuit of street lights and/or individual lights with specific ballasts and network operating protocols. These may include sending and receiving instructions via separate data networks, at high frequency over the top of the low voltage supply or wireless. Street light controllers are smarter versions of the mechanical or electronic timers previously used for street light ON-OFF operation. They come with energy conservation options like twilight saving, staggering or dimming. Many street light controllers come with an astronomical clock for a particular location or a Global Positioning System (GPS) connection to give the best ON-OFF time and energy saving. Accessories Some intelligent street light controllers also come with Global System for Mobile Communications (GSM), Radio frequency (RF) or General Packet Radio Service (GPRS) communication, user adjusted according to latitude and longitude (low cost type), for better street light management and maintenance. Many street light controllers also come with traffic sensors to manage the lux level of the lamp according to the traffic and to save energy by decreasing lux when there is no traffic. The United States, Canada, India, and many other countries have started introducing street light controllers to their road lighting for energy conservation, street light management and maintenance purpose. Economics Street light controllers can be expensive in comparison with normal timers, and can cost between $100 and $2,500, but most of them return the investment between 6 months and 2 years. As the equipment's lifetime is 7 to 10 years, it saves energy and cost after the initial investment has been recouped. Image-based street light control A number of companies are now manufacturing intelligent street lighting that adjust light output based on usage and occupancy, i.e. automating classification of pedestrian versus cyclist, versus automobile, sensing also velocity of movement and illuminating a certain number of streetlights ahead and fewer behind, depending on velocity of movement. Also, the lights adjust depending on road conditions, for example, snow produces more reflectance therefore reduced light is required. Purpose There are three distinct main uses of street lights, each requiring different types of lights and placement. Using the wrong types of lights can make the situation worse by compromising visibility or safety. Beacon lights A modest steady light at the intersection of two roads is an aid to navigation because it helps a driver see the location of a side road as they come closer to it, so that they can adjust their braking and know exactly where to turn if they intend to leave the main road or see vehicles or pedestrians. A beacon light's function is to say "here I am" and even a dim light provides enough contrast against the dark night to serve the purpose. To prevent the dangers caused by a car driving through a pool of light, a beacon light must never shine onto the main road, and not brightly onto the side road. 
In residential areas, this is usually the only appropriate lighting, and it has the bonus side effect of providing spill lighting onto any sidewalk there for the benefit of pedestrians. On Interstate highways, this purpose is commonly served by placing reflectors at the sides of the road. Roadway lights Because of the dangers discussed above, roadway lights are properly used sparingly and only when a particular situation justifies increasing the risk. This usually involves an intersection with several turning movements and much signage, situations where drivers must take in much information quickly that is not in the headlights' beam. In these situations (a freeway junction or exit ramp), the intersection may be lit so that drivers can quickly see all hazards, and a well-designed plan will have gradually increasing lighting for approximately a quarter of a minute before the intersection and gradually decreasing lighting after it. The main stretches of highways remain unlighted to preserve the driver's night vision and increase the visibility of oncoming headlights. If there is a sharp curve where headlights will not illuminate the road, a light on the outside of the curve is often justified. If it is desired to light a roadway (perhaps due to heavy and fast multi-lane traffic), to avoid the dangers of casual placement of street lights, it should not be lit intermittently since this requires repeated eye readjustment, which causes eyestrain and temporary blindness when entering and leaving light pools. In this case, the system is designed to eliminate the need for headlights. This is usually achieved with bright lights placed on high poles at close, regular intervals so that there is consistent light along the route. The lighting goes from curb to curb. Cycle path lights Policies that encourage utility cycling have been proposed and implemented, including lighting bike paths to increase safety at night. Usage on rail transport Lights similar to street lights are used on railway platforms at train stations in the open air. Their purpose is similar to that of beacon lights: they help a train driver see the location of a station at night as the train comes closer to it, so that the driver can adjust the braking and know exactly where to stop. A train station light must never shine directly onto the tracks, and has the bonus side effect of providing spill lighting onto any platform for the benefit of passengers waiting there. Maintenance Street lighting systems require ongoing maintenance, which can be classified as either reactive or preventative. Reactive maintenance is a direct response to a lighting failure, such as replacing a discharge lamp after it has failed or replacing an entire lighting unit after it has been hit by a vehicle. Preventative maintenance is the scheduled replacement of lighting components, for example, replacing all the discharge lamps in an area of the city when they have reached 85% of their expected life. Maintenance may be undertaken by the lighting owners or by a contractor. In the United Kingdom, the Roads Liaison Group has issued a Code of Practice recommending specific reactive and preventative maintenance procedures. Some street lights in New York City have an orange or red light on top of the luminaire (light fixture) or a red light attached to the lamppost. This indicates that near to this lighting pole or in the same intersection, there is a fire alarm pull box. 
Other street lights have a small red light next to the street light bulb; when the small light flashes, it indicates an issue with the electric current. Street lights as public goods Street lights are the basic example of public goods, which are nonexcludable and nonrival. This means that the producer cannot prevent those who do not pay from consuming, and the consumption of one person cannot prevent the consumption of another person. This becomes a problem for governments, because no private company would have the incentive to produce street lights, which is why most governments are in charge of placing and maintaining street lights. For example, in Armenia, building and maintaining infrastructure is the duty of local self-governance. See also Charging station Floodlight Gas lighting History of street lighting in the United States Intelligent street lighting Lighting-up time Light pollution List of light sources Solar street light Street furniture Street light interference References Bibliography Further reading External links An enthusiast's guide to street lighting – including many close-up photographs of UK street lighting equipment, as well as information on installations through the ages. (UK) Example Installation of Integrated Renewable Power in Street Lighting, an example of a street lighting system with integrated solar and wind generator from Panasonic/Matsushita Transportation Lighting at the Lighting Research Center Lighting Research at the University of Sheffield Light fixtures Light pollution Street furniture Urban planning
Street light
Engineering
6,485
73,252,403
https://en.wikipedia.org/wiki/Carbon-carbon%20bond%20activation
Carbon-carbon bond activation refers to the breaking of carbon-carbon bonds in organic molecules. This process is an important tool in organic synthesis, as it allows for the formation of new carbon-carbon bonds and the construction of complex organic molecules. However, C–C bond activation is challenging mainly for the following reasons: (1) C-H bond activation competes with C-C activation and is both energetically and kinetically more favorable; (2) the accessibility of the transition metal center to C–C bonds is generally difficult due to their 'hidden' nature; (3) the relatively high stability of the C–C bond (about 90 kcal/mol). As a result, most early examples of C-C activation involve strained ring systems, which make C-C activation more favorable by increasing the energy of the starting material. However, C-C activation of unstrained C-C bonds remained challenging until the last two decades. Examples of C-C bond activation Due to the difficulty of C-C activation, a driving force is required to facilitate the reaction. One common strategy is to form stable metal complexes. One example was reported by Milstein and coworkers, in which the C(sp2)–C(sp3) bond of bisphosphine ligands was selectively cleaved by a number of metals to afford stable pincer complexes under mild conditions. Aromatization is another driving force that is utilized for C–C bond activation. For example, the Chaudret group reported that the C–C bond of steroid compounds can be cleaved through the Ru-promoted aromatization of the B ring. At the same time, a methane molecule is released, which is possibly another driving force for this reaction. In addition, metalloradicals have also been proven to have the ability to cleave C–C single bonds. The Chan group reported the C–C bond scission of cyclooctane via 1,2-addition with Rh(III) porphyrin hydride, which involved the [Rh(II)(ttp)]· radical as the key intermediate. Mechanism of C-C bond activation Generally speaking, there are two distinct mechanistic pathways that lead to C-C bond activation: (a) the β-carbon elimination of metal complexes, in which an M–C intermediate and a double bond are formed at the same time; and (b) the direct oxidative addition of C–C bonds into low-valent metal adducts to form a bis(organyl)metal complex. β-carbon elimination In 1997, the Tamaru group reported the first metal-catalyzed β-carbon elimination of an unstrained compound. Their work revealed a novel Pd(0)-catalyzed ring opening of 4-vinyl cyclic carbonates. They proposed that the reaction is initiated by the elimination of carbon dioxide to form a π-allylpalladium intermediate, which is followed by β-decarbopalladation to form dienals and dienones. Since then, this field has bloomed, and many similar reactions have been developed, showing their great potential in organic synthesis. Early research in this field focused on the reaction of M–O–C–C species; β-carbon elimination of M–N–C–C intermediates was not discovered until the last ten years. In 2010, Nakamura reported a Cu-catalyzed substitution reaction of propargylic amines with alkynes or other amines as the first example of the transition-metal-catalyzed β-carbon elimination of amines. Oxidative addition Compared with β-carbon elimination, oxidative addition of the C-C bond is a more direct way of C-C bond activation. 
However, it is more challenging for the following reasons: (1) it forms two relatively weak M–C bonds at the expense of breaking a stable C–C bond, so it is energetically unfavorable; (2) the C–C bond is usually sterically hindered, which makes it hard for the metal center to approach. As a result, the cleavage of unstrained compounds that has been achieved is mainly focused on ketone substrates. This is because the C–C bond adjacent to the carbonyl of ketones is weaker and can be cleaved much more easily; cleavage also benefits from the reduced steric hindrance of the planar carbonyl motif. Suggs and Jun are pioneers in this field. They found that an Rh(I) complex, [RhCl(C2H4)2]2, can insert oxidatively into the C–C bond of 8-acylquinolines at the 8-position to form relatively stable 5-membered rhodacycles. Subsequently, 8-acylquinoline can be coupled with ethylene to afford 8-quinolinyl ethyl ketone, which represented the first transition-metal-catalyzed scission of C–C bonds via oxidative addition. Applications of C-C bond activation Carbon-carbon bond activation reactions have numerous applications in organic synthesis, materials science, and pharmaceuticals. In organic synthesis, these reactions are used to construct complex molecules in a highly efficient and selective manner. For example, in 2021 the Dong group described the first enantioselective total synthesis of the natural product penicibilaenes using a late-stage carbon-carbon bond activation strategy. Many other examples highlight the potential of carbon-carbon bond activation strategies in the total synthesis of complex natural products with high stereocontrol. References Organic chemistry Chemical bonding
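The strain-release argument invoked above for strained rings can be made semi-quantitative. The estimate below is not taken from the article: it combines the roughly 90 kcal/mol C–C bond strength quoted earlier with a commonly cited ring-strain energy for cyclopropane (about 27 kcal/mol), so the numbers are illustrative only.

% Rough effective cost of cleaving a C-C bond in a strained three-membered ring.
% D(C-C) is the value quoted in the article; E_strain is a commonly cited figure
% for cyclopropane. Real substrates vary considerably.
\[
  \Delta H_{\mathrm{eff}} \;\approx\; D(\mathrm{C{-}C}) - E_{\mathrm{strain}}
  \;\approx\; 90\ \mathrm{kcal/mol} - 27\ \mathrm{kcal/mol}
  \;\approx\; 63\ \mathrm{kcal/mol}
\]

On this crude picture, releasing the ring strain offsets roughly a third of the bond dissociation energy, which is consistent with the observation above that early C–C activation chemistry was dominated by strained-ring substrates.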
Carbon-carbon bond activation
Physics,Chemistry,Materials_science
1,181
24,324,550
https://en.wikipedia.org/wiki/Armillaria%20paulensis
Armillaria paulensis is a species of mushroom in the family Physalacriaceae. This species is found in South America. See also List of Armillaria species References paulensis Fungal tree pathogens and diseases Fungus species
Armillaria paulensis
Biology
48
40,592
https://en.wikipedia.org/wiki/Darbepoetin%20alfa
Darbepoetin alfa (INN) is a re-engineered form of erythropoietin containing 5 amino acid changes (N30, T32, V87, N88, T90) resulting in the creation of 2 new sites for N-linked carbohydrate addition. It has a 3-fold longer serum half-life compared to epoetin alpha and epoetin beta. It stimulates erythropoiesis (increases red blood cell levels) by the same mechanism as rHuEpo (binding and activating the Epo receptor) and is used to treat anemia, commonly associated with chronic kidney failure and cancer chemotherapy. Darbepoetin is marketed by Amgen under the trade name Aranesp. The medication was approved in September 2001, by the US Food and Drug Administration for treatment of anemia in patients with chronic kidney failure by intravenous or subcutaneous injection. In June 2001, it had been approved by the European Medicines Agency for this indication as well as the treatment of anemia in cancer patients undergoing chemotherapy. Dr. Reddy's Laboratories launched darbepoetin alfa in India under the brand name Cresp in August 2010. This is the world's first follow-on biologic of darbepoetin alfa. Darbepoetin is produced by recombinant DNA technology in modified Chinese hamster ovary cells. It differs from endogenous erythropoietin (EPO) by containing two more N-linked oligosaccharide chains. It is an erythropoiesis-stimulating 165-amino acid protein. It is on the World Health Organization's List of Essential Medicines. Contraindications Use of darbepoetin alfa is contraindicated in patients with hypersensitivity to the drug, pre-existing uncontrolled hypertension, and pure red cell aplasia. Adverse effects Darbepoetin alfa has black box warnings in the United States for increased risk of death, myocardial infarction, stroke, venous thromboembolism, thrombosis of vascular access, and tumor progression or recurrence. To avoid side effects, it is recommended for patients with chronic kidney failure or cancer to use the lowest possible dose needed to avoid red blood cell (RBC) transfusions. In addition to those listed in the black box warning, use of darbepoetin alfa also increases the risk of cardiovascular problems, including cardiac arrest, arrhythmia, hypertension and congestive heart failure, and edema. A recent study has extended these findings to treatment of patients exhibiting cancer-related anemia (distinct from anemia resulting from chemotherapy). Other reported adverse reactions include increased risk of seizure, hypotension, and chest pain. Pregnancy and lactation Darbepoetin alfa is not assigned a pregnancy category in the United States. It is not known if darbepoetin alfa is excreted in breast milk. Mechanism of action Darbepoetin alfa binds to the erythropoietin receptor on erythroid progenitor cells, stimulating RBC production and differentiation. Safety advisories in anemic cancer patients Amgen sent a "dear stockholders" letter in January 2007, that highlighted results from a recent anemia of cancer trial, and warned doctors to consider use in that off-label indication with caution. Amgen advised the U.S. Food and Drug Administration (FDA) as to the results of the DAHANCA 10 clinical trial. The DAHANCA 10 data monitoring committee found that 3-year loco-regional control in subjects treated with Aranesp was significantly worse than for those not receiving Aranesp (p=0.01). 
In response to these findings, the FDA released a Public Health Advisory on 9 March 2007, and a clinical alert for doctors on 16 February 2007, about the use of erythropoiesis-stimulating agents (ESAs) such as epoetin alfa (marketed as Epogen) and darbepoetin alfa. The advisory recommended caution in using these agents in cancer patients receiving chemotherapy or off chemotherapy, and indicated a lack of clinical evidence to support improvements in quality of life or transfusion requirements in these settings. According to the 2010 update to clinical practice guidelines from the American Society of Clinical Oncology (ASCO) and the American Society of Hematology (ASH), use of ESAs such as darbepoetin alfa in cancer patients is appropriate when following stipulations outlined in FDA-approved labeling. Society and culture Like EPO, darbepoetin alfa has the potential to be abused by athletes seeking a competitive advantage. Its use during the 2002 Winter Olympic Games to improve performance led to the disqualification of the cross-country skiers Larisa Lazutina and Olga Danilova of Russia and Johann Mühlegg of Spain from their final races. Economics Epogen and Aranesp had more than $6 billion in combined sales in 2006. Procrit sales were about $3.2 billion in 2006. References Antianemic preparations Growth factors Erythropoiesis-stimulating agents Orphan drugs
Darbepoetin alfa
Chemistry
1,084
13,530,107
https://en.wikipedia.org/wiki/Technological%20momentum
Technological momentum is a theory about the relationship between technology and society over time. The term, which is considered a fourth variant of technological determinism, was originally developed by the historian of technology Thomas P. Hughes. The idea is that the relationship between technology and society is reciprocal and time-dependent, so that one does not determine changes in the other; rather, both influence each other. Theory Hughes's thesis is a synthesis of two separate models for how technology and society interact. One, technological determinism, claims that society itself is modified by the introduction of a new technology in an irreversible and irreparable way—for example, the introduction of the automobile has influenced the manner in which American cities are designed, a change that can clearly be seen when comparing the pre-automobile cities on the East Coast to the post-automobile cities on the West Coast. Technology, under this model, self-propagates as well—there is no turning back once the adoption has taken place, and the very existence of the technology means that it will continue to exist in the future. The other model, social determinism, claims that society itself controls how a technology is used and developed—for example, the rejection of nuclear power technology in the USA amid the public fears after the Three Mile Island incident. Technological momentum takes the two models and adds time as the unifying factor. In Hughes's theory, when a technology is young, deliberate control over its use and scope is possible and enacted by society. However, as a technology matures and becomes increasingly enmeshed in the society where it was created, its own deterministic force takes hold, achieving technological momentum in the process. According to Hughes, this inertia, which is particularly pronounced in large technological systems with their technological and social components, makes such systems difficult to influence and steer, as they increasingly go their own way and assume deterministic traits in the process. In other words, Hughes says that the relationship between technology and society always starts under a social determinism model, but evolves into a form of technological determinism over time, as the technology's use becomes more prevalent and important. Since its introduction by Hughes, the technological momentum concept has been applied by a number of other historians of technology. For instance, it is considered an effective approach to reconciling the apparently opposite perspectives of the autonomy of technology and the social and political motivations behind technological choices. It is able to describe how socially and politically conditioned technological institutions become independent and autonomous over time. Notes References Thomas P. Hughes, "The Evolution of Large Technological Systems," in Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, eds., The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, 2012 (1987), pp. 45-76. Thomas P. Hughes, "Technological momentum," in Albert Teich, ed., Technology and the Future, 8th edn., 2000. Thomas P. Hughes, "Technological momentum," in Merritt Roe Smith and Leo Marx, ed., Does Technology Drive History?: The Dilemma of Technological Determinism, Massachusetts Institute of Technology, 1994, pp. 101–113 Thomas P. Hughes, "Technological Momentum in History: Hydrogenation in Germany 1898-1933", Past and Present, No. 44 (Aug., 1969), pp.
106–132 History of technology Technological change
Technological momentum
Technology
705
3,302,238
https://en.wikipedia.org/wiki/Proview%20International%20Holdings
Proview International Holdings Ltd was a Hong Kong–based manufacturer of computer monitors and other media devices. The company marketed its products under its own and other brand names through its extensive worldwide distribution network. Proview manufactured CRT and LCD monitors, LCD TVs, plasma TVs and DVD players. Proview had production facilities located in Shenzhen and Wuhan in China, as well as in Brazil and Taiwan. Proview held the "iPad" trademark for China and sued Apple for US$1.6 billion in damages. Apple countered that the suit was a shakedown to prop up the company due to its significant debt and impending collapse. Apple ended the dispute with a $60 million payment to Proview. Proview has been delisted from the stock market, and the surviving group had not been able to return to business as of May 2023. Its website is no longer online. References External links Companies listed on the Hong Kong Stock Exchange Defunct computer hardware companies Hong Kong brands
Proview International Holdings
Technology
202
33,287,571
https://en.wikipedia.org/wiki/Titanium%28IV%29%20hydride
Titanium(IV) hydride (systematically named titanium tetrahydride) is an inorganic compound with the empirical chemical formula TiH4. It has not yet been obtained in bulk, hence its bulk properties remain unknown. However, molecular titanium(IV) hydride has been isolated in solid gas matrices. The molecular form is a colourless gas, and very unstable toward thermal decomposition. As such the compound is not well characterised, although many of its properties have been calculated via computational chemistry. Synthesis and stability Titanium(IV) hydride was first produced in 1963 by the photodissociation of precursor gas mixtures, followed by immediate mass spectrometry. Rapid analysis was required as titanium(IV) hydride is extremely unstable. Computational analysis of TiH4 has given a theoretical bond dissociation energy (relative to Ti + 4H) of 132 kcal/mole. As the dissociation energy of H2 is 104 kcal/mole, the instability of TiH4 can be expected to be thermodynamic, with it dissociating to metallic titanium and hydrogen: TiH4 → Ti + 2 H2 (76 kcal/mole). TiH4, along with other unstable molecular titanium hydrides (TiH, TiH2, TiH3, and polymeric species), has been isolated at low temperature following laser ablation of titanium. Structure It is suspected that within solid titanium(IV) hydride, the molecules form aggregations (polymers), being connected by covalent bonds. Calculations suggest that TiH4 is prone to dimerisation. This is largely attributed to the electron deficiency of the monomer and the small size of the hydride ligands, which allows dimerisation to take place with a very low energy barrier, as there is a negligible increase in inter-ligand repulsion. The dimer is calculated to be a fluxional molecule, rapidly inter-converting between a number of forms, all of which display bridging hydrogens. This is an example of three-center two-electron bonding. Monomeric titanium(IV) hydride is the simplest transition metal molecule that displays sd3 orbital hybridisation. References Titanium(IV) compounds Metal hydrides
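As a quick consistency check (this calculation is not part of the source text), the three energies quoted above fit together in a simple thermochemical cycle for the decomposition of TiH4 into the metal and molecular hydrogen:

% Breaking all four Ti-H bonds costs ~132 kcal/mol; forming two H-H bonds
% releases 2 x 104 kcal/mol, so the overall decomposition is exothermic.
\[
  \Delta E(\mathrm{TiH_4 \to Ti + 2\,H_2})
  \;\approx\; 132\ \mathrm{kcal/mol} - 2 \times 104\ \mathrm{kcal/mol}
  \;=\; -76\ \mathrm{kcal/mol}
\]

The roughly 76 kcal/mole quoted for the decomposition therefore corresponds to an exothermic process, consistent with the compound's pronounced thermodynamic instability.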
Titanium(IV) hydride
Chemistry
432
236,319
https://en.wikipedia.org/wiki/Trioxane
Trioxane refers to any of three isomeric organic compounds composed of a six-membered ring with three carbon atoms and three oxygen atoms, having the molecular formula C3H6O3. Isomers The three isomers are: 1,2,3-trioxane, a hypothetical compound that is the parent structure of the molozonides 1,2,4-trioxane, a hypothetical compound whose skeleton occurs as a structural component of some antimalarial agents (artemisinin and similar drugs) 1,3,5-trioxane, a trimer of formaldehyde used as fuel and in plastics manufacturing, and also as a solid fuel tablet when combined with hexamine References Hypothetical chemical compounds
Trioxane
Chemistry
151
115,628
https://en.wikipedia.org/wiki/Westwego%2C%20Louisiana
Westwego is a city in the U.S. state of Louisiana, located in Jefferson Parish. It is a suburban community of New Orleans in the Greater New Orleans metropolitan area and lies along the west bank of the Mississippi River. The population of the city of Westwego was 8,568 at the 2020 United States census. Etymology One story states that Westwego was so named because it was a major crossing point on the Mississippi River during the great westward movement of the late 19th century. When travelers were asked their destination, they would often reply "west we go". Another more specific tale, recounted in John Churchill Chase's Frenchmen, Desire, Good Children, is that the name was the specific outcome of an 1871 meeting of a railroad board of directors in New York, where planning was undertaken to use the site as an eastern terminus ("...west we go from there"). There has been further speculation that this use of "Westwego" as a place name may have been influenced by the board members' familiarity with the name of Oswego, New York. History Early history and development The area of Westwego, Louisiana was inhabited by Native Americans for thousands of years before Europeans settled there. These indigenous people created huge shell middens that can still be seen in the vicinity today. The French first developed the area in 1719 when French minister of state Claude le Blanc started a plantation and a port along the Mississippi River. The port became an important site in the history of the transatlantic slave trade. The estate was later owned by the Zeringue family, who turned it into a bustling sugar plantation, known as Seven Oaks. Planter Camille Zeringue built a canal at the plantation that played a prominent role in the community's history for decades. Other antebellum plantations in the area included the Whitehouse Plantation, Magnolia Lane, and the LaBranche Plantation among others. After Camille Zeringue's death, Seven Oaks was owned by Pablo Sala, who divided the property along the canal into lots, which he sold for $40 each. Many of these lots were purchased by displaced hurricane victims from Cheniere Caminada whose homes were destroyed in the great unnamed 1893 storm. With the addition of these families, who were mostly fishermen and trappers, the community of Salaville was born. Salaville grew and the local railroad barons coined the name "Westwego". A number of industries grew around the city's wetlands and bayous, including those involving fisheries, shrimping, the canning of seafood, etc. Westwego was incorporated as a city in 1951 as its population continued to grow. Within the last decade, Westwego has taken on a number of historical restoration projects, inspired by historian Daniel P. Alario Sr. Grain elevator explosion On December 23, 1977, the Continental Grain Elevator in Westwego exploded. The explosion and resulting collapse of the elevator killed 36 people and injured at least 11 others. Most of the fatalities were caused by a concrete tower collapsing onto an office building, where workers were gathered for a Christmas party. The explosion is believed to have been caused by the ignition of grain dust. The blast caused silos to fall and lean against each other in essentially a domino effect. The accident is the deadliest grain elevator accident in history. The Westwego accident, along with other explosions that occurred within a two-year period, led to new regulations for preventing dust explosions.
Geography The city of Westwego is located in the Greater New Orleans metropolitan area and region. According to the United States Census Bureau, the city has a total area of , of which is land and , or 10.64%, is water. Demographics According to the 2010 United States census, there were 8,534 people, 3,811 households, and 2,450 families residing in the city. By the 2020 census, there were 8,568 people living in Westwego. Since 2010, there were 4,211 households, out of which 32.7% had children under the age of 18 living with them, 41.1% were married couples living together, 20.8% had a female householder with no husband present, and 32.3% were non-families. 26.7% of all households were made up of individuals, and 10.5% had someone living alone who was 65 years of age or older. The average household size was 2.56 and the average family size was 3.10. Among the population in 2020, there were 3,265 households. In the city, the population was spread out, with 27.5% under the age of 18, 8.9% from 18 to 24, 29.9% from 25 to 44, 21.2% from 45 to 64, and 12.5% who were 65 years of age or older. The median age was 35 years. For every 100 females, there were 90.5 males. For every 100 females age 18 and over, there were 84.6 males at the 2010 census. By 2020, the median age increased to 38.9 years, reflecting the aging population of the city. In 2010, the racial makeup of the city was 78.16% White, 14.02% Black and African American, 2.94% Native American, 1.58% Asian, 1.01% from other races, and 1.40% from two or more races. Hispanic or Latino Americans of any race were 4.59% of the population. Per the 2019 American Community Survey, the population of Westwego was 58.4% non-Hispanic white, 26.7% Black and African American, 1.4% American Indian or Alaska Native, 4.4% Asian, 0.1% some other race, 1.5% two or more races, and 7.4% Hispanic and Latino American of any race. Per the 2020 census, the racial and ethnic makeup was 57.84% non-Hispanic white, 27.15% Black or African American, 0.89% Native American, 1.02% Asian, 0.04% Pacific Islander, 4.2% multiracial or some other race, and 8.87% Hispanic or Latino American of any race; the growth of the non-white population from 2010 to 2020 reflected state- and nationwide trends of diversification. In 2010, the median income for a household in the city was $27,218, and the median income for a family was $31,187. Males had a median income of $29,398 versus $18,916 for females. The per capita income for the city was $13,160. About 17.9% of families and 22.4% of the population were below the poverty line, including 33.7% of those under age 18 and 9.5% of those age 65 or over. In 2019, the median income for a household in the city was $30,126. Males had a median income of $30,217 versus $43,022 for females. An estimated 28.3% of the population lived at or below the poverty line. Education The public schools in Westwego are operated by the Jefferson Parish Public School System. Elementary schools taking portions of Westwego include: Isaac G. Joseph Elementary School in Westwego, Gilbert PreK-8 school in Avondale, and Truman PreK-8 in Estelle. In 2020, Myrtle C. Thibodeaux and Vic A. Pitre Elementary Schools were consolidated into Joseph Elementary, on the site of Pitre, and named after the first African-American superintendent of the school district. Most middle school Westwego residents are zoned to Worley Middle School in Westwego, while some are zoned to Gilbert PreK-8 and Truman PreK-8. Gilbert was known as Henry Ford Middle School until 2019. 
High school residents are zoned to Higgins High School in an unincorporated area. Our Lady of Prompt Succor Catholic School is a private Catholic school (of the Roman Catholic Archdiocese of New Orleans) in the city. Jefferson Parish Library operates the Edith S. Lawson Library in Westwego. In 2020, Ray St. Pierre Academy moved into the Thibodaux facility. Joshua Butler Elementary School, a public elementary school in Westwego, closed in 2023. Government and infrastructure The United States Postal Service operates the Westwego Post Office. Notable people John Alario, dean of the Louisiana State Legislature; State Senate President since 2012 Sherman A. Bernard (1925-2012), Louisiana insurance commissioner lived at the time of his election in 1972 in Westwego. Robert Billiot, member of the Louisiana House of Representatives for Jefferson Parish since 2008; former educator from Westwego Skyler Green, reared in Westwego, professional football wide receiver and return specialist Ted Haggard, former Evangelical preacher, lived there for a short while in 2007 after a gay prostitution scandal References External links City of Westwego Cities in the New Orleans metropolitan area Westwego grain elevator explosion Westwego grain elevator explosion Louisiana populated places on the Mississippi River
Westwego, Louisiana
Chemistry
1,865
3,063,823
https://en.wikipedia.org/wiki/Architecture%20of%20Kievan%20Rus%27
The architecture of Kievan Rus' comes from the medieval state of Kievan Rus' which incorporated parts of what is now modern Ukraine, Russia, and Belarus, and was centered on Kiev and Novgorod. Its architecture is the earliest period of Russian and Ukrainian architecture, using the foundations of Byzantine culture but with great use of innovations and architectural features. Most remains are Russian Orthodox churches or parts of the gates and fortifications of cities. After the disintegration of Kievan Rus' followed by Mongol invasion in the first half of the 13th century, the architectural tradition continued in the principalities of Novgorod, Vladimir-Suzdal, Galicia-Volhynia and eventually had direct influence on the Russian, Ukrainian, and Belarusian architecture. The Old Russian architecture of churches originates from the pre-Christian Slavic ( - construction). Church architecture The great churches of Kievan Rus', built after the adoption of Christianity in 988, were the first examples of monumental architecture in the East Slavic lands. The architectural style of the Kievan state, which quickly established itself, was strongly influenced by Byzantine architecture. Early Eastern Orthodox churches were mainly made of wood with the simplest form of church becoming known as a cell church. Major cathedrals often featured scores of small domes, which led some art historians to take this as an indication of what the pagan Slavic temples should have looked like. The 10th-century Church of the Tithes in Kiev was the first cult building to be made of stone. The earliest Kievan churches were built and decorated with frescoes and mosaics by Byzantine masters. Another great example of an early church of Kievan Rus' was the thirteen-domed Saint Sophia Cathedral in Kiev (1037–54), built by Yaroslav the Wise. Much of its exterior has been altered with time, extending over the area and eventually acquiring 25 domes. Saint Sophia Cathedral in Novgorod (1045–1050), on the other hand, expressed a new style that exerted a strong influence on Russian church architecture. Its austere thick walls, small narrow windows, and helmeted cupolas have much in common with the Romanesque architecture of Western Europe. Even further departure from Byzantine models is evident in succeeding cathedrals of Novgorod: St Nicholas's (1113), St Anthony's (1117–19), and St George's (1119). Along with cathedrals, of note was the architecture of monasteries of these times. The 12th–13th centuries were the period of feudal division of Kievan Rus into princedoms which were in nearly permanent feud, with multiplication of cathedrals in emerging princedoms and courts of local princes (knyazes). By the end of the 12th century, the divide of the country was final and new centers of power took the Kievan style and adopted it to their traditions. In the northern principality of Vladimir-Suzdal the local churches were built of white stone. The Suzdal style is also known as "white-stone architecture" (""). The first white-stone church was the St. Boris and Gleb Church commissioned by Yuri Dolgoruky, a church-fortress in Kideksha near Suzdal, at the supposed place of the stay of knyazes Boris and Gleb on their pilgrimage to Kiev. The white-stone churches mark the highest point of pre-Mongolian Rus' architecture. The most important churches in Vladimir are the Assumption Cathedral (built 1158–60, enlarged 1185–98, frescoes 1408) and St Demetrios Cathedral (built 1194–97). 
In the western splinter of Kingdom of Galicia-Volhynia churches in a traditional Kievan style were built for some time, but eventually the style began to drift towards Central European Romanesque tradition. The white stone masonry of Galician school of architecture was likely the inspiration of the development of a similar style in Vladimir-Suzdal. Celebrated as these structures are, the contemporaries were even more impressed by churches of Southern Rus', particularly the Svirskaya Church of Smolensk (1191–94). As southern structures were either ruined or rebuilt, restoration of their original outlook has been a source of contention between art historians. The most memorable reconstruction is the Piatnytska Church (1196–99) in Chernigov (modern Chernihiv, Ukraine), by Peter Baranovsky. Secular architecture There were very few examples of secular (non-religious) architecture in Kievan Rus. Golden Gates of Vladimir, despite much 18th-century restoration, could be regarded as an authentic monument of the pre-Mongolian period. In Kyiv, the capital of Ukraine, no secular monuments survived aside from pieces of walls and ruins of gates. The Golden Gates of Kyiv were destroyed completely over the years with only the ruins remaining. In the 20th century a museum was erected above the ruins. It is a close image of the gates of the Kievan Rus period but is not a monument of the time. One of the best examples, the fortress of Bilhorod Kyivskyi, is still lying under the ground waiting major excavation. In the 1940s, the archaeologist Nikolai Voronin discovered the well-preserved remains of Andrei Bogolyubsky's palace in Bogolyubovo, dating from 1158 to 1165. Examples Examples in Belarus Examples in Russia Examples in Ukraine See also List of buildings of pre-Mongol Kievan Rus' Ukrainian architecture List of Russian church types Old Russian ornament References External links Directory of Orthodox Architecture in Russia - photogallery of church architecture Culture of Kievan Rus' Architecture by region Architectural history Architecture in Belarus Architecture in Russia Architecture in Ukraine Architecture in Ukraine by period or style Architecture in Kyiv
Architecture of Kievan Rus'
Engineering
1,162
34,034,699
https://en.wikipedia.org/wiki/VinyLoop
VinyLoop is a proprietary physical plastic recycling process for polyvinyl chloride (PVC). It is based on dissolution in order to separate PVC from other materials or impurities. Background A major factor in the recycling of polyvinyl chloride waste is the purity of the recycled material. In most composite materials, PVC is combined with several other materials, such as wood, metal, or textile. To make new products from the recycled PVC, it is necessary to separate it from other materials. Traditional recycling methods are insufficient and expensive because this separation has to be done manually, product by product. VinyLoop is a recycling process which separates PVC from other materials through a process of dissolution, filtration and separation of contamination. A solvent is used in a closed loop to elute PVC from the waste. This makes it possible to recycle composite-structure PVC waste, which would normally be incinerated or put in a landfill site. Process The process consists of the following steps: Pre-treatment: waste plastics are cleaned, ground and mixed. Dissolution: a specific solvent is used to selectively dissolve the PVC compound in a closed loop. Filtration: impurities which have not been dissolved are removed; they are separated by type of material by filtration, centrifugation and decantation. After separation, the secondary materials are washed with pure solvent to dissolve all remaining PVC compounds. Precipitation of the regenerated PVC compound: the solution of PVC is recovered in a precipitation tank, where steam is injected to evaporate the solvent and precipitate the PVC. The PVC compound is separated from the aqueous effluent as a slurry. Drying: after recovering the excess water from the slurry, the wet PVC goes to a dryer. Possible products made from recycled PVC are coatings for waterproofing membranes, pond foils, shoe soles, hoses, tunnel diaphragms, coated fabrics, and PVC sheets. It is an attempt to solve the recycling waste problem of PVC products. Ecological importance The primary energy demand of VinyLoop-based recycled PVC is 46 percent lower than that of conventionally produced PVC. The global warming potential is 39 percent lower. The VinyLoop process was selected to recycle membranes from different temporary venues of the London Olympics 2012. Roofing covers of the Olympic Stadium, the Water Polo Arena, the London Aquatics Centre and the Royal Artillery Barracks were to be deconstructed and partly recycled in the VinyLoop process. Closure Since the process could not remove low molecular weight phthalate plasticizers during recycling, tightening EU regulations meant the recycling plant based in Ferrara, Italy, closed on 28 June 2018. See also Plastic pressure pipe systems Plastic recycling Polyvinyl fluoride Polyvinylidene chloride Polyvinylidene fluoride Smart polymer Vinyl roof membrane References External links Recycling Green chemistry Chemical processes Separation processes
VinyLoop
Chemistry,Engineering,Environmental_science
610
319,941
https://en.wikipedia.org/wiki/Thaumatin
Thaumatin (also known as talin) is a low-calorie sweetener and taste modifier. The protein is often used primarily for its flavor-modifying properties and not exclusively as a sweetener. The thaumatins were first found as a mixture of proteins isolated from the katemfe fruit (Thaumatococcus daniellii) (Marantaceae) of West Africa. Although very sweet, thaumatin's taste is markedly different from sugar's. The sweetness of thaumatin builds very slowly. Perception lasts a long time, leaving a liquorice-like aftertaste at high concentrations. Thaumatin is highly water soluble, stable to heating, and stable under acidic conditions. Biological role Thaumatin production is induced in katemfe in response to an attack upon the plant by viroid pathogens. Several members of the thaumatin protein family display significant in vitro inhibition of hyphal growth and sporulation by various fungi. The thaumatin protein is considered a prototype for a pathogen-response protein domain. This thaumatin domain has been found in species as diverse as rice and Caenorhabditis elegans. Thaumatins are pathogenesis-related (PR) proteins, which are induced by various agents ranging from ethylene to pathogens themselves, and are structurally diverse and ubiquitous in plants: They include thaumatin, osmotin, tobacco major and minor PR proteins, alpha-amylase/trypsin inhibitor, and P21 and PWIR2 soybean and wheat leaf proteins. The proteins are involved in systematically-acquired stress resistance and stress responses in plants, although their precise role is unknown. Thaumatin is an intensely sweet-tasting protein (on a molar basis about 100,000 times as sweet as sucrose) found in the fruit of the West African plant Thaumatococcus daniellii: it is induced by attack by viroids, which are single-stranded unencapsulated RNA molecules that do not code for protein. The thaumatin protein I consists of a single polypeptide chain of 207 residues. Like other PR proteins, thaumatin is predicted to have a mainly beta structure, with a high content of beta-turns and little helix. Tobacco cells exposed to gradually increased salt concentrations develop a greatly increased tolerance to salt, due to the expression of osmotin, a member of the PR protein family. Wheat plants attacked by barley powdery mildew express a PR protein (PWIR2), which results in resistance against that infection. The similarity between this PR protein and other PR proteins and the maize alpha-amylase/trypsin inhibitor has suggested that PR proteins may act as some form of inhibitor. Within West Africa, the katemfe fruit has been locally cultivated and used to flavour foods and beverages for some time. The fruit's seeds are encased in a membranous sac, or aril, that is the source of thaumatin. In the 1970s, Tate and Lyle began extracting thaumatin from the fruit. In 1990, researchers at Unilever reported the isolation and sequencing of the two principal proteins found in thaumatin, which they dubbed thaumatin I and thaumatin II. These researchers were also able to express thaumatin in genetically engineered bacteria. Thaumatin has been approved as a sweetener in the European Union (E957), Israel, and Japan. In the United States, it is generally recognized as safe as a flavouring agent (FEMA GRAS 3732) but not as a sweetener. Crystallization Since thaumatin crystallizes very quickly and easily in the presence of tartrate ions, thaumatin-tartrate mixtures are frequently used as model systems to study protein crystallization. 
The solubility of thaumatin, its crystal habit, and mechanism of crystal formation are dependent upon the chirality of precipitant used. When crystallized with L- tartrate, thaumatin forms bipyramidal crystals and displays a solubility that increases with temperature; with D- and meso-tartrate, it forms stubby and prismatic crystals and displays a solubility that decreases with temperature. This suggests control of precipitant chirality may be an important factor in protein crystallization in general. Characteristics As a food ingredient, thaumatin is considered to be safe for consumption. In a chewing gum production plant, thaumatin has been identified as an allergen. Switching from using powdered thaumatin to liquid thaumatin reduced symptoms among affected workers. Additionally, eliminating contact with powdered gum arabic (a known allergen) resulted in the disappearance of symptoms in all affected workers. Thaumatin interacts with human TAS1R3 receptor to produce a sweet taste. The interacting residues are specific to old world monkeys and apes (including humans); only these animals can perceive it as sweet. See also Curculin, a sweet protein from Malaysia with taste-modifying activity Miraculin, a protein from West Africa with taste-modifying activity Monellin, a sweet protein found in West Africa Stevia, a non-nutritive sweetener up to 150 times sweeter than sugar Lugduname, a sweetening agent up to 300,000 times sweeter than sugar References Further reading External links Sugar substitutes Protein domains Taste modifiers Food additives Plant proteins E-number additives
Thaumatin
Biology
1,130
14,210,324
https://en.wikipedia.org/wiki/Dental%20Laboratories%20Association
The Dental Laboratories Association (DLA) is the professional body for dental laboratory owners in the United Kingdom. It is estimated that members of the DLA are responsible for over 80 per cent of the dental laboratory services in the UK. Origin The DLA began as a division of the Surgical Instrument Manufacturers Association, later becoming a separate entity. The first meeting of the DLA Council took place in London in 1961. The first Secretary was John Wrench and administration was provided by a firm of accountants called Hughes Allan. In 1977 Trevor Roadley, a Nottingham dental laboratory owner was appointed as Secretary. Operating initially at his laboratory and then in the DLA's first premises, Roadley gradually built up the range of member services and heightened the standing of the association. The DLA moved to larger premises in Nottingham in 1986, a converted chapel that was being used as a dental laboratory. At this stage the DLA had approximately 400 members. In 1987 Bill Courtney took over as Secretary. Like Trevor before him, Bill was a dental laboratory owner and member of the DLA Council. By now the DLA was a recognised and respected member of the dental world and regularly met with organisations such as the General Dental Council, Orthodontic Technicians Association, British Dental Association and the Department of Health. These close relationships continue to this day and the DLA has presented evidence to the Health Select Committee at the House of Commons on the challenges faced by the profession. The DLA was instrumental in setting up the Dental Technicians Education and Training Advisory Board (DTETAB), which is now known as the Dental Technologists Association (DTA). The late 1980s and early 1990s saw the emergence of the Medical Devices Directive and the introduction of Quality Systems to the industry. The DLA worked to establish an industry-led standard and the first example of this was a system based on BS 5750 called the Certification Authority for Dental Laboratories and Suppliers (CADLAS), which is still operating as AMTAC MEDICA and is now audited to ISO9002. With the lessons learned in setting up CADLAS, a new system called the Dental Appliance Manufacturers Audit Scheme (DAMAS) was launched in 1998. This is based around ISO 9000 and also addresses the Medical Devices Regulations (MDR). By now the Association membership had risen to over 900 and the association moved into new offices at Arboretum Gate in February 1997. There have been many changes since 2000, including the retirement of Bill Courtney and appointment of Richard Daniels as Chief Executive. Membership has passed the 1,000 mark, and the DLA is building a profile as a campaigning organisation with appearances across the media. Offices In May 2001 the Association moved to their current offices on Wollaton Road, Beeston in Nottingham. Logo The DLA logo is circular in shape with the initials "DLA" in the middle. Memberships The DLA is a member of the British Dental Health Foundation (BDHF), the Royal Society for the Prevention of Accidents (RoSPA) and the Federation of European Dental Laboratory Owners (FEPPD). Events Dental Technology Showcase (DTS) As the flagship event of the Dental Laboratories Association (DLA), the Dental Technology Showcase (DTS) is a highly respected platform for dental technicians, clinical dental technicians and lab owners to update and refresh their knowledge and skills. 
Held alongside The Dentistry Show since 2014, the event offers vast networking opportunities as well as outstanding education, verifiable CPD and access to the very latest innovations in the UK industry. DTS 2015 will take place on Friday 17th and Saturday 18 April at the NEC in Birmingham UK, ensuring a convenient and central location for thousands of dental professionals to attend. An extensive trade exhibition will host over 100 leading dental suppliers and manufacturers including 3Shape, Bracon, Cendres + Metaux, Nobel Biocare, Straumann, Metrodent, Ivoclar Vivadent and Schottlander to name but a few, each demonstrating the latest products, materials and technologies available. Experts will be on hand to provide any information, advice or guidance you may need, helping you to choose the most suitable equipment for optimal clinical results, streamlined workflows and maximum return on investment. Council The DLA Council is the governing body of the association and is elected by the members. Its Chairman is Gordon Watters. The DLA is run by the Nationally Elected Council. All member laboratories may send an observer to Council meetings. References External links Official website 'Standards drop' in NHS dentistry – BBC News 15 September 2007 OTA website Dental Technology Show 2008 Orthodontic Technicians Association Conference 2008 1961 establishments in the United Kingdom Dental organisations based in the United Kingdom Dental technology Beeston, Nottinghamshire Business organisations based in the United Kingdom Health in Nottinghamshire Organisations based in Nottinghamshire Organizations established in 1961 Professional associations based in the United Kingdom
Dental Laboratories Association
Biology
985
59,633,745
https://en.wikipedia.org/wiki/Biaoxingma%20method
The Biaoxingma Input Method, also abbreviated to simply Biaoxingma, is a shape-based Chinese character input method invented by Chen Aiwen, an overseas Chinese scholar living in France, in the 1980s. Because its way of splitting Chinese characters is intuitive and has theoretical support in the study of Chinese characters, it attracted widespread attention when it was first introduced and was listed as a key project in the China Torch Project. However, it never achieved the influence of the Wubi and Zhengma methods in terms of popularization and commercialization. Biaoxingma was pre-installed by Microsoft in Chinese Windows 95 and the first edition of Windows 98, but was removed from the Windows 98 second edition and later Windows versions. Biaoxingma was also installed in IBM AIX. Basics The smallest constituent parts of each Chinese character are called strokes. One or more strokes form the components of a character. Characters are divided into several components, each of which is coded as the English letter that it resembles. Because of this resemblance between the letters and the components they refer to, Biaoxingma is easy to learn and remember compared with the Wubi and Zhengma methods. Moreover, the biggest advantage of Biaoxingma is that crossed strokes are never divided into two components. In other words, the character components never cross each other. This makes the way of splitting characters very intuitive. Here are two examples: "吼" is divided into O+Z+L = OZL, and "啊" is divided into O+P+T+O = OPTO. References CJK input methods Input methods
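To make the letter coding concrete, here is a minimal sketch in Python. It is not the real Biaoxingma scheme: the component-to-letter table is reconstructed only from the two examples above, the component splits for 吼 and 啊 are assumed, and a real implementation would have to cover every component and perform the splitting itself.

# Toy Biaoxingma-style lookup. The table is reconstructed from the article's
# two examples ("吼" -> OZL, "啊" -> OPTO); the mappings marked "assumed" are
# guesses made for illustration only.
COMPONENT_TO_LETTER = {
    "口": "O",  # mouth component, resembles the letter O
    "子": "Z",  # assumed, to reproduce the Z in OZL
    "乚": "L",  # hooked stroke, resembles L
    "阝": "P",  # assumed, to reproduce the P in OPTO
    "丁": "T",  # resembles T
}

def encode(components):
    # Encode a character that has already been split into components;
    # the splitting step itself is the hard part and is not modelled here.
    return "".join(COMPONENT_TO_LETTER[c] for c in components)

print(encode(["口", "子", "乚"]))        # 吼 -> OZL
print(encode(["口", "阝", "丁", "口"]))   # 啊 -> OPTO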
Biaoxingma method
Technology
325
22,538,283
https://en.wikipedia.org/wiki/Cape%20Canaveral%20Launch%20Complex%2016
Launch Complex 16 (LC-16) is a launch pad site located at Cape Canaveral Space Force Station (CCSFS) in Florida. Part of the Missile Row lineup of launch pads, it was built for use by LGM-25 Titan missiles, and later used for NASA operations before being transferred back to the US military and used for tests of MGM-31 Pershing missiles. Six Titan I missiles were launched from the complex between December 1959 and May 1960. These were followed by seven Titan II missiles, starting with the type's maiden flight on March 16, 1962. The last Titan II launch from LC-16 was conducted on May 29, 1963. Following the end of its involvement with the Titan missile, on 16 September 1964 LC-16 was released to NASA, which used it for Gemini crew processing, and static firing tests of the Apollo Service Module's propulsion engine. Following its return to the US Air Force in 1972, it was converted for use by the Pershing missile, which made its first flight from the complex on May 7, 1974. Seventy-nine Pershing 1a and 49 Pershing II missiles were launched from LC-16. The last Pershing launch from the facility was conducted on March 21, 1988. It was deactivated the next day and subsequently decommissioned under the Intermediate-Range Nuclear Forces Treaty. It was announced on January 17, 2019, that Relativity Space had entered a 5-year agreement to use LC-16 for its Terran 1 orbital launch vehicle and eventually its Terran R. The maiden flight of the Terran 1 launch vehicle took place on 23 March 2023 and resulted in a failure. The maiden flight of Terran 1 was the first orbital launch attempt from Launch Complex 16 (141 suborbital launches before the Terran 1). Launch statistics See also Pershing missile launches References Sources Cape Canaveral Space Force Station
Cape Canaveral Launch Complex 16
Astronomy
383
37,194,181
https://en.wikipedia.org/wiki/Science%20DMZ%20Network%20Architecture
The term Science DMZ refers to a computer subnetwork that is structured to be secure, but without the performance limits that would otherwise result from passing data through a stateful firewall. The Science DMZ is designed to handle high volume data transfers, typical with scientific and high-performance computing, by creating a special DMZ to accommodate those transfers. It is typically deployed at or near the local network perimeter, and is optimized for a moderate number of high-speed flows, rather than for general-purpose business systems or enterprise computing. The term Science DMZ was coined by collaborators at the US Department of Energy's ESnet in 2010. A number of universities and laboratories have deployed or are deploying a Science DMZ. In 2012 the National Science Foundation funded the creation or improvement of Science DMZs on several university campuses in the United States. The Science DMZ is a network architecture to support Big Data. The so-called information explosion has been discussed since the mid 1960s, and more recently the term data deluge has been used to describe the exponential growth in many types of data sets. These huge data sets often need to be copied from one location to another using the Internet. The movement of data sets of this magnitude in a reasonable amount of time should be possible on modern networks. For example, it should take less than 4 hours to transfer 10 terabytes of data on a 10 Gigabit Ethernet network path, assuming disk performance is adequate. The problem is that this requires networks that are free from packet loss and middleboxes such as traffic shapers or firewalls that slow network performance. Stateful firewalls Most businesses and other institutions use a firewall to protect their internal network from malicious attacks originating from outside. All traffic between the internal network and the external Internet must pass through a firewall, which discards traffic likely to be harmful. A stateful firewall tracks the state of each logical connection passing through it, and rejects data packets inappropriate for the state of the connection. For example, a website would not be allowed to send a page to a computer on the internal network, unless the computer had requested it. This requires a firewall to keep track of the pages recently requested, and match requests with responses. A firewall must also analyze network traffic in much more detail compared to other networking components, such as routers and switches. Routers only have to deal with the network layer, but firewalls must process the transport and application layers as well. All this additional processing takes time, and limits network throughput. While routers and most other networking components can handle speeds of 100 billion bits per second (100 Gbps), firewalls typically limit traffic to about 1 Gbit/s, which is unacceptable for passing large amounts of scientific data. Modern firewalls can leverage custom hardware (ASICs) to accelerate traffic and inspection, in order to achieve higher throughput. This can present an alternative to Science DMZs and allows in-place inspection through existing firewalls, as long as unified threat management (UTM) inspection is disabled. While a stateful firewall may be necessary for critical business data, such as financial records, credit cards, employment data, student grades, trade secrets, etc., science data requires less protection, because copies usually exist in multiple locations and there is less economic incentive to tamper with it.
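The 4-hour figure above is easy to check with a back-of-the-envelope calculation. The sketch below assumes 1 terabyte = 10^12 bytes and ignores protocol overhead, latency and packet loss, so it gives an idealized lower bound:

# Idealized transfer times for the 10 TB example in the text.
# Assumes 1 TB = 1e12 bytes; ignores protocol overhead, latency and loss.
def transfer_hours(data_terabytes, link_gbps):
    bits = data_terabytes * 1e12 * 8          # total bits to move
    seconds = bits / (link_gbps * 1e9)        # line-rate transfer time
    return seconds / 3600

print(transfer_hours(10, 10))   # ~2.2 hours on an uncongested 10 GbE path
print(transfer_hours(10, 1))    # ~22 hours if a firewall caps the flow near 1 Gbit/s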
DMZ A firewall must restrict access to the internal network but allow external access to services offered to the public, such as web servers on the internal network. This is usually accomplished by creating a separate internal network called a DMZ, a play on the term "demilitarized zone." External devices are allowed to access devices in the DMZ. Devices in the DMZ are usually maintained more carefully to reduce their vulnerability to malware. Hardened devices are sometimes called bastion hosts. The Science DMZ takes the DMZ idea one step farther, by moving high performance computing into its own DMZ. Specially configured routers pass science data directly to or from designated devices on an internal network, thereby creating a virtual DMZ. Security is maintained by setting access control lists (ACLs) in the routers to only allow traffic to/from particular sources and destinations. Security is further enhanced by using an intrusion detection system (IDS) to monitor traffic, and look for indications of attack. When an attack is detected, the IDS can automatically update router tables, resulting in what some call a Remotely Triggered BlackHole (RTBH). Justification The Science DMZ provides a well-configured location for the networking, systems, and security infrastructure that supports high-performance data movement. In data-intensive science environments, data sets have outgrown portable media, and the default configurations used by many equipment and software vendors are inadequate for high performance applications. The components of the Science DMZ are specifically configured to support high performance applications, and to facilitate the rapid diagnosis of performance problems. Without the deployment of dedicated infrastructure, it is often impossible to achieve acceptable performance. Simply increasing network bandwidth is usually not good enough, as performance problems are caused by many factors, ranging from underpowered firewalls to dirty fiber optics to untuned operating systems. The Science DMZ is the codification of a set of shared best practices—concepts that have been developed over the years—from the scientific networking and systems community. The Science DMZ model describes the essential components of high-performance data transfer infrastructure in a way that is accessible to non-experts and scalable across any size of institution or experiment. Components The primary components of a Science DMZ are: A high performance Data Transfer Node (DTN) running parallel data transfer tools such as GridFTP A network performance monitoring host, such as perfSONAR A high performance router/switch Optional Science DMZ components include: Support for layer-2 Multiprotocol Label Switching (MPLS) Virtual Private Networks (VPN) Support for software-defined networking See also Big Data perfSONAR References External links ESnet web pages describing the Science DMZ NSF Program funding Science DMZs "science_dmz"_internet Announcement on Ohio State University Science DMZ NSF Solicitation on funding to build Science DMZs University of Utah's Science DMZ Computer network security Network architecture Network performance
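The ACL and RTBH ideas described above can be illustrated with a small, purely didactic sketch. In a real Science DMZ the access-control lists live in router hardware and the IDS integration is vendor-specific; the addresses below come from the reserved documentation ranges and the function names are made up for this example.

# Toy model of Science DMZ access control: a static allow-list of
# (remote collaborator network, local data transfer node) pairs, plus a
# blackhole set that an IDS could populate (remotely triggered black hole).
import ipaddress

ALLOWED_FLOWS = [
    (ipaddress.ip_network("192.0.2.0/24"), ipaddress.ip_address("198.51.100.10")),
]
BLACKHOLED = set()  # sources dropped after an IDS alert

def permit(src, dst):
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src in BLACKHOLED:
        return False
    return any(src in net and dst == dtn for net, dtn in ALLOWED_FLOWS)

def ids_alert(src):
    # Simulate the IDS updating the router: all further traffic from src is dropped.
    BLACKHOLED.add(ipaddress.ip_address(src))

print(permit("192.0.2.7", "198.51.100.10"))    # True: collaborator -> data transfer node
print(permit("203.0.113.5", "198.51.100.10"))  # False: not on the allow-list
ids_alert("192.0.2.7")
print(permit("192.0.2.7", "198.51.100.10"))    # False after the blackhole entry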
Science DMZ Network Architecture
Engineering
1,313
76,666,752
https://en.wikipedia.org/wiki/Trichosporon%20asahii
Trichosporon asahii is a non-Candida yeast that has been reported to cause infections in immunocompromised patients. T. asahii is the most prominent human pathogen in its genus, causing more than half of all Trichosporon infections. The species was first discovered and named in 1929; the currently accepted nomenclature of T. asahii was validated in 1994. Disease The clinical manifestations of T. asahii infection are non-specific and vary depending on the site of infection. The most common types of infection have been urinary tract infections, fungemia, and disseminated infection. Cutaneous infections have also been reported. Identification and culture T. asahii grows readily on routine laboratory media, producing white, yellow, or cream yeast-like colonies on Sabouraud dextrose agar. This fungus has a rapid growth rate and colonies mature in 5 days. When grown on cornmeal-Tween 80 agar, true hyphae, pseudohyphae, and blastoconidia can be seen under microscopic examination. Arthroconidia can be observed in older cultures. This fungus is able to hydrolyze urea through the production of urease. Treatment ESCMID/ECMM guidelines recommend the use of voriconazole for the treatment of invasive T. asahii infections. Patients have also been treated with amphotericin B and triazole therapy. References Tremellomycetes Yeasts
Trichosporon asahii
Biology
304
37,890,657
https://en.wikipedia.org/wiki/NGC%2080
NGC 80 is a lenticular galaxy located in the constellation Andromeda. It is currently interacting with two barred spiral galaxies, NGC 47 and NGC 68, and was discovered on August 17, 1828 by John Herschel. Physical properties NGC 80 is classified as a giant lenticular galaxy. Its circumnuclear ring, measuring 5″ to 7″ in radius, is about 7 billion years old, embedded in an older stellar population of about 10 billion years. The galaxy also has a metal-rich, chemically distinct nucleus. NGC 80 group NGC 80 is the brightest cluster galaxy of the NGC 80 group, a galaxy group named after it. Other galaxies that form the group are NGC 79, NGC 81, NGC 83, NGC 85, NGC 86, Arp 65 (NGC 90 and NGC 93), NGC 94, NGC 96, IC 1542 and IC 1546. Astronomers who studied the bulges of seven members of the NGC 80 group in 2008 using the BTA-6 telescope found that the stars have estimated ages of between 10 and 15 billion years. However, IC 1548 (another member of the NGC 80 group) was exceptional, since it showed signs of recent star formation, with bulge and nucleus ages calculated to be 3 and 1.5 billion years respectively. Moreover, IC 1548 also has a thin gas structure, indicating that interaction caused it to become a lenticular galaxy. The following year, the same telescope was used, this time to observe 13 disk galaxies in the group. Of the 13 galaxies, 9 were lenticulars. Astronomers also found there is one case of ongoing star formation, in UCM 0018+2216, and that all galaxies studied exhibited a two-layered stellar disk brighter than MB ∼ −18. References External links SEDS Lenticular galaxies 0080 00203 001351 18280817 Andromeda (constellation)
NGC 80
Astronomy
381
61,246,073
https://en.wikipedia.org/wiki/Charles%20Matcham
Charles Arthur Matcham (15 January 1862 – 22 September 1911) was an English civil engineer and businessman who spent most of his life in America. He founded numerous businesses, mostly within the cement-making industry, in areas including Phillipsburg, New Jersey, and Allentown and Portland, Pennsylvania. He was the younger brother of the English theatre architect Frank Matcham. Life and career Charles Matcham was born on 15 January 1862 in Torquay, Devonshire. He was the third son of Charles Matcham (1826–1888), a brewer, and his wife, Elizabeth Lancaster (1830–1905). Charles Jr. was educated at schools in Hambledon in Hampshire, and then Brighton, East Sussex. Matcham entered the engineering industry in 1875 in London, where he received an honours mention for his mechanical drawing and designs at the National Art Training School in South Kensington. From 1877–80 he worked as a mechanical draftsman in London. In 1879 he joined the newly formed American Bell Telephone Company and built telephone exchanges in Europe, including in Antwerp, Brussels, and Charleroi. He also worked in St. Petersburg and Riga, where he introduced the newly invented telephone and personally installed the system of Alexander II of Russia. In 1881 Matcham travelled to Chicago, in the United States, where he started work with the Chicago Telephone Company, for whom he built exchanges. Three years later he joined the Pennsylvania Telephone Company and became its Chief Engineer and Superintendent. In 1890, along with his brother-in-law, he founded a cement plant in Phillipsburg, New Jersey, called the Whittaker Cement Company. He stayed with the business until its sale in 1897 to the Alpha Portland Cement Company, of which he was manager. That year he established the Lehigh Portland Cement Company, where he stayed for 10 years before joining the Allentown Cement Company as general manager. He was a member of the American Society of Civil Engineers, American Institute of Mining Engineers, American Society for Testing Materials, and the National Geographic Society. Through his work within the civil engineering industry, he invented a cement stone pulveriser, for which he held the patent. Illness and death Matcham retired in 1910 owing to failing health. He died of a chest infection on 22 September 1911 at the age of 49. Personal life Matcham married Margaret Ormrod in 1888 and they had three children: a son, Charles, and daughters Dorothy and Catherine. He was the younger brother of the theatre architect Frank Matcham, and of Sydney, who was noted for founding the first travel agency in Allentown, called the Matcham Travel Bureau. Notes and references Notes References Sources 1862 births 1911 deaths Civil engineers People from Torquay Businesspeople from Devon Engineers from Devon English emigrants to the United States
Charles Matcham
Engineering
555
53,293
https://en.wikipedia.org/wiki/Tangloids
Tangloids is a mathematical game for two players created by Piet Hein to model the calculus of spinors. A description of the game appeared in the book "Martin Gardner's New Mathematical Diversions from Scientific American" by Martin Gardner from 1996 in a section on the mathematics of braiding. Two flat blocks of wood each pierced with three small holes are joined with three parallel strings. Each player holds one of the blocks of wood. The first player holds one block of wood still, while the other player rotates the other block of wood for two full revolutions. The plane of rotation is perpendicular to the strings when not tangled. The strings now overlap each other. Then the first player tries to untangle the strings without rotating either piece of wood. Only translations (moving the pieces without rotating) are allowed. Afterwards, the players reverse roles; whoever can untangle the strings fastest is the winner. Try it with only one revolution. The strings are of course overlapping again but they can not be untangled without rotating one of the two wooden blocks. The Balinese cup trick, appearing in the Balinese candle dance, is a different illustration of the same mathematical idea. The anti-twister mechanism is a device intended to avoid such orientation entanglements. A mathematical interpretation of these ideas can be found in the article on quaternions and spatial rotation. Mathematical articulation This game serves to clarify the notion that rotations in space have properties that cannot be intuitively explained by considering only the rotation of a single rigid object in space. The rotation of vectors does not encompass all of the properties of the abstract model of rotations given by the rotation group. The property being illustrated in this game is formally referred to in mathematics as the "double covering of SO(3) by SU(2)". This abstract concept can be roughly sketched as follows. Rotations in three dimensions can be expressed as 3x3 matrices, a block of numbers, one each for x,y,z. If one considers arbitrarily tiny rotations, one is led to the conclusion that rotations form a space, in that if each rotation is thought of as a point, then there are always other nearby points, other nearby rotations that differ by only a small amount. In small neighborhoods, this collection of nearby points resembles Euclidean space. In fact, it resembles three-dimensional Euclidean space, as there are three different possible directions for infinitesimal rotations: x, y and z. This properly describes the structure of the rotation group in small neighborhoods. For sequences of large rotations, however, this model breaks down; for example, turning right and then lying down is not the same as lying down first and then turning right. Although the rotation group has the structure of 3D space on the small scale, that is not its structure on the large scale. Systems that behave like Euclidean space on the small scale, but possibly have a more complicated global structure are called manifolds. Famous examples of manifolds include the spheres: globally, they are round, but locally, they feel and look flat, ergo "flat Earth". Careful examination of the rotation group reveals that it has the structure of a 3-sphere with opposite points identified. That means that for every rotation, there are in fact two different, distinct, polar opposite points on the 3-sphere that describe that rotation. This is what the tangloids illustrate. The illustration is actually quite clever. 
Imagine performing the 360 degree rotation one degree at a time, as a set of tiny steps. These steps take you on a path, on a journey on this abstract manifold, this abstract space of rotations. At the completion of this 360 degree journey, one has not arrived back home, but rather instead at the polar opposite point. And one is stuck there -- one can't actually get back to where one started until one makes another, a second journey of 360 degrees. The structure of this abstract space, of a 3-sphere with polar opposites identified, is quite weird. Technically, it is a projective space. One can try to imagine taking a balloon, letting all the air out, then gluing together polar opposite points. If attempted in real life, one soon discovers it can't be done globally. Locally, for any small patch, one can accomplish the flip-and-glue steps; one just can't do this globally. (Keep in mind that the balloon is S², the 2-sphere; it's not the 3-sphere of rotations.) To further simplify, one can start with S¹, the circle, and attempt to glue together polar opposites; one still gets a failed mess. The best one can do is to draw straight lines through the origin, and then declare, by fiat, that the polar opposites are the same point. This is the basic construction of any projective space. The so-called "double covering" refers to the idea that this gluing-together of polar opposites can be undone. This can be explained relatively simply, although it does require the introduction of some mathematical notation. The first step is to blurt out "Lie algebra". This is a vector space endowed with the property that two vectors can be multiplied. This arises because a tiny rotation about the x-axis followed by a tiny rotation about the y-axis is not the same as reversing the order of these two; they are different, and the difference is a tiny rotation along the z-axis. Formally, this inequivalence can be written as xy − yx = z, keeping in mind that x, y and z are not numbers but infinitesimal rotations. They don't commute. One may then ask, "what else behaves like this?" Well, obviously the 3D rotation matrices do; after all, the whole point is that they do correctly, perfectly mathematically describe rotations in 3D space. As it happens, though, there are also 2x2, 4x4, 5x5, ... matrices that also have this property. One may reasonably ask "OK, so what is the shape of their manifolds?". For the 2x2 case, the Lie algebra is called su(2) and the manifold is called SU(2), and quite curiously, the manifold of SU(2) is the 3-sphere (but without the projective identification of polar opposites). This now allows one to play a bit of a trick. Take a vector v in ordinary 3D space (our physical space) and apply a rotation matrix R to it. One obtains a rotated vector Rv. This is the result of applying an ordinary, "common sense" rotation to v. But one also has the Pauli matrices σ_x, σ_y, σ_z; these are 2x2 complex matrices that have the Lie algebra property that σ_x σ_y − σ_y σ_x = 2iσ_z (and cyclically for the other pairs), and so these model the behavior of infinitesimal rotations. Consider then the product σ·v = σ_x v_x + σ_y v_y + σ_z v_z. The "double covering" is the property that there exists not one, but two 2x2 matrices S such that σ·(Rv) = S(σ·v)S⁻¹. Here, S⁻¹ denotes the inverse of S; that is, SS⁻¹ = S⁻¹S = I. The matrix S is an element of SU(2), and so for every matrix R in SO(3), there are two corresponding matrices S: both +S and −S will do the trick. These two are the polar opposites, and the projection just boils down to the trivial observation that (+S)(σ·v)(+S)⁻¹ = (−S)(σ·v)(−S)⁻¹. The tangloids game is meant to illustrate that a 360 degree rotation takes one on a path from +S to −S.
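To make the sign flip concrete, here is a minimal numerical sketch (not part of the original article) written in Python with NumPy; the function names, the choice of the z-axis, and the test vector are illustrative assumptions. It builds the SU(2) element S(θ) = exp(−iθσ_z/2) and the SO(3) matrix R(θ) for a rotation about the z-axis, checks the conjugation identity σ·(Rv) = S(σ·v)S⁻¹, and shows that after θ = 360° the matrix S has become −I (the "polar opposite" point), returning to +I only after 720°.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def so3_z(theta):
    """Ordinary 3x3 rotation matrix R(theta) about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def su2_z(theta):
    """SU(2) element S(theta) = exp(-i*theta*sigma_z/2), written out explicitly."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def pauli_dot(v):
    """The product sigma . v = sx*vx + sy*vy + sz*vz."""
    return sx * v[0] + sy * v[1] + sz * v[2]

theta = 0.7                      # an arbitrary rotation angle
v = np.array([0.3, -1.2, 0.5])   # an arbitrary 3D vector
R, S = so3_z(theta), su2_z(theta)

# Check the double-cover relation: sigma.(Rv) == S (sigma.v) S^{-1},
# and note that -S satisfies the same relation.
lhs = pauli_dot(R @ v)
print(np.allclose(lhs, S @ pauli_dot(v) @ np.linalg.inv(S)))            # True
print(np.allclose(lhs, (-S) @ pauli_dot(v) @ np.linalg.inv(-S)))        # True

# A 360-degree rotation returns R to the identity, but sends S to -I;
# only a 720-degree rotation returns S to +I.
print(np.allclose(so3_z(2 * np.pi), np.eye(3)))   # True
print(np.allclose(su2_z(2 * np.pi), -np.eye(2)))  # True
print(np.allclose(su2_z(4 * np.pi), np.eye(2)))   # True
```

Following S(θ) continuously from 0 to 2π in this way is the algebraic counterpart of the two-revolution twist in the game.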
This is quite precise: one can consider a sequence of small rotations and the corresponding movement of S; the result does change sign. In terms of rotation angles θ, the matrix R will have a cos θ in it, but the matching S will have a cos(θ/2) in it. Further elucidation requires actually writing out these formulas. The sketch can be completed with some general remarks. First, Lie algebras are generic, and for each one, there are one or more corresponding Lie groups. In physics, 3D rotations of normal 3D objects are obviously described by the rotation group, which is the Lie group SO(3) of 3x3 rotation matrices. However, the spinors, the spin-1/2 particles, rotate according to the matrices in SU(2). The 4x4 matrices describe the rotation of spin-3/2 particles, and the 5x5 matrices describe the rotations of spin-2 particles, and so on. The representations of Lie groups and Lie algebras are described by representation theory. The spin-1/2 representation is the fundamental representation, and the spin-1 is the adjoint representation. The notion of double-covering used here is a generic phenomenon, described by covering maps. Covering maps are in turn a special case of fiber bundles. The classification of covering maps is done via homotopy theory; in this case, the formal expression of double-covering is to say that the fundamental group is π₁(SO(3)) = Z₂, where the covering group Z₂ is just encoding the two equivalent rotations +S and −S above. In this sense, the rotation group provides the doorway, the key to the kingdom of vast tracts of higher mathematics. See also Orientation entanglement Plate trick References Mathematical games Spinors
Tangloids
Mathematics
1,851
973,239
https://en.wikipedia.org/wiki/Ethion
Ethion (C9H22O4P2S4) is an organophosphate insecticide. It is known to affect the neural enzyme acetylcholinesterase and disrupt its function. History Ethion was first registered in the US as an insecticide in the 1950s. Annual usage of ethion since then has varied depending on overall crop yields and weather conditions. For example, 1999 was a very dry year; since the drought reduced yields, the use of ethion was less economically rewarding. Since 1998, risk assessment studies have been conducted by (among others) the EPA (United States Environmental Protection Agency). Risk assessments for ethion were presented at a July 14, 1999 briefing with stakeholders in Florida, which was followed by an opportunity for public comment on risk management for this pesticide. Regulatory review Ethion was one of many substances approved for use based on data from Industrial Bio-Test Laboratories, which was later discovered to have engaged in extensive scientific misconduct and fraud, prompting the Food and Agriculture Organization and World Health Organization to recommend ethion's reanalysis in 1982. Synthesis Ethion is produced under controlled pH conditions by reacting dibromomethane with diethyl dithiophosphoric acid in ethanol. Other methods of synthesis include the reaction of methylene bromide and sodium diethyl phosphorodithioate or the reaction of diethyl dithiophosphoric acid and formaldehyde. Reactivity and mechanism Ethion is a small lipophilic molecule. This promotes rapid passive absorption across cell membranes. Thus absorption through skin, lungs, and the gut into the blood occurs via passive diffusion. Ethion is metabolized in the liver via desulfurization, producing the metabolite ethion monoxon. This transformation leads to liver damage. Ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE), which normally facilitates nerve impulse transmission; secondary damage thus occurs in the brain. Because the chemical structure of ethion monoxon is similar to that of other organophosphates, its mechanism of poisoning is thought to be the same. See the figure, "Inhibition of cholinesterase by ethion monoxon." The figure depicts enzyme inhibition as a two-step process. Here, a hydroxyl group (OH) from a serine residue in the active site of ChE is phosphorylated by an organophosphate, causing enzyme inhibition and preventing the serine hydroxyl group from participating in the hydrolysis of the neurotransmitter acetylcholine (ACh). The phosphorylated form of the enzyme is highly stable, and depending on the R and R′ groups attached to phosphorus, this inhibition can be either reversible or irreversible. Metabolism Goats exposed to ethion showed clear distinctions in excretion, absorption half-life and bioavailability. These differences depend on the method of administration. Intravenous injection resulted in a half-life time of 2 hours, while oral administration resulted in a half-life time of 10 hours. Dermal administration led to a half-life time of 85 hours. These differences in half-life times can be correlated with differences in bio-availability. The bio-availability of ethion via oral administration was less than 5%, whereas the bio-availability via dermal administration of ethion was 20%. In a study conducted among rats, it was found that ethion is readily metabolized after oral administration. Rat urine samples contained four to six polar water-soluble ethion metabolites. A study among chickens revealed more about ethion distribution in the body.
In a representative study, liver, muscle, and fat tissues were examined after 10 days of ethion exposure. In all three cases, ethion or ethion derivatives were present, indicating that it is widely distributed in the body. Chicken eggs were also investigated, and it was found that the egg white reaches a steady ethion derivative concentration after four days, while the concentration in yolk was still rising after ten days. In the investigated chickens, about six polar water-soluble metabolites were also found to be present. In a study performed on goats, heart and kidney tissues were investigated after a period of ethion exposure, and in these tissues, ethion derivatives were found. This study indicates that the highest levels were found in the liver and kidneys, and the lowest levels in fat. Derivatives were also detected in the goats' milk. Biotransformation Biotransformation of ethion occurs in the liver, where it undergoes desulfurization to form the active metabolite ethion monoxon. The enzyme cytochrome P-450 catalyzes this step. Because it contains an active oxygen, ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE). ChE can dephosphorylate organophosphates, so in the next step of the biotransformation, ethion monoxon is dephosphorylated and ChE is phosphorylated. The subsequent step in the biotransformation process is not yet completely known, yet it is understood that this happens via esterases in the blood and liver. Besides the dephosphorylation of ethion monoxon by ChE, it is likely that the ethion monoxon is partially oxidized toward ethion dioxon. After solvent partitioning of urine from rats that had been fed ethion, it became clear that the metabolites found in the urine were 99% dissolved in the aqueous phase. This means that only non-significant levels (<1%) were present in the organic phase and that the metabolites are very hydrophilic. In a parallel study in goats, radioactively labeled ethion with incorporated 14C was used. After identification of the 14C residues in organs of the goats, such as the liver, heart, kidneys, muscles and fat tissue, it appeared that 0.03 ppm or less of the 14C compounds present was non-metabolized ethion. The metabolites ethion monoxon and ethion dioxon were also not detected in any samples above the detection threshold (0.005–0.01 ppm). In total, 64% to 75% of the metabolites from the tissues were soluble in methanol. After the addition of a protease, another 17% to 32% were solubilized. In the aqueous phase, at least four different radioactive metabolites were found. However, characterization of these compounds was repeatedly unsuccessful due to their high volatility. One compound was trapped in the kidney and was identified as formaldehyde. This is an indication that the 14C of ethion is used in the formation of natural products. Toxicity Summary of toxicity Exposure to ethion can happen by ingestion, absorption via the skin, and inhalation. Exposure can lead to vomiting, diarrhea, headache, sweating, and confusion. Severe poisoning might lead to fatigue, involuntary muscle contractions, loss of reflexes and slurred speech. In even more severe cases, death will be the result of respiratory failure or cardiac arrest. For exposure through skin contact, the lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for female rats and 3 hours for male rats, and the maximum time of death was 3 days for females and 7 days for males.
The LD50 was 245 mg/kg for male rats and 62 mg/kg for female rats. When being exposed through ingestion, 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract of rats, nor did 13-week testing on dogs (8.25 mg/kg/day). The LD50 value for pure ethion in rats is 208 mg/kg, and for technical-grade ethion it is 21 to 191 mg/kg. Other reported oral LD50 values are 40 mg/kg in mice and guinea pigs. Furthermore, inhalation of ethion is very toxic: in one study of technical-grade ethion, an LC50 of 2.31 mg/m^3 was found in male rats and of 0.45 mg/m^3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. Acute toxicity Ethion causes toxic effects following absorption via the skin, ingestion, and inhalation, and may cause burns when skin is exposed to it. According to Extoxnet, any form of exposure could result in the following symptoms: pallor, nausea, vomiting, diarrhea, abdominal cramps, headache, dizziness, eye pain, blurred vision, constriction or dilation of the eye pupils, tears, salivation, sweating, and confusion, which may develop within 12 hours. Severe poisoning may result in distorted coordination, loss of reflexes, slurred speech, fatigue and weakness, tremors of the tongue and eyelids, and involuntary muscle contractions, and can also lead to paralysis and respiratory problems. In more severe cases, ethion poisoning can lead to involuntary discharge of urine or feces, irregular heartbeats, psychosis, loss of consciousness, and, in some cases, coma or death. Death is the result of respiratory failure or cardiac arrest. Hypothermia, AV heart blocks and arrhythmias are also found to be possible consequences of ethion poisoning. Ethion may also lead to the delayed symptoms seen with other organophosphates. Skin exposure In rabbits receiving 250 mg/kg of technical-grade ethion for 21 days, the dermal exposure led to increased cases of erythema and desquamation. It also led to inhibition of brain acetylcholinesterase at 1 mg/kg/day, and the NOAEL was determined to be 0.8 mg/kg/day. In guinea pigs, ethion also led to a slight erythema that cleared in 48 hours, and it was determined that the compound was not a skin sensitizer. In a study determining the LD50 of ethion, 80 male and 60 female adult rats were dermally exposed to ethion dissolved in xylene. The lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for females and 3 hours for males, while the maximum time of death was 3 days for females and 7 days for males. The LD50 was 245 mg/kg for males and 62 mg/kg for females. Skin contact with organophosphates, in general, may cause localized sweating and involuntary muscle contractions. Other studies found the LD50 via the dermal route to be 915 mg/kg in guinea pigs and 890 mg/kg in rabbits. Ethion can also cause slight redness and inflammation to the eye and skin that will clear within 48 hours. It is also known to cause blurred vision, pupil constriction and pain. Ingestion A six-month-old boy experienced shallow respiratory excursions and intercostal retractions after accidentally ingesting 15.7 mg/kg ethion. The symptoms started one hour after ingestion, and were treated. Five hours after ingestion, respiratory arrest occurred and mechanical ventilation was needed for three hours. Follow-up examinations after one week, one month and one year suggested that a full recovery was made.
The same boy also showed tachycardia, frothy saliva (1 hour after ingestion), watery bowel movements (90 minutes after ingestion), increased white blood cell counts in urine, inability to control his head and limbs, occasional twitching, pupils non-reactive to light, purposeless eye movements, palpable liver and spleen, and there were some symptoms of paralysis. Testing on rats with 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract, nor did 13-week testing on dogs (8.25 mg/kg/day). Oral LD50 values are 208 mg/kg for pure ethion in rats, and 21 to 191 mg/kg for technical-grade ethion. Other reported oral LD50 values (for the technical product) are 40 mg/kg in mice and guinea pigs. In a group of six male volunteers, no differences in blood pressure or pulse rate were noted, nor were any noted in mice or dogs. Diarrhea did occur in mice orally exposed to ethion; severe signs of neurotoxicity were also present. The effects were consistent with cholinergic overstimulation of the gastrointestinal tract. No hematological effects were reported in an experiment with six male volunteers, nor in rats or dogs. The volunteers did not show differences in muscle tone after intermediate-duration oral exposure, nor did the test animals at different exposures. It is, however, known that ethion can result in muscle tremors and fasciculations. The animal-testing studies on rats and dogs showed no effect on the kidneys and liver, but a different study showed an increased incidence of orange-colored urine. The animal-testing studies on rats and dogs also did not show dermal or ocular effects. Rabbits receiving 2.5 mg/kg/day of ethion showed a decrease in body weight, but no effects were seen at 0.6 mg/kg/day. A decrease in body weight, combined with reduced food consumption, was observed for rabbits receiving 9.6 mg/kg/day. Male and female dogs receiving 0.71 mg/kg/day did not show a change in body weight, but dogs receiving 6.9 and 8.25 mg/kg/day showed reduced food consumption and reduced body weight. In a study with human volunteers, a decrease of plasma cholinesterase was observed during 0.075 mg/kg/day (16% decrease), 0.1 mg/kg/day (23% decrease) and 0.15 mg/kg/day (31% decrease) treatment periods. This was partially recovered after 7 days, and fully recovered after 12 days. No effect on erythrocyte acetylcholinesterase was observed, nor signs of adverse neurological effects. Another study showed severe neurological effects after a single oral exposure in rats. For male rats, salivation, tremors, nose bleeding, urination, diarrhea, and convulsions occurred at 100 mg/kg, and for female rats, at 10 mg/kg. In a study with albino rats, it was observed that brain acetylcholinesterase was inhibited by 22%, erythrocyte acetylcholinesterase by 87%, and plasma cholinesterase by 100% in male rats after being fed 9 mg/kg/day of ethion for 93 days. After 14 days of recovery, plasma cholinesterase recovered completely, and erythrocyte acetylcholinesterase recovered 63%. There were no observed effects at 1 mg/kg/day. In a study involving various rats, researchers observed no effects on erythrocyte acetylcholinesterase at 0, 0.1, 0.2, and 2 mg/kg/day of ethion. In a 90-day study on dogs, in which the males received 6.9 mg/kg/day and the females 8.25 mg/kg/day, ataxia, emesis, miosis, and tremors were observed. Brain and erythrocyte acetylcholinesterase were inhibited (61–64% and 93–94%, respectively). At 0.71 mg/kg/day in male dogs, the reduction in brain acetylcholinesterase was 23%.
There were no observed effects at 0.06 and 0.01 mg/kg/day. Based on these findings, a minimal risk level of 0.002 mg/kg/day for oral exposure for acute and intermediate duration was established. Researchers also calculated a chronic-duration minimal risk level of 0.0004 mg/kg/day. In one study, in which rats received a maximum of 1.25 mg/kg/day, no effects on reproduction were observed. In a study on pregnant rats eating 2.5 mg/kg/day, it was observed that the fetuses had an increased incidence of delayed ossification of the pubes. Another study found that the fetuses of pregnant rabbits eating 9.6 mg/kg/day had an increased incidence of fused sternal centers. Inhalation Ethion is highly toxic, and can be lethal, via inhalation. One study, looking at technical-grade ethion, found an LC50 of 2.31 mg/m3 in male rats and of 0.45 mg/m3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. As stated earlier, ethion can also lead to pupillary constriction, muscle cramps, excessive salivation, sweating, nausea, dizziness, labored breathing, convulsions, and unconsciousness. A sensation of tightness in the chest and rhinorrhea are also very common after inhalation. Carcinogenic effects There are no indications that ethion is carcinogenic in rats and mice. When rats and mice were fed ethion for two years, the animals did not develop cancer any faster than the control group of animals that were not given ethion. Ethion has not been classified for carcinogenicity by the United States Department of Health and Human Services (DHHS), the International Agency for Research on Cancer (IARC) or the EPA. Treatment When orally exposed, gastric lavage shortly after exposure can be used to reduce the peak absorption. It is also suspected that treatment with activated charcoal could be effective to reduce peak absorption. Safety guidelines also encourage inducing vomiting to reduce oral exposure, if the victim is still conscious. In case of skin exposure, it is advised to wash and rinse with plenty of water and soap to reduce exposure. In case of inhalation, fresh air is advised to reduce exposure. Treating the ethion exposure itself is done in the same way as exposure to other organophosphates. The main danger lies in respiratory problems: if symptoms are present, then artificial respiration with an endotracheal tube is used as a treatment. The effect of ethion on muscles or nerves is counteracted with atropine. Pralidoxime can be used to act against organophosphate poisoning; it must be given as fast as possible after the ethion poisoning, because its efficacy is reduced by the chemical change of the ethion–enzyme complex in the body that occurs over time. Effects on animals Ethion has an influence on the environment as it is persistent and thus might accumulate through plants and animals. Ethion is very toxic to songbirds. The LD50 in red-winged blackbirds is 45 mg/kg. However, it is moderately toxic to birds like the bobwhite quail (LD50 is 128.8 mg/kg) and starlings (LD50 is greater than 304 mg/kg). These birds would be classified as medium-sized birds. For larger upland game birds (like the ring-necked pheasant) and waterfowl (like the mallard duck), ethion varies from barely toxic to nontoxic. Ethion, however, is very toxic to aquatic organisms like freshwater and marine fish, and is extremely toxic to freshwater invertebrates, with an average LD50 of 0.056 μg/L to 0.0077 mg/L. The LD50 values for marine and estuarine invertebrates are 0.04 to 0.05 mg/L.
In a chronic toxicity study, rats were fed 0, 0.1, 0.2 or 2 mg/kg/day ethion for 18 months, and no severe toxic effects were observed. The only significant change was a decrease of cholinesterase levels in the group with the highest dose. Therefore, the NOEL of this study was 0.2 mg/kg. The oral LD50 for pure ethion in rats is 208 mg/kg. The dermal LD50 in rats is 62 mg/kg, 890 mg/kg in rabbits, and 915 mg/kg in guinea pigs. For rats, the 4-hour inhalation LC50 is 0.864 mg/L ethion. Detection methods Insecticides such as ethion can be detected by using a variety of chemical analysis methods. Some analysis methods, however, are not specific for this substance. In a recently introduced method, the interaction of silver nanoparticles (AgNPs) with ethion results in the quenching of the resonance Rayleigh scattering (RRS) intensity. The change in RRS was shown to be linearly correlated to the concentration of ethion (range: 10.0–900 mg/L). Another advantage of this method over general detection methods is that ethion can be measured in just 3 minutes with no requirement for pretreatment of the sample. From interference tests, it was shown that this method achieves good selectivity for ethion. The limit of detection (LOD) was 3.7 mg/L and limit of quantification (LOQ) was 11.0 mg/L. Relative standard deviations (RSDs) for samples containing 15.0 and 60.0 mg/L of ethion in water were 4.1% and 0.2%, respectively. Microbial degradation Ethion remains a major environmental contaminant in Australia, among other locations, because of its former usage in agriculture. However, there are some microbes that can convert ethion into less toxic compounds. Some Pseudomonas and Azospirillum bacteria were shown to degrade ethion when cultivated in minimal salts medium, where ethion was the only source of carbon. Analysis of the compounds present in the medium after bacterial digestion of ethion demonstrated that no abiotic hydrolytic degradation products of ethion (e.g., ethion dioxon or ethion monoxon) were present. The biodigestion of ethion is likely used to support rapid growth of these bacteria. References External links Acetylcholinesterase inhibitors Organophosphate insecticides Phosphorodithioates Ethyl esters
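As an illustration of how the calibration figures quoted in the detection-methods passage above (linear range, LOD, LOQ, RSD) relate to one another, here is a small, hypothetical Python sketch. It does not reproduce the published AgNP/RRS study; the data points, the blank standard deviation, and the 3.3σ/10σ criteria are assumptions chosen only to show the standard calibration-curve arithmetic.

```python
import numpy as np

# Hypothetical calibration data: ethion concentration (mg/L) vs. measured
# quenching of scattering intensity (arbitrary units). NOT real measurements.
conc = np.array([10.0, 50.0, 100.0, 300.0, 600.0, 900.0])
signal = np.array([0.9, 4.8, 10.2, 30.5, 59.8, 90.3])

# Least-squares fit of the linear calibration: signal = slope*conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# Assumed standard deviation of replicate blank measurements
blank_sd = 0.12

# Common definitions: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope
lod = 3.3 * blank_sd / slope
loq = 10.0 * blank_sd / slope
print(f"slope={slope:.4f}, LOD={lod:.2f} mg/L, LOQ={loq:.2f} mg/L")

# Relative standard deviation (RSD, in %) of replicate readings at one level
replicates = np.array([14.6, 15.2, 15.1, 14.9, 15.3])  # assumed replicates
rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()
print(f"RSD near 15 mg/L: {rsd:.1f}%")
```

Reading an unknown sample then amounts to (signal − intercept) / slope, provided the result falls inside the stated linear range.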
Ethion
Chemistry
4,764
16,472
https://en.wikipedia.org/wiki/Jet%20stream
Jet streams are fast-flowing, narrow air currents in the Earth's atmosphere. The main jet streams are located near the altitude of the tropopause and are westerly winds, flowing west to east around the globe. The northern hemisphere and the southern hemisphere each have a polar jet around their respective polar vortex, located roughly 9–12 km above sea level and typically travelling at around 100–200 km/h, although often considerably faster. Closer to the equator and somewhat higher and somewhat weaker is a subtropical jet. The northern polar jet flows over the middle to northern latitudes of North America, Europe, and Asia and their intervening oceans, while the southern hemisphere polar jet mostly circles Antarctica. Jet streams may start, stop, split into two or more parts, combine into one stream, or flow in various directions including opposite to the direction of the remainder of the jet. The El Niño–Southern Oscillation affects the location of the jet streams, which in turn alters the weather over the tropical Pacific Ocean and the climate of much of the tropics and subtropics, and can affect weather in higher-latitude regions. The term "jet stream" is also applied to some other winds at varying levels in the atmosphere, some global (such as the higher-level polar-night jet), some local (such as the African easterly jet). Meteorologists use the location of some of the jet streams as an aid in weather forecasting. Airlines use them to reduce some flight times and fuel consumption. Scientists have considered whether the jet streams might be harnessed for power generation. In World War II, the Japanese used the jet stream to carry Fu-Go balloon bombs across the Pacific Ocean to launch small attacks on North America. Jet streams have been detected in the atmospheres of Venus, Jupiter, Saturn, Uranus, and Neptune. Discovery The first indications of this phenomenon came from American professor Elias Loomis (1811–1889), when he proposed the hypothesis of a powerful air current in the upper air blowing west to east across the United States as an explanation for the behaviour of major storms. After the 1883 eruption of the Krakatoa volcano, weather watchers tracked and mapped the effects on the sky over several years. They labelled the phenomenon the "equatorial smoke stream". In the 1920s Japanese meteorologist Wasaburo Oishi detected the jet stream from a site near Mount Fuji. He tracked pilot balloons ("pibals"), used to measure wind speed and direction, as they rose in the air. Oishi's work largely went unnoticed outside Japan because it was published in Esperanto, though chronologically he has to be credited for the scientific discovery of jet streams. American pilot Wiley Post (1898–1935), the first man to fly around the world solo in 1933, is often given some credit for discovery of jet streams. Post invented a pressurized suit that let him fly at very high altitudes. In the year before his death, Post made several attempts at a high-altitude transcontinental flight, and noticed that at times his ground speed greatly exceeded his air speed. German meteorologist Heinrich Seilkopf is credited with coining a special term, Strahlströmung (literally "jet current"), for the phenomenon in 1939. Many sources credit real understanding of the nature of jet streams to regular and repeated flight-path traversals during World War II. Flyers consistently noticed strong westerly tailwinds in flights, for example, from the US to the UK.
Similarly, in 1944, a team of American meteorologists in Guam, including Reid Bryson, had enough observations to forecast very high west winds that would slow bombers raiding Japan. Description The polar and subtropical jet streams are the product of two factors: the atmospheric heating by solar radiation that produces the large-scale polar, Ferrel, and Hadley circulation cells, and the action of the Coriolis force acting on those moving masses. The Coriolis force is caused by the planet's rotation on its axis. The polar jet stream forms near the interface of the polar and Ferrel circulation cells; the subtropical jet forms near the boundary of the Ferrel and Hadley circulation cells. Polar jet streams are typically located near the 250 hPa (about 1/4 atmosphere) pressure level, roughly 10 km above sea level, while the weaker subtropical jet streams are somewhat higher. The polar jets, at lower altitude, and often intruding into mid-latitudes, strongly affect weather and aviation. The polar jet stream is most commonly found between latitudes 30° and 60° (closer to 60°), while the subtropical jet streams are located close to latitude 30°. These two jets merge at some locations and times, while at other times they are well separated. The northern polar jet stream is said to "follow the sun" as it slowly migrates northward as that hemisphere warms, and southward again as it cools. The width of a jet stream is typically a few hundred kilometres and its vertical thickness often only a few kilometres. Jet streams are typically continuous over long distances, but discontinuities are also common. The path of the jet typically has a meandering shape, and these meanders themselves propagate eastward, at lower speeds than that of the actual wind within the flow. Further, the meanders can split or form eddies. Each large meander, or wave, within the jet stream is known as a Rossby wave (planetary wave). Rossby waves are caused by changes in the Coriolis effect with latitude. Shortwave troughs are smaller-scale waves, with horizontal scales of a few thousand kilometres, superimposed on the Rossby waves; they move along through the flow pattern around the large-scale, or longwave, "ridges" and "troughs" within Rossby waves. The wind speeds are greatest where temperature differences between air masses are greatest, and often exceed 100 km/h; speeds of roughly 400 km/h have been measured. The jet stream moves from west to east, bringing changes of weather. The path of jet streams affects cyclonic storm systems at lower levels in the atmosphere, and so knowledge of their course has become an important part of weather forecasting. For example, in 2007 and 2012, Britain experienced severe flooding as a result of the polar jet staying south for the summer. Cause In general, winds are strongest immediately under the tropopause (except locally, during tornadoes, tropical cyclones or other anomalous situations). If two air masses of different temperatures or densities meet, the resulting pressure difference caused by the density difference (which ultimately causes wind) is highest within the transition zone. The wind does not flow directly from the hot to the cold area, but is deflected by the Coriolis effect and flows along the boundary of the two air masses. All these facts are consequences of the thermal wind relation. The balance of forces acting on an atmospheric air parcel in the vertical direction is primarily between the gravitational force acting on the mass of the parcel and the buoyancy force, or the difference in pressure between the top and bottom surfaces of the parcel.
Any imbalance between these forces results in the acceleration of the parcel in the imbalance direction: upward if the buoyant force exceeds the weight, and downward if the weight exceeds the buoyancy force. The balance in the vertical direction is referred to as hydrostatic. Beyond the tropics, the dominant forces act in the horizontal direction, and the primary struggle is between the Coriolis force and the pressure gradient force. Balance between these two forces is referred to as geostrophic. Given both hydrostatic and geostrophic balance, one can derive the thermal wind relation: the vertical gradient of the horizontal wind is proportional to the horizontal temperature gradient. If two air masses in the northern hemisphere, one cold and dense to the north and the other hot and less dense to the south, are separated by a vertical boundary and that boundary should be removed, the difference in densities will result in the cold air mass slipping under the hotter and less dense air mass. The Coriolis effect will then cause poleward-moving mass to deviate to the east, while equatorward-moving mass will deviate toward the west. The general trend in the atmosphere is for temperatures to decrease in the poleward direction. As a result, winds develop an eastward component and that component grows with altitude. Therefore, the strong eastward-moving jet streams are in part a simple consequence of the fact that the Equator is warmer than the north and south poles. Polar jet stream The thermal wind relation does not explain why the winds are organized into tight jets, rather than distributed more broadly over the hemisphere. One factor that contributes to the creation of a concentrated polar jet is the undercutting of sub-tropical air masses by the more dense polar air masses at the polar front. This causes a sharp north–south pressure (south–north potential vorticity) gradient in the horizontal plane, an effect which is most significant during double Rossby wave breaking events. At high altitudes, lack of friction allows air to respond freely to the steep pressure gradient with low pressure at high altitude over the pole. This results in the formation of planetary wind circulations that experience a strong Coriolis deflection and thus can be considered 'quasi-geostrophic'. The polar front jet stream is closely linked to the frontogenesis process in midlatitudes, as the acceleration/deceleration of the air flow induces areas of low/high pressure respectively, which link to the formation of cyclones and anticyclones along the polar front in a relatively narrow region. Subtropical jet A second factor which contributes to a concentrated jet is more applicable to the subtropical jet, which forms at the poleward limit of the tropical Hadley cell, and to first order this circulation is symmetric with respect to longitude. Tropical air rises to the tropopause, and moves poleward before sinking; this is the Hadley cell circulation. As it does so it tends to conserve angular momentum, since friction with the ground is slight. Air masses that begin moving poleward are deflected eastward by the Coriolis force (true for either hemisphere), which for poleward-moving air implies an increased westerly component of the winds. Effects Hurricane protection The subtropical jet stream rounding the base of the mid-oceanic upper trough is thought to be one of the reasons most of the Hawaiian Islands have been resistant to the long list of Hawaii hurricanes that have approached.
For example, when Hurricane Flossie (2007) approached and dissipated just before reaching landfall, the U.S. National Oceanic and Atmospheric Administration (NOAA) cited vertical wind shear as a contributing factor. Uses The northern polar jet stream is the most important one for aviation and weather forecasting, as it is much stronger and at a much lower altitude than the subtropical jet streams and also covers many countries in the northern hemisphere, while the southern polar jet stream mostly circles Antarctica and sometimes the southern tip of South America. Aviation The location of the jet stream is important for aviation. Aircraft flight time can be dramatically affected by either flying with the flow or against it. Often, airlines work to fly with the jet stream to obtain significant fuel cost and time savings. Commercial use of the jet stream began on 18 November 1952, when Pan Am flew from Tokyo to Honolulu at an altitude of about 7,600 metres (25,000 ft). It cut the trip time by over one-third, from 18 to 11.5 hours. Within North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jet stream. Across the Atlantic Ocean the North Atlantic Tracks service allows airlines and air traffic control to accommodate the jet stream for the benefit of airlines and other users. Associated with jet streams is a phenomenon known as clear-air turbulence (CAT), caused by vertical and horizontal wind shear associated with jet streams. The CAT is strongest on the cold air side of the jet, next to and just under the axis of the jet. Clear-air turbulence can cause aircraft to plunge and so present a passenger safety hazard that has caused fatal accidents, such as the death of one passenger on United Airlines Flight 826 in 1997. Unusually high wind speeds in the jet stream in late February 2024 pushed commercial jets to ground speeds in excess of 1,200 km/h (about 750 mph). Possible future power generation Scientists are investigating ways to harness the wind energy within the jet stream. According to one estimate of the potential wind energy in the jet stream, only one percent would be needed to meet the world's current energy needs. In the late 2000s, it was estimated that the required technology would take 10–20 years to develop. There are two major but divergent scientific articles about jet stream power. Archer & Caldeira claim that the Earth's jet streams could generate a total power of 1700 terawatts (TW) and that the climatic impact of harnessing this amount would be negligible. However, Miller, Gans, & Kleidon claim that the jet streams could generate a total power of only 7.5 TW and lack the potential to make a significant contribution to renewable energy. Unpowered aerial attack Near the end of World War II, from late 1944 until early 1945, the Japanese Fu-Go balloon bomb, a type of fire balloon, was designed as a cheap weapon intended to make use of the jet stream over the Pacific Ocean to reach the west coast of Canada and the United States. Relatively ineffective as weapons, they were used in one of the few attacks on North America during World War II, causing six deaths and a small amount of damage. American scientists studying the balloons thought the Japanese might be preparing a biological attack.
Changes due to climate cycles Effects of ENSO El Niño–Southern Oscillation (ENSO) influences the average location of upper-level jet streams, and leads to cyclical variations in precipitation and temperature across North America, as well as affecting tropical cyclone development across the eastern Pacific and Atlantic basins. Combined with the Pacific Decadal Oscillation, ENSO can also impact cold season rainfall in Europe. Changes in ENSO also change the location of the jet stream over South America, which partially affects precipitation distribution over the continent. El Niño During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. Snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well below normal across the Upper Midwest and Great Lakes states. The northern tier of the lower 48 states exhibits above-normal temperatures during the fall and winter, while the Gulf coast experiences below-normal temperatures during the winter season. The subtropical jet stream across the deep tropics of the northern hemisphere is enhanced due to increased convection in the equatorial Pacific, which decreases tropical cyclogenesis within the Atlantic tropics below what is normal, and increases tropical cyclone activity across the eastern Pacific. In the southern hemisphere, the subtropical jet stream is displaced equatorward, or north, of its normal position, which diverts frontal systems and thunderstorm complexes from reaching central portions of the continent. La Niña Across North America during La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track and jet stream. The storm track shifts far enough northward to bring wetter than normal conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. Snowfall is above normal across the Pacific Northwest and western Great Lakes. Across the North Atlantic, the jet stream is stronger than normal, which directs stronger systems with increased precipitation towards Europe. Dust Bowl Evidence suggests the jet stream was at least partly responsible for the widespread drought conditions during the 1930s Dust Bowl in the Midwest United States. Normally, the jet stream flows east over the Gulf of Mexico and turns northward pulling up moisture and dumping rain onto the Great Plains. During the Dust Bowl, the jet stream weakened and changed course traveling farther south than normal. This starved the Great Plains and other areas of the Midwest of rainfall, causing extraordinary drought conditions. Longer-term climatic changes Since the early 2000s, climate models have consistently identified that global warming will gradually push jet streams poleward. In 2008, this was confirmed by observational evidence, which showed that from 1979 to 2001, the northern jet stream moved northward at an average rate of about 2 km per year, with a similar trend in the southern hemisphere jet stream. Climate scientists have hypothesized that the jet stream will also gradually weaken as a result of global warming.
Trends such as Arctic sea ice decline, reduced snow cover, evapotranspiration patterns, and other weather anomalies have caused the Arctic to heat up faster than other parts of the globe, in what is known as the Arctic amplification. In 2021–2022, it was found that since 1979, the warming within the Arctic Circle has been nearly four times faster than the global average, and some hotspots in the Barents Sea area warmed up to seven times faster than the global average. While the Arctic remains one of the coldest places on Earth today, the temperature gradient between it and the warmer parts of the globe will continue to diminish with every decade of global warming as the result of this amplification. If this gradient has a strong influence on the jet stream, then the jet stream will eventually become weaker and more variable in its course, which would allow more cold air from the polar vortex to leak into mid-latitudes and slow the progression of Rossby waves, leading to more persistent and more extreme weather. The hypothesis above is closely associated with Jennifer Francis, who had first proposed it in a 2012 paper co-authored by Stephen J. Vavrus. While some paleoclimate reconstructions had suggested, as far back as 1997, that the polar vortex becomes more variable and causes more unstable weather during warming periods, this was contradicted by climate modelling, with PMIP2 simulations finding in 2010 that the Arctic Oscillation (AO) was much weaker and more negative during the Last Glacial Maximum, and suggesting that warmer periods have stronger positive phase AO, and thus less frequent leaks of the polar vortex air. However, a 2012 review in the Journal of the Atmospheric Sciences noted that "there [has been] a significant change in the vortex mean state over the twenty-first century, resulting in a weaker, more disturbed vortex", which contradicted the modelling results but fit the Francis-Vavrus hypothesis. Additionally, a 2013 study noted that the then-current CMIP5 models tended to strongly underestimate winter blocking trends, and other 2012 research had suggested a connection between declining Arctic sea ice and heavy snowfall during midlatitude winters. In 2013, further research from Francis connected reductions in the Arctic sea ice to extreme summer weather in the northern mid-latitudes, while other research from that year identified potential linkages between Arctic sea ice trends and more extreme rainfall in the European summer. At the time, it was also suggested that this connection between Arctic amplification and jet stream patterns was involved in the formation of Hurricane Sandy and played a role in the early 2014 North American cold wave. In 2015, Francis' next study concluded that highly amplified jet-stream patterns had been occurring more frequently over the previous two decades. Hence, continued heat-trapping emissions favour increased formation of extreme events caused by prolonged weather conditions. Studies published in 2017 and 2018 identified stalling patterns of Rossby waves in the northern hemisphere jet stream as the culprit behind other almost stationary extreme weather events, such as the 2018 European heatwave, the 2003 European heat wave, 2010 Russian heat wave or the 2010 Pakistan floods, and suggested that these patterns were all connected to Arctic amplification.
Further work from Francis and Vavrus that year suggested that amplified Arctic warming appears strongest in the lower atmosphere because the expansion of warmer air raises pressure levels aloft, which decreases poleward geopotential height gradients. As these gradients are what drive west-to-east winds through the thermal wind relationship, declining speeds are usually found south of the areas with geopotential increases. In 2017, Francis explained her findings to Scientific American: "A lot more water vapor is being transported northward by big swings in the jet stream. That's important because water vapor is a greenhouse gas just like carbon dioxide and methane. It traps heat in the atmosphere. That vapor also condenses as droplets we know as clouds, which themselves trap more heat. The vapor is a big part of the amplification story—a big reason the Arctic is warming faster than anywhere else." In a 2017 study conducted by climatologist Judah Cohen and several of his research associates, Cohen wrote that "[the] shift in polar vortex states can account for most of the recent winter cooling trends over Eurasian midlatitudes". A 2018 paper from Vavrus and others linked Arctic amplification to more persistent hot-dry extremes during the midlatitude summers, as well as to midlatitude winter continental cooling. Another 2017 paper estimated that when the Arctic experiences anomalous warming, primary production in North America goes down by between 1% and 4% on average, with some states suffering up to 20% losses. A 2021 study found that a stratospheric polar vortex disruption is linked with extreme cold winter weather across parts of Asia and North America, including the February 2021 North American cold wave. Another 2021 study identified a connection between the Arctic sea ice loss and the increased size of wildfires in the Western United States. However, because the specific observations are considered short-term observations, there is considerable uncertainty in the conclusions. Climatology observations require several decades to definitively distinguish various forms of natural variability from climate trends. This point was stressed by reviews in 2013 and in 2017. A study in 2014 concluded that Arctic amplification significantly decreased cold-season temperature variability over the northern hemisphere in recent decades. Cold Arctic air intrudes into the warmer lower latitudes more rapidly today during autumn and winter, a trend projected to continue in the future except during summer, thus calling into question whether winters will bring more cold extremes. A 2019 analysis of a data set collected from 35,182 weather stations worldwide, including 9,116 whose records extend beyond 50 years, found a sharp decrease in northern midlatitude cold waves since the 1980s. Moreover, a range of long-term observational data collected during the 2010s and published in 2020 suggests that the intensification of Arctic amplification since the early 2010s was not linked to significant changes on mid-latitude atmospheric patterns. State-of-the-art modelling research of PAMIP (Polar Amplification Model Intercomparison Project) improved upon the 2010 findings of PMIP2; it found that sea ice decline would weaken the jet stream and increase the probability of atmospheric blocking, but the connection was very minor, and typically insignificant next to interannual variability.
In 2022, a follow-up study found that while the PAMIP average had likely underestimated the weakening caused by sea ice decline by 1.2 to 3 times, even the corrected connection still amounts to only 10% of the jet stream's natural variability. Additionally, a 2021 study found that while jet streams had indeed slowly moved polewards since 1960 as was predicted by models, they did not weaken, in spite of a small increase in waviness. A 2022 re-analysis of the aircraft observational data collected over 2002–2020 suggested that the North Atlantic jet stream had actually strengthened. Finally, a 2021 study was able to reconstruct jet stream patterns over the past 1,250 years based on Greenland ice cores, and found that all of the recently observed changes remain within range of natural variability: the earliest likely time of divergence is in 2060, under the Representative Concentration Pathway 8.5 which implies continually accelerating greenhouse gas emissions. Other upper-level jets Polar night jet The polar-night jet stream forms mainly during the winter months, when the nights are much longer – hence the name referencing polar nights – in their respective hemispheres at around 60° latitude. The polar night jet moves at a greater height than it does during the summer. During these dark months the air high over the poles becomes much colder than the air over the Equator. This difference in temperature gives rise to extreme air pressure differences in the stratosphere which, when combined with the Coriolis effect, create the polar night jets, which race eastward high in the stratosphere. The polar vortex is circled by the polar night jet. The warmer air can only move along the edge of the polar vortex, but not enter it. Within the vortex, the cold polar air becomes increasingly cold, due to a lack of warmer air from lower latitudes as well as a lack of energy from the Sun entering during the polar night. Low-level jets There are wind maxima at lower levels of the atmosphere that are also referred to as jets. Barrier jet A barrier jet in the low levels forms just upstream of mountain chains, with the mountains forcing the jet to be oriented parallel to the mountains. The mountain barrier increases the strength of the low-level wind by 45 percent. In the North American Great Plains a southerly low-level jet helps fuel overnight thunderstorm activity during the warm season, normally in the form of mesoscale convective systems which form during the overnight hours. A similar phenomenon develops across Australia, which pulls moisture poleward from the Coral Sea towards cut-off lows which form mainly across southwestern portions of the continent. Coastal jet Coastal low-level jets are related to a sharp contrast between high temperatures over land and lower temperatures over the sea and play an important role in coastal weather, giving rise to strong coast-parallel winds. Most coastal jets are associated with the oceanic high-pressure systems and thermal low over land and are mainly located along cold eastern boundary marine currents, in upwelling regions offshore California, Peru–Chile, Benguela, Portugal, Canary and West Australia, and offshore Yemen–Oman. Valley exit jet A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of the valley and its adjacent plain. These winds frequently reach speeds of around 20 m/s at heights of a few tens to a couple of hundred metres above the ground.
Surface winds below the jet tend to be substantially weaker, even when they are strong enough to sway vegetation. Valley exit jets are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. Deep valleys that terminate abruptly at a plain are more impacted by these factors than are those that gradually become shallower as downvalley distance increases. Africa There are several important low-level jets in Africa. Numerous low-level jets form in the Sahara, and are important for raising dust off the desert surface. This includes a low-level jet in Chad, which is responsible for dust emission from the Bodélé Depression, the world's most important single source of dust emission. The Somali Jet, which forms off the East African coast, is an important component of the global Hadley circulation, and supplies water vapour to the Asian Monsoon. Easterly low-level jets forming in valleys within the East African Rift System help account for the low rainfall in East Africa and support high rainfall in the Congo Basin rainforest. The formation of the thermal low over northern Africa leads to a low-level westerly jet stream from June into October, which provides the moist inflow to the West African monsoon. While not technically a low-level jet, the mid-level African easterly jet (at 3,000–4,000 m above the surface) is also an important climate feature in Africa. It occurs during the northern hemisphere summer between 10°N and 20°N, above the Sahel region of West Africa. It is considered to play a crucial role in the West African monsoon, and helps form the tropical waves which move across the tropical Atlantic and eastern Pacific oceans during the warm season. Other planets For other planets, internal heat rather than solar heating is believed to drive their jet streams. Jupiter's atmosphere has multiple jet streams caused by the convection cells driven by internal heating. These form the familiar banded color structure. See also Atmospheric river Block (meteorology) Polar vortex Surface weather analysis Sting jet Tornado Tropical Easterly Jet Wind shear Weather References External links Current map of winds at the 250 hPa level Tim Woollings, Jet Stream - A Journey Through our Changing Climate, 2020, Oxford University Press, ISBN 978-0-19-882851-8 Atmospheric dynamics Wind Articles containing video clips
Jet stream
Chemistry
5,808
64,281,882
https://en.wikipedia.org/wiki/Pinacyanol
Pinacyanol is a cyanine dye. It is an organic cation, typically isolated as the chloride or iodide salts. The blue dye is prepared from 2-methylquinoline by quaternization with ethyl chloride or ethyl iodide. Condensation with formaldehyde results in coupling. Subsequent oxidation of the leuco intermediate gives the dye. Pinacyanol is a prototypical cyanine dye that was widely used as a sensitizer in electrophotography. Its biological properties have also been investigated widely. References Quinolines Chlorides Cyanine dyes Quaternary ammonium compounds
Pinacyanol
Chemistry
135
7,640,211
https://en.wikipedia.org/wiki/Map%20algebra
Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which allows one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition, subtraction etc. History Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s. In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While a graduate student at Yale University, Tomlin and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algebra" became a core part of GIS. Because Tomlin released the source code to MAP, its algorithms were implemented (with varying degrees of modification) as the analysis toolkit of almost every raster GIS software package starting in the 1980s, including GRASS, IDRISI (now TerrSet), and the GRID module of ARC/INFO (later incorporated into the Spatial Analyst module of ArcGIS). This widespread implementation further led to the development of many extensions to map algebra, following efforts to extend the raster data model, such as adding new functionality for analyzing spatiotemporal and three-dimensional grids. Map algebra operations Like other algebraic structures, map algebra consists of a set of objects (the domain) and a set of operations that manipulate those objects with closure (i.e., the result of an operation is itself in the domain, not something completely different). In this case, the domain is the set of all possible "maps," which are generally implemented as raster grids. A raster grid is a two-dimensional array of cells (Tomlin called them locations or points), each cell occupying a square area of geographic space and being coded with a value representing the measured property of a given geographic phenomenon (usually a field) at that location. 
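Because the operations are closed over this domain, any raster produced by one expression can feed the next. A minimal sketch of that idea, using Python with NumPy arrays as a stand-in for GIS raster layers (the layer names and cell values here are hypothetical, not from Tomlin's examples):

import numpy as np

# Two "map layers" over the same 3 x 4 grid of cells (hypothetical values).
elevation = np.array([[10.0, 12.0, 15.0, 18.0],
                      [11.0, 14.0, 17.0, 21.0],
                      [13.0, 16.0, 20.0, 25.0]])
rainfall = np.array([[2.0, 2.1, 2.3, 2.6],
                     [2.0, 2.2, 2.5, 2.9],
                     [2.1, 2.4, 2.8, 3.3]])

# A map-algebra expression combines layers cell by cell and yields
# another layer with the same cell geometry (closure).
wetness_index = 10.0 * rainfall - 0.5 * elevation
print(wetness_index.shape)  # (3, 4) -- still a "map" in the same domain

The particular arithmetic is arbitrary; the point is that the result is itself a raster layer that further map-algebra operations can consume.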
Each operation 1) takes one or more raster grids as inputs, 2) creates an output grid with matching cell geometry, 3) scans through each cell of the input grid (or spatially matching cells of multiple inputs), and 4) performs the operation on the cell value(s) and writes the result to the corresponding cell in the output grid. Originally, the input and output grids were required to have identical cell geometry (i.e., covering the same spatial extent with the same cell arrangement, so that each cell corresponds between inputs and outputs), but many modern GIS implementations do not require this, performing interpolation as needed to derive values at corresponding locations. Tomlin classified the many possible map algebra operations into three types, to which some systems add a fourth: Local Operators Operations that operate on one cell location at a time during the scan phase. A simple example would be an arithmetic operator such as addition: to compute MAP3 = MAP1 + MAP2, the software scans through each matching cell of the input grids, adds the numeric values in each using normal arithmetic, and puts the result in the matching cell of the output grid. Due to this decomposition of operations on maps into operations on individual cell values, any operation that can be performed on numbers (e.g., arithmetic, statistics, trigonometry, logic) can be performed in map algebra. For example, a LocalMean operator would take in two or more grids and compute the arithmetic mean of each set of spatially corresponding cells. In addition, a range of GIS-specific operations has been defined, such as reclassifying a large range of values to a smaller range of values (e.g., 45 land cover categories to 3 levels of habitat suitability), which dates to the original IMGRID implementation of 1975. A common use of local functions is for implementing mathematical models, such as an index, that are designed to compute a resultant value at a location from a set of input variables. Focal Operators Functions that operate on a geometric neighborhood around each cell. A common example is calculating slope from a grid of elevation values. Looking at a single cell, with a single elevation, it is impossible to judge a trend such as slope. Thus, the slope of each cell is computed from the value of the corresponding cell in the input elevation grid and the values of its immediate neighbors. Other functions allow the size and shape of the neighborhood (e.g. a circle or square of arbitrary size) to be specified. For example, a FocalMean operator could be used to compute the mean value of all the cells within 1000 meters (a circle) of each cell. Zonal Operators Functions that operate on regions of identical value. These are commonly used with discrete fields (also known as categorical coverages), where space is partitioned into regions of homogeneous nominal or categorical value of a property such as land cover, land use, soil type, or surface geologic formation. Unlike local and focal operators, zonal operators do not operate on each cell individually; instead, all of the cells of a given value are taken as input to a single computation, with identical output being written to all of the corresponding cells. For example, a ZonalMean operator would take in two layers, one with values representing the regions (e.g., dominant vegetation species) and another of a related quantitative property (e.g., percent canopy cover). For each unique value found in the former grid, the software collects all of the corresponding cells in the latter grid, computes the arithmetic mean, and writes this value to all of the corresponding cells in the output grid.
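A compact way to see how these operator classes differ is to run each over the same small grids. The sketch below uses Python with NumPy and SciPy as a stand-in for a GIS raster engine; the layer contents and variable names are hypothetical, and the comments echo the LocalMean, FocalMean and ZonalMean operators described above rather than any particular product's API.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
ndvi_a = rng.random((5, 5))           # hypothetical input raster
ndvi_b = rng.random((5, 5))           # a second raster with the same cell geometry
zones = rng.integers(1, 4, (5, 5))    # region labels, e.g. dominant vegetation class

# Local operator: cell-by-cell mean of two layers (LocalMean).
local_mean = (ndvi_a + ndvi_b) / 2.0

# Focal operator: mean over a 3 x 3 neighbourhood around each cell (FocalMean).
focal_mean = ndimage.uniform_filter(ndvi_a, size=3, mode="nearest")

# Zonal operator: mean of ndvi_a over each region in `zones`,
# written back to every cell belonging to that region (ZonalMean).
zonal_mean = np.zeros_like(ndvi_a)
for z in np.unique(zones):
    zonal_mean[zones == z] = ndvi_a[zones == z].mean()

All three results are grids with the same cell geometry as the inputs, so they remain inside the algebra's domain.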
Global Operators Functions that summarize the entire grid. These were not included in Tomlin's work, and are not technically part of map algebra, because the result of the operation is not a raster grid (i.e., it is not closed), but a single value or summary table. However, they are useful to include in the general toolkit of operations. For example, a GlobalMean operator would compute the arithmetic mean of all of the cells in the input grid and return a single mean value. Some also consider operators that generate a new grid by evaluating patterns across the entire input grid as global, which could be considered part of the algebra. An example of these is the set of operators for evaluating cost distance. Implementation Several GIS software packages implement map algebra concepts, including ERDAS Imagine, QGIS, GRASS GIS, TerrSet, PCRaster, and ArcGIS. In Tomlin's original formulation of cartographic modeling in the Map Analysis Package, he designed a simple procedural language around the algebra operators to allow them to be combined into a complete procedure with additional structures such as conditional branching and looping. However, in most modern implementations, map algebra operations are typically one component of a general procedural processing system, such as a visual modeling tool or a scripting language. For example, ArcGIS implements Map Algebra in both its visual ModelBuilder tool and in Python. Here, Python's overloading capability allows simple operators and functions to be used for raster grids. For example, rasters can be multiplied using the same "*" arithmetic operator used for multiplying numbers. Here are some examples in MapBasic, the scripting language for MapInfo Professional:

# demo for Brown's Pond data set
# Given layers:
#   altitude
#   development – 0: vacant, 1: major, 2: minor, 3: houses, 4: buildings, 5: cement
#   water – 0: dry, 2: wet, 3: pond

# calculate the slope at each location based on altitude
slope = IncrementalGradient of altitude

# identify the areas that are too steep
toosteep = LocalRating of slope where 1 replaces 4 5 6 where VOID replaces ...

# create layer unifying water and development
occupied = LocalRating of development where water replaces VOID
notbad = LocalRating of occupied and toosteep where 1 replaces VOID and VOID where VOID replaces ... and ...
roads = LocalRating of development where 1 replaces 1 2 where VOID replaces ...
nearroad = FocalNeighbor of roads at 0 ... 10
aspect = IncrementalAspect of altitude
southface = LocalRating of aspect where 1 replaces 135 ... 225 where VOID replaces ...
sites = LocalMinimum of nearroad and southface and notbad
sitenums = FocalInsularity of sites at 0 ... 1
sitesize = ZonalSum of 1 within sitenums
bestsites = LocalRating of sitesize where sitesize replaces 100 ... 300 where VOID replaces ...

External links osGeo-RFC-39 about Layer Algebra References B. E. Davis GIS: A Visual Approach (2001 Cengage Learning) pp. 249ff. Geographic information systems Applied mathematics Algebra Geographic information science Spatial analysis
Map algebra
Physics,Mathematics,Technology
2,094
13,271,310
https://en.wikipedia.org/wiki/NAD%2B%20kinase
NAD+ kinase (EC 2.7.1.23, NADK) is an enzyme that converts nicotinamide adenine dinucleotide (NAD+) into NADP+ through phosphorylating the NAD+ coenzyme. NADP+ is an essential coenzyme that is reduced to NADPH primarily by the pentose phosphate pathway to provide reducing power in biosynthetic processes such as fatty acid biosynthesis and nucleotide synthesis. The structure of the NADK from the archaean Archaeoglobus fulgidus has been determined. In humans, the genes NADK and MNADK encode NAD+ kinases localized in the cytosol and mitochondria, respectively. Similarly, yeast have both cytosolic and mitochondrial isoforms, and the yeast mitochondrial isoform accepts both NAD+ and NADH as substrates for phosphorylation. Reaction The reaction catalyzed by NADK is ATP + NAD+ ⇌ ADP + NADP+ Mechanism NADK phosphorylates NAD+ at the 2' position of the ribose ring that carries the adenine moiety. It is highly selective for its substrates, NAD and ATP, and does not tolerate modifications either to the phosphoryl acceptor, NAD (including its pyridine moiety), or to the phosphoryl donor, ATP. NADK also uses metal ions to coordinate the ATP in the active site. In vitro studies with various divalent metal ions have shown that zinc and manganese are preferred over magnesium, while copper and nickel are not accepted by the enzyme at all. A proposed mechanism involves the 2' alcohol oxygen acting as a nucleophile to attack the gamma-phosphoryl of ATP, releasing ADP. Regulation NADK is highly regulated by the redox state of the cell. Whereas NAD is predominantly found in its oxidized state NAD+, the phosphorylated NADP is largely present in its reduced form, as NADPH. Thus, NADK can modulate responses to oxidative stress by controlling NADP synthesis. Bacterial NADK has been shown to be inhibited allosterically by both NADPH and NADH. NADK is also reportedly stimulated by calcium/calmodulin binding in certain cell types, such as neutrophils. NAD kinases in plants and sea urchin eggs have also been found to bind calmodulin. Clinical significance Due to the essential role of NADPH in lipid and DNA biosynthesis and the hyperproliferative nature of most cancers, NADK is an attractive target for cancer therapy. Furthermore, NADPH is required for the antioxidant activities of thioredoxin reductase and glutaredoxin. Thionicotinamide and other nicotinamide analogs are potential inhibitors of NADK, and studies show that treatment of colon cancer cells with thionicotinamide suppresses the cytosolic NADPH pool to increase oxidative stress and synergizes with chemotherapy. While the role of NADK in increasing the NADPH pool appears to offer protection against apoptosis, there are also cases where NADK activity appears to potentiate cell death. Genetic studies done in human haploid cell lines indicate that knocking out NADK may protect against certain non-apoptotic stimuli. See also Oxidative phosphorylation Electron transport chain Metabolism References Further reading External links ENZYME entry on EC 2.7.1.23 BRENDA entry on EC 2.7.1.23 PDBe-KB provides an overview of all the structure information available in the PDB for Human NAD kinase EC 2.7.1 Cellular respiration Metabolism
NAD+ kinase
Chemistry,Biology
775
7,177,687
https://en.wikipedia.org/wiki/Gross%E2%80%93Pitaevskii%20equation
The Gross–Pitaevskii equation (GPE, named after Eugene P. Gross and Lev Petrovich Pitaevskii) describes the ground state of a quantum system of identical bosons using the Hartree–Fock approximation and the pseudopotential interaction model. A Bose–Einstein condensate (BEC) is a gas of bosons that are in the same quantum state, and thus can be described by the same wavefunction. A free quantum particle is described by a single-particle Schrödinger equation. Interaction between particles in a real gas is taken into account by a pertinent many-body Schrödinger equation. In the Hartree–Fock approximation, the total wave-function of the system of bosons is taken as a product of single-particle functions : where is the coordinate of the -th boson. If the average spacing between the particles in a gas is greater than the scattering length (that is, in the so-called dilute limit), then one can approximate the true interaction potential that features in this equation by a pseudopotential. At sufficiently low temperature, where the de Broglie wavelength is much longer than the range of boson–boson interaction, the scattering process can be well approximated by the s-wave scattering (i.e. in the partial-wave analysis, a.k.a. the hard-sphere potential) term alone. In that case, the pseudopotential model Hamiltonian of the system can be written as where is the mass of the boson, is the external potential, is the boson–boson s-wave scattering length, and is the Dirac delta-function. The variational method shows that if the single-particle wavefunction satisfies the following Gross–Pitaevskii equation the total wave-function minimizes the expectation value of the model Hamiltonian under normalization condition Therefore, such single-particle wavefunction describes the ground state of the system. GPE is a model equation for the ground-state single-particle wavefunction in a Bose–Einstein condensate. It is similar in form to the Ginzburg–Landau equation and is sometimes referred to as the nonlinear Schrödinger equation. The non-linearity of the Gross–Pitaevskii equation has its origin in the interaction between the particles: setting the coupling constant of interaction in the Gross–Pitaevskii equation to zero (see the following section) recovers the single-particle Schrödinger equation describing a particle inside a trapping potential. The Gross–Pitaevskii equation is said to be limited to the weakly interacting regime. Nevertheless, it may also fail to reproduce interesting phenomena even within this regime. In order to study the BEC beyond that limit of weak interactions, one needs to implement the Lee-Huang-Yang (LHY) correction. Alternatively, in 1D systems one can use either an exact approach, namely the Lieb-Liniger model, or an extended equation, e.g. the Lieb-Liniger Gross–Pitaevskii equation (sometimes called modified or generalized nonlinear Schrödinger equation). Form of equation The equation has the form of the Schrödinger equation with the addition of an interaction term. The coupling constant is proportional to the s-wave scattering length of two interacting bosons: where is the reduced Planck constant, and is the mass of the boson. The energy density is where is the wavefunction, or order parameter, and is the external potential (e.g. a harmonic trap). 
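The displayed formulas in this article do not appear to have survived extraction. As a reference point, the standard textbook forms of the expressions that this and the following passages rely on are given here in LaTeX notation (with m the boson mass, a_s the s-wave scattering length, V the external potential, \mu the chemical potential, n the particle density, N the particle number, and \xi the healing length discussed below); these are the conventional forms, not a reconstruction of this article's exact wording:

\[ g = \frac{4\pi\hbar^{2}a_s}{m}, \qquad \mathcal{E} = \frac{\hbar^{2}}{2m}\,|\nabla\Psi|^{2} + V(\mathbf{r})\,|\Psi|^{2} + \frac{g}{2}\,|\Psi|^{4}, \]
\[ \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + g\,|\Psi(\mathbf{r})|^{2}\right)\Psi(\mathbf{r}) = \mu\,\Psi(\mathbf{r}), \qquad \int |\Psi(\mathbf{r})|^{2}\,\mathrm{d}^{3}r = N, \]
\[ i\hbar\,\frac{\partial\Psi(\mathbf{r},t)}{\partial t} = \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + g\,|\Psi(\mathbf{r},t)|^{2}\right)\Psi(\mathbf{r},t), \qquad \xi = \frac{\hbar}{\sqrt{2mgn}} = \frac{1}{\sqrt{8\pi n a_s}}. \]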
The time-independent Gross–Pitaevskii equation, for a conserved number of particles, is where is the chemical potential, which is found from the condition that the number of particles is related to the wavefunction by From the time-independent Gross–Pitaevskii equation, we can find the structure of a Bose–Einstein condensate in various external potentials (e.g. a harmonic trap). The time-dependent Gross–Pitaevskii equation is From this equation we can look at the dynamics of the Bose–Einstein condensate. It is used to find the collective modes of a trapped gas. Solutions Since the Gross–Pitaevskii equation is a nonlinear partial differential equation, exact solutions are hard to come by. As a result, solutions have to be approximated via a myriad of techniques. Exact solutions Free particle The simplest exact solution is the free-particle solution, with : This solution is often called the Hartree solution. Although it does satisfy the Gross–Pitaevskii equation, it leaves a gap in the energy spectrum due to the interaction: According to the Hugenholtz–Pines theorem, an interacting Bose gas does not exhibit an energy gap (in the case of repulsive interactions). Soliton A one-dimensional soliton can form in a Bose–Einstein condensate, and depending upon whether the interaction is attractive or repulsive, there is either a bright or dark soliton. Both solitons are local disturbances in a condensate with a uniform background density. If the BEC is repulsive, so that , then a possible solution of the Gross–Pitaevskii equation is where is the value of the condensate wavefunction at , and is the coherence length (a.k.a. the healing length, see below). This solution represents the dark soliton, since there is a deficit of condensate in a space of nonzero density. The dark soliton is also a type of topological defect, since flips between positive and negative values across the origin, corresponding to a phase shift. For the solution is where the chemical potential is . This solution represents the bright soliton, since there is a concentration of condensate in a space of zero density. Healing length The healing length gives the minimum distance over which the order parameter can heal, which describes how quickly the wave function of the BEC can adjust to changes in the potential. If the condensate density grows from 0 to n within a distance ξ, the healing length can calculated by equating the quantum pressure and the interaction energy: The healing length must be much smaller than any length scale in the solution of the single-particle wavefunction. The healing length also determines the size of vortices that can form in a superfluid. It is the distance over which the wavefunction recovers from zero in the center of the vortex to the value in the bulk of the superfluid (hence the name "healing" length). Variational solutions In systems where an exact analytical solution may not be feasible, one can make a variational approximation. The basic idea is to make a variational ansatz for the wavefunction with free parameters, plug it into the free energy, and minimize the energy with respect to the free parameters. Numerical solutions Several numerical methods, such as the split-step Crank–Nicolson and Fourier spectral methods, have been used for solving GPE. There are also different Fortran and C programs for its solution for the contact interaction and long-range dipolar interaction. 
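To make the split-step Fourier approach mentioned above concrete, here is a minimal sketch in Python with NumPy for the 1D time-dependent equation in dimensionless units (hbar = m = 1), with a harmonic trap and an assumed interaction strength; it is illustrative only and is not taken from any of the Fortran or C packages referred to above.

import numpy as np

# Split-step (Strang) Fourier integrator for the 1D GPE:
#   i dpsi/dt = (-1/2 d^2/dx^2 + V(x) + g|psi|^2) psi,   with hbar = m = 1.
L, n = 20.0, 512
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers

V = 0.5 * x**2                                 # harmonic trap
g = 1.0                                        # interaction strength (assumed value)
dt = 1e-3

psi = np.exp(-x**2)                            # initial Gaussian guess
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise: sum |psi|^2 dx = 1

half_kinetic = np.exp(-0.5j * dt * 0.5 * k**2)     # exp(-i (dt/2) k^2 / 2)

for _ in range(5000):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))    # half kinetic step
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi)**2))   # potential + nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))    # half kinetic step

density = np.abs(psi)**2                       # condensate density profile

Replacing the time step dt by -i*dt (imaginary-time propagation, renormalising psi after each step) is the usual variant for relaxing towards the ground state rather than evolving in real time.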
Thomas–Fermi approximation If the number of particles in a gas is very large, the interatomic interaction becomes large so that the kinetic energy term can be neglected in the Gross–Pitaevskii equation. This is called the Thomas–Fermi approximation and leads to the single-particle wavefunction And the density profile is In a harmonic trap (where the potential energy is quadratic with respect to displacement from the center), this gives a density profile commonly referred to as the "inverted parabola" density profile. Bogoliubov approximation Bogoliubov treatment of the Gross–Pitaevskii equation is a method that finds the elementary excitations of a Bose–Einstein condensate. To that purpose, the condensate wavefunction is approximated by a sum of the equilibrium wavefunction and a small perturbation : Then this form is inserted in the time-dependent Gross–Pitaevskii equation and its complex conjugate, and linearized to first order in : Assuming that one finds the following coupled differential equations for and by taking the parts as independent components: For a homogeneous system, i.e. for , one can get from the zeroth-order equation. Then we assume and to be plane waves of momentum , which leads to the energy spectrum For large , the dispersion relation is quadratic in , as one would expect for usual non-interacting single-particle excitations. For small , the dispersion relation is linear: with being the speed of sound in the condensate, also known as second sound. The fact that shows, according to Landau's criterion, that the condensate is a superfluid, meaning that if an object is moved in the condensate at a velocity inferior to s, it will not be energetically favorable to produce excitations, and the object will move without dissipation, which is a characteristic of a superfluid. Experiments have been done to prove this superfluidity of the condensate, using a tightly focused blue-detuned laser. The same dispersion relation is found when the condensate is described from a microscopical approach using the formalism of second quantization. Superfluid in rotating helical potential The optical potential well might be formed by two counterpropagating optical vortices with wavelengths , effective width and topological charge : where . In cylindrical coordinate system the potential well have a remarkable double-helix geometry: In a reference frame rotating with angular velocity , time-dependent Gross–Pitaevskii equation with helical potential is where is the angular-momentum operator. The solution for condensate wavefunction is a superposition of two phase-conjugated matter–wave vortices: The macroscopically observable momentum of condensate is where is number of atoms in condensate. This means that atomic ensemble moves coherently along axis with group velocity whose direction is defined by signs of topological charge and angular velocity : The angular momentum of helically trapped condensate is exactly zero: Numerical modeling of cold atomic ensemble in spiral potential have shown the confinement of individual atomic trajectories within helical potential well. Derivations and Generalisations The Gross–Pitaevskii equation can also be derived as the semi-classical limit of the many body theory of s-wave interacting identical bosons represented in terms of coherent states. The semi-classical limit is reached for a large number of quanta, expressing the field theory either in the positive-P representation (generalised Glauber-Sudarshan P representation) or Wigner representation. 
Finite-temperature effects can be treated within a generalised Gross–Pitaevskii equation by including scattering between condensate and noncondensate atoms, from which the Gross–Pitaevskii equation may be recovered in the low-temperature limit. References Further reading External links Trotter-Suzuki-MPI Trotter-Suzuki-MPI is a library for large-scale simulations based on the Trotter-Suzuki decomposition that can also address the Gross–Pitaevskii equation. XMDS XMDS is a spectral partial differential equation library that can be used to solve the Gross–Pitaevskii equation. Bose–Einstein condensates Eponymous equations of physics Superfluidity
Gross–Pitaevskii equation
Physics,Chemistry,Materials_science
2,394
9,824,460
https://en.wikipedia.org/wiki/NPAS3
NPAS3 or Neuronal PAS domain protein 3 is a brain-enriched transcription factor belonging to the bHLH-PAS superfamily of transcription factors, the members of which carry out diverse functions, including circadian oscillations, neurogenesis, toxin metabolism, hypoxia, and tracheal development. Like the other proteins in the superfamily, NPAS3 contains a basic helix-loop-helix structural motif and a PAS domain. Function NPAS3 is also known as human accelerated region 21. It may, therefore, have played a key role in differentiating humans from apes. NPAS1- and NPAS3-deficient mice display behavioral abnormalities typical of animal models of schizophrenia. According to the same study, NPAS1 and NPAS3 disruption leads to reduced expression of reelin, which is also consistently found to be reduced in the brains of human patients with schizophrenia and psychotic bipolar disorder. Among the 49 genomic regions that underwent rapid changes in humans compared with their evolutionary ancestors, NPAS3 was found to be located in region 21. Clinical significance Disruption of NPAS3 was found in one family affected by schizophrenia, and the NPAS3 gene is thought to be associated with psychiatric illness and learning disability. In a genetic study of several hundred subjects conducted in 2008, interacting haplotypes at the NPAS3 locus were found to affect the risk of schizophrenia and bipolar disorder. In a pharmacogenetic study, polymorphisms in the NPAS3 gene were highly associated with response to iloperidone, a proposed atypical antipsychotic. References Further reading PAS-domain-containing proteins Biology of bipolar disorder
NPAS3
Chemistry,Biology
339
22,114,492
https://en.wikipedia.org/wiki/British%20Board%20of%20Agr%C3%A9ment
The British Board of Agrément (BBA) is a UK body issuing certificates for construction products and systems and providing inspection services in support of their designers and installers. Agrément certificate This is an authoritative document proving the fitness for purpose of a construction product and its compliance, or contribution to compliance, with the various building regulations applying in the United Kingdom. It is commonly referred to as a 'BBA Agrément Certificate'. History The forerunner of the BBA, the Agrément Board, was modelled on an arrangement operating in France, hence the French word agrément, which translates literally as 'approval'. The British Board of Agrément (BBA) is a construction industry approvals body, originally set up by government in 1966 and offering product and installer approval. Agrément certificates cover 200 different product sectors, and the largest of these are insulation and roofing. In the former, the BBA has run an Approved Installer Scheme for more than 30 years, linking installations of injected cavity wall insulation to BBA approval of the systems and dealing with both the system supplier and the installer. BBA approvals show compliance with Building Regulations and other requirements, including installation quality. The BBA also inspects for the Fenestration Self Assessment Scheme (FENSA), the Federation of Master Builders and some certificate holders, to check that installers demonstrate good practice on site. The BBA also runs the Highways Authorities Product Approval Scheme (HAPAS) for Highways England, the County Surveyors Society and other agencies in the UK. This is similar to the Agrément Certificate process but applied to highways products. Some of these have Approved Installer schemes linked to them, and the BBA also inspects those. Structure and ownership The BBA consists of three main operations: Product Approval and Certification, Inspection, and Test Services. Ownership of the BBA is held by its Governing Board, consisting of three executive and four non-executive directors. Inspection activities are a key element of the planned business growth, and recent restructuring of the BBA's Technical operation has been undertaken to focus effort and attention on them. Hardy Giesler is the BBA's CEO. The BBA employs approximately 185 people, most of whom are based at its offices and testing facilities near Watford; the remainder are inspectors located around the UK with access to the BBA's electronic systems, allowing them to work remotely. The BBA is a company limited by guarantee (equivalent to non-profit status in other countries); any profits made by the BBA are used for the benefit of the construction industry or the public good. Coverage The organisation has inspectors based around the United Kingdom capable of returning completed reports within 24 hours of receipt. Environmental policy The BBA is accredited by UKAS to provide Environmental Certification to the ISO 14000 series and operates appropriate internal controls. References External links Government agencies established in 1966 Product-testing organizations Certification marks Government buildings in England Government buildings in Wales Government buildings in Scotland Organisations based in Hertfordshire St Albans 1966 establishments in the United Kingdom
British Board of Agrément
Mathematics
613
47,544
https://en.wikipedia.org/wiki/Carrying%20capacity
The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). Carrying capacity of the environment implies that the resources extraction is not above the rate of regeneration of the resources and the wastes generated are within the assimilating capacity of the environment. The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s. The notion of carrying capacity for humans is covered by the notion of sustainable population. An early detailed examination of global limits was published in the 1972 book Limits to Growth, which has prompted follow-up commentary and analysis, including much criticism. A 2012 review in Nature by 22 international researchers expressed concerns that the Earth may be "approaching a state shift" in which the biosphere may become less hospitable to human life and in which human carrying capacity may diminish. This concern that humanity may be passing beyond "tipping points" for safe use of the biosphere has increased in subsequent years. Recent estimates of Earth's carrying capacity run between two billion and four billion people, depending on how optimistic researchers are about international cooperation to solve collective action problems. Origins In terms of population dynamics, the term 'carrying capacity' was not explicitly used in 1838 by the Belgian mathematician Pierre François Verhulst when he first published his equations based on research on modelling population growth. The origins of the term "carrying capacity" are uncertain, with sources variously stating that it was originally used "in the context of international shipping" in the 1840s, or that it was first used during 19th-century laboratory experiments with micro-organisms. A 2008 review finds the first use of the term in English was an 1845 report by the US Secretary of State to the US Senate. It then became a term used generally in biology in the 1870s, being most developed in wildlife and livestock management in the early 1900s. It had become a staple term in ecology used to define the biological limits of a natural system related to population size in the 1950s. Neo-Malthusians and eugenicists popularised the use of the words to describe the number of people the Earth can support in the 1950s, although American biostatisticians Raymond Pearl and Lowell Reed had already applied it in these terms to human populations in the 1920s. Hadwen and Palmer (1923) defined carrying capacity as the density of stock that could be grazed for a definite period without damage to the range. It was first used in the context of wildlife management by the American Aldo Leopold in 1933, and a year later by the American Paul Lester Errington, a wetlands specialist. 
They used the term in different ways, Leopold largely in the sense of grazing animals (differentiating between a 'saturation level', an intrinsic level of density a species would live in, and carrying capacity, the most animals which could be in the field) and Errington defining 'carrying capacity' as the number of animals above which predation would become 'heavy' (this definition has largely been rejected, including by Errington himself). The important and popular 1953 textbook on ecology by Eugene Odum, Fundamentals of Ecology, popularised the term in its modern meaning as the equilibrium value of the logistic model of population growth. Mathematics The specific reason why a population stops growing is known as a limiting or regulating factor. The difference between the birth rate and the death rate is the natural increase. If the population of a given organism is below the carrying capacity of a given environment, this environment could support a positive natural increase; should it find itself above that threshold the population typically decreases. Thus, the carrying capacity is the maximum number of individuals of a species that an environment can support in the long run. Population size decreases above carrying capacity due to a range of factors depending on the species concerned, but can include insufficient space, food supply, or sunlight. The carrying capacity of an environment varies for different species. In the standard ecological algebra, as illustrated in the simplified Verhulst model of population dynamics, carrying capacity is represented by the constant K: dN/dt = rN(1 - N/K), where N is the population size, r is the intrinsic rate of natural increase, K is the carrying capacity of the local environment, and dN/dt, the derivative of N with respect to time t, is the rate of change in population with time. Thus, the equation relates the growth rate of the population to the current population size, incorporating the effect of the two constant parameters r and K. (Note that decrease is negative growth.) The choice of the letter K came from the German Kapazitätsgrenze (capacity limit). This equation is a modification of the original Verhulst model: dN/dt = rN - αN². In this equation, the carrying capacity, K, is r/α. When the Verhulst model is plotted on a graph, the population change over time takes the form of a sigmoid curve, reaching its highest level at K. This is the logistic growth curve, and it is calculated with: f(x) = L / (1 + e^(-k(x - x0))), where e is the natural logarithm base (also known as Euler's number), x0 is the value of the sigmoid's midpoint, L is the curve's maximum value, and k is the logistic growth rate or steepness of the curve. The logistic growth curve depicts how population growth rate and carrying capacity are inter-connected. As illustrated in the logistic growth curve model, when the population size is small, the population increases exponentially. However, as population size nears carrying capacity, the growth decreases and reaches zero at K. What determines a specific system's carrying capacity involves a limiting factor; this may be available supplies of food or water, nesting areas, space, or the amount of waste that can be absorbed without degrading the environment and decreasing carrying capacity. Population ecology Carrying capacity is a commonly used concept for biologists when trying to better understand biological populations and the factors which affect them. When addressing biological populations, carrying capacity can be seen as a stable dynamic equilibrium, taking into account extinction and colonization rates.
In population biology, logistic growth assumes that population size fluctuates above and below an equilibrium value. Numerous authors have questioned the usefulness of the term when applied to actual wild populations. Although useful in theory and in laboratory experiments, carrying capacity as a method of measuring population limits in the environment is less useful as it sometimes oversimplifies the interactions between species. Agriculture It is important for farmers to calculate the carrying capacity of their land so they can establish a sustainable stocking rate. For example, calculating the carrying capacity of a paddock in Australia is done in Dry Sheep Equivalents (DSEs). A single DSE is 50 kg Merino wether, dry ewe or non-pregnant ewe, which is maintained in a stable condition. Not only sheep are calculated in DSEs, the carrying capacity for other livestock is also calculated using this measure. A 200 kg weaned calf of a British style breed gaining 0.25 kg/day is 5.5DSE, but if the same weight of the same type of calf were gaining 0.75 kg/day, it would be measured at 8DSE. Cattle are not all the same, their DSEs can vary depending on breed, growth rates, weights, if it is a cow ('dam'), steer or ox ('bullock' in Australia), and if it weaning, pregnant or 'wet' (i.e. lactating). In other parts of the world different units are used for calculating carrying capacities. In the United Kingdom the paddock is measured in LU, livestock units, although different schemes exist for this. New Zealand uses either LU, EE (ewe equivalents) or SU (stock units). In the US and Canada the traditional system uses animal units (AU). A French/Swiss unit is Unité de Gros Bétail (UGB). In some European countries such as Switzerland the pasture (alm or alp) is traditionally measured in Stoß, with one Stoß equaling four Füße (feet). A more modern European system is Großvieheinheit (GV or GVE), corresponding to 500 kg in liveweight of cattle. In extensive agriculture 2 GV/ha is a common stocking rate, in intensive agriculture, when grazing is supplemented with extra fodder, rates can be 5 to 10 GV/ha. In Europe average stocking rates vary depending on the country, in 2000 the Netherlands and Belgium had a very high rate of 3.82 GV/ha and 3.19 GV/ha respectively, surrounding countries have rates of around 1 to 1.5 GV/ha, and more southern European countries have lower rates, with Spain having the lowest rate of 0.44 GV/ha. This system can also be applied to natural areas. Grazing megaherbivores at roughly 1 GV/ha is considered sustainable in central European grasslands, although this varies widely depending on many factors. In ecology it is theoretically (i.e. cyclic succession, patch dynamics, Megaherbivorenhypothese) taken that a grazing pressure of 0.3 GV/ha by wildlife is enough to hinder afforestation in a natural area. Because different species have different ecological niches, with horses for example grazing short grass, cattle longer grass, and goats or deer preferring to browse shrubs, niche differentiation allows a terrain to have slightly higher carrying capacity for a mixed group of species, than it would if there were only one species involved. Some niche market schemes mandate lower stocking rates than can maximally be grazed on a pasture. In order to market ones' meat products as 'biodynamic', a lower Großvieheinheit of 1 to 1.5 (2.0) GV/ha is mandated, with some farms having an operating structure using only 0.5 to 0.8 GV/ha. 
The Food and Agriculture Organization has introduced three international units to measure carrying capacity: FAO Livestock Units for North America, FAO Livestock Units for sub-Saharan Africa, and Tropical Livestock Units. Another rougher and less precise method of determining the carrying capacity of a paddock is simply by looking objectively at the condition of the herd. In Australia, the national standardized system for rating livestock conditions is done by body condition scoring (BCS). An animal in a very poor condition is scored with a BCS of 0, and an animal which is extremely healthy is scored at 5: animals may be scored between these two numbers in increments of 0.25. At least 25 animals of the same type must be scored to provide a statistically representative number, and scoring must take place monthly -if the average falls, this may be due to a stocking rate above the paddock's carrying capacity or too little fodder. This method is less direct for determining stocking rates than looking at the pasture itself, because the changes in the condition of the stock may lag behind changes in the condition of the pasture. Fisheries In fisheries, carrying capacity is used in the formulae to calculate sustainable yields for fisheries management. The maximum sustainable yield (MSY) is defined as "the highest average catch that can be continuously taken from an exploited population (=stock) under average environmental conditions". MSY was originally calculated as half of the carrying capacity, but has been refined over the years, now being seen as roughly 30% of the population, depending on the species or population. Because the population of a species which is brought below its carrying capacity due to fishing will find itself in the exponential phase of growth, as seen in the Verhulst model, the harvesting of an amount of fish at or below MSY is a surplus yield which can be sustainably harvested without reducing population size at equilibrium, keeping the population at its maximum recruitment. However, annual fishing can be seen as a modification of r in the equation -i.e. the environment has been modified, which means that the population size at equilibrium with annual fishing is slightly below what K would be without it. Note that mathematically and in practical terms, MSY is problematic. If mistakes are made and even a tiny amount of fish are harvested each year above the MSY, populations dynamics imply that the total population will eventually decrease to zero. The actual carrying capacity of the environment may fluctuate in the real world, which means that practically, MSY may actually vary from year to year (annual sustainable yields and maximum average yield attempt to take this into account). Other similar concepts are optimum sustainable yield and maximum economic yield; these are both harvest rates below MSY. These calculations are used to determine fishing quotas. Humans Human carrying capacity is a function of how people live and the technology at their disposal. The two great economic revolutions that marked human history up to 1900—the agricultural and industrial revolutions—greatly increased the Earth's human carrying capacity, allowing human population to grow from 5 to 10 million people in 10,000 BCE to 1.5 billion in 1900. The immense technological improvements of the past 100 years—in applied chemistry, physics, computing, genetic engineering, and more—have further increased Earth's human carrying capacity, at least in the short term. 
Without the Haber-Bosch process for fixing nitrogen, modern agriculture could not support 8 billion people. Without the Green Revolution of the 1950s and 60s, famine might have culled large numbers of people in poorer countries during the last three decades of the twentieth century. Recent technological successes, however, have come at grave environmental costs. Climate change, ocean acidification, and the huge dead zones at the mouths of many of the world's great rivers are a function of the scale of contemporary agriculture and the many other demands 8 billion people make on the planet. Scientists now speak of humanity exceeding or threatening to exceed 9 planetary boundaries for safe use of the biosphere. Humanity's unprecedented ecological impacts threaten to degrade the ecosystem services that people and the rest of life depend on, potentially decreasing Earth's human carrying capacity. The signs that we have crossed this threshold are increasing. The fact that degrading Earth's essential services is obviously possible, and happening in some cases, suggests that 8 billion people may be above Earth's human carrying capacity. But human carrying capacity is always a function of a certain number of people living a certain way. This was encapsulated by Paul Ehrlich and John Holdren's (1972) IPAT equation: environmental impact (I) = population (P) x affluence (A) x the technologies used to accommodate human demands (T). IPAT has found spectacular confirmation in recent decades within climate science, where the Kaya identity for explaining changes in emissions is essentially IPAT with two technology factors broken out for ease of use. This suggests to technological optimists that new technological discoveries (or the deployment of existing ones) could continue to increase Earth's human carrying capacity, as they have in the past. Yet technology has unexpected side effects, as we have seen with stratospheric ozone depletion, excessive nitrogen deposition in the world's rivers and bays, and global climate change. This suggests that 8 billion people may be sustainable for a few generations, but not over the long term, and the term 'carrying capacity' implies a population that is sustainable indefinitely. It is possible, too, that efforts to anticipate and manage the impacts of powerful new technologies, or to divide up the efforts needed to keep global ecological impacts within sustainable bounds among more than 200 nations all pursuing their own self-interest, may prove too complicated to achieve over the long haul. One issue with applying carrying capacity to any species is that ecosystems are not constant and change over time, therefore changing the resources available. Research has shown that sometimes the presence of human populations can increase local biodiversity, demonstrating that human habitation does not always lead to deforestation and decreased biodiversity. Another issue to consider when applying carrying capacity, especially to humans, is that measuring food resources is arbitrary. This is due to choosing what to consider (e.g., whether or not to include plants that are not available every year), how to classify what is considered (e.g., classifying edible plants that are not usually eaten as food resources or not), and determining if caloric values or nutritional values are privileged. Additional layers to this for humans are their cultural differences in taste (e.g., some consume flying termites) and individual choices on what to invest their labor into (e.g., fishing vs.
farming), both of which vary over time. This leads to the need to determine whether or not to include all food resources or only those the population considered will consume. Carrying capacity measurements over large areas also assumes homogeneity in the resources available but this does not account for how resources and access to them can greatly vary within regions and populations. They also assume that the populations in the region only rely on that region’s resources even though humans exchange resources with others from other regions and there are few, if any, isolated populations. Variations in standards of living which directly impact resource consumption are also not taken into account. These issues show that while there are limits to resources, a more complex model of how humans interact with their ecosystem needs to be used to understand them. Recent warnings that humanity may have exceeded Earth's carrying capacity Between 1900 and 2020, Earth's human population increased from 1.6 billion to 7.8 billion (a 390% increase). These successes greatly increased human resource demands, generating significant environmental degradation. Millennium ecosystem assessment The Millennium Ecosystem Assessment (MEA) of 2005 was a massive, collaborative effort to assess the state of Earth's ecosystems, involving more than 1,300 experts worldwide. Their first two of four main findings were the following. The first finding is:Over the past 50 years, humans have changed ecosystems more rapidly and extensively than in any comparable period of time in human history, largely to meet rapidly growing demands for food, fresh water, timber, fiber, and fuel. This has resulted in a substantial and largely irreversible loss in the diversity of life on Earth.The second of the four main findings is:The changes that have been made to ecosystems have contributed to substantial net gains in human well-being and economic development, but these gains have been achieved at growing costs in the form of the degradation of many ecosystem services, increased risks of nonlinear changes, and the exacerbation of poverty for some groups of people. These problems, unless addressed, will substantially diminish the benefits that future generations obtain from ecosystems.According to the MEA, these unprecedented environmental changes threaten to reduce the Earth's long-term human carrying capacity. “The degradation of ecosystem services could grow significantly worse during the first half of this [21st] century,” they write, serving as a barrier to improving the lives of poor people around the world. Critiques of Carrying Capacity with Relation to Humans Humans and human culture itself are highly adaptable things that have overcome issues that seemed incomprehensible at the time before. It is not to say that carrying capacity is not something that should be considered and thought about, but it should be taken with some skepticism when presented as a concretely evidenced proof of something. Many biologists, ecologists, and social scientists have disposed of the term altogether due to the generalizations that are made that gloss over the complexity of interactions that take place on the micro and macro level. Carrying capacity in a human environment is subject to change at any time due to the highly adaptable nature of human society and culture. If resources, time, and energy are put into an issue, there very well may be a solution that exposes itself. 
This also should not be used as an excuse to overexploit or take advantage of the land or resources that are available. Nonetheless, it is possible to not be pessimistic as technological, social, and institutional adaptions could be accelerated, especially in a time of need, to solve problems, or in this case, increase carrying capacity. There are also of course resources on this Earth that are limited that most certainly will run out if overused or used without proper oversight/checks and balances. If things are left without remaining checked then overconsumption and exploitation of land and resources is likely to occur. Ecological footprint accounting Ecological Footprint accounting measures the demands people make on nature and compares them to available supplies, for both individual countries and the world as a whole. Developed originally by Mathis Wackernagel and William Rees, it has been refined and applied in a variety of contexts over the years by Global Footprint Network (GFN). On the demand side, the Ecological Footprint measures how fast a population uses resources and generates wastes, with a focus on five main areas: carbon emissions (or carbon footprint), land devoted to direct settlement, timber and paper use, food and fiber use, and seafood consumption. It converts these into per capita or total hectares used. On the supply side, national or global biocapacity represents the productivity of ecological assets in a particular nation or the world as a whole; this includes “cropland, grazing land, forest land, fishing grounds, and built-up land.” Again the various metrics to capture biocapacity are translated into the single term of hectares of available land. As Global Footprint Network (GFN) states:Each city, state or nation’s Ecological Footprint can be compared to its biocapacity, or that of the world. If a population’s Ecological Footprint exceeds the region’s biocapacity, that region runs a biocapacity deficit. Its demand for the goods and services that its land and seas can provide—fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption—exceeds what the region’s ecosystems can regenerate. In more popular communications, this is called “an ecological deficit.” A region in ecological deficit meets demand by importing, liquidating its own ecological assets (such as overfishing), and/or emitting carbon dioxide into the atmosphere. If a region’s biocapacity exceeds its Ecological Footprint, it has a biocapacity reserve.According to the GFN's calculations, humanity has been using resources and generating wastes in excess of sustainability since approximately 1970: currently humanity use Earth's resources at approximately 170% of capacity. This implies that humanity is well over Earth's human carrying capacity for our current levels of affluence and technology use. According to Global Footprint Network:In 2024, [Earth Overshoot Day] fell on August 1. Earth Overshoot Day marks the date when humanity has exhausted nature’s budget for the year. For the rest of the year, we are maintaining our ecological deficit by drawing down local resource stocks and accumulating carbon dioxide in the atmosphere. We are operating in overshoot.The concept of ‘ecological overshoot’ can be seen as equivalent to exceeding human carrying capacity. According to the most recent calculations from Global Footprint Network, most of the world's residents live in countries in ecological overshoot (see the map on the right). 
This includes countries with dense populations (such as China, India, and the Philippines), countries with high per capita consumption and resource use (France, Germany, and Saudi Arabia), and countries with both high per capita consumption and large numbers of people (Japan, the United Kingdom, and the United States). Planetary Boundaries Framework According to its developers, the planetary boundaries framework defines “a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth system.” Human civilization has evolved in the relative stability of the Holocene epoch; crossing planetary boundaries for safe levels of atmospheric carbon, ocean acidity, or one of the other stated boundaries could send the global ecosystem spiraling into novel conditions that are less hospitable to life—possibly reducing global human carrying capacity. This framework, developed in an article published in 2009 in Nature and then updated in two articles published in 2015 in Science and in 2018 in PNAS,  identifies nine stressors of planetary support systems that need to stay within critical limits to preserve stable and safe biospheric conditions (see figure below). Climate change and biodiversity loss are seen as especially crucial, since on their own, they could push the Earth system out of the Holocene state: “transitions between time periods in Earth history have often been delineated by substantial shifts in climate, the biosphere, or both.” The scientific consensus is that humanity has exceeded three to five of the nine planetary boundaries for safe use of the biosphere and is pressing hard on several more. By itself, crossing one of the planetary boundaries does not prove humanity has exceeded Earth's human carrying capacity; perhaps technological improvements or clever management might reduce this stressor and bring us back within the biosphere's safe operating space. But when several boundaries are crossed, it becomes harder to argue that carrying capacity has not been breached. Because fewer people helps reduce all nine planetary stressors, the more boundaries are crossed, the clearer it appears that reducing human numbers is part of what is needed to get back within a safe operating space. Population growth regularly tops the list of causes of humanity's increasing impact on the natural environment in Earth system science literature. Recently, planetary boundaries developer Will Steffen and co-authors ranked global population change as the leading indicator of the influence of socio-economic trends on the functioning of the Earth system in the modern era, post-1750. See also Further reading Kin, Cheng Sok, et al. "Predicting Earth's Carrying Capacity of Human Population as the Predator and the Natural Resources as the Prey in the Modified Lotka-Volterra Equations with Time-dependent Parameters." arXiv preprint arXiv:1904.05002 (2019). References Control of demographics Demographics indicators Ecological metrics Population ecology Economic geography Ecological economics Environmental terminology
Carrying capacity
Mathematics
5,407
48,784,874
https://en.wikipedia.org/wiki/Elias%20Ladopoulos
Elias Ladopoulos is a technologist and investor from New York City. Under the pseudonym Acid Phreak, he was a founder of the Masters of Deception (MOD) hacker group along with Phiber Optik (Mark Abene) and Scorpion (Paul Stira). Referred to as The Gang That Ruled Cyberspace in a 1995 non-fiction book, MOD was at the forefront of exploiting telephone systems to hack into the private networks of major corporations. In his later career, Ladopoulos developed new techniques for electronic trading and computerized projections of stocks and shares performance, as well as working as a security consultant for the defense department. As of 2015, he is CEO of Supermassive Corp, which is a hacker-based incubation studio for technology start-ups. Founding of MOD When Ladopoulos and Stira were engaged in exploring an unusual telephone system computer, Ladopoulos suggested seeking advice from Phiber Optik (Mark Abene), a well-known phreak who was also a member of the prestigious Legion of Doom (LOD) group. A productive phone hacking partnership developed, with the group later branding themselves Masters of Deception (MOD). MOD's hacking exploits included taking control of every major phone system and global packet-switching network in the United States. Ladopoulos claims that he and another hacker were able to place a call to Queen Elizabeth II. Their pranks included taking over the printers of the Public Broadcasting Service (PBS), an incident that escalated when another hacker used the access they had established to wipe the PBS systems. The group is also known for retrieving phone and credit information for celebrities such as Julia Roberts and John Gotti. Conflict with former Legion of Doom members Abene's involvement in both LOD and MOD showed a natural alignment between the two groups in MOD's early years. As LOD's original membership broke up, however, conflicts arose between Abene and Eric Bloodaxe (Chris Goggans), another LOD member. Goggans declared that Abene had been expelled from LOD, resulting in a permanent split between the two groups. Ladopoulos is credited with writing "The History of MOD" for "other hackers to envy." Further disagreements and pranks, including the hacking of Goggans's security consultancy ComSec, have been characterized as the Great Hacker War. Prosecution On January 15, 1990 (Martin Luther King Day), the AT&T telephone network crashed. Later investigations revealed the cause to be a software bug; however, an FBI task force that had been investigating MOD was convinced the group was implicated. On January 24 the FBI raided the homes of five MOD members, including Ladopoulos, Abene, and Stira. Despite being released without charge due to lack of evidence, the MOD members were later re-arrested on a conspiracy charge following wire-tapping of future MOD members. After Abene rejected a plea bargain, Ladopoulos refused to testify against his fellow hacker, pleaded guilty and was sentenced to 6 months in a supervised camp facility, followed by 6 months' house arrest. According to U.S. attorney Otto Obermaier it was the "first investigative use of court-authorized wiretaps to obtain conversations and data transmissions of computer hackers" in the United States. Career After completing his sentence, Ladopoulos was hired as a security engineer by the Reuters-owned electronic trading business Instinet.
Hiring other former hackers, Ladopoulos built a department responsible for securing Instinet's global trading operations and developing security systems that were later acquired by NASDAQ. Later, as a consultant for Instinet, Ladopoulos also worked as VP Operations for the government security contractor NetSec (later Verizon Government). In 2008, he founded Kinetic Global Markets with Roger Ehrenberg. As CEO and CIO, he led a team pioneering new approaches to systematic trading based on the computational analysis of terms used in SEC filings. Ladopoulos consulted on Ehrenberg's launch of IA Venture Capital. In 2013, Ladopoulos founded Supermassive Corp., which describes itself as the original hacker incubation studio, "bringing together teams of extremely unique talents to rapidly prototype ideas that have a big impact." References External links The History of MOD modbook1.txt — "The History of MOD: Book One: The Originals" modbook2.txt — "The History of MOD: Book Two: Creative Mindz" modbook3.txt — "The Book of MOD: Part Three: A Kick in the Groin" modbook4.txt — "The Book of MOD: Part Four: End of '90-'1991" modbook5.txt — "The Book of MOD: Part 5: Who are They And Where Did They Come From? (Summer 1991)" Small Scale Sin, Act Three http://www.thisamericanlife.org/radio-archives/episode/2/small-scale-sin?act=3#play Hackers Living people Masters of Deception 1972 births Legion of Doom (hacker group) Businesspeople from New York City Phreaking
Elias Ladopoulos
Technology
1,065
1,871,333
https://en.wikipedia.org/wiki/Honda%20VT1100
The Honda VT1100 is a motorcycle engine used in the Honda Shadow 1100 motorcycle line from its debut in 1985 until production ended in 2007. In this 22-year run, there were minimal changes. It is a liquid cooled, , 45 degree V-twin. It has a bore and stroke of 87.5mm x 91.4mm with an 8:1 compression ratio. It is a shaft driven, single overhead cam (SOHC) V2, with 3 valves and 2 spark plugs per cylinder. The valves are hydraulically actuated, requiring little, if any, maintenance over the life of the engine. They come with dual 36mm diaphragm-type CV carburetors and a solid state digital ignition. Depending on application and tuning, the dual pin crankshaft models produce at the crankshaft (brake horsepower) ~ @ 5000 rpm and ~ @ 2750 rpm. Single pin crank models produced about and less. The 1985-1986 models produced about 78.4 bhp @ 6000 rpm and 73 ft lbs @ 4,500 rpm. These engines came with either a 5 speed manual transmission (1985-1986, 1997-2007) or a 4 speed manual transmission (1987-1996 VT1100C). All years are shaft drive. Final drive ratio is similar between these transmissions (with one exception: the Honda Shadow Spirit has a 14% higher final drive ratio, which lowers the RPM at highway speeds). For the lower geared bikes such as the VT1100T, the 33T on the countershaft drives the 31T on the damper shaft (Honda calls this a cross shaft). For the VT1100T, Sabre and Aero, high gear RPM is around 3250 @ 60 mph. Honda not only put a slightly lower first gear in the VT1100T to help with an expected fully loaded touring motorcycle, but also used this lower gear in the Tourer and Sabre. Honda also placed a slightly lower 5th gear in the Aero, Tourer, and Sabre to give it around 3380 RPM @ 60 mph. 8:1 compared to the 7.6:1 of the A.C.E. For the higher geared VT1100C (1997-2007), 36T on the countershaft drives the 29T on the damper shaft. For the VT1100C Spirit, high gear RPM is around 2730 @ 60 mph. Honda said the Aero has about 5 more HP than the other VT1100s because of the exhaust system design, but compared to the ACE it weighs about 40 pounds more. Also, the lower high gear ratio in the Aero gives it better passing power without downshifting, but at a noticeable cost in fuel economy. The VT1100 has been used in the following Honda motorcycles with these model designations:
VT1100C - 1985-1996 (sometimes called "Classic")
VT1100C - 1997-2007 (Spirit) models
VT1100C2 - 1995-1999 American Classic Edition (ACE) and 2000-2007 Sabre models
VT1100C3 - 1998-2002 Aero models
VT1100T - 1998-2001 ACE Tourer models.
The 1995-1999 VT1100C2 ACE and 1998-2001 VT1100C3 Aero models are single crank-pin models; all other 1100s are dual crankpin. The single crank pin model gave the engine a "loping idle" and more "rumble" in an attempt to mimic Harley-Davidson V-twins. It also lost about and around 10 ft lbs of torque compared to the dual pin engine. There is also more vibration with the single pin crank engine. References Motorcycle engines VT1100
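As a rough illustration of how the quoted tooth counts translate into highway engine speed, the Python sketch below compares the two countershaft/damper-shaft gear pairs. It assumes the primary reduction, top-gear ratio, final bevel gearing and tire size are identical between models, which is only approximately true (the article notes the Aero, Tourer and Sabre also use a slightly lower 5th gear), so the output is an estimate rather than a specification.

```python
# Compare the countershaft -> damper-shaft gear pairs quoted in the article.
# Assumption: every other ratio in the drivetrain is the same between models.

def stage_ratio(driver_teeth, driven_teeth):
    """Engine-speed factor contributed by this gear pair at a fixed road speed."""
    return driven_teeth / driver_teeth

vt1100t = stage_ratio(33, 31)   # lower-geared models: 33T drives 31T -> ~0.939
spirit = stage_ratio(36, 29)    # VT1100C Spirit:      36T drives 29T -> ~0.806

reduction = (vt1100t - spirit) / vt1100t
print(f"Spirit turns about {reduction:.1%} fewer rpm at a given road speed")  # ~14.2%

rpm_vt1100t_60mph = 3250                              # figure quoted above for high gear
rpm_spirit_est = rpm_vt1100t_60mph * spirit / vt1100t
print(f"Estimated Spirit rpm @ 60 mph: {rpm_spirit_est:.0f}")  # ~2790, vs the quoted ~2730
```

The roughly 14% figure matches the final-drive difference quoted in the article, and the estimated cruise rpm lands close to the stated ~2730 rpm for the Spirit; the remaining gap is consistent with the different 5th-gear ratios.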
Honda VT1100
Technology
740
262,401
https://en.wikipedia.org/wiki/Translation%20%28biology%29
In biology, translation is the process in living cells in which proteins are produced using RNA molecules as templates. The generated protein is a sequence of amino acids. This sequence is determined by the sequence of nucleotides in the RNA. The nucleotides are considered three at a time. Each such triple results in addition of one specific amino acid to the protein being generated. The matching from nucleotide triple to amino acid is called the genetic code. The translation is performed by a large complex of functional RNA and proteins called ribosomes. Translation is one stage of the larger process of gene expression. In translation, messenger RNA (mRNA) is decoded in a ribosome, outside the nucleus, to produce a specific amino acid chain, or polypeptide. The polypeptide later folds into an active protein and performs its functions in the cell. The polypeptide can also start folding during protein synthesis. The ribosome facilitates decoding by inducing the binding of complementary transfer RNA (tRNA) anticodon sequences to mRNA codons. The tRNAs carry specific amino acids that are chained together into a polypeptide as the mRNA passes through and is "read" by the ribosome. Translation proceeds in three phases:
Initiation: The ribosome assembles around the target mRNA. The first tRNA is attached at the start codon.
Elongation: The last tRNA validated by the small ribosomal subunit (accommodation) transfers the amino acid it carries to the large ribosomal subunit, which binds it to one of the previously admitted tRNAs (transpeptidation). The ribosome then moves to the next mRNA codon to continue the process (translocation), creating an amino acid chain.
Termination: When a stop codon is reached, the ribosome releases the polypeptide. The ribosomal complex remains intact and moves on to the next mRNA to be translated.
In prokaryotes (bacteria and archaea), translation occurs in the cytosol, where the large and small subunits of the ribosome bind to the mRNA. In eukaryotes, translation occurs in the cytoplasm or across the membrane of the endoplasmic reticulum through a process called co-translational translocation. In co-translational translocation, the entire ribosome/mRNA complex binds to the outer membrane of the rough endoplasmic reticulum (ER), and the new protein is synthesized and released into the ER; the newly created polypeptide can be stored inside the ER for future vesicle transport and secretion outside the cell, or immediately secreted. Many types of transcribed RNA, such as tRNA, ribosomal RNA, and small nuclear RNA, do not undergo translation into proteins. Several antibiotics act by inhibiting translation. These include anisomycin, cycloheximide, chloramphenicol, tetracycline, streptomycin, erythromycin, and puromycin. Prokaryotic ribosomes have a different structure from that of eukaryotic ribosomes, and thus antibiotics can specifically target bacterial infections without any harm to a eukaryotic host's cells. Basic mechanisms The basic process of protein production is the addition of one amino acid at a time to the end of a protein. This operation is performed by a ribosome. A ribosome is made up of two subunits, a small subunit and a large subunit. These subunits come together before the translation of mRNA into a protein to provide a location for translation to be carried out and a polypeptide to be produced. The choice of amino acid type to add is determined by a messenger RNA (mRNA) molecule. Each amino acid added is matched to a three-nucleotide subsequence of the mRNA.
For each such triplet possible, the corresponding amino acid is accepted. The successive amino acids added to the chain are matched to successive nucleotide triplets in the mRNA. In this way, the sequence of nucleotides in the template mRNA chain determines the sequence of amino acids in the generated amino acid chain. The addition of an amino acid occurs at the C-terminus of the peptide; thus, translation is said to be amine-to-carboxyl directed. The mRNA carries genetic information encoded as a ribonucleotide sequence from the chromosomes to the ribosomes. The ribonucleotides are "read" by translational machinery in a sequence of nucleotide triplets called codons. Each of those triplets codes for a specific amino acid. The ribosome molecules translate this code to a specific sequence of amino acids. The ribosome is a multisubunit structure containing ribosomal RNA (rRNA) and proteins. It is the "factory" where amino acids are assembled into proteins. Transfer RNAs (tRNAs) are small noncoding RNA chains (74–93 nucleotides) that transport amino acids to the ribosome. The repertoire of tRNA genes varies widely between species, with some bacteria having between 20 and 30 genes while complex eukaryotes could have thousands. tRNAs have a site for amino acid attachment, and a site called an anticodon. The anticodon is an RNA triplet complementary to the mRNA triplet that codes for their cargo amino acid. Aminoacyl tRNA synthetases (enzymes) catalyze the bonding between specific tRNAs and the amino acids that their anticodon sequences call for. The product of this reaction is an aminoacyl-tRNA. The amino acid is joined by its carboxyl group to the 3' OH of the tRNA by an ester bond. When the tRNA has an amino acid linked to it, the tRNA is termed "charged". In bacteria, this aminoacyl-tRNA is carried to the ribosome by EF-Tu, where mRNA codons are matched through complementary base pairing to specific tRNA anticodons. Aminoacyl-tRNA synthetases that mispair tRNAs with the wrong amino acids can produce mischarged aminoacyl-tRNAs, which can result in inappropriate amino acids at the respective position in the protein. This "mistranslation" of the genetic code naturally occurs at low levels in most organisms, but certain cellular environments cause an increase in permissive mRNA decoding, sometimes to the benefit of the cell. The ribosome has two binding sites for tRNA. They are the aminoacyl site (abbreviated A), and the peptidyl site/ exit site (abbreviated P/E). Concerning the mRNA, the three sites are oriented 5' to 3' E-P-A, because ribosomes move toward the 3' end of mRNA. The A-site binds the incoming tRNA with the complementary codon on the mRNA. The P/E-site holds the tRNA with the growing polypeptide chain. When an aminoacyl-tRNA initially binds to its corresponding codon on the mRNA, it is in the A site. Then, a peptide bond forms between the amino acid of the tRNA in the A site and the amino acid of the charged tRNA in the P/E site. The growing polypeptide chain is transferred to the tRNA in the A site. Translocation occurs, moving the tRNA to the P/E site, now without an amino acid; the tRNA that was in the A site, now charged with the polypeptide chain, is moved to the P/E site and the uncharged tRNA leaves, and another aminoacyl-tRNA enters the A site to repeat the process. 
After the new amino acid is added to the chain, and after the tRNA is released out of the ribosome and into the cytosol, the energy provided by the hydrolysis of a GTP bound to the translocase EF-G (in bacteria) and a/eEF-2 (in eukaryotes and archaea) moves the ribosome down one codon towards the 3' end. The energy required for translation of proteins is significant. For a protein containing n amino acids, the number of high-energy phosphate bonds required to translate it is 4n-1. The rate of translation varies; it is significantly higher in prokaryotic cells (up to 17–21 amino acid residues per second) than in eukaryotic cells (up to 6–9 amino acid residues per second). Initiation and termination of translation Initiation involves the small subunit of the ribosome binding to the 5' end of mRNA with the help of initiation factors (IF). In bacteria and a minority of archaea, initiation of protein synthesis involves the recognition of a purine-rich initiation sequence on the mRNA called the Shine–Dalgarno sequence. The Shine–Dalgarno sequence binds to a complementary pyrimidine-rich sequence on the 3' end of the 16S rRNA part of the 30S ribosomal subunit. The binding of these complementary sequences ensures that the 30S ribosomal subunit is bound to the mRNA and is aligned such that the initiation codon is placed in the 30S portion of the P-site. Once the mRNA and 30S subunit are properly bound, an initiation factor brings the initiator tRNA–amino acid complex, f-Met-tRNA, to the 30S P site. The initiation phase is completed once a 50S subunit joins the 30S subunit, forming an active 70S ribosome. Termination of the polypeptide occurs when the A site of the ribosome is occupied by a stop codon (UAA, UAG, or UGA) on the mRNA, creating the primary structure of a protein. tRNA usually cannot recognize or bind to stop codons. Instead, the stop codon induces the binding of a release factor protein (RF1 & RF2) that prompts the disassembly of the entire ribosome/mRNA complex by the hydrolysis of the polypeptide chain from the peptidyl transferase center of the ribosome. Drugs or special sequence motifs on the mRNA can change the ribosomal structure so that near-cognate tRNAs are bound to the stop codon instead of the release factors. In such cases of 'translational readthrough', translation continues until the ribosome encounters the next stop codon. Errors in translation Even though the ribosomes are usually considered accurate and processive machines, the translation process is subject to errors that can lead either to the synthesis of erroneous proteins or to the premature abandonment of translation, either because a tRNA couples to a wrong codon or because a tRNA is coupled to the wrong amino acid. The rate of error in synthesizing proteins has been estimated to be between 1 in 105 and 1 in 103 misincorporated amino acids, depending on the experimental conditions. The rate of premature translation abandonment, instead, has been estimated to be of the order of magnitude of 10−4 events per translated codon. Regulation The process of translation is highly regulated in both eukaryotic and prokaryotic organisms. Regulation of translation can impact the global rate of protein synthesis which is closely coupled to the metabolic and proliferative state of a cell. 
To study this process, scientists have used a wide variety of methods such as structural biology, analytical chemistry (mass-spectrometry based), imaging of reporter mRNA translation (in which the translation of an mRNA is linked to an output, such as luminescence or fluorescence), and next-generation sequencing based methods. Other methods, such as the toeprinting assay, can also be used to determine the location of ribosomes on a particular mRNA in vitro, and footprints of other proteins regulating translation. In particular, ribosome profiling, which is a powerful method, enables researchers to take a snapshot of all the proteins being translated at a given time, showing which parts of the mRNA are being translated into proteins by ribosomes at a given time. This method is useful because it looks at all the mRNAs instead of using reporters that would typically look at one specific mRNA at a time. Ribosome profiling provides valuable insights into translation dynamics, revealing the complex interplay between gene sequence, mRNA structure, and translation regulation. For example, research utilizing this method has revealed that genetic differences and their subsequent expression as mRNAs can also impact translation rate in an RNA-specific manner. Expanding on this concept, a more recent development is single-cell ribosome profiling, a technique that allows us to study the translation process at the resolution of individual cells. This is particularly significant as cells, even those of the same type, can exhibit considerable variability in their protein synthesis. Single-cell ribosome profiling has the potential to shed light on the heterogeneous nature of cells, leading to a more nuanced understanding of how translation regulation can impact cell behavior, metabolic state, and responsiveness to various stimuli or conditions. Clinical significance Translational control is critical for the development and survival of cancer. Cancer cells must frequently regulate the translation phase of gene expression, though it is not fully understood why translation is targeted over steps like transcription. While cancer cells often have genetically altered translation factors, it is much more common for cancer cells to modify the levels of existing translation factors. Several major oncogenic signaling pathways, including the RAS–MAPK, PI3K/AKT/mTOR, MYC, and WNT–β-catenin pathways, ultimately reprogram the genome via translation. Cancer cells also control translation to adapt to cellular stress. During stress, the cell translates mRNAs that can mitigate the stress and promote survival. An example of this is the expression of AMPK in various cancers; its activation triggers a cascade that can ultimately allow the cancer to escape apoptosis (programmed cell death) triggered by nutrient deprivation. Future cancer therapies may involve disrupting the translation machinery of the cell to counter the downstream effects of cancer. Mathematical modeling of translation The transcription-translation process description, mentioning only the most basic "elementary" processes, consists of: production of mRNA molecules (including splicing), initiation of these molecules with the help of initiation factors (e.g., the initiation can include the circularization step though it is not universally required), initiation of translation, recruiting the small ribosomal subunit, assembly of full ribosomes, elongation (i.e.
movement of ribosomes along mRNA with production of protein), termination of translation, degradation of mRNA molecules, and degradation of proteins. The process of adding amino acids to build a protein in translation has long been a subject of various physical models, starting from the first detailed kinetic models and from others taking into account stochastic aspects of translation and using computer simulations. Many chemical kinetics-based models of protein synthesis have been developed and analyzed in the last four decades. Beyond chemical kinetics, various modeling formalisms such as the Totally Asymmetric Simple Exclusion Process, Probabilistic Boolean Networks, Petri Nets and max-plus algebra have been applied to model the detailed kinetics of protein synthesis or some of its stages. A basic model of protein synthesis that takes into account all eight 'elementary' processes has been developed, following the paradigm that "useful models are simple and extendable". The simplest model M0 is represented by the reaction kinetic mechanism (Figure M0). It was generalised to include 40S, 60S and initiation factors (IF) binding (Figure M1'). It was extended further to include the effect of microRNA on protein synthesis. Most models in this hierarchy can be solved analytically. These solutions were used to extract 'kinetic signatures' of different specific mechanisms of synthesis regulation. Genetic code It is also possible to translate either by hand (for short sequences) or by computer (after first programming one appropriately, see section below); this allows biologists and chemists to draw out the chemical structure of the encoded protein on paper. First, convert each template DNA base to its RNA complement (note that the complement of A is now U), as shown below. Note that the template strand of the DNA is the one the RNA is polymerized against; the other DNA strand would be the same as the RNA, but with thymine instead of uracil.
DNA -> RNA
A -> U
T -> A
C -> G
G -> C
A=T -> A=U
Then split the RNA into triplets (groups of three bases). Note that there are 3 translation "windows", or reading frames, depending on where you start reading the code. Finally, use the table at Genetic code to translate the above into a structural formula as used in chemistry. This will give the primary structure of the protein. However, proteins tend to fold, depending in part on hydrophilic and hydrophobic segments along the chain. Secondary structure can often still be guessed at, but the proper tertiary structure is often very hard to determine. Whereas other aspects such as the 3D structure, called tertiary structure, of a protein can only be predicted using sophisticated algorithms, the amino acid sequence, called primary structure, can be determined solely from the nucleic acid sequence with the aid of a translation table. This approach may not give the correct amino acid composition of the protein, in particular if unconventional amino acids such as selenocysteine are incorporated into the protein, which is coded for by a conventional stop codon in combination with a downstream hairpin (SElenoCysteine Insertion Sequence, or SECIS). There are many computer programs capable of translating a DNA/RNA sequence into a protein sequence. Normally this is performed using the Standard Genetic Code; however, few programs can handle all the "special" cases, such as the use of the alternative initiation codons which are biologically significant.
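As a concrete illustration of such a program, the Python sketch below transcribes a template DNA strand into mRNA and translates it in the first reading frame. The codon dictionary is deliberately a small hand-written subset of the standard genetic code, and the sequence is an arbitrary example; a real translator would carry all 64 codons and the alternative start-codon rules mentioned above.

```python
# Minimal sketch: template DNA -> mRNA -> protein, using a hand-written
# subset of the standard genetic code.

COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}

CODON_TABLE = {  # incomplete, for illustration only
    "AUG": "M", "UUU": "F", "UUC": "F", "GGC": "G", "AAA": "K",
    "GAU": "D", "UGG": "W", "UAA": "*", "UAG": "*", "UGA": "*",
}

def transcribe(template_dna):
    """Build the mRNA complementary to the template strand (A pairs with U)."""
    return "".join(COMPLEMENT[base] for base in template_dna.upper())

def translate(mrna):
    """Read codons in the first reading frame until a stop codon (*) or the end."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")  # '?' = codon not in subset
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

mrna = transcribe("TACAAACCGTTTATT")   # template strand; position-wise complement gives the mRNA
print(mrna)                            # AUGUUUGGCAAAUAA
print(translate(mrna))                 # MFGK (translation stops at UAA)
```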
For instance, the rare alternative start codon CTG codes for Methionine when used as a start codon, and for Leucine in all other positions. Example: Condensed translation table for the Standard Genetic Code (from the NCBI Taxonomy webpage).
AAs    = FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = ---M---------------M---------------M----------------------------
Base1  = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2  = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3  = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
The "Starts" row indicates three start codons, UUG, CUG, and the very common AUG. It also indicates the first amino acid residue when interpreted as a start: in this case it is all methionine. Translation tables Even when working with ordinary eukaryotic sequences such as the Yeast genome, it is often desired to be able to use alternative translation tables—namely for translation of the mitochondrial genes. Currently the following translation tables are defined by the NCBI Taxonomy Group for the translation of the sequences in GenBank:
The standard code
The vertebrate mitochondrial code
The yeast mitochondrial code
The mold, protozoan, and coelenterate mitochondrial code and the mycoplasma/spiroplasma code
The invertebrate mitochondrial code
The ciliate, dasycladacean and hexamita nuclear code
The kinetoplast code
The echinoderm and flatworm mitochondrial code
The euplotid nuclear code
The bacterial, archaeal and plant plastid code
The alternative yeast nuclear code
The ascidian mitochondrial code
The alternative flatworm mitochondrial code
The Blepharisma nuclear code
The chlorophycean mitochondrial code
The trematode mitochondrial code
The Scenedesmus obliquus mitochondrial code
The Thraustochytrium mitochondrial code
The Pterobranchia mitochondrial code
The candidate division SR1 and gracilibacteria code
The Pachysolen tannophilus nuclear code
The karyorelict nuclear code
The Condylostoma nuclear code
The Mesodinium nuclear code
The peritrich nuclear code
The Blastocrithidia nuclear code
The Cephalodiscidae mitochondrial code
See also Cell (biology) Cell division DNA codon table Epigenetics Expanded genetic code Gene expression Gene regulation Gene Genome Life Protein methods Start codon References Further reading External links Virtual Cell Animation Collection: Introducing Translation Translate tool (from DNA or RNA sequence) Molecular biology Protein biosynthesis Gene expression Cellular processes
Translation (biology)
Chemistry,Biology
4,402
34,051,196
https://en.wikipedia.org/wiki/Organic%20photorefractive%20materials
Organic photorefractive materials are materials that exhibit a temporary change in refractive index when exposed to light. The changing refractive index causes light to change speed throughout the material and produce light and dark regions in the crystal. The buildup can be controlled to produce holographic images for use in biomedical scans and optical computing. The ease with which the chemical composition can be changed in organic materials makes the photorefractive effect more controllable. History Although the physics behind the photorefractive effect was known for quite a while, the effect was first observed in 1967 in LiNbO3. For more than thirty years, the effect was observed and studied exclusively in inorganic materials, until 1990, when a nonlinear organic crystal 2-(cyclooctylamino)-5-nitropyridine (COANP) doped with 7,7,8,8-tetracyanoquinodimethane (TCNQ) exhibited the photorefractive effect. Even though inorganic material-based electronics dominate the current market, organic PR materials have been improved greatly since then and are currently considered to be an equal alternative to inorganic crystals. Theory There are two phenomena that, when combined together, produce the photorefractive effect. These are photoconductivity, first observed in selenium by Willoughby Smith in 1873, and the Pockels effect, named after Friedrich Carl Alwin Pockels who studied it in 1893. Photoconductivity is the property of a material that describes the capability of incident light of adequate wavelength to produce electric charge carriers. The Fermi level of an intrinsic semiconductor is exactly in the middle of the band gap. The densities of free electrons n in the conduction band and free holes h in the valence band can be found through the equations n = Nc exp(−(Ec − EF)/kBT) and h = Nv exp(−(EF − Ev)/kBT), where Nc and Nv are the densities of states at the bottom of the conduction band and the top of the valence band, respectively, Ec and Ev are the corresponding energies, EF is the Fermi level, kB is the Boltzmann constant and T is the absolute temperature. Addition of impurities into the semiconductor, or doping, produces excess holes or electrons, which, with sufficient density, may pin the Fermi level to the impurities' position. Sufficiently energetic light can excite charge carriers so much that they will populate the initially empty localized levels. Then, the density of free carriers in the conduction and/or the valence band will increase. To account for these changes, steady-state quasi-Fermi levels are defined for electrons to be EFn and, for holes, EFp. The densities n and h are then given by the same expressions with the quasi-Fermi levels in place of EF: n = Nc exp(−(Ec − EFn)/kBT) and h = Nv exp(−(EFp − Ev)/kBT). The localized states between EFn and EFp are known as 'photoactive centers'. The charge carriers remain in these states for a long time until they recombine with an oppositely charged carrier. The states outside the EFn − EFp energy range, however, relax their charge carriers to the nearest extended states. The effect of incident light on the conductivity of the material depends on the energy of the light and on the material. Differently-doped materials may have several different types of photoactive centers, each of which requires a different mathematical treatment. However, it is not very difficult to show the relationship between incident light and conductivity in a material with only one type of charge carrier and one type of photoactive center.
The dark conductivity of such a material is given by where σd is the conductivity, e = elementary charge, Nd and N are the densities of total photoactive centers and ionized empty electron acceptor states, respectively, β is the thermal photoelectron generation coefficient, μ is the mobility constant and τ is the photoelectron lifetime. The equation for photoconductivity substitutes the parameters of the incident light for β and is in which s is the effective cross-section for photoelectron generation, h is the Planck constant, ν is the frequency of incident light, and the term I = I0e−αz in which I0 is the incident irradiance, z is the coordinate along the crystal thickness and α is the light intensity loss coefficient. The electro-optic effect is a change of the optical properties of a given material in response to an electric field. There are many different occurrences, all of which are in the subgroup of the electro-optic effect, and Pockels effect is one of these occurrences. Essentially, the Pockels effect is the change of the material's refractive index induced by an applied electric field. The refractive index of a material is the factor by which the phase velocity is decreased relative to the velocity of light in vacuum. At a microscale, such a decrease occurs because of a disturbance in the charges of each atom after being subjected to the electromagnetic field of the incident light. As the electrons move around energy levels, some energy is released as an electromagnetic wave at the same frequency but with a phase delay. The apparent light in a medium is a superposition of all of the waves released in such way, and so the resulting light wave has shorter wavelength but the same frequency and the light wave's phase speed is slowed down. Whether the material will exhibit Pockels effect depends on its symmetry. Both centrosymmetric and non-centrosymmetric media will exhibit an effect similar to Pockels, the Kerr effect. The refractive index change will be proportional to the square of the electric field strength and will therefore be much weaker than the Pockels effect. It is only the non-centrosymmetric materials that can exhibit the Pockels effect: for instance, lithium tantalite (trigonal crystal) or gallium arsenide (zinc-blende crystal); as well as poled polymers with specifically designed organic molecules. It is possible to describe the Pockels effect mathematically by first introducing the index ellipsoid – a concept relating the orientation and relative magnitude of the material's refractive indices. The ellipsoid is defined by in which εi is the relative permittivity along the x, y, or z axis, and R is the reduced displacement vector defined as Di/ in which Di is the electric displacement vector and W is the field energy. The electric field will induce a deformation in Ri as according to: in which E is the applied electric field, and rij is a coefficient that depends on the crystal symmetry and the orientation of the coordinate system with respect to the crystal axes. Some of these coefficients will usually be equal to zero. 
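To make the carrier-statistics relations in the Theory section concrete, the short Python sketch below evaluates the Boltzmann-type expressions for the free-electron and free-hole densities. All parameter values are illustrative assumptions, not data for any particular photorefractive material.

```python
import math

# Boltzmann-type carrier densities, n = Nc*exp(-(Ec-EF)/kT) and h = Nv*exp(-(EF-Ev)/kT),
# evaluated for hypothetical parameters (illustration only).

K_B = 8.617e-5  # Boltzmann constant, eV/K

def carrier_densities(Nc, Nv, Ec, Ev, Ef, T):
    """Free electron and hole densities (same units as Nc and Nv)."""
    n = Nc * math.exp(-(Ec - Ef) / (K_B * T))
    h = Nv * math.exp(-(Ef - Ev) / (K_B * T))
    return n, h

# Hypothetical material: 2.0 eV gap, Fermi level at mid-gap, room temperature.
Nc = Nv = 1e21          # effective densities of states, cm^-3 (illustrative)
Ec, Ev = 2.0, 0.0       # band edges, eV
n, h = carrier_densities(Nc, Nv, Ec, Ev, 1.0, 300)
print(f"in the dark: n = {n:.1e} cm^-3, h = {h:.1e} cm^-3")   # very few thermal carriers

# Photogeneration that shifts the electron quasi-Fermi level EFn toward Ec
# raises the free-electron density dramatically:
n_photo, _ = carrier_densities(Nc, Nv, Ec, Ev, 1.6, 300)
print(f"under illumination: n ~ {n_photo:.1e} cm^-3")
```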
Organic photorefractive materials In general, photorefractive materials can be classified into the following categories, although the border between categories may not be sharp in each case:
Inorganic crystals and compound semiconductors
Multiple quantum well structures
Organic crystalline materials
Polymer dispersed liquid crystalline materials (PDLC)
Organic amorphous materials
In the field of this research, initial investigations were mainly carried out with inorganic semiconductors. A huge variety of inorganic crystals such as BaTiO3, KNbO3 and LiNbO3, and inorganic compound semiconductors such as GaAs, InP and CdTe, have been reported in the literature. The first photorefractive (PR) effect in organic materials was reported in 1991, and since then research on organic photorefractive materials has drawn major attention compared to inorganic PR semiconductors. This is mainly due to cost effectiveness, relatively easy synthetic procedures, and properties that can be tuned through chemical or compositional changes. Polymer or polymer composite materials have shown excellent photorefractive properties, with diffraction efficiencies of up to 100%. Most recently, amorphous composites of low glass transition temperature have emerged as highly efficient PR materials. These two classes of organic PR materials are also the most investigated. These composite materials have four components to be discussed in terms of the PR effect: conducting materials, a sensitizer, a chromophore, and other dopant molecules. According to the literature, the design strategy for charge conductors is mainly p-type based, and the issues of sensitizing are centered on n-type electron-accepting materials, which are usually of very low content in the blends and thus do not provide a complementary path for electron conduction. In recent publications on organic PR materials, it is common to incorporate a polymeric material with charge transport units in its main or side chain. In this way, the polymer also serves as a host matrix to provide the resultant composite material with a sufficient viscosity for reasons of processing. Most guest-host composites demonstrated in the literature so far are based on hole conducting polymeric materials. The vast majority of the polymers are based on carbazole containing polymers like poly-(N-vinyl carbazole) (PVK) and polysiloxanes (PSX). PVK is a well studied system for a huge variety of applications. In polymers, charge is transported through the HOMO, and the mobility is influenced by the nature of the dopant mixed into the polymer; it also depends on the amount of dopant, which may exceed 50 weight percent of the composite for guest-host materials. The mobility decreases as the concentration of charge-transport moieties decreases and as the dopant's polarity and concentration increase. Besides the mobility, the ionization potential of the polymer and the respective dopant is also of significant importance. The relative position of the polymer HOMO with respect to the ionization potential of the other components of the blends determines the extent of extrinsic hole traps in the material. TPD (tetraphenyldiaminophenyl) based materials are known to exhibit higher charge carrier mobilities and lower ionization potentials compared to carbazole based (PVK) materials. The low ionization potentials of the TPD based materials greatly enhance the photoconductivity of the materials.
This is partly due to the enhanced complexation of the hole conductor, which is an electron donor, with the sensitizing agent, which is an electron acceptor. A dramatic increase in the photogeneration efficiency, from 0.3% to 100%, was reported on lowering the ionization potential from 5.90 eV (PVK) to 5.39 eV (the TPD derivative PATPD). This is schematically explained in the diagram using the electronic states of PVK and PATPD. Applications As of 2011, no commercial products utilizing organic photorefractive materials exist. All applications described are speculative or performed in research laboratories. The large DC fields required to produce holograms can lead to dielectric breakdown and are not suitable outside the laboratory. Reusable Holographic Displays Many materials exist for recording static, permanent holograms, including photopolymers, silver halide films, photoresists, dichromated gelatin, and photorefractives. Materials vary in their maximum diffraction efficiency, required power consumption, and resolution. Photorefractives have a high diffraction efficiency, moderate-to-low power consumption, and a high resolution. Updatable holograms that do not require glasses are attractive for medical and military imaging. The material properties required to produce updatable holograms are 100% diffraction efficiency, fast writing time, long image persistence, fast erasing time, and large area. Inorganic materials capable of rapid updating exist but are difficult to grow larger than a cubic centimeter. Liquid crystal 3D displays exist but require complex computation to produce images, which limits their refresh rate and size. Blanche et al. demonstrated in 2008 a 4 in x 4 in display that refreshed every few minutes and lasted several hours. Organic photorefractive materials are capable of kHz refresh rates, though this is limited by material sensitivity and laser power. The material sensitivities demonstrated in 2010 required kW pulsed lasers. Tunable color filter White light passed through an organic photorefractive diffraction grating leads to the absorption of wavelengths generated by surface plasmon resonance and the reflection of complementary wavelengths. The period of the diffraction grating may be adjusted to control the wavelengths of the reflected light. This could be used for filter channels, optical attenuators, and optical color filters. Optical communications Free-space optical communications (FSO) can be used for high-bandwidth communication of data by utilizing high frequency lasers. Phase distortions created by the atmosphere can be corrected by a four-wave mixing process utilizing organic photorefractive holograms. The nature of FSO allows images to be transmitted at near original quality in real-time. The correction also works for moving images. Image and signal processing Organic photorefractive materials are a nonlinear medium in which large amounts of information can be recorded and read. Holograms, due to the inherent parallel nature of optical recording, are able to quickly process large amounts of data. Holograms that can be quickly produced and read can be used to verify the authenticity of documents, similar to a watermark. Organic photorefractive correlators use matched filter and Joint Fourier Transform configurations. Logical functions (AND, OR, NOR, XOR, NOT) were carried out using two-wave signal processing. High diffraction efficiency allowed a CCD detector to distinguish between light pixels (1 bit) and dark pixels (0 bits).
References Holography Nonlinear optical materials Organic semiconductors Semiconductor material types
Organic photorefractive materials
Chemistry
2,735
7,701
https://en.wikipedia.org/wiki/Cocaine
Cocaine (, , ultimately ) is a tropane alkaloid that acts as a central nervous system stimulant. As an extract, it is mainly used recreationally and often illegally for its euphoric and rewarding effects. It is also used in medicine by Indigenous South Americans for various purposes and rarely, but more formally, as a local anaesthetic or diagnostic tool by medical practitioners in more developed countries. It is primarily obtained from the leaves of two Coca species native to South America: Erythroxylum coca and E. novogranatense. After extraction from the plant, and further processing into cocaine hydrochloride (powdered cocaine), the drug is administered by being either snorted, applied topically to the mouth, or dissolved and injected into a vein. It can also then be turned into free base form (typically crack cocaine), in which it can be heated until sublimated and then the vapours can be inhaled. Cocaine stimulates the mesolimbic pathway in the brain. Mental effects may include an intense feeling of happiness, sexual arousal, loss of contact with reality, or agitation. Physical effects may include a fast heart rate, sweating, and dilated pupils. High doses can result in high blood pressure or high body temperature. Onset of effects can begin within seconds to minutes of use, depending on method of delivery, and can last between five and ninety minutes. As cocaine also has numbing and blood vessel constriction properties, it is occasionally used during surgery on the throat or inside of the nose to control pain, bleeding, and vocal cord spasm. Cocaine crosses the blood–brain barrier via a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. Cocaine blocks the dopamine transporter, inhibiting reuptake of dopamine from the synaptic cleft into the pre-synaptic axon terminal; the higher dopamine levels in the synaptic cleft increase dopamine receptor activation in the post-synaptic neuron, causing euphoria and arousal. Cocaine also blocks the serotonin transporter and norepinephrine transporter, inhibiting reuptake of serotonin and norepinephrine from the synaptic cleft into the pre-synaptic axon terminal and increasing activation of serotonin receptors and norepinephrine receptors in the post-synaptic neuron, contributing to the mental and physical effects of cocaine exposure. A single dose of cocaine induces tolerance to the drug's effects. Repeated use is likely to result in addiction. Addicts who abstain from cocaine may experience prolonged craving lasting for many months. Abstaining addicts also experience modest drug withdrawal symptoms lasting up to 24 hours, with sleep disruption, anxiety, irritability, crashing, depression, decreased libido, decreased ability to feel pleasure, and fatigue being common. Use of cocaine increases the overall risk of death, and intravenous use potentially increases the risk of trauma and infectious diseases such as blood infections and HIV through the use of shared paraphernalia. It also increases risk of stroke, heart attack, cardiac arrhythmia, lung injury (when smoked), and sudden cardiac death. Illicitly sold cocaine can be adulterated with fentanyl, local anesthetics, levamisole, cornstarch, quinine, or sugar, which can result in additional toxicity. In 2017, the Global Burden of Disease study found that cocaine use caused around 7,300 deaths annually. Uses Coca leaves have been used by Andean civilizations since ancient times. 
In ancient Wari culture, Inca culture, and through modern successor indigenous cultures of the Andes mountains, coca leaves are chewed, taken orally in the form of a tea, or alternatively, prepared in a sachet wrapped around alkaline burnt ashes, and held in the mouth against the inner cheek; it has traditionally been used to combat the effects of cold, hunger, and altitude sickness. Cocaine was first isolated from the leaves in 1860. Globally, in 2019, cocaine was used by an estimated 20 million people (0.4% of adults aged 15 to 64 years). The highest prevalence of cocaine use was in Australia and New Zealand (2.1%), followed by North America (2.1%), Western and Central Europe (1.4%), and South and Central America (1.0%). Since 1961, the Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. In the United States, cocaine is regulated as a Schedule II drug under the Controlled Substances Act, meaning that it has a high potential for abuse but has an accepted medical use. While rarely used medically today, its accepted uses are as a topical local anesthetic for the upper respiratory tract as well as to reduce bleeding in the mouth, throat and nasal cavities. Medical Cocaine eye drops are frequently used by neurologists when examining people suspected of having Horner syndrome. In Horner syndrome, sympathetic innervation to the eye is blocked. In a healthy eye, cocaine will stimulate the sympathetic nerves by inhibiting norepinephrine reuptake, and the pupil will dilate; if the patient has Horner syndrome, the sympathetic nerves are blocked, and the affected eye will remain constricted or dilate to a lesser extent than the opposing (unaffected) eye which also receives the eye drop test. If both eyes dilate equally, the patient does not have Horner syndrome. Topical cocaine is sometimes used as a local numbing agent and vasoconstrictor to help control pain and bleeding with surgery of the nose, mouth, throat or lacrimal duct. Although some absorption and systemic effects may occur, the use of cocaine as a topical anesthetic and vasoconstrictor is generally safe, rarely causing cardiovascular toxicity, glaucoma, and pupil dilation. Occasionally, cocaine is mixed with adrenaline and sodium bicarbonate and used topically for surgery, a formulation called Moffett's solution. Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020. The most common adverse reactions in people treated with Goprelto are headache and epistaxis. The most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia. Recreational Cocaine is a central nervous system stimulant. Its effects can last from 15 minutes to an hour. The duration of cocaine's effects depends on the amount taken and the route of administration. Cocaine can be in the form of fine white powder and has a bitter taste. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water. Crack cocaine is referred to as "crack" because of the crackling sounds it makes when heated. 
Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality. Analysis of the correlation between the use of 18 various psychoactive substances shows that cocaine use correlates with other "party drugs" (such as ecstasy or amphetamines), as well as with heroin and benzodiazepines use, and can be considered as a bridge between the use of different groups of drugs. Coca leaves It is legal for people to use coca leaves in some Andean nations, such as Peru and Bolivia, where they are chewed, consumed in the form of tea, or are sometimes incorporated into food products. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the buccal pouch (mouth between gum and cheek, much the same as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Coca tea, an infusion of coca leaves, is also a traditional method of consumption. The tea has often been recommended for travelers in the Andes to prevent altitude sickness. Its actual effectiveness has never been systematically studied. In 1986 an article in the Journal of the American Medical Association revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as "Health Inca Tea". While the packaging claimed it had been "decocainized", no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and the tea was essentially harmless. Insufflation Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. Cocaine's desired euphoric effects are delayed when snorted through the nose by about five minutes. This occurs because cocaine's absorption is slowed by its constricting effect on the blood vessels of the nose. Insufflation of cocaine also leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60% In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is due to cocaine constricting blood vessels — and therefore restricting blood and oxygen/nutrient flow — to that area. Rolled up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. The cocaine typically is poured onto a flat, hard surface (such as a mobile phone screen, mirror, CD case or book) and divided into "bumps", "lines" or "rails", and then insufflated. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C. Injection Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually when over 120 milligrams) lasting 2 to 5 minutes including tinnitus and audio distortion. This is colloquially referred to as a "bell ringer". 
In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also the danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. An injected mixture of cocaine and heroin, known as "speedball", is a particularly dangerous combination, as the converse effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including celebrities such as comedians/actors John Belushi and Chris Farley, Mitch Hedberg, River Phoenix, grunge singer Layne Staley and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction. Inhalation The onset of cocaine's euphoric effects is fastest with inhalation, beginning after 3–5 seconds. This gives the briefest euphoria (5–15 minutes). Cocaine is smoked by inhaling the vapor produced when crack cocaine is heated to the point of sublimation. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 people who used cocaine who participated in the study, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products of cocaine that occur only when heated/smoked have been shown to change the effect profile, e.g. anhydroecgonine methyl ester, when co-administered with cocaine, increases the dopamine in CPu and NAc brain regions, and has M1- and M3-receptor affinity. People often freebase crack with a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad, often called a "brillo" (actual Brillo Pads contain soap, and are not used) or "chore" (named for Chore Boy brand copper scouring pads), serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. The effects, felt almost immediately after smoking, are very intense and do not last long — usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt. Effects Acute Acute exposure to cocaine has many effects on humans, including euphoria, increases in heart rate and blood pressure, and increases in cortisol secretion from the adrenal gland. In humans with acute exposure followed by continuous exposure to cocaine at a constant blood concentration, the acute tolerance to the chronotropic cardiac effects of cocaine begins after about 10 minutes, while acute tolerance to the euphoric effects of cocaine begins after about one hour. With excessive or prolonged use, the drug can cause itching, fast heart rate, and paranoid delusions or sensations of insects crawling on the skin. Intranasal cocaine and crack use are both associated with pharmacological violence. Aggressive behavior may be displayed by both addicts and casual users.
Cocaine can induce psychosis characterized by paranoia, impaired reality testing, hallucinations, irritability, and physical aggression. Cocaine intoxication can cause hyperawareness, hypervigilance, and psychomotor agitation and delirium. Consumption of large doses of cocaine can cause violent outbursts, especially by those with preexisting psychosis. Crack-related violence is also systemic, relating to disputes between crack dealers and users. Acute exposure may induce cardiac arrhythmias, including atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, and ventricular fibrillation. Acute exposure may also lead to angina, heart attack, and congestive heart failure. Cocaine overdose may cause seizures, abnormally high body temperature and a marked elevation of blood pressure, which can be life-threatening, abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Incidental exposure of the eye to sublimated cocaine while smoking crack cocaine can cause serious injury to the cornea and long-term loss of visual acuity. Chronic Although it has been commonly asserted, the available evidence does not show that chronic use of cocaine is associated with broad cognitive deficits. Research is inconclusive on age-related loss of striatal dopamine transporter (DAT) sites, suggesting cocaine has neuroprotective or neurodegenerative properties for dopamine neurons. Exposure to cocaine may lead to the breakdown of the blood–brain barrier. Physical side effects from chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. Cocaine can cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism in maintaining one's oral pH level, people who use cocaine over a long period of time who do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5). Cocaine use also promotes the formation of blood clots. This increase in blood clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor, and an increase in the number, activation, and aggregation of platelets. Chronic intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. Due to the absorption of the cocaine from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid. Illicitly-sold cocaine may be contaminated with levamisole. 
Levamisole may accentuate cocaine's effects. Levamisole-adulterated cocaine has been associated with autoimmune disease. Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes. Cocaine use also increases the risk of having a heart attack. Addiction Relatives of persons with cocaine addiction have an increased risk of cocaine addiction. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB leads to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons involved with the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than control, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward. DNA damage is increased in the brain of rodents by administration of cocaine. During DNA repair of such damages, persistent chromatin alterations may occur such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in cocaine addiction. In humans, cocaine abuse may cause structural changes in brain connectivity, though it is unclear to what extent these changes are permanent. Dependence and withdrawal Cocaine dependence develops after even brief periods of regular cocaine use and produces a withdrawal state with emotional-motivational deficits upon cessation of cocaine use. During pregnancy Crack baby is a term for a child born to a mother who used crack cocaine during her pregnancy. The threat that cocaine use during pregnancy poses to the fetus is now considered exaggerated. Studies show that prenatal cocaine exposure (independent of other effects such as, for example, alcohol, tobacco, or physical environment) has no appreciable effect on childhood growth and development. In 2007, he National Institute on Drug Abuse of the United States warned about health risks while cautioning against stereotyping: There are also warnings about the threat of breastfeeding: The March of Dimes said "it is likely that cocaine will reach the baby through breast milk," and advises the following regarding cocaine use during pregnancy: Mortality Persons with regular or problematic use of cocaine have a significantly higher rate of death, and are specifically at higher risk of traumatic deaths and deaths attributable to infectious disease. Pharmacology Pharmacokinetics The extent of absorption of cocaine into the systemic circulation after nasal insufflation is similar to that after oral ingestion. The rate of absorption after nasal insufflation is limited by cocaine-induced vasoconstriction of capillaries in the nasal mucosa. 
Onset of absorption after oral ingestion is delayed because cocaine is a weak base with a pKa of 8.6, and is thus in an ionized form that is poorly absorbed from the acidic stomach and easily absorbed from the alkaline duodenum. The rate and extent of absorption from inhalation of cocaine is similar or greater than with intravenous injection, as inhalation provides access directly to the pulmonary capillary bed. The delay in absorption after oral ingestion may account for the popular belief that cocaine bioavailability from the stomach is lower than after insufflation. Compared with ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40–60 minutes after the peak effects are attained. Cocaine crosses the blood–brain barrier via both a proton-coupled organic cation antiporter and (to a lesser extent) via passive diffusion across cell membranes. As of September 2022, the gene or genes encoding the human proton-organic cation antiporter had not been identified. Cocaine has a short elimination half life of 0.7–1.5 hours and is extensively metabolized by plasma esterases and also by liver cholinesterases, with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (), and m-hydroxybenzoylecgonine. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested cocaethylene is more euphoric, and has a higher cardiovascular toxicity than cocaine by itself. Depending on liver and kidney functions, cocaine metabolites are detectable in urine between three and eight days. Generally speaking benzoylecgonine is eliminated from someone's urine between three and five days. In urine from heavy cocaine users, benzoylecgonine can be detected within four hours after intake and in concentrations greater than 150 ng/mL for up to eight days later. Detection of cocaine metabolites in hair is possible in regular users until after the sections of hair grown during the period of cocaine use are cut or fall out. Pharmacodynamics The pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: serotonin:dopamine = 2:3, serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine neurotransmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. 
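The pH-dependent ionization described earlier in this pharmacokinetics passage can be made concrete with the Henderson–Hasselbalch relationship. The sketch below is a minimal illustration only: the pKa of 8.6 and the 0.7–1.5 hour half-life range are taken from the text above, while the stomach and duodenal pH values, time points, and the Python wrapper are assumptions added for the example.

```python
# Minimal sketch (assumed values noted below): fraction of a weak base that is
# ionized at a given pH via the Henderson-Hasselbalch relationship, plus a
# simple first-order decay illustration for the elimination half-life.

def ionized_fraction_weak_base(pH: float, pKa: float) -> float:
    """For a weak base, ionized/total = 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

PKA = 8.6            # from the text above
STOMACH_PH = 2.0     # assumed typical gastric pH
DUODENUM_PH = 8.0    # assumed, roughly alkaline small-intestinal pH

for label, ph in [("stomach", STOMACH_PH), ("duodenum", DUODENUM_PH)]:
    f = ionized_fraction_weak_base(ph, PKA)
    print(f"{label}: pH {ph} -> {f:.4%} ionized, {1 - f:.4%} un-ionized")
# The un-ionized (absorbable) fraction is negligible at gastric pH but
# substantial at duodenal pH, consistent with the delayed oral absorption
# described above.

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """First-order elimination: fraction of parent drug remaining after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

for t in (1, 2, 4, 6):
    lo = fraction_remaining(t, 0.7)   # short end of the stated half-life range
    hi = fraction_remaining(t, 1.5)   # long end of the stated half-life range
    print(f"after {t} h: {lo:.1%} to {hi:.1%} of the parent drug remains")
```

Under these assumptions roughly 20% of the drug is un-ionized at duodenal pH versus essentially none in the stomach, which is the behaviour the passage attributes to the pKa of 8.6; the decay loop likewise shows why the parent drug itself is short-lived while its metabolites dominate detection.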
The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes the compulsive use of cocaine. Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, which is a ligand-gated ion channel. An overabundance of 5-HT3 receptors is reported in cocaine-conditioned rats, though 5-HT3's role is unclear. The 5-HT2 receptors (particularly the subtypes 5-HT2A, 5-HT2B and 5-HT2C) are involved in the locomotor-activating effects of cocaine. Cocaine has been demonstrated to bind so as to directly stabilize the DAT transporter in the open outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT: the tightly locked orientation of the cocaine molecule prevents this hydrogen bond from forming. Research suggests that habituation to the substance depends less on the drug's affinity for the transporter than on where and how on the transporter the molecule binds. Conflicting findings have challenged the widely accepted view that cocaine functions solely as a reuptake inhibitor. To induce euphoria, an intravenous dose of 0.3–0.6 mg/kg of cocaine is required, which blocks 66–70% of dopamine transporters (DAT) in the brain. Re-administering cocaine beyond this threshold does not significantly increase DAT occupancy but still results in an increase in euphoria, which cannot be explained by reuptake inhibition alone. This discrepancy is not shared with other dopamine reuptake inhibitors like bupropion, sibutramine, mazindol or tesofensine, which have similar or higher potencies than cocaine as dopamine reuptake inhibitors. These findings have prompted the hypothesis that cocaine may also function as a so-called "DAT inverse agonist" or "negative allosteric modifier of DAT", resulting in dopamine transporter reversal and subsequent dopamine release into the synaptic cleft from the axon terminal in a manner similar to, but distinct from, amphetamines. Sigma receptors are affected by cocaine, which functions as a sigma ligand agonist. It has also been demonstrated to act on NMDA receptors and the D1 dopamine receptor. Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also acts at binding sites on the sodium-dependent dopamine and serotonin transporters through mechanisms separate from its reuptake inhibition; this, together with its local anesthetic activity, places it in a functional class different from its derived phenyltropane analogues, which lack that property. In addition to this, cocaine has some target binding to the site of the κ-opioid receptor. Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. Cocaine is known to suppress hunger and appetite by increasing co-localization of sigma σ1R receptors and ghrelin GHS-R1a receptors at the neuronal cell surface, thereby increasing ghrelin-mediated signaling of satiety and possibly via other effects on appetitive hormones. 
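As a rough illustration of why re-dosing beyond the euphoric threshold adds little further transporter blockade (discussed in the pharmacodynamics passage above), the sketch below fits a simple one-site hyperbolic occupancy model to the figures quoted there (0.3–0.6 mg/kg producing roughly 66–70% DAT occupancy). The one-site model, the calibration point and the example doses are assumptions chosen purely for illustration, not a published pharmacological model.

```python
# Minimal sketch, assuming simple one-site (Langmuir-type) binding where
# occupancy = dose / (dose + K). K is calibrated from a single point taken
# from the text above (about 68% occupancy at roughly 0.4 mg/kg IV); all
# numbers are illustrative, not fitted to real data.

def occupancy(dose_mg_per_kg: float, k: float) -> float:
    return dose_mg_per_kg / (dose_mg_per_kg + k)

CAL_DOSE = 0.4       # mg/kg, mid-range of the 0.3-0.6 mg/kg quoted above
CAL_OCC = 0.68       # mid-range of the 66-70% occupancy quoted above
K = CAL_DOSE * (1 - CAL_OCC) / CAL_OCC   # solve the occupancy equation for K

for dose in (0.2, 0.4, 0.8, 1.6):
    print(f"{dose:.1f} mg/kg -> {occupancy(dose, K):.0%} DAT occupancy")
# Doubling and redoubling the dose past ~0.4 mg/kg raises modelled occupancy
# only from ~68% toward ~90%, i.e. the curve saturates, which is why extra
# euphoria from re-dosing cannot be explained by reuptake blockade alone.
```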
Chronic users may lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine effects, further, are shown to be potentiated for the user when used in conjunction with new surroundings and stimuli, and otherwise novel environs. Chemistry Appearance Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with cheaper substances to increase bulk, including talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, levamisole, and amphetamine. Fentanyl has been increasingly found in cocaine samples, although it is unclear if this is primarily due to intentional adulteration or cross contamination. Crack cocaine looks like irregular shaped white rocks. Forms Salts Cocaine — a tropane alkaloid — is a weakly alkaline compound, and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt of cocaine is by far the most commonly encountered, although the sulfate (SO42−) and the nitrate (NO3−) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents — the hydrochloride salt is polar in character and is quite soluble in water. Base As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water whereas hydrochloride salt is water-soluble. Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which will precipitate non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction. Crack cocaine Crack is usually smoked in a glass pipe, and once inhaled, it passes from the lungs directly to the central nervous system, producing an almost immediate "high" that can be very powerful – this initial crescendo of stimulation is known as a "rush". This is followed by an equally intense low, leaving the user craving more of the drug. Addiction to crack usually occurs within four to six weeks - much more rapidly than regular cocaine. Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C), and considerable decomposition/burning occurs at these high temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, and foul-tasting smoke. Cocaine base/crack can be smoked because it vaporizes with little or no decomposition at , which is below the boiling point of water. Crack is a lower purity form of free-base cocaine that is usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO3) and water, producing a very hard/brittle, off-white-to-brown colored, amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The origin of the name "crack" comes from the "crackling" sound (and hence the onomatopoeic moniker "crack") that is produced when the cocaine and its impurities (i.e. 
water, sodium bicarbonate) are heated past the point of vaporization. Coca leaf infusions Coca herbal infusion (also referred to as coca tea) is used in coca-leaf-producing countries much as any herbal medicinal infusion would be elsewhere in the world. The free and legal commercialization of dried coca leaves in the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use. Visitors to the city of Cuzco in Peru, and La Paz in Bolivia, are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves) purportedly to help the newly arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are mild stimulation and mood lift. It has also been promoted as an adjuvant for the treatment of cocaine dependence. One study on coca leaf infusion used with counseling in the treatment of 23 addicted coca-paste smokers in Lima, Peru, found that the relapse rate fell from 4.35 times per month on average before coca tea treatment to one during treatment. The duration of abstinence increased from an average of 32 days before treatment to 217.2 days during treatment. This suggests that coca leaf infusion plus counseling may be effective at preventing relapse during cocaine addiction treatment. There is little information on the pharmacological and toxicological effects of consuming coca tea. A chemical analysis by solid-phase extraction and gas chromatography–mass spectrometry (SPE-GC/MS) of Peruvian and Bolivian tea bags indicated the presence of significant amounts of cocaine, the metabolite benzoylecgonine, ecgonine methyl ester and trans-cinnamoylcocaine in coca tea bags and coca tea. Urine specimens were also analyzed from an individual who consumed one cup of coca tea and it was determined that enough cocaine and cocaine-related metabolites were present to produce a positive drug test. Synthesis Biosynthesis The first synthesis and elucidation of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis. The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the N-methyl-Δ1-pyrrolinium cation. The first addition is a Mannich-like reaction with the enolate anion from acetyl-CoA acting as a nucleophile toward the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with the retention of the thioester from the Claisen condensation. In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate there is no preference for either stereoisomer. In cocaine biosynthesis, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine. The stereoselectivity of this reaction was further investigated through study of prochiral methylene hydrogen discrimination. This is due to the extra chiral center at C-2. This process occurs through an oxidation, which regenerates the pyrrolinium cation and formation of an enolate anion, and an intramolecular Mannich reaction. 
The tropane ring system undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH for the formation of methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid. Benzoyl-CoA then combines the two units to form cocaine. N-methyl-pyrrolinium cation The biosynthesis begins with L-Glutamine, which is derived to L-ornithine in plants. The major contribution of L-ornithine and L-arginine as a precursor to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal phosphate-dependent decarboxylation to form putrescine. In some animals, the urea cycle derives putrescine from ornithine. L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine. Hydrolysis of the imine derives N-carbamoylputrescine followed with hydrolysis of the urea to form putrescine. The separate pathways of converting ornithine to putrescine in plants and animals have converged. A SAM-dependent N-methylation of putrescine gives the N-methylputrescine product, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde. Schiff base formation confirms the biosynthesis of the N-methyl-Δ1-pyrrolinium cation. Robert Robinson's acetonedicarboxylate The biosynthesis of the tropane alkaloid is still not understood. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction. Condensation of N-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate. Decarboxylation leads to tropane alkaloid formation. Reduction of tropinone The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I and tropinone reductase II. TRI produces tropine and TRII produces pseudotropine. Due to differing kinetic and pH/activity characteristics of the enzymes and by the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction is from TRI to form tropine. 
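To make the "25-fold higher activity" figure above concrete, the short sketch below assumes, purely for illustration, that tropinone flux partitions in direct proportion to the relative activities of the two reductases; this ignores the differing kinetic and pH characteristics the passage mentions, so the resulting percentage is only a back-of-the-envelope estimate, not a measured value.

```python
# Back-of-the-envelope sketch (assumption: flux splits in proportion to
# relative enzyme activity, ignoring differing kinetics and pH optima).
TRI_RELATIVE_ACTIVITY = 25.0   # ~25-fold higher than TRII, per the text above
TRII_RELATIVE_ACTIVITY = 1.0

tropine_fraction = TRI_RELATIVE_ACTIVITY / (TRI_RELATIVE_ACTIVITY + TRII_RELATIVE_ACTIVITY)
print(f"tropine (TRI product): {tropine_fraction:.0%}")             # roughly 96%
print(f"pseudotropine (TRII product): {1 - tropine_fraction:.0%}")  # roughly 4%
```

Even under this simplified assumption the split comes out at roughly 96% tropine, consistent with the statement that the majority of the reduction proceeds through TRI.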
Illegal clandestine chemistry In 1991, the United States Department of Justice released a report detailing the typical process in which leaves from coca plants were ultimately converted into cocaine hydrochloride by Latin American drug cartels: the exact species of coca to be planted was determined by the location of its cultivation, with Erythroxylum coca being grown in tropical high altitude climates of the eastern Andes in Peru and Bolivia, while Erythroxylum novogranatense was favoured in drier lowland areas of Colombia the average cocaine alkaloid content of a sample of coca leaf varied between 0.1 and 0.8 percent, with coca from higher altitudes containing the largest percentages of cocaine alkaloids the typical farmer will plant coca on a sloping hill so rainfall will not drown the plants as they reach full maturity over 12 to 24 months after being planted the main harvest of coca leaves takes place after the traditional wet season in March, with additional harvesting also taking place in July and November the leaves are then taken to a flat area and spread out on tarpaulins to dry in the hot sun for approximately 6 hours, and afterwards placed in sacks to be transported to market or to a cocaine processing facility depending on location in the early 1990s, Peru and Bolivia were the main locations for converting coca leaf to coca paste and cocaine base, while Colombia was the primary location for the final conversion for these products into cocaine hydrochloride the conversion of coca leaf into coca paste was typically done very close to the coca fields to minimize the need to transport the coca leaves, with a plastic lined pit in the ground used as a "pozo" the leaves are added to the pozo along with fresh water from a nearby river, along with kerosene and sodium carbonate, then a team of several people will repeatedly stomp on the mixture in their bare feet for several hours to help turn the leaves into paste the cocaine alkaloids and kerosene eventually separate from the water and coca leaves, which are then drained off / scooped out of the mixture the cocaine alkaloids are then extracted from the kerosene and added into a dilute acidic solution, to which more sodium carbonate is added to cause a precipitate to form the acid and water are afterwards drained off and the precipitate is filtered and dried to produce an off-white putty-like substance, which is coca paste ready for transportation to cocaine base processing facility at the processing facility, coca paste is dissolved in a mixture of sulfuric acid and water, to which potassium permanganate is then added and the solution is left to stand for 6 hours to allow to unwanted alkaloids to break down the solution is then filtered and the precipitate is discarded, after which ammonia water is added and another precipitate is formed when the solution has finished reacting the liquid is drained, then the remaining precipitate is dried under heating lamps, and resulting powder is cocaine base ready for transfer to a cocaine hydrochloride laboratory at the laboratory, acetone is added to the cocaine base and after it has dissolved the solution is filtered to remove undesired material hydrochloric acid diluted in ether is added to the solution, which causes the cocaine to precipitate out of the solution as cocaine hydrochloride crystals the cocaine hydrochloride crystals are finally dried under lamps or in microwave ovens, then pressed into blocks and wrapped in plastic ready for export GMO synthesis Research In 2022, a GMO produced 
N. benthamiana were discovered that were able to produce 25% of the amount of cocaine found in a coca plant. Detection in body fluids Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for use, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting the results of a test, it is important to consider the cocaine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and the chronic user often has high baseline values of the metabolites in his system. Cautious interpretation of testing results may allow a distinction between passive or active usage, and between smoking versus other routes of administration. Field analysis Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test. Approximate cocaine purity can be determined using 1 mL 2% cupric sulfate pentahydrate in dilute HCl, 1 mL 2% potassium thiocyanate and 2 mL of chloroform. The shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross sensitive to heroin, methamphetamine, benzocaine, procaine and a number of other drugs but other chemicals could cause false positives. Usage According to a 2016 United Nations report, England and Wales are the countries with the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%) and Ireland (1.5%). Europe Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland. Approximately 17 million Europeans (5.1%) have used cocaine at least once and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018). Usage is particularly prevalent among this demographic: 4% to 7% of males have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on country. In 2014 London had the highest amount of cocaine in its sewage out of 50 European cities. United States Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis) and the U.S. is the world's largest consumer of cocaine. Its users span over different ages, races, and professions. In the 1970s and 1980s, the drug became particularly popular in the disco culture as cocaine usage was very common and popular in many discos such as Studio 54. 
Dependence treatment History Discovery Indigenous peoples of South America have chewed the leaves of Erythroxylon coca—a plant that contains vital nutrients as well as numerous alkaloids, including cocaine—for over a thousand years. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the time period depicts humans with bulged cheeks, indicating the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation. When the Spanish arrived in South America, the conquistadors at first banned coca as an "evil agent of devil". But after discovering that without the coca the locals were barely able to work, the conquistadors legalized and taxed the leaf, taking 10% off the value of each crop. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment": In 1609, Padre Blas Valera wrote: Isolation and naming Although the stimulant and hunger-suppressant properties of coca leaves had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient, and conditions of sea-shipping from South America at the time would often degrade the quality of the cocaine in the plant samples available to European chemists by the time they arrived. However, by 1855, the German chemist Friedrich Gaedcke successfully isolated the cocaine alkaloid for the first time. Gaedcke named the alkaloid "erythroxyline", and published a description in the journal Archiv der Pharmazie. In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the Novara (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process. Niemann described every step he took to isolate cocaine in his dissertation titled Über eine neue organische Base in den Cocablättern (On a New Organic Base in the Coca Leaves), which was published in 1860 and earned him his Ph.D. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine" from "coca" (from Quechua "kúka") + suffix "ine". The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in academic chemical literature. The synthesis started from tropinone, a related natural product and took five steps. Because of the former use of cocaine as a local anesthetic, a suffix "-caine" was later extracted and used to form names of synthetic local anesthetics. Medicalization With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant. 
In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution, with the other containing merely saltwater. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently from the leg that had been immersed in saltwater. Karl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. 1898 saw Heinrich Quincke use cocaine for spinal anesthesia. Popularization In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself and upon his return to Milan, he wrote a paper in which he described the effects. In this paper, he declared coca and cocaine (at the time they were assumed to be the same) as being useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth." A chemist named Angelo Mariani who read Mantegazza's paper became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become coca wine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg cocaine per ounce of wine, but Vin Mariani which was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed. In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work Über Coca, in which he wrote that cocaine causes: By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain." By the late Victorian era, cocaine use had appeared as a vice in literature. For example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case. In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. 
Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers. In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole. In the 1931 song "Minnie the Moocher", Cab Calloway heavily references cocaine use. He uses the phrase "kicking the gong around", slang for cocaine use; describes titular character Minnie as "tall and skinny;" and describes Smokey Joe as "cokey". In the 1932 comedy musical film The Big Broadcast, Cab Calloway performs the song with his orchestra and mimes snorting cocaine in between verses. During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient of a future generation of 'pep pills' for the German military, code named D-IX. In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous and powerful, and is said to make users "feel rich and beautiful". In addition the pace of modern society − such as in finance − gives many the incentive to make use of the drug. Modern usage In many countries, cocaine is a popular recreational drug. Cocaine use is prevalent across all socioeconomic strata, including age, demographics, economic, social, political, religious, and livelihood. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. The use of the powder form has stayed relatively constant, experiencing a new height of use across the 1980s and 1990s in the U.S. However, from 2006 to 2010 cocaine use in the US declined by roughly half before again rising once again from 2017 onwards. In the UK, cocaine use increased significantly between the 1990s and late 2000s, with a similar high consumption in some other European countries, including Spain. The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding revenues by corporations such as Starbucks. Cocaine's status as a club drug shows its immense popularity among the "party crowd". In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. An American representative in the World Health Assembly banned the publication of the study, because it seemed to make a case for the positive uses of cocaine. An excerpt of the report strongly conflicted with accepted paradigms, for example, "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the US representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recuperated and published in 2010, including profiles of cocaine use in 20 countries, but are unavailable . In October 2010 it was reported that the use of cocaine in Australia has doubled since monitoring began in 2003. A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users, is the risk of ill effects or damage caused by the compounds used in adulteration. 
Cutting or "stepping on" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine) producing temporary anesthesia, as many users believe a strong numbing effect is the result of strong and/or pure cocaine, ephedrine or similar stimulants that are to produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose, so introducing active adulterants gives the illusion of purity and to 'stretch' or make it so a dealer can sell more product than without the adulterants, however the purity of the cocaine is subsequently lowered. The adulterant of sugars allows the dealer to sell the product for a higher price because of the illusion of purity and allows the sale of more of the product at that higher price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that the purity levels for street purchased cocaine was often under 5% and on average under 50% pure. Society and culture Legal status The production, distribution, and sale of cocaine products is restricted (and illegal in most contexts) in most countries as regulated by the Single Convention on Narcotic Drugs, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act. Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless, prohibit the production, sale, and consumption of cocaine. The provisions as to how much a coca farmer can yield annually is protected by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only. Australia Cocaine is a Schedule 8 controlled drug in Australia under the Poisons Standard. It is the second most popular illicit recreational drug in Australia behind cannabis. In Western Australia under the Misuse of Drugs Act 1981 4.0g of cocaine is the amount of prohibited drugs determining a court of trial, 2.0g is the amount of cocaine required for the presumption of intention to sell or supply and 28.0g is the amount of cocaine required for purposes of drug trafficking. United States The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. The Harrison Act did not recognize addiction as a treatable condition and therefore the therapeutic use of cocaine, heroin, or morphine to such individuals was outlawed leading a 1915 editorial in the journal American Medicine to remark that the addict "is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law." 
The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Despite that cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufactures. Before the early 1900s, the primary problem caused by cocaine use was portrayed by newspapers to be addiction, not violence or crime, and the cocaine user was represented as an upper or middle class White person. In 1914, The New York Times published an article titled "Negro Cocaine 'Fiends' Are a New Southern Menace", portraying Black cocaine users as dangerous and able to withstand wounds that would normally be fatal. The Anti-Drug Abuse Act of 1986 mandated the same prison sentences for distributing 500 grams of powdered cocaine and just 5 grams of crack cocaine. In the National Survey on Drug Use and Health, white respondents reported a higher rate of powdered cocaine use, and Black respondents reported a higher rate of crack cocaine use. Interdiction In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t. Production Colombia is as of 2019 the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia and from locally grown coca. There was a 28% increase in the amount of potentially harvestable coca plants which were grown in Colombia in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, only makes up a small fragment of total coca production, most of which is used for the illegal drug trade. An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times). Several hundred pounds of this mixture were soaked in of gasoline for a day, then the gasoline was removed and the leaves were pressed for the remaining liquid, after which they could be discarded. Then battery acid (weak sulfuric acid) was used, one bucket per of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of "murky-looking smelly liquid". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed pasta and sold by the farmer. The yearly harvest of leaves from a hectare produced of pasta, approximately 40–60% cocaine. Repeated recrystallization from solvents, producing pasta lavada and eventually crystalline cocaine were performed at specialized laboratories after the sale. 
Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca-growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations. The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damages to non-drug farms, the spread of new strains of the coca plant due to persistent worldwide demand. The latest estimate provided by the U.S. authorities on the annual production of cocaine in Colombia refers to 290 metric tons. As of the end of 2011, the seizure operations of Colombian cocaine carried out in different countries have totaled 351.8 metric tons of cocaine, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates. Synthesis Synthesizing cocaine could eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine, but is rarely done. Natural cocaine remains the lowest cost and highest quality supply of cocaine. Formation of inactive stereoisomers (cocaine has four chiral centres – 1R 2R, 3S, and 5S, two of them dependent, hence eight possible stereoisomers) plus synthetic by-products limits the yield and purity. Trafficking and distribution Organized criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, Peru, and smuggled into the United States and Europe, the United States being the world's largest consumer of cocaine, where it is sold at huge markups; usually in the US at $80–120 for 1 gram, and $250–300 for 3.5 grams ( of an ounce, or an "eight ball"). Caribbean and Mexican routes The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. , the Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States. Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops of in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of , and the commercial shipment of tonnes of cocaine through the port of Miami. 
Chilean route Another cocaine trafficking route goes through Chile; it is primarily used for cocaine produced in Bolivia, since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain, where drug-dealing networks exist among South American immigrants. Techniques Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as "mules" (or "mulas"), who cross a border either legally, for example, through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body (by swallowing or placement inside an orifice), a practice typically known as "bodypacking". If the mule gets through without being caught, the gangs will receive most of the profits. If the mule is caught, gangs may sever all links and the mule will usually stand trial for trafficking alone. In many cases, mules are forced into the role as a result of coercion, violence, threats or extreme poverty. Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, like those used by the local populations. It was reported on 20 March 2008 that sophisticated drug subs are the latest tool drug runners are using to bring cocaine north from Colombia. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them. Sales to consumers Cocaine is readily available in all major countries' metropolitan areas. According to the Summer 1998 Pulse Check, published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remains high, with a large concentration among urban youth. In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": for example, $10 might purchase a "dime bag", a very small amount (0.1–0.15 g) of cocaine. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region. In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden, and Turkey reported much higher values. 
Consumption World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t, 50% of the total, Europe about 150 t, 25% of the total, and the rest of the world the remaining 150 t or 25%. It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006. Conversely, cocaine use appears to be increasing in Europe with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland. The 2010 UN World Drug Report concluded that "it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable". See also Black cocaine Coca alkaloids Coca eradication Cocaine and amphetamine regulated transcript Cocaine Anonymous Cocaine paste Crack epidemic Illegal drug trade in Latin America Coca production in Colombia Legal status of cocaine List of cocaine analogues List of countries by prevalence of cocaine use Methylphenidate Modafinil Prenatal cocaine exposure Ypadu References General and cited references Further reading External links 1855 introductions 1855 in science Alkaloids found in Erythroxylum Anorectics Benzoate esters Carboxylate esters Cardiac stimulants CYP2D6 inhibitors Euphoriants German inventions Glycine receptor agonists Local anesthetics Methyl esters Otologicals Powders Secondary metabolites Serotonin–norepinephrine–dopamine reuptake inhibitors Sigma agonists Stimulants Sympathomimetic amines Teratogens Tropane alkaloids found in Erythroxylum coca Vasoconstrictors Wikipedia medicine articles ready to translate Obsolete medications
Cocaine
Physics,Chemistry
14,956
28,022,785
https://en.wikipedia.org/wiki/Serial%20block-face%20scanning%20electron%20microscopy
Serial block-face scanning electron microscopy is a method to generate high-resolution three-dimensional images from small samples. The technique was developed for brain tissue, but it is widely applicable to any biological sample. A serial block-face scanning electron microscope consists of an ultramicrotome mounted inside the vacuum chamber of a scanning electron microscope. Samples are prepared by methods similar to those used in transmission electron microscopy (TEM), typically by fixing the sample with aldehyde, staining it with heavy metals such as osmium and uranium, and then embedding it in an epoxy resin. The surface of the block of resin-embedded sample is imaged by detection of back-scattered electrons. Following imaging, the ultramicrotome is used to cut a thin section (typically around 30 nm) from the face of the block. After the section is cut, the sample block is raised back to the focal plane and imaged again. This sequence of sample imaging, section cutting and block raising can acquire many thousands of images in perfect alignment in an automated fashion. Practical serial block-face scanning electron microscopy was invented in 2004 by Winfried Denk at the Max-Planck-Institute in Heidelberg and is commercially available from Gatan Inc., Thermo Fisher Scientific (VolumeScope) and ConnectomX. Applications One of the first applications of serial block-face scanning electron microscopy was to analyze the connectivity of axons in the brain. The resolution is sufficient to trace even the thinnest axons and to identify synapses. Serial block-face imaging has since contributed to many fields, such as developmental biology, plant biology, cancer research and the study of neurodegenerative diseases. The technique can generate extremely large data sets, and the development of algorithms for automatic segmentation of the very large data sets generated is still a challenge; however, much work is currently being done in this area. The EyeWire project harnesses human computation in a game to trace neurons through images of a volume of retina obtained using serial block-face scanning electron microscopy. Many different samples can be prepared for serial block-face scanning electron microscopy and the ultramicrotome is able to cut many materials, so the technique has wider applicability. It is starting to find applications in many other areas ranging from cell and developmental biology to materials science. Advantages and disadvantages A disadvantage of the SBEM method is that the thickness of the slice that can be removed with the ultramicrotome has a lower limit (~25 nm), so the resolution in the depth direction is limited. An advantage of the SBEM technique is that the specimen remains stationary, which improves the alignment within the stacks of images. Another advantage of the SBEM technique is the ability to acquire large data sets with a high level of detail. Because cutting with the ultramicrotome is extremely fast (compared to the milling process in FIB-SEM), a wide area of the material (in the x and y directions) can be exposed with each sectioning step. In addition, fast cutting allows many images to be acquired in the z-direction in a short period of time. See also Focused ion beam References External links Original Publication in PLOS Biology Gatan's 3View Cell Centered Data Base, SBEM datasets Electron microscopy
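To give a feel for why serial block-face imaging "can generate extremely large data sets", as the article above notes, the sketch below estimates the size of a single volume acquisition. Only the ~30 nm section thickness comes from the article; the field of view, pixel size, imaged depth and bytes-per-pixel are assumed, illustrative values, not specifications of any particular instrument.

```python
# Rough dataset-size estimate for a serial block-face SEM acquisition.
# Assumed, illustrative parameters (only the section thickness is taken
# from the text above).
FIELD_OF_VIEW_UM = 40.0      # lateral field of view, micrometres (assumed)
PIXEL_SIZE_NM = 10.0         # lateral pixel size, nanometres (assumed)
SECTION_THICKNESS_NM = 30.0  # typical cut thickness, from the article
IMAGED_DEPTH_UM = 40.0       # total depth removed by sectioning (assumed)
BYTES_PER_PIXEL = 1          # 8-bit grey levels (assumed)

pixels_per_side = int(FIELD_OF_VIEW_UM * 1000 / PIXEL_SIZE_NM)
sections = int(IMAGED_DEPTH_UM * 1000 / SECTION_THICKNESS_NM)
bytes_per_image = pixels_per_side ** 2 * BYTES_PER_PIXEL
total_bytes = bytes_per_image * sections

print(f"{pixels_per_side} x {pixels_per_side} pixels per image")
print(f"{sections} sections for a {IMAGED_DEPTH_UM} um deep volume")
print(f"~{bytes_per_image / 1e6:.0f} MB per image, ~{total_bytes / 1e9:.0f} GB per volume")
# Even this modest 40 um cube comes out at roughly 21 GB, so larger volumes
# quickly reach the terabyte scale, which is why automatic segmentation of
# such data remains a challenge.
```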
Serial block-face scanning electron microscopy
Chemistry
660
75,957,413
https://en.wikipedia.org/wiki/Villa%20rustica%20%28M%C3%B6ckenlohe%29
The Villa rustica in Möckenlohe is the remains of a villa rustica from the 2nd or 3rd century near Möckenlohe, a part of the municipality of Adelschlag in the Landkreis Eichstätt in Bavaria. Ancient remains were known here from the beginning of the 20th century. Aerial photos from 1983 show the plan of the villa, but also outbuildings. The main house was excavated from 1987 to 1989 and rebuilt in 1992/93. The construction of the villa consisted of limestone masonry, which had an avant-corps on the front in the south, where the main entrance was. There was once also a colonnade here. The entire front was once around 30.5 meters long. A large courtyard opened behind the portico. Rooms were located to the west of the courtyard and to the south, where the portico also stood. One room had a basement, and at least two rooms had hypocaust underfloor heating. The villa was probably built under Emperor Hadrian and was abandoned after a fire in the middle of the third century. The modern replica does not correspond in every detail to the archaeological findings and is connected to a petting zoo. See also Villa rustica References Wolfgang Czysz, Karlheinz Dietz, Hans-Jörg Kellner, Thomas Fischer: Die Römer in Bayern. Stuttgart 1995, ,pp. 479–480. Andreas A. Schaflitzl: "Der römische Gutshof von Möckenlohe, Lkr. Eichstätt". In: Bericht der Bayerischen Bodendenkmalpflege 53 (2012), pp. 85–229. Galya Rosenstein: "Römische Gläser aus der Villa rustica von Möckenloh". In: Bericht der Bayerischen Bodendenkmalpflege, 53 (2012). External links Verein Römervilla Möckenlohe e.V. Architectural history Buildings and structures in Eichstätt (district) Roman villas in Germany
Villa rustica (Möckenlohe)
Engineering
424
3,025,636
https://en.wikipedia.org/wiki/Organic%20search%20results
In web search engines, organic search results are the query results which are calculated strictly algorithmically, and not affected by advertiser payments. They are distinguished from various kinds of sponsored results, whether they are explicit pay per click advertisements, shopping results, or other results where the search engine is paid either for showing the result, or for clicks on the result. Background The Google, Yahoo!, Bing, and Sogou search engines insert advertising on their search results pages. In U.S. law, advertising must be distinguished from organic results. This is done with various differences in background, text, link colors, and/or placement on the page. However, a 2004 survey found that a majority of search engine users could not distinguish the two. Because so few ordinary users (38% according to Pew Research Center) realized that many of the highest placed "results" on search engine results pages (SERPs) were ads, the search engine optimization industry began to distinguish between ads and natural results. The perspective among general users was that all results were, in fact, "results." So the qualifier "organic" was invented to distinguish non-ad search results from ads. It has been used since at least 2004. Because the distinction is important (and because the word "organic" has many metaphorical uses) the term is now in widespread use within the search engine optimization and web marketing industry. As of July 2009, the term "organic search" is now commonly used outside the specialist web marketing industry, even used frequently by Google (throughout the Google Analytics site, for instance). Google claims their users click (organic) search results more often than ads, essentially rebutting the research cited above. A 2012 Google study found that 81% of ad impressions and 66% of ad clicks happen when there is no associated organic search result on the first page. Research has shown that searchers may have a bias against ads, unless the ads are relevant to the searcher's need or intent. The same report and others going back to 1997 by Pew show that users avoid clicking "results" they know to be ads. According to a June 2013 study by Chitika, 9 out of 10 searchers don't go beyond Google's first page of organic search results, a claim often cited by the search engine optimization (SEO) industry to justify optimizing websites for organic search. Organic SEO describes the use of certain strategies or tools to elevate a website's content in the "free" search results. Users can prevent ads in search results and list only organic results by using browser add-ons and plugins. Other browsers may have different tools developed for blocking ads. Organic search engine optimization is the process of improving web sites' rank in organic search results. See also Internet marketing References Search engine optimization Internet terminology Online advertising
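The distinction drawn above between organic and sponsored listings can be made concrete with a small, purely hypothetical sketch that partitions a list of result entries according to a paid-placement flag. The is_ad field and the sample entries are invented for illustration; real search engines and analytics exports mark sponsored placements in their own, product-specific ways.

```python
# Illustrative sketch: separating organic results from sponsored ones.
# The "is_ad" flag and the example entries are hypothetical; real SERPs and
# analytics tools label paid placements differently.

from typing import Dict, List, Tuple

def split_results(entries: List[Dict]) -> Tuple[List[Dict], List[Dict]]:
    """Return (organic, sponsored) lists based on a paid-placement flag."""
    organic = [e for e in entries if not e.get("is_ad", False)]
    sponsored = [e for e in entries if e.get("is_ad", False)]
    return organic, sponsored

if __name__ == "__main__":
    serp = [
        {"title": "Example ad", "url": "https://example.com/ad", "is_ad": True},
        {"title": "Example organic result", "url": "https://example.org/page", "is_ad": False},
    ]
    organic, sponsored = split_results(serp)
    print(f"{len(organic)} organic, {len(sponsored)} sponsored")
```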
Organic search results
Technology
577
25,191,344
https://en.wikipedia.org/wiki/FIPS%20140-3
The Federal Information Processing Standard Publication 140-3 (FIPS PUB 140-3) is a U.S. government computer security standard used to approve cryptographic modules. The title is Security Requirements for Cryptographic Modules. Initial publication was on March 22, 2019, and it supersedes FIPS 140-2. Purpose The National Institute of Standards and Technology (NIST) issued the FIPS 140 Publication Series to coordinate the requirements and standards for cryptography modules that include both hardware and software components. Federal agencies and departments can validate that the module in use is covered by an existing FIPS 140 certificate that specifies the exact module name, hardware, software, firmware, and/or applet version numbers. The cryptographic modules are produced by the private sector or open source communities for use by the U.S. government and other regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminate sensitive but unclassified (SBU) information. History Efforts to update the FIPS 140 standard date back to the early 2000s. The 2013 draft of FIPS 140-3 was scheduled for signature by the Secretary of Commerce in August 2013; however, that never happened, and the draft was subsequently abandoned. In 2014, NIST released a substantially different draft of FIPS 140-3; this version effectively directed the use of an International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) standard, 19790:2012, as the replacement for FIPS 140-2. The 2014 draft of FIPS 140-3 was also abandoned, although the use of ISO/IEC 19790 did ultimately come to fruition. On August 12, 2015, NIST formally released a statement in the Federal Register asking for comments on the potential use of portions of ISO/IEC 19790:2014 in the update of FIPS 140-2. The reference to a 2014 version of ISO/IEC 19790 was an inadvertent error in the Federal Register posting, as 2012 is the most recent version. ISO/IEC 19790 has been reviewed and reconfirmed as recently as 2018, but without changes, hence it retains the 2012 version nomenclature. The update process for FIPS 140 was hamstrung by deep technical issues in topics such as hardware security and apparent disagreement in the US government over the path forward. The now abandoned 2013 draft of FIPS 140-3 had required mitigation of non-invasive attacks when validating at higher security levels, introduced the concept of a public security parameter, allowed the deferral of certain self-tests until specific conditions were met, and strengthened the requirements on user authentication and integrity testing. Cryptographic Module Validation Program The FIPS 140 standard established the Cryptographic Module Validation Program (CMVP) as a joint effort by NIST and the Communications Security Establishment (CSEC) for the Canadian government, now handled by the CCCS, the Canadian Centre for Cyber Security, a new centralized initiative within the CSEC agency. 
Security programs overseen by NIST and CCCS focus on working with government and industry to establish more secure systems and networks by developing, managing and promoting security assessment tools, techniques, services, and supporting programs for testing, evaluation and validation. These programs address such areas as: development and maintenance of security metrics; security evaluation criteria and evaluation methodologies; tests and test methods; security-specific criteria for laboratory accreditation; guidance on the use of evaluated and tested products; research to address assurance methods and system-wide security and assessment methodologies; security protocol validation activities; and appropriate coordination with assessment-related activities of voluntary industry standards bodies and other assessment regimes. Approval and issuance On March 22, 2019, the United States Secretary of Commerce Wilbur Ross approved FIPS 140-3, Security Requirements for Cryptographic Modules, to succeed FIPS 140-2. FIPS 140-3 became effective on September 22, 2019. FIPS 140-3 testing began on September 22, 2020, and a small number of validation certificates have been issued. FIPS 140-2 testing was available until September 21, 2021, creating an overlapping transition period of one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026, regardless of their actual final validation date. See also Common Criteria Tamperproofing FIPS 140 FIPS 140-2 Hardware security module References External links Cryptography standards Computer security standards Standards of the United States
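As a loose illustration of how an application team might keep itself within the algorithms acceptable under a FIPS 140-3 validation, the sketch below checks requested algorithm names against a local allowlist. The allowlist contents are abbreviated examples only, not an authoritative set: the approved algorithms are defined by NIST's SP 800-140 series and by the security policy of the specific validated module in use.

```python
# Illustrative sketch: gating algorithm choices against a local allowlist.
# The allowlist below is an abbreviated example, NOT the authoritative set;
# approved algorithms are defined by NIST (SP 800-140 series) and by the
# security policy of the specific validated module in use.

APPROVED_ALGORITHMS = {
    "hash": {"SHA-256", "SHA-384", "SHA-512", "SHA3-256"},
    "cipher": {"AES-128-GCM", "AES-256-GCM"},
    "signature": {"ECDSA-P256", "RSA-3072-PSS"},
}

class NonApprovedAlgorithmError(ValueError):
    """Raised when a requested algorithm is outside the local allowlist."""

def require_approved(category: str, algorithm: str) -> str:
    """Return the algorithm name if it is on the local allowlist, otherwise raise."""
    allowed = APPROVED_ALGORITHMS.get(category, set())
    if algorithm not in allowed:
        raise NonApprovedAlgorithmError(
            f"{algorithm} is not in the local {category} allowlist"
        )
    return algorithm

if __name__ == "__main__":
    print(require_approved("hash", "SHA-256"))   # accepted
    try:
        require_approved("hash", "MD5")          # rejected for security use
    except NonApprovedAlgorithmError as exc:
        print("rejected:", exc)
```

In practice, compliance would also depend on using a validated cryptographic module rather than on application-level checks alone.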
FIPS 140-3
Technology,Engineering
928
15,941
https://en.wikipedia.org/wiki/Jean-Jacques%20Rousseau
Jean-Jacques Rousseau (28 June 1712 – 2 July 1778) was a Genevan philosopher (philosophe), writer, and composer. His political philosophy influenced the progress of the Age of Enlightenment throughout Europe, as well as aspects of the French Revolution and the development of modern political, economic, and educational thought. His Discourse on Inequality, which argues that private property is the source of inequality, and The Social Contract, which outlines the basis for a legitimate political order, are cornerstones in modern political and social thought. Rousseau's sentimental novel Julie, or the New Heloise (1761) was important to the development of preromanticism and romanticism in fiction. His Emile, or On Education (1762) is an educational treatise on the place of the individual in society. Rousseau's autobiographical writings—the posthumously published Confessions (completed in 1770), which initiated the modern autobiography, and the unfinished Reveries of the Solitary Walker (composed 1776–1778)—exemplified the late 18th-century "Age of Sensibility", and featured an increased focus on subjectivity and introspection that later characterized modern writing. Biography Youth Rousseau was born in the Republic of Geneva, which was at the time a city-state and a Protestant associate of the Swiss Confederacy (now a canton of Switzerland). Since 1536, Geneva had been a Huguenot republic and the seat of Calvinism. Five generations before Rousseau, his ancestor Didier, a bookseller who may have published Protestant tracts, had escaped persecution from French Catholics by fleeing to Geneva in 1549, where he became a wine merchant. Rousseau was proud that his family, of the moyen order (or middle-class), had voting rights in the city. Throughout his life, he generally signed his books "Jean-Jacques Rousseau, Citizen of Geneva". Geneva, in theory, was governed democratically by its male voting citizens. The citizens were a minority of the population when compared to the immigrants (inhabitants) and their descendants (natives). In fact, rather than being run by vote of the citizens, the city was ruled by a small number of wealthy families that made up the Council of Two Hundred; they delegated their power to a 25-member executive group from among them called the "Small Council". There was much political debate within Geneva, extending down to the tradespeople. Much discussion was over the idea of the sovereignty of the people, of which the ruling class oligarchy was making a mockery. In 1707, democratic reformer Pierre Fatio protested this situation, saying "A sovereign that never performs an act of sovereignty is an imaginary being". He was shot by order of the Small Council. Jean-Jacques Rousseau's father, Isaac, was not in the city then, but Jean-Jacques's grandfather supported Fatio and was penalized for it. Rousseau's father, Isaac Rousseau, followed his grandfather, father and brothers into the watchmaking business. He also taught dance for a short period. Isaac, notwithstanding his artisan status, was well-educated and a lover of music. Rousseau wrote that "A Genevan watchmaker is a man who can be introduced anywhere; a Parisian watchmaker is only fit to talk about watches". In 1699, Isaac ran into political difficulty by entering a quarrel with visiting English officers, who in response drew their swords and threatened him. After local officials stepped in, it was Isaac who was punished, as Geneva was concerned with maintaining its ties to foreign powers. 
Rousseau's mother, Suzanne Bernard Rousseau, was from an upper-class family. She was raised by her uncle Samuel Bernard, a Calvinist preacher. He cared for Suzanne after her father, Jacques, who had run into trouble with the legal and religious authorities for fornication and having a mistress, died in his early 30s. In 1695, Suzanne had to answer charges that she had attended a street theatre disguised as a peasant woman so she could gaze upon M. Vincent Sarrasin, whom she fancied despite his continuing marriage. After a hearing, she was ordered by the Genevan Consistory to never interact with him again. She married Rousseau's father at the age of 31. Isaac's sister had married Suzanne's brother eight years earlier, after she had become pregnant and they had been chastised by the Consistory. The child died at birth. The young Rousseau was told a fabricated story about the situation in which young love had been denied by a disapproving patriarch but later prevailed, resulting in two marriages uniting the families on the same day. Rousseau never learnt the truth. Rousseau was born on 28 June 1712, and he would later relate: "I was born almost dying, they had little hope of saving me". He was baptized on 4 July 1712, in the great cathedral. His mother died of puerperal fever nine days after his birth, which he later described as "the first of my misfortunes". He and his older brother François were brought up by their father and a paternal aunt, also named Suzanne. When Rousseau was five, his father sold the house the family had received from his mother's relatives. While the idea was that his sons would inherit the principal when grown up and he would live off the interest in the meantime, in the end, the father took most of the substantial proceeds. With the selling of the house, the Rousseau family moved out of the upper-class neighbourhood and into an apartment house in a neighbourhood of craftsmen—silversmiths, engravers, and other watchmakers. Growing up around craftsmen, Rousseau would later contrast them favourably to those who produced more aesthetic works, writing "those important persons who are called artists rather than artisans, work solely for the idle and rich, and put an arbitrary price on their baubles". Rousseau was also exposed to class politics in this environment, as the artisans often agitated in a campaign of resistance against the privileged class running Geneva. Rousseau had no recollection of learning to read, but he remembered how when he was five or six his father encouraged his love of reading: Rousseau's reading of escapist stories (such as L'Astrée by Honoré d'Urfé) affected him; he later wrote that they "gave me bizarre and romantic notions of human life, which experience and reflection have never been able to cure me of". After they had finished reading the novels, they began to read a collection of ancient and modern classics left by his mother's uncle. Of these, his favourite was Plutarch's Lives of the Noble Greeks and Romans, which he would read to his father while he made watches. Rousseau saw Plutarch's work as another kind of novel—the noble actions of heroes—and he would act out the deeds of the characters he was reading about. In his Confessions, Rousseau stated that the reading of Plutarch's works and "the conversations between my father and myself to which it gave rise, formed in me the free and republican spirit". Witnessing the local townsfolk participate in militias made a big impression on Rousseau. 
Throughout his life, he would recall one scene where, after the volunteer militia had finished its manoeuvres, they began to dance around a fountain and most of the people from neighbouring buildings came out to join them, including him and his father. Rousseau would always see militias as the embodiment of popular spirit in opposition to the armies of the rulers, whom he saw as disgraceful mercenaries. When Rousseau was ten, his father, an avid hunter, got into a legal quarrel with a wealthy landowner on whose lands he had been caught trespassing. To avoid certain defeat in the courts, he moved away to Nyon in the territory of Bern, taking Rousseau's aunt Suzanne with him. He remarried, and from that point, Jean-Jacques saw little of him. Jean-Jacques was left with his maternal uncle, who packed him and his son, Abraham Bernard, away to board for two years with a Calvinist minister in a hamlet outside Geneva. Here, the boys picked up the elements of mathematics and drawing. Rousseau, who was always deeply moved by religious services, for a time even dreamed of becoming a Protestant minister. Virtually all our information about Rousseau's youth has come from his posthumously published Confessions, in which the chronology is somewhat confused, though recent scholars have combed the archives for confirming evidence to fill in the blanks. At age 13, Rousseau was apprenticed first to a notary and then to an engraver who beat him. At 15, he ran away from Geneva (on 14 March 1728) after returning to the city and finding the city gates locked due to the curfew. In adjoining Savoy he took shelter with a Roman Catholic priest, who introduced him to Françoise-Louise de Warens, age 29. She was a noblewoman of a Protestant background who was separated from her husband. As a professional lay proselytizer, she was paid by the King of Piedmont to help bring Protestants to Catholicism. They sent the boy to Turin, the capital of Savoy (which included Piedmont, in what is now Italy), to complete his conversion. This resulted in his having to give up his Genevan citizenship, although he would later revert to Calvinism to regain it. In converting to Catholicism, both de Warens and Rousseau were likely reacting to Calvinism's insistence on the total depravity of man. Leo Damrosch writes: "An eighteenth-century Genevan liturgy still required believers to declare 'that we are miserable sinners, born in corruption, inclined to evil, incapable by ourselves of doing good. De Warens, a deist by inclination, was attracted to Catholicism's doctrine of forgiveness of sins. Finding himself on his own, since his father and uncle had more or less disowned him, the teenage Rousseau supported himself for a time as a servant, secretary, and tutor, wandering in Italy (Piedmont and Savoy) and France. Among his students was Stéphanie Louise de Bourbon-Conti. During this time, he lived on and off with de Warens, whom he idolized. Maurice Cranston notes, "Madame de Warens [...] took him into her household and mothered him; he called her 'maman' and she called him 'petit.'" Flattered by his devotion, de Warens tried to get him started in a profession, and arranged formal music lessons for him. At one point, he briefly attended a seminary with the idea of becoming a priest. Early adulthood When Rousseau reached 20, de Warens took him as her lover, while intimate also with the steward of her house. 
The sexual aspect of their relationship (a ménage à trois) confused Rousseau and made him uncomfortable, but he always considered de Warens the greatest love of his life. A rather profligate spender, she had a large library and loved to entertain and listen to music. She and her circle, comprising educated members of the Catholic clergy, introduced Rousseau to the world of letters and ideas. Rousseau had been an indifferent student, but during his 20s, which were marked by long bouts of hypochondria, he applied himself in earnest to the study of philosophy, mathematics, and music. At 25, he came into a small inheritance from his mother and used a portion of it to repay de Warens for her financial support of him. At 27, he took a job as a tutor in Lyon. In 1742, Rousseau moved to Paris to present the Académie des Sciences with a new system of numbered musical notation he believed would make his fortune. His system, intended to be compatible with typography, is based on a single line, displaying numbers representing intervals between notes and dots and commas indicating rhythmic values. Believing the system was impractical, the Academy rejected it, though they praised his mastery of the subject, and urged him to try again. He befriended Denis Diderot that year, connecting over the discussion of literary endeavors. From 1743 to 1744, Rousseau had an honorable but ill-paying post as a secretary to the Comte de Montaigue, the French ambassador to Venice. This awoke in him a lifelong love for Italian music, particularly opera: Rousseau's employer routinely received his stipend as much as a year late and paid his staff irregularly. After 11 months, Rousseau quit, taking from the experience a profound distrust of government bureaucracy. Return to Paris Returning to Paris, the penniless Rousseau befriended and became the lover of Thérèse Levasseur, a seamstress who was the sole support of her mother and numerous ne'er-do-well siblings. At first, they did not live together, though later Rousseau took Thérèse and her mother in to live with him as his servants, and himself assumed the burden of supporting her large family. According to his Confessions, before she moved in with him, Thérèse bore him a son and as many as four other children (there is no independent verification for this number). Rousseau wrote that he persuaded Thérèse to give each of the newborns up to a foundling hospital, for the sake of her "honor". "Her mother, who feared the inconvenience of a brat, came to my aid, and she [Thérèse] allowed herself to be overcome" (Confessions). In his letter to Madame de Francueil in 1751, he first pretended that he was not rich enough to raise his children, but in Book IX of the Confessions he gave the true reasons of his choice: "I trembled at the thought of intrusting them to a family ill brought up, to be still worse educated. The risk of the education of the foundling hospital was much less". Ten years later, Rousseau made inquiries about the fate of his son, but unfortunately no record could be found. When Rousseau subsequently became celebrated as a theorist of education and child-rearing, his abandonment of his children was used by his critics, including Voltaire and Edmund Burke, as the basis for arguments ad hominem. Beginning with some articles on music in 1749, Rousseau contributed numerous articles to Diderot and D'Alembert's great Encyclopédie, the most famous of which was an article on political economy written in 1755. 
Rousseau's ideas were the result of an almost obsessive dialogue with writers of the past, filtered in many cases through conversations with Diderot. In 1749, Rousseau was paying daily visits to Diderot, who had been thrown into the fortress of Vincennes under a lettre de cachet for opinions in his "Lettre sur les aveugles", that hinted at materialism, a belief in atoms, and natural selection. According to science historian Conway Zirkle, Rousseau saw the concept of natural selection "as an agent for improving the human species." Rousseau had read about an essay competition sponsored by the Académie de Dijon to be published in the Mercure de France on the theme of whether the development of the arts and sciences had been morally beneficial. He wrote that while walking to Vincennes (about three miles from Paris), he had a revelation that the arts and sciences were responsible for the moral degeneration of mankind, who were basically good by nature. Rousseau's 1750 Discourse on the Arts and Sciences was awarded the first prize and gained him significant fame. Rousseau continued his interest in music. He wrote both the words and music of his opera Le devin du village (The Village Soothsayer), which was performed for King Louis XV in 1752. The king was so pleased by the work that he offered Rousseau a lifelong pension. To the exasperation of his friends, Rousseau turned down the great honor, bringing him notoriety as "the man who had refused a king's pension". He also turned down several other advantageous offers, sometimes with a brusqueness bordering on truculence that gave offense and caused him problems. The same year, the visit of a troupe of Italian musicians to Paris, and their performance of Giovanni Battista Pergolesi's La serva padrona, prompted the Querelle des Bouffons, which pitted protagonists of French music against supporters of the Italian style. Rousseau, as noted above, was an enthusiastic supporter of the Italians against Jean-Philippe Rameau and others, making an important contribution with his Letter on French Music. Return to Geneva On returning to Geneva in 1754, Rousseau reconverted to Calvinism and regained his official Genevan citizenship. In 1755, Rousseau completed his second major work, the Discourse on the Origin and Basis of Inequality Among Men (the Discourse on Inequality), which elaborated on the arguments of the Discourse on the Arts and Sciences. He also pursued an unconsummated romantic attachment with the 25-year-old Sophie d'Houdetot, which partly inspired his epistolary novel Julie, ou la nouvelle Héloïse (also based on memories of his idyllic youthful relationship with Mme de Warens). Sophie was the cousin and houseguest of Rousseau's patroness and landlady Madame d'Épinay, whom he treated rather high-handedly. He resented being at Mme. d'Épinay's beck and call and detested what he viewed as the insincere conversation and shallow atheism of the Encyclopédistes whom he met at her table. Wounded feelings gave rise to a bitter three-way quarrel between Rousseau and Madame d'Épinay; her lover, the journalist Grimm; and their mutual friend, Diderot, who took their side against Rousseau. Diderot later described Rousseau as being "false, vain as Satan, ungrateful, cruel, hypocritical, and wicked... He sucked ideas from me, used them himself, and then affected to despise me". 
Rousseau's break with the Encyclopédistes coincided with the composition of his three major works, in all of which he emphasized his fervent belief in a spiritual origin of man's soul and the universe, in contradistinction to the materialism of Diderot, La Mettrie and D'Holbach. During this period, Rousseau enjoyed the support and patronage of Charles II François Frédéric de Montmorency-Luxembourg and the Prince de Conti, two of the richest and most powerful nobles in France. These men truly liked Rousseau and enjoyed his ability to converse on any subject, but they also used him as a way of getting back at Louis XV and the political faction surrounding his mistress, Madame de Pompadour. Even with them, however, Rousseau went too far, courting rejection when he criticized the practice of tax farming, in which some of them engaged. Rousseau's 800-page novel of sentiment, Julie, ou la nouvelle Héloïse, was published in 1761 to immense success. The book's rhapsodic descriptions of the natural beauty of the Swiss countryside struck a chord in the public and may have helped spark the subsequent nineteenth-century craze for Alpine scenery. In 1762, Rousseau published Du Contrat Social, Principes du droit politique (in English, literally Of the Social Contract, Principles of Political Right) in April. Even his friend Antoine-Jacques Roustan felt impelled to write a polite rebuttal of the chapter on Civil Religion in the Social Contract, which implied that the concept of a Christian republic was paradoxical since Christianity taught submission rather than participation in public affairs. Rousseau helped Roustan find a publisher for the rebuttal. Rousseau published Emile, or On Education in May. A famous section of Emile, "The Profession of Faith of a Savoyard Vicar", was intended to be a defense of religious belief. Rousseau's choice of a Catholic vicar of humble peasant background (plausibly based on a kindly prelate he had met as a teenager) as a spokesman for the defense of religion was in itself a daring innovation for the time. The vicar's creed was that of Socinianism (or Unitarianism as it is called today). Because it rejected original sin and divine revelation, both Protestant and Catholic authorities took offense. Moreover, Rousseau advocated the opinion that, insofar as they lead people to virtue, all religions are equally worthy, and that people should therefore conform to the religion in which they have been brought up. This religious indifferentism caused Rousseau and his books to be banned from France and Geneva. He was condemned from the pulpit by the Archbishop of Paris, his books were burned and warrants were issued for his arrest. Former friends such as Jacob Vernes of Geneva could not accept his views and wrote violent rebuttals. A sympathetic observer, David Hume "professed no surprise when he learned that Rousseau's books were banned in Geneva and elsewhere". Rousseau, he wrote, "has not had the precaution to throw any veil over his sentiments; and, as he scorns to dissemble his contempt for established opinions, he could not wonder that all the zealots were in arms against him. The liberty of the press is not so secured in any country... as not to render such an open attack on popular prejudice somewhat dangerous." Voltaire and Frederick the Great After Rousseau's Emile had outraged the French parliament, an arrest order was issued by parliament against him, causing him to flee to Switzerland. 
Subsequently, when the Swiss authorities also proved unsympathetic to him—condemning both Emile, and also The Social Contract—Voltaire issued an invitation to Rousseau to come and reside with him, commenting that: "I shall always love the author of the 'Vicaire savoyard' whatever he has done, and whatever he may do...Let him come here [to Ferney]! He must come! I shall receive him with open arms. He shall be master here more than I. I shall treat him like my own son." Rousseau later expressed regret that he had not replied to Voltaire's invitation. In July 1762, after Rousseau was informed that he could not continue to reside in Bern, D'Alembert advised him to move to the Principality of Neuchâtel, ruled by Frederick the Great of Prussia. Subsequently, Rousseau accepted an invitation to reside in Môtiers, fifteen miles from Neuchâtel. On 11 July 1762, Rousseau wrote to Frederick, describing how he had been driven from France, from Geneva, and from Bern; and seeking Frederick's protection. He also mentioned that he had criticized Frederick in the past and would continue to be critical of Frederick in the future, stating however: "Your Majesty may dispose of me as you like." Frederick, still in the middle of the Seven Years' War, then wrote to the local governor of Neuchâtel, Marischal Keith, who was a mutual friend of theirs: Rousseau, touched by the help he received from Frederick, stated that from then onwards he took a keen interest in Frederick's activities. As the Seven Years' War was about to end, Rousseau wrote to Frederick again, thanking him for the help received and urging him to put an end to military activities and to endeavor to keep his subjects happy instead. Frederick made no known reply but commented to Keith that Rousseau had given him a "scolding". Fugitive For more than two years (1762–1765) Rousseau lived at Môtiers, spending his time in reading and writing and meeting visitors such as James Boswell (December 1764). (Boswell recorded his private discussions with Rousseau, in both direct quotation and dramatic dialog, over several pages of his 1764 journal.) In the meantime, the local ministers had become aware of the apostasies in some of his writings and resolved not to let him stay in the vicinity. The Neuchâtel Consistory summoned Rousseau to answer a charge of blasphemy. He wrote back asking to be excused due to his inability to sit for a long time due to his ailment. Subsequently, Rousseau's own pastor, Frédéric-Guillaume de Montmollin, started denouncing him publicly as an Antichrist. In one inflammatory sermon, Montmollin quoted Proverbs 15:8: "The sacrifice of the wicked is an abomination to the Lord, but the prayer of the upright is his delight"; this was interpreted by everyone to mean that Rousseau's taking communion was detested by the Lord. The ecclesiastical attacks inflamed the parishioners, who proceeded to pelt Rousseau with stones when he would go out for walks. Around midnight of 6–7 September 1765, stones were thrown at the house Rousseau was staying in, and some glass windows were shattered. When a local official, Martinet, arrived at Rousseau's residence he saw so many stones on the balcony that he exclaimed "My God, it's a quarry!" At this point, Rousseau's friends in Môtiers advised him to leave the town. Since he wanted to remain in Switzerland, Rousseau decided to accept an offer to move to a tiny island, the Île de St.-Pierre, having a solitary house. 
Although it was within the Canton of Bern, from where he had been expelled two years previously, he was informally assured that he could move into this island house without fear of arrest, and he did so (10 September 1765). Here, despite the remoteness of his retreat, visitors sought him out as a celebrity. However, on 17 October 1765, the Senate of Bern ordered Rousseau to leave the island and all Bernese territory within fifteen days. He replied, requesting permission to extend his stay, and offered to be incarcerated in any place within their jurisdiction with only a few books in his possession and permission to walk occasionally in a garden while living at his own expense. The Senate's response was to direct Rousseau to leave the island, and all Bernese territory, within twenty-four hours. On 29 October 1765 he left the Île de St.-Pierre and moved to Strasbourg. At this point he received invitations from several parties in Europe, and soon decided to accept Hume's invitation to go to England. On 9 December 1765, having secured a passport from the French government, Rousseau left Strasbourg for Paris where he arrived a week later and lodged in a palace of his friend, the Prince of Conti. Here he met Hume, and also numerous friends and well-wishers, and became a conspicuous figure in the city. At this time, Hume wrote: "It is impossible to express or imagine the enthusiasm of this nation in Rousseau's favor...No person ever so much enjoyed their attention...Voltaire and everybody else are quite eclipsed. Although Diderot at this time desired a reconciliation with Rousseau, both of them expected an initiative by the other, and the two did not meet. Letter of Walpole On 1 January 1766, Grimm included in his "Correspondance littéraire" a letter said to have been written by Frederick the Great to Rousseau. It had actually been composed by Horace Walpole as a playful hoax. Walpole had never met Rousseau, but he was well acquainted with Diderot and Grimm. The letter soon found wide publicity; Hume is believed to have been present, and to have participated in its creation. On 16 February 1766, Hume wrote to the Marquise de Brabantane: "The only pleasantry I permitted myself in connection with the pretended letter of the King of Prussia was made by me at the dinner table of Lord Ossory." This letter was one of the reasons for the later rupture in Hume's relations with Rousseau. In Britain On 4 January 1766 Rousseau left Paris with Hume, the merchant De Luze (an old friend of Rousseau), and Rousseau's pet dog Sultan. After a four-day journey to Calais, where they stayed for two nights, the travelers embarked on a ship to Dover. On 13 January 1766 they arrived in London. Soon after their arrival, David Garrick arranged a box at the Drury Lane Theatre for Hume and Rousseau on a night when the King and Queen also attended. Garrick was himself performing in a comedy by himself, and also in a tragedy by Voltaire. Rousseau became so excited during the performance that he leaned too far and almost fell out of the box; Hume observed that the King and Queen were looking at Rousseau more than at the performance. Afterwards, Garrick served supper for Rousseau, who commended Garrick's acting: "Sir, you have made me shed tears at your tragedy, and smile at your comedy, though I scarce understood a word of your language." 
At this time, Hume had a favorable opinion of Rousseau; in a letter to Madame de Brabantane, Hume wrote that after observing Rousseau carefully he had concluded that he had never met a more affable and virtuous person. According to Hume, Rousseau was "gentle, modest, affectionate, disinterested, of extreme sensitivity". Initially, Hume lodged Rousseau in the house of Madam Adams in London, but Rousseau began receiving so many visitors that he soon wanted to move to a quieter location. An offer came to lodge him in a Welsh monastery, and he was inclined to accept it, but Hume persuaded him to move to Chiswick. Rousseau now asked for Thérèse to rejoin him. Meanwhile, James Boswell, then in Paris, offered to escort Thérèse to Rousseau. (Boswell had earlier met Rousseau and Thérèse at Motiers; he had subsequently also sent Thérèse a garnet necklace and had written to Rousseau seeking permission to communicate occasionally with her.) Hume foresaw what was going to happen: "I dread some event fatal to our friend's honor." Boswell and Thérèse were together for more than a week, and as per notes in Boswell's diary they consummated the relationship, having intercourse several times. On one occasion, Thérèse told Boswell: "Don't imagine you are a better lover than Rousseau." Since Rousseau was keen to relocate to a more remote location, Richard Davenport—a wealthy and elderly widower who spoke French—offered to accommodate Thérèse and Rousseau at Wootton Hall in Staffordshire. On 22 March 1766 Rousseau and Thérèse set forth for Wootton, against Hume's advice. Hume and Rousseau would never meet again. Initially Rousseau liked his new accommodation at Wootton Hall and wrote favorably about the natural beauty of the place, and how he was feeling reborn, forgetting past sorrows. Quarrel with Hume On 3 April 1766 a daily newspaper published the letter constituting Horace Walpole's hoax on Rousseau—without mentioning Walpole as the actual author; that the editor of the publication was Hume's personal friend compounded Rousseau's grief. Gradually articles critical of Rousseau started appearing in the British press; Rousseau felt that Hume, as his host, ought to have defended him. Moreover, in Rousseau's estimate, some of the public criticism contained details to which only Hume was privy. Further, Rousseau was aggrieved to find that Hume had been lodging in London with François Tronchin, son of Rousseau's enemy in Geneva. About this time, Voltaire anonymously (as always) published his Letter to Dr. J.-J. Pansophe in which he gave extracts from many of Rousseau's prior statements which were critical of life in England; the most damaging portions of Voltaire's writeup were reprinted in a London periodical. Rousseau now decided that there was a conspiracy afoot to defame him. A further cause for Rousseau's displeasure was his concern that Hume might be tampering with his mail. The misunderstanding had arisen because Rousseau tired of receiving voluminous correspondence whose postage he had to pay. Hume offered to open Rousseau's mail himself and to forward the important letters to Rousseau; this offer was accepted. However, there is some evidence of Hume intercepting even Rousseau's outgoing mail. After some correspondence with Rousseau, which included an eighteen-page letter from Rousseau describing the reasons for his resentment, Hume concluded that Rousseau was losing his mental balance. 
On learning that Rousseau had denounced him to his Parisian friends, Hume sent a copy of Rousseau's long letter to Madame de Boufflers. She replied stating that, in her estimate, Hume's alleged participation in the composition of Horace Walpole's faux letter was the reason for Rousseau's anger. When Hume learnt that Rousseau was writing the Confessions, he assumed that the present dispute would feature in the book. Adam Smith, Turgot, Marischal Keith, Horace Walpole, and Mme de Boufflers advised Hume not to make his quarrel with Rousseau public; however, many members of Holbach's coterie—particularly D'Alembert—urged him to reveal his version of the events. In October 1766 Hume's version of the quarrel was translated into French and published in France; in November it was published in England. Grimm included it in his Correspondance littéraire; ultimately: After the dispute became public, due in part to comments from notable publishers like Andrew Millar, Walpole told Hume that quarrels such as this only end up becoming a source of amusement for Europe. Diderot took a charitable view of the mess: "I knew these two philosophers well. I could write a play about them that would make you weep, and it would excuse them both." Amidst the controversy surrounding his quarrel with Hume, Rousseau maintained a public silence; but he resolved now to return to France. To encourage him to do so swiftly, Thérèse advised him that the servants at Wootton Hall sought to poison him. On 22 May 1767 Rousseau and Thérèse embarked from Dover for Calais. In Grenoble On 22 May 1767, Rousseau reentered France even though an arrest warrant against him was still in place. He had taken an assumed name, but was recognized, and a banquet in his honor was held by the city of Amiens. French nobles offered him a residence at this time. Initially, Rousseau decided to stay in an estate near Paris belonging to Mirabeau. Subsequently, on 21 June 1767, he moved to a chateau of the Prince of Conti in Trie. Around this time, Rousseau started developing feelings of paranoia, anxiety, and of a conspiracy against him. Most of this was just his imagination at work, but on 29 January 1768, the theatre at Geneva was destroyed through burning, and Voltaire mendaciously accused Rousseau of being the culprit. In June 1768, Rousseau left Trie, leaving Thérèse behind, and went first to Lyon, and subsequently to Bourgoin. He now invited Thérèse to this place and married her, under his alias "Renou" in a faux civil ceremony in Bourgoin on 30 August 1768. In January 1769, Rousseau and Thérèse went to live in a farmhouse near Grenoble. Here he practiced botany and completed the Confessions. At this time he expressed regret for placing his children in an orphanage. On 10 April 1770, Rousseau and Thérèse left for Lyon where he befriended Horace Coignet, a fabric designer and amateur musician. At Rousseau's suggestion, Coignet composed musical interludes for Rousseau's prose poem Pygmalion; this was performed in Lyon together with Rousseau's romance The Village Soothsayer to public acclaim. On 8 June, Rousseau and Thérèse left Lyon for Paris; they reached Paris on 24 June. In Paris, Rousseau and Thérèse lodged in an unfashionable neighborhood of the city, the Rue Platrière—now called the Rue Jean-Jacques Rousseau. He now supported himself financially by copying music, and continued his study of botany. At this time also, he wrote his Letters on the Elements of Botany. 
These consisted of a series of letters Rousseau wrote to Mme Delessert in Lyon to help her daughters learn the subject. These letters received widespread acclaim when they were eventually published posthumously. "It's a true pedagogical model, and it complements Emile," commented Goethe. In order to defend his reputation against hostile gossip, Rousseau had begun writing the Confessions in 1765. In November 1770, these were completed, and although he did not wish to publish them at this time, he began to offer group readings of certain portions of the book. Between December 1770, and May 1771, Rousseau made at least four group readings of his book with the final reading lasting seventeen hours. A witness to one of these sessions, Claude Joseph Dorat, wrote: After May 1771, there were no more group readings because Madame d'Épinay wrote to the chief of police, who was her friend, to put a stop to Rousseau's readings so as to safeguard her privacy. The police called on Rousseau, who agreed to stop the readings. His Confessions were finally published posthumously in 1782. In 1772, Rousseau was invited to present recommendations for a new constitution for the Polish–Lithuanian Commonwealth, resulting in the Considerations on the Government of Poland, which was to be his last major political work. Also in 1772, Rousseau began writing Rousseau, Judge of Jean-Jacques, which was another attempt to reply to his critics. He completed writing it in 1776. The book is in the form of three dialogues between two characters; a "Frenchman" and "Rousseau", who argue about the merits and demerits of a third character—an author called Jean-Jacques. It has been described as his most unreadable work; in the foreword to the book, Rousseau admits that it may be repetitious and disorderly, but he begs the reader's indulgence on the grounds that he needs to defend his reputation from slander before he dies. Final years In 1766, Rousseau had impressed Hume with his physical prowess by spending ten hours at night on the deck in severe weather during the journey by ship from Calais to Dover while Hume was confined to his bunk. "When all the seamen were almost frozen to death...he caught no harm...He is one of the most robust men I have ever known," Hume noted. His urinary disease had also been greatly alleviated after he stopped listening to the advice of doctors. At that time, notes Damrosch, it was often better to let nature take its own course rather than subject oneself to medical procedures. His general health had also improved. However, on 24 October 1776, as he was walking on a narrow street in Paris, a nobleman's carriage came rushing by from the opposite direction; flanking the carriage was a galloping Great Dane belonging to the nobleman. Rousseau was unable to dodge both the carriage and the dog and was knocked down by the Great Dane. He seems to have suffered a concussion and neurological damage. His health began to decline; Rousseau's friend Corancez described the appearance of certain symptoms which indicate that Rousseau started suffering from epileptic seizures after the accident. In 1777, Rousseau received a royal visitor, when the Holy Roman Emperor Joseph II came to meet him. His free entry to the Opera had been renewed by this time and he would go there occasionally. At this time also (1777–1778), he composed one of his finest works, Reveries of a Solitary Walker, ultimately interrupted by his death. 
In the spring of 1778, the Marquis Girardin invited Rousseau to live in a cottage in his château at Ermenonville. Rousseau and Thérèse went there on 20 May. Rousseau spent his time at the château collecting botanical specimens and teaching botany to Girardin's son. He ordered books from Paris on grasses, mosses and mushrooms and made plans to complete his unfinished Emile and Sophie and Daphnis and Chloe. On 1 July, a visitor commented that "men are wicked," to which Rousseau replied, "men are wicked, yes, but man is good"; in the evening there was a concert in the château in which Rousseau played on the piano his own composition of the Willow Song from Othello. On this day also, he had a hearty meal with Girardin's family; the next morning, as he was about to go teach music to Girardin's daughter, he died of cerebral bleeding resulting in an apoplectic stroke. It is now believed that repeated falls, including the accident involving the Great Dane, may have contributed to Rousseau's stroke. Following his death, Grimm, Madame de Staël and others spread the false news that Rousseau had committed suicide; according to other gossip, Rousseau was insane when he died. All those who met him in his last days agree that he was in a serene frame of mind at this time. On 4 July 1778, Rousseau was buried on the Île des Peupliers, a tiny, wooded island in a lake at Ermenonville, which became a place of pilgrimage for his many admirers. On 11 October 1794, his remains were moved to the Panthéon, where they were placed near those of Voltaire. Philosophy Influences Rousseau later noted that when he read the question for the essay competition of the Academy of Dijon, which he would go on to win ("Has the rebirth of the arts and sciences contributed to the purification of the morals?"), he felt that "the moment I read this announcement I saw another universe and became a different man". The essay he wrote in response led to one of the central themes of Rousseau's thought, which was that perceived social and cultural progress had in fact led only to the moral degradation of humanity. His influences in reaching this conclusion included Montesquieu, François Fénelon, Michel de Montaigne, Seneca the Younger, Plato, and Plutarch. Rousseau based his political philosophy on contract theory and his reading of Thomas Hobbes; his thought was also driven by his reactions to the ideas of Samuel von Pufendorf and John Locke. All three thinkers had believed that humans living without central authority were facing uncertain conditions in a state of mutual competition. In contrast, Rousseau believed that there was no explanation for why this would be the case, as there would have been no conflict or property. Rousseau especially criticized Hobbes for asserting that since man in the "state of nature... has no idea of goodness he must be naturally wicked; that he is vicious because he does not know virtue". On the contrary, Rousseau holds that "uncorrupted morals" prevail in the "state of nature". Human nature In common with other philosophers of the day, Rousseau looked to a hypothetical "state of nature" as a normative guide. In the original condition, humans would have had "no moral relations with or determinate obligations to one another". Because of their rare contact with each other, differences between individuals would have been of little significance. Living separately, there would have been no feelings of envy or distrust, and no existence of property or conflict. 
According to Rousseau, humans have two traits in common with other animals: the amour de soi, which describes the self-preservation instinct; and pitié, which is empathy for the rest of one's species, both of which precede reason and sociability. Only humans who are morally deprived would care only about their status relative to others, leading to amour-propre, or vanity. He did not believe humans to be innately superior to other species. However, human beings did have the unique ability to change their nature through free choice, instead of being confined to natural instincts. Another aspect separating humans from other animals is the capacity for perfectibility, which allows humans to choose in a way that improves their condition. These improvements could be lasting, leading not only to individual, but also collective change for the better. Together with human freedom, the ability to improve makes possible the historic evolution of humanity. However, there is no guarantee that this evolution will be for the better. Human development Rousseau asserted that the stage of human development associated with what he called "savages" was the best or optimal one, lying between the less-than-optimal extreme of brute animals on the one hand and the extreme of decadent civilization on the other. ... nothing is so gentle as man in his primitive state, when placed by nature at an equal distance from the stupidity of brutes and the fatal enlightenment of civil man. This has led some critics to attribute to Rousseau the invention of the idea of the noble savage, which Arthur Lovejoy claimed misrepresents Rousseau's thought. According to Rousseau, as savages had grown less dependent on nature, they had instead become dependent on each other, with society leading to the loss of freedom through the misapplication of perfectibility. When living together, humans would have gone from a nomadic lifestyle to a settled one, leading to the invention of private property. However, the resulting inequality was not a natural outcome, but rather the product of human choice. Rousseau's ideas of human development were highly interconnected with forms of mediation, or the processes that individual humans use to interact with themselves and others while using an alternate perspective or thought process. According to Rousseau, these were developed through the innate perfectibility of humanity. These include a sense of self, morality, pity, and imagination. Rousseau's writings are purposely ambiguous concerning the formation of these processes, to the point that mediation is always intrinsically part of humanity's development. An example of this is the notion that an individual needs an alternative perspective to realize that he or she is a 'self'. As long as differences in wealth and status among families were minimal, the first coming together in groups was accompanied by a fleeting golden age of human flourishing. The development of agriculture, metallurgy, private property, and the division of labour and resulting dependency on one another, however, led to economic inequality and conflict. As population pressures forced them to associate more and more closely, they underwent a psychological transformation: they began to see themselves through the eyes of others and came to value the good opinions of others as essential to their self-esteem. As humans started to compare themselves with each other, they began to notice that some had qualities differentiating them from others. 
However, only when moral significance was attached to these qualities did they start to create esteem and envy, and thereby, social hierarchies. Rousseau noted that whereas "the savage lives within himself, sociable man, always outside himself, can only live in the opinion of others". This then resulted in the corruption of humankind, "producing combinations fatal to innocence and happiness". Following the attachment of importance to human difference, they would have started forming social institutions, according to Rousseau. Metallurgy and agriculture would have subsequently increased the inequalities between those with and without property. After all land had been converted into private properties, a zero-sum game would have resulted in competition for it, leading to conflict. This would have led to the creation and perpetuation of the 'hoax' of the political system by the rich, which perpetuated their power. Political theory According to Rousseau, the original forms of government to emerge: monarchy, aristocracy, democracy, were all products of the differing levels of inequality in their societies. However, they would always end up with ever worse levels of inequality, until a revolution would have overthrown it and new leaders would have emerged with further extremes of injustice. Nevertheless, the human capacity for self-improvement remained. As the problems of humanity were the product of political choice, they could also be improved by a better political system. The Social Contract outlines the basis for a legitimate political order within a framework of classical republicanism. Published in 1762, it became one of the most influential works of political philosophy in the Western tradition. It developed some of the ideas mentioned in an earlier work, the article Économie Politique (Discourse on Political Economy), featured in Diderot's Encyclopédie. In the book, Rousseau sketched the image of a new political system for regaining human freedom. Rousseau claimed that the state of nature was a primitive condition without law or morality, which human beings left for the benefits and necessity of cooperation. As society developed, the division of labor and private property required the human race to adopt institutions of law. In the degenerate phase of society, man is prone to be in frequent competition with his fellow men while also becoming increasingly dependent on them. This double pressure threatens both his survival and his freedom. According to Rousseau, by joining together into civil society through the social contract and abandoning their claims of natural right, individuals can both preserve themselves and remain free. This is because submission to the authority of the general will of the people as a whole guarantees individuals against being subordinated to the wills of others and also ensures that they obey themselves because they are, collectively, the authors of the law. Although Rousseau argues that sovereignty (or the power to make the laws) should be in the hands of the people, he also makes a sharp distinction between the sovereign and the government. The government is composed of magistrates, charged with implementing and enforcing the general will. The "sovereign" is the rule of law, ideally decided on by direct democracy in an assembly. Rousseau opposed the idea that the people should exercise sovereignty via a representative assembly (Book III, chapter XV). 
He approved the form of republican government of the city-state, for which Geneva provided a model—or would have done if renewed on Rousseau's principles. France could not meet Rousseau's criterion of an ideal state because it was too big. Much subsequent controversy about Rousseau's work has hinged on disagreements concerning his claims that citizens constrained to obey the general will are thereby rendered free: The notion of the general will is wholly central to Rousseau's theory of political legitimacy. ... It is, however, an unfortunately obscure and controversial notion. Some commentators see it as no more than the dictatorship of the proletariat or the tyranny of the urban poor (such as may perhaps be seen in the French Revolution). Such was not Rousseau's meaning. This is clear from the Discourse on Political Economy, where Rousseau emphasizes that the general will exists to protect individuals against the mass, not to require them to be sacrificed to it. He is, of course, sharply aware that men have selfish and sectional interests which will lead them to try to oppress others. It is for this reason that loyalty to the good of all alike must be a supreme (although not exclusive) commitment by everyone, not only if a truly general will is to be heeded but also if it is to be formulated successfully in the first place. A remarkable peculiarity of Social Contract is its logical rigor, which Rousseau had learned in his twenties from mathematics: Economic theory Rousseau offers a wealth of economic thought in his writings, especially the Discourse on Inequality, Discourse on Political Economy, the Social Contract, as well as his constitutional projects for Corsica and Poland. Rousseau's economic theory has been criticised as sporadic and unrigorous by later economists such as Joseph Schumpeter, but has been praised by historians of economic thought for its nuanced view of finance and mature thought on development. Scholars generally accept that Rousseau offers a critique of modern wealth and luxury. Moreover, Rousseau's economic thought is associated with agrarianism and Autarkism. Historian Istvan Hont modifies this reading, however, by suggesting that Rousseau is both a critic and a thinker of commerce, leaving room for well-regulated commerce within a well-governed civil space. Political theorists Ryan Hanley and Hansong Li further argue that as a modern legislator, Rousseau seeks not to reject, but to tame utility, self-love, and even trade, finance, and luxury to serve the health of the republic. Education and child rearing Rousseau's philosophy of education concerns itself not with particular techniques of imparting information and concepts, but rather with developing the pupil's character and moral sense, so that he may learn to practice self-mastery and remain virtuous even in the unnatural and imperfect society in which he will have to live. A hypothetical boy, Émile, is to be raised in the countryside, which, Rousseau believes, is a more natural and healthy environment than the city, under the guardianship of a tutor who will guide him through various learning experiences arranged by the tutor. Today we would call this the disciplinary method of "natural consequences". Rousseau felt that children learn right and wrong through experiencing the consequences of their acts rather than through physical punishment. The tutor will make sure that no harm results to Émile through his learning experiences. 
Rousseau became an early advocate of developmentally appropriate education; his description of the stages of child development mirrors his conception of the evolution of culture. He divides childhood into stages: The first to the age of about 12, when children are guided by their emotions and impulses During the second stage, from 12 to about 16, reason starts to develop Finally the third stage, from the age of 16 onwards, when the child develops into an adult Rousseau recommends that the young adult learn a manual skill such as carpentry, which requires creativity and thought, will keep him out of trouble, and will supply a fallback means of making a living in the event of a change of fortune (the most illustrious aristocratic youth to have been educated this way may have been Louis XVI, whose parents had him learn the skill of locksmithing). Rousseau was a believer in the moral superiority of the patriarchal family on the antique Roman model. Sophie, the young woman Émile is destined to marry, as his representative of ideal womanhood, is educated to be governed by her husband while Émile, as his representative of the ideal man, is educated to be self-governing. This is not an accidental feature of Rousseau's educational and political philosophy; it is essential to his account of the distinction between private, personal relations and the public world of political relations. The private sphere, as Rousseau imagines it, depends on the subordination of women for both it and the public political sphere (upon which it depends) to function as Rousseau imagines it could and should. Rousseau anticipated the modern idea of the bourgeois nuclear family, with the mother at home taking responsibility for the household and for childcare and early education. Feminists, beginning in the late 18th century with Mary Wollstonecraft in 1792, have criticized Rousseau for his confinement of women to the domestic sphere. Unless women were domesticated and constrained by modesty and shame, he feared "men would be tyrannized by women ... For, given the ease with which women arouse men's senses—men would finally be their victims ..." Rousseau also believed that Mothers were to breastfeed their children rather than use wet-nurses. Marmontel wrote that his wife often said, "We must pardon him something, who has taught us to be mothers" (meaning Rousseau). Rousseau's ideas have influenced progressive "child-centered" education. John Darling's 1994 book Child-Centered Education and its Critics portrays the history of modern educational theory as a series of footnotes to Rousseau, a development he regards as bad. The theories of educators such as Rousseau's near contemporaries Pestalozzi, Mme. de Genlis and, later, Maria Montessori and John Dewey, which have directly influenced modern educational practices, have significant points in common with those of Rousseau. Religion Having converted to Roman Catholicism early in life and returned to the austere Calvinism of his native Geneva as part of his period of moral reform, Rousseau maintained a profession of that religious philosophy and of John Calvin as a modern lawgiver throughout the remainder of his life. Unlike many of the more agnostic Enlightenment philosophers, Rousseau affirmed the necessity of religion. His views on religion presented in his works of philosophy, however, may strike some as discordant with the doctrines of both Catholicism and Calvinism. 
Rousseau's strong endorsement of religious toleration, as expounded in Émile, was interpreted as advocating indifferentism, a heresy, and led to the condemnation of the book in both Calvinist Geneva and Catholic Paris. Although he praised the Bible, he was disgusted by the Christianity of his day. Rousseau's assertion in The Social Contract that true followers of Christ would not make good citizens may have been another reason for his condemnation in Geneva. He also repudiated the doctrine of original sin, which plays a large part in Calvinism. In his "Letter to Beaumont", Rousseau wrote, "there is no original perversity in the human heart." In the 18th century, many deists viewed God merely as an abstract and impersonal creator of the universe, likened to a giant machine. Rousseau's deism differed from the usual kind in its emotionality. He saw the presence of God in the creation as good, and separate from the harmful influence of society. Rousseau's attribution of a spiritual value to the beauty of nature anticipates the attitudes of 19th-century Romanticism towards nature and religion. (Historians—notably William Everdell, Graeme Garrard, and Darrin McMahon—have additionally situated Rousseau within the Counter-Enlightenment.) Rousseau was upset that his deism was so forcefully condemned, while those of the more atheistic philosophers were ignored. He defended himself against critics of his religious views in his "Letter to Mgr de Beaumont, the Archbishop of Paris", "in which he insists that freedom of discussion in religious matters is essentially more religious than the attempt to impose belief by force." Composer Rousseau was a moderately successful composer of music, who wrote seven operas as well as music in other forms, and contributed to music theory. As a composer, his music was a blend of the late Baroque style and the emergent Classical fashion, i.e. Galant, and he belongs to the same generation of transitional composers as Christoph Willibald Gluck and C. P. E. Bach. One of his better-known works is the one-act opera The Village Soothsayer. It contains the duet "Non, Colette n'est point trompeuse," which was later rearranged as a standalone song by Beethoven, and the gavotte in scene no. 8 is the source of the tune of the folk song "Go Tell Aunt Rhody". He also composed several noted motets, some of which were sung at the Concert Spirituel in Paris. Rousseau's aunt Suzanne was passionate about music and heavily influenced his interest in it. In his Confessions, Rousseau claims he is "indebted" to her for his passion for music. Rousseau took formal instruction in music at the house of Françoise-Louise de Warens. She housed Rousseau on and off for about 13 years, giving him jobs and responsibilities. In 1742, Rousseau developed a numbered system of musical notation that was compatible with typography. He presented his invention to the Académie des Sciences, which rejected it while praising his efforts and encouraging him to try again. In 1743, Rousseau wrote his first opera, , which was first performed in 1745. Rousseau also developed a style of "boustrophedon" notation which would have music read in alternating directions (right to left for one staff, then left to right for the next, for example) in an effort to spare musicians from having to "jump" staffs while reading. Rousseau and Jean-Philippe Rameau argued over the superiority of Italian music over French. 
Rousseau argued that Italian music was superior based on the principle that melody must have priority over harmony. Rameau argued that French music was superior based on the principle that harmony must have priority over melody. Rousseau's plea for melody introduced the idea that in art, the free expression of a creative person is more important than the strict adherence to traditional rules and procedures. This is known today as a characteristic of Romanticism. Rousseau argued for musical freedom and changed people's attitudes towards music. His works were acknowledged by composers such as Christoph Willibald Gluck and Wolfgang Amadeus Mozart. After composing The Village Soothsayer in 1752, Rousseau felt he could not go on working for the theater because he was a moralist who had decided to break from worldly values. Musical compositions (1743) Les Fêtes de Ramire (1745) Symphonie à Cors de Chasse (1751) Le Devin du village (1752) – opera in 1 act Salve Regina (1752) – antiphon Chansons de Bataille (1753) Pygmalion (1762/1770) – melodrama Avril – air on a poem by Rémy Belleau Les Consolations des Misères de Ma Vie (1781) Daphnis et Chloé Que le jour me dure! Le Printemps de Vivaldi (1775) Legacy General will Rousseau's idea of the volonté générale ("general will") was not original but rather belonged to a well-established technical vocabulary of juridical and theological writings in use at the time. The phrase was used by Diderot and also by Montesquieu (and by his teacher, the Oratorian friar Nicolas Malebranche). It served to designate the common interest embodied in legal tradition, as distinct from and transcending people's private and particular interests at any particular time. It displayed a rather democratic ideology, as it declared that the citizens of a given nation should carry out whatever actions they deem necessary in their own sovereign assembly. Rousseau believed in a legislative process that necessitates the active involvement of every citizen in decision-making through discussion and voting. He termed this process the "general will": the collective will of a society as a whole, even if it may not necessarily coincide with the individual desires of each member. The concept was also an important aspect of the more radical 17th-century republican tradition of Spinoza, from whom Rousseau differed in important respects, but not in his insistence on the importance of equality: French Revolution Robespierre and Saint-Just, during the Reign of Terror, regarded themselves as principled egalitarian republicans, obliged to do away with superfluities and corruption; in this they were inspired most prominently by Rousseau. According to Robespierre, the deficiencies in individuals were rectified by upholding the 'common good', which he conceptualized as the collective will of the people; this idea was derived from Rousseau's general will. The revolutionaries were also inspired by Rousseau to introduce Deism as the new official civil religion of France: Rousseau's influence on the French Revolution was noted by Edmund Burke, who critiqued Rousseau in Reflections on the Revolution in France, and this critique reverberated throughout Europe, leading Catherine the Great to ban his works. This connection between Rousseau and the French Revolution (especially the Terror) persisted through the next century. 
As François Furet notes, "we can see that for the whole of the nineteenth century Rousseau was at the heart of the interpretation of the Revolution for both its admirers and its critics." Effect on the American Revolution One of Rousseau's most important American followers was Noah Webster (1758–1843). In 1785, two years before America's constitutional convention, Webster relied heavily on Rousseau's Social Contract while writing Sketches of American Policy, one of the earliest widely published arguments for a strong central government in America. George Washington, James Madison, and likely other founders read it before the convention. Webster also wrote two "fan-fiction" sequels to Rousseau's Emile, or On Education (1762) and included them in his 1785 Reader for schoolchildren. Webster's 1787 Reader, and later Readers, also contain an idealized word-portrait of Sophie, the girl in Rousseau's Emile, and Webster used Rousseau's theories in Emile to argue for the civic necessity of broad-based female education. According to some scholars, Rousseau exercised minimal influence on the Founding Fathers of the United States, despite similarities between their ideas. They shared beliefs regarding the self-evidence that "all men are created equal," and the conviction that citizens of a republic be educated at public expense. A parallel can be drawn between the United States Constitution's concept of the "general welfare" and Rousseau's concept of the "general will". Further commonalities exist between Jeffersonian democracy and Rousseau's praise of Switzerland and Corsica's economies of isolated and independent homesteads, and his endorsement of a well-regulated civic militia, such as a navy for Corsica, and the militia of the Swiss cantons. However, Will and Ariel Durant have opined that Rousseau had a definite political influence on America. According to them: Rousseau's writings perhaps had an indirect influence on American literature through the writings of Wordsworth and Kant, whose works were important to the New England transcendentalist Ralph Waldo Emerson, as well as on Unitarians such as theologian William Ellery Channing. The Last of the Mohicans and other American novels reflect republican and egalitarian ideals present alike in Thomas Paine and in English Romantic primitivism. Criticisms of Rousseau The first to criticize Rousseau were his fellow Philosophes, above all, Voltaire. According to Jacques Barzun, Voltaire was annoyed by the first discourse and outraged by the second. Voltaire's reading of the second discourse was that Rousseau would like the reader to "walk on all fours" befitting a savage. Samuel Johnson told his biographer James Boswell, "I think him one of the worst of men; a rascal, who ought to be hunted out of society, as he has been". Jean-Baptiste Blanchard was his leading Catholic opponent. Blanchard rejected Rousseau's negative education, in which one must wait until a child has grown to develop reason, holding that the child would find more benefit from learning in his earliest years. He also disagreed with Rousseau's ideas about female education, declaring that women are a dependent lot, so that removing them from their motherly path is unnatural, as it would lead to the unhappiness of both men and women. Historian Jacques Barzun states that, contrary to myth, Rousseau was no primitivist; for him:<blockquote>The model man is the independent farmer, free of superiors and self-governing. This was cause enough for the philosophes' hatred of their former friend. 
Rousseau's unforgivable crime was his rejection of the graces and luxuries of civilized existence. Voltaire had sung "The superfluous, that most necessary thing." For the high bourgeois standard of living Rousseau would substitute the middling peasant's. It was the country versus the city—an exasperating idea for them, as was the amazing fact that every new work of Rousseau's was a huge success, whether the subject was politics, theater, education, religion, or a novel about love.</blockquote> As early as 1788, Madame de Staël published her Letters on the works and character of J.-J. Rousseau. In 1819, in his famous speech "On Ancient and Modern Liberty", the political philosopher Benjamin Constant, a proponent of constitutional monarchy and representative democracy, criticized Rousseau, or rather his more radical followers (specifically the Abbé de Mably), for allegedly believing that "everything should give way to collective will, and that all restrictions on individual rights would be amply compensated by participation in social power." Frédéric Bastiat severely criticized Rousseau in several of his works, most notably in "The Law", in which, after analyzing Rousseau's own passages, he stated that: And what part do persons play in all this? They are merely the machine that is set in motion. In fact, are they not merely considered to be the raw material of which the machine is made? Thus the same relationship exists between the legislator and the prince as exists between the agricultural expert and the farmer; and the relationship between the prince and his subjects is the same as that between the farmer and his land. How high above mankind, then, has this writer on public affairs been placed? Bastiat believed that Rousseau wished to ignore forms of social order created by the people—viewing them as a thoughtless mass to be shaped by philosophers. Bastiat, who is considered by thinkers associated with the Austrian School of Economics to be one of the precursors of the "spontaneous order", presented his own vision of what he considered to be the "Natural Order" in a simple economic chain in which multiple parties might interact without necessarily even knowing each other, cooperating and fulfilling each other's needs in accordance with basic economic laws such as supply and demand. In such a chain, to produce clothing, multiple parties have to act independently—e.g., farmers to fertilize and cultivate land to produce fodder for the sheep, people to shear them, transport the wool, turn it into cloth, and another to tailor and sell it. Those persons engage in economic exchange by nature, and don't need to be ordered to, nor do their efforts need to be centrally coordinated. Such chains are present in every branch of human activity, in which individuals produce or exchange goods and services, and together, naturally create a complex social order that does not require external inspiration, central coordination of efforts, or bureaucratic control to benefit society as a whole. Bastiat also believed that Rousseau contradicted himself when presenting his views concerning human nature; if nature is "sufficiently invincible to regain its empire", why then would it need philosophers to direct it back to a natural state? Another point of criticism Bastiat raised was that living purely in nature would doom mankind to suffer unnecessary hardships. 
The Marquis de Sade's Justine, or the Misfortunes of Virtue (1791) partially parodied, and drew inspiration from, Rousseau's sociological and political concepts in the Discourse on Inequality and The Social Contract. Concepts such as the state of nature, civilization as the catalyst for corruption and evil, and humans "signing" a contract to mutually give up freedoms for the protection of rights are particularly referenced. The Comte de Gernande in Justine, for instance, after Thérèse asks him how he justifies abusing and torturing women, states: The necessity mutually to render one another happy cannot legitimately exist save between two persons equally furnished with the capacity to do one another hurt and, consequently, between two persons of commensurate strength: such an association can never come into being unless a contract [un pacte] is immediately formed between these two persons, which obligates each to employ against each other no kind of force but what will not be injurious to either. . . [W]hat sort of a fool would the stronger have to be to subscribe to such an agreement? Edmund Burke formed an unfavorable impression of Rousseau when the latter visited England with Hume and later drew a connection between Rousseau's egoistic philosophy and his personal vanity, saying Rousseau "entertained no principle... but vanity. With this vice he was possessed to a degree little short of madness". Thomas Carlyle said that Rousseau possessed "the face of what is called a Fanatic . . . his Ideas possessed him like demons". He continued: The fault and misery of Rousseau was what we easily name by a single word, Egoism . . . He had not perfected himself into victory over mere Desire; a mean Hunger, in many sorts, was still the motive principle of him. I am afraid he was a very vain man; hungry for the praises of men. . . . His Books, like himself, are what I call unhealthy; not the good sort of Books. There is a sensuality in Rousseau. Combined with such an intellectual gift as his, it makes pictures of a certain gorgeous attractiveness: but they are not genuinely poetical. Not white sunlight: something operatic; a kind of rose-pink, artificial bedizenment. Charles Dudley Warner wrote about Rousseau in his essay Equality: "Rousseau borrowed from Hobbes as well as from Locke in his conception of popular sovereignty; but this was not his only lack of originality. His discourse on primitive society, his unscientific and unhistoric notions about the original condition of man, were those common in the middle of the eighteenth century." In 1919, Irving Babbitt, founder of a movement called the "New Humanism", wrote a critique of what he called "sentimental humanitarianism", for which he blamed Rousseau. Babbitt's depiction of Rousseau was countered in a celebrated and much reprinted essay by A.O. Lovejoy in 1923. In France, conservative theorist Charles Maurras, founder of Action Française, "had no compunctions in laying the blame for both Romantisme et Révolution firmly on Rousseau in 1922." During the Cold War, Rousseau was criticized for his association with nationalism and its attendant abuses, for example in . This came to be known among scholars as the "totalitarian thesis". Political scientist J.S. Maloy states that "the twentieth century added Nazism and Stalinism to Jacobinism on the list of horrors for which Rousseau could be blamed. ... 
Rousseau was considered to have advocated just the sort of invasive tampering with human nature which the totalitarian regimes of mid-century had tried to instantiate." But he adds that "The totalitarian thesis in Rousseau studies has, by now, been discredited as an attribution of real historical influence." Arthur Melzer, however, while conceding that Rousseau would not have approved of modern nationalism, observes that his theories do contain the "seeds of nationalism", insofar as they set forth the "politics of identification", which are rooted in sympathetic emotion. Melzer also believes that in admitting that people's talents are unequal, Rousseau therefore tacitly condones the tyranny of the few over the many. For Stephen T. Engel, on the other hand, Rousseau's nationalism anticipated modern theories of "imagined communities" that transcend social and religious divisions within states. On similar grounds, one of Rousseau's strongest critics during the second half of the 20th century was political philosopher Hannah Arendt. Using Rousseau's thought as an example, Arendt identified the notion of sovereignty with that of the general will. According to her, it was this desire to establish a single, unified will based on the stifling of opinion in favor of public passion that contributed to the excesses of the French Revolution. Appreciation and influence The book Rousseau and Revolution, by Will and Ariel Durant, begins with the following words about Rousseau: The German writers Goethe, Schiller, and Herder have stated that Rousseau's writings inspired them. Herder regarded Rousseau to be his "guide", and Schiller compared Rousseau to Socrates. Goethe, in 1787, stated: "Emile and its sentiments had a universal influence on the cultivated mind." The elegance of Rousseau's writing is held to have inspired a significant transformation in French poetry and drama—freeing them from rigid literary norms. Other writers who were influenced by Rousseau's writings included Leopardi in Italy; Pushkin and Tolstoy in Russia; Wordsworth, Southey, Coleridge, Byron, Shelley, Keats, and Blake in England; and Hawthorne and Thoreau in America. According to Tolstoy: "At sixteen I carried around my neck, instead of the usual cross, a medallion with Rousseau's portrait." Rousseau's Discourse on the Arts and Sciences, emphasizing individualism and repudiating "civilization", was appreciated by, among others, Thomas Paine, William Godwin, Shelley, Tolstoy, and Edward Carpenter. Rousseau's contemporary Voltaire appreciated the section in Emile titled Profession of Faith of the Savoyard Vicar. Despite his criticisms, Carlyle admired Rousseau's sincerity: "with all his drawbacks, and they are many, he has the first and chief characteristic of a Hero: he is heartily in earnest. In earnest, if ever man was; as none of these French Philosophers were." He also admired his repudiation of atheism:Strangely through all that defacement, degradation and almost madness, there is in the inmost heart of poor Rousseau a spark of real heavenly fire. Once more, out of the element of that withered mocking Philosophism, Scepticism and Persiflage, there has arisen in this man the ineradicable feeling and knowledge that this Life of ours is true: not a Scepticism, Theorem, or Persiflage, but a Fact, an awful Reality. Nature had made that revelation to him; had ordered him to speak it out. 
He got it spoken out; if not well and clearly, then ill and dimly,—as clearly as he could.Modern admirers of Rousseau include John Dewey and Claude Lévi-Strauss. According to Matthew Josephson, Rousseau has remained controversial for more than two centuries, and has continued to gain admirers and critics down to the present time. However, in their own way, both critics and admirers have served to underscore the significance of the man, while those who have evaluated him with fairness have agreed that he was the finest thinker of his time on the question of civilization. Works Major works , 1743 Discourse on the Arts and Sciences (Discours sur les sciences et les arts), 1750 Narcissus, or The Self-Admirer: A Comedy, 1752 Discourse on the Origin and Basis of Inequality Among Men (Discours sur l'origine et les fondements de l'inégalité parmi les hommes), 1754 Letter on French Music, 1753 () Discourse on Political Economy, 1755 () Letter to M. D'Alembert on Spectacles, 1758 (Lettre à D'Alembert sur les spectacles) Julie; or, The New Heloise (Julie ou la nouvelle Héloïse), 1761 Emile or On Education (Émile ou de l'éducation), 1762 (includes "The Creed of a Savoyard Priest") The Social Contract, or Principles of Political Right (Du contrat social), 1762 Four Letters to M. de Malesherbes, 1762 Letters Written from the Mountain, 1764 () Dictionary of Music. 1767 (Dictionnaire de la musique) Confessions of Jean-Jacques Rousseau (Les Confessions), 1770, published 1782 Constitutional Project for Corsica, 1765, published 1768 Considerations on the Government of Poland, 1772 Letters on the Elements of Botany Essay on the Origin of Languages, published 1781 (Essai sur l'origine des langues) Rousseau Judge of Jean-Jacques, published 1782 (Rousseau juge de Jean-Jacques) Reveries of the Solitary Walker, incomplete, published 1782 (Rêveries du promeneur solitaire) Editions in English Basic Political Writings, trans. Donald A. Cress. Indianapolis: Hackett, 1987. Collected Writings, ed. Roger Masters and Christopher Kelly, Dartmouth: University Press of New England, 1990–2010, 13 vols. The Confessions, trans. Angela Scholar. Oxford: Oxford University Press, 2000. Émile or On Education, trans. with an introd. by Allan Bloom, New York: Basic Books, 1979. "On the Origin of Language", trans. John H. Moran. In On the Origin of Language: Two Essays. Chicago: University of Chicago Press, 1986. Reveries of a Solitary Walker, trans. Peter France. London: Penguin Books, 1980. 'The Discourses' and Other Early Political Writings, trans. Victor Gourevitch. Cambridge: Cambridge University Press, 1997. 'The Social Contract' and Other Later Political Writings, trans. Victor Gourevitch. Cambridge: Cambridge University Press, 1997. 'The Social Contract, trans. Maurice Cranston. Penguin: Penguin Classics Various Editions, 1968–2007. The Political writings of Jean-Jacques Rousseau, edited with introduction and notes by C.E.Vaughan, Blackwell, Oxford, 1962. (In French but the introduction and notes are in English). Rousseau on Women, Love, and Family, Christopher Kelly and Eve Grace (eds.), Dartmouth College Press, 2009. See also Boustrophedon Château de Chenonceau Eat the rich, a saying attributed to Rousseau Georges Hébert, a physical culturist influenced by Rousseau's teachings Let them eat cake, a saying of Rousseau's List of abolitionist forerunners List of political systems in France Rousseau Institute Rousseau's educational philosophy Schutterij – civil militia Notes, references and sources Notes References Sources . . . . 
Reprinted in Essays in the History of Ideas (Baltimore: Johns Hopkins Press). "A classic treatment of the Second Discourse" – Nicholas Dent. Further reading Raymond Birn, "Forging Rousseau: print, commerce and cultural manipulation in the late Enlightenment" (SVEC 2001:08). Cooper, Laurence (1999). Rousseau, Nature and the Problem of the Good Life. Pennsylvania: Pennsylvania State University Press Cranston, Maurice (1982). Jean-Jacques: The Early Life and Work. New York: Norton Dent, Nicholas J. H. (1988). Rousseau: An Introduction to his Psychological, Social, and Political Theory. Oxford: Blackwell . Derathé, Robert (1948). Le Rationalism de J.-J. Rousseau. Press Universitaires de France Derrida, Jacques (1976). Of Grammatology, trans. Gayatri Chakravorty Spivak. Baltimore: Johns Hopkins Press Farrell, John (2006). Paranoia and Modernity: Cervantes to Rousseau. New York: Cornell University Press Garrard, Graeme (2003). Rousseau's Counter-Enlightenment: A Republican Critique of the Philosophes. Albany: State University of New York Press Garrard, Graeme (2014). "Rousseau, Happiness and Human Nature," Political Studies, Vol. 62, No. 1, pp. 70–82. Garrard, Graeme (2021). "Children of the State: Rousseau's Republican Educational Theory and Child Abandonment," Educational History, Vol. 50, No. 2, pp. 147–160. Gauthier, David (2006). Rousseau: The Sentiment of Existence. Cambridge: Cambridge University Press Hendel, Charles W. (1934). Jean-Jacques Rousseau: Moralist. 2 Vols. (1934) Indianapolis, Indiana: Bobbs Merrill Kanzler, Peter. The Leviathan (1651), The Two Treatises of Government (1689), The Social Contract (1762), The Constitution of Pennsylvania (1776), 2020. Kateb, George (1961). "Aspects of Rousseau's Political Thought", Political Science Quarterly, December 1961 Christopher Kelly, Rousseau's Exemplary Life: the "Confessions" as political philosophy, Ithaca: Cornell, 1987. Christopher Kelly, Rousseau as Author, University of Chicago Press, 2003. Kitsikis, Dimitri (2006). Jean-Jacques Rousseau et les origines françaises du fascisme. Nantes: Ars Magna Editions LaFreniere, Gilbert F. (1990). "Rousseau and the European Roots of Environmentalism." Environmental History Review 14 (No. 4): 41–72 Lange, Lynda (2002). Feminist Interpretations of Jean-Jacques Rousseau. University Park: Penn State University Press Maguire, Matthew (2006). The Conversion of the Imagination: from Pascal through Rousseau to Tocqueville. Harvard University Press Marks, Jonathan (2005). Perfection and Disharmony in the Thought of Jean-Jacques Rousseau. Cambridge: Cambridge University Press Masters, Roger (ed.), 1964. The First and Second Discourses by Jean-Jacques Rousseau, translated by Roger D. Masters and Judith R. Masters. New York: St. Martin's Press. Masters, Roger 1968. The Political Philosophy of Rousseau. Princeton, New Jersey, Princeton University Press (), also available in French () Christie McDonald and Stanley Hoffman (eds.), Rousseau and Freedom, Cambridge University Press, 2010. Melzer, Arthur (1990). The Natural Goodness of Man: On the System of Rousseau's Thought. Chicago: University of Chicago Press Paiva, Wilson (2019). Discussing human connectivity in Rousseau as a pedagogical issue. Article available at: Pateman, Carole (1979). The Problem of Political Obligation: A Critical Analysis of Liberal Theory. Chichester: John Wiley & Sons Riley, Patrick (ed.) (2001). The Cambridge Companion to Rousseau. Cambridge: Cambridge University Press Robinson, Dave & Groves, Judy (2003). 
Introducing Political Philosophy. Icon Books. Schaeffer, Denise. (2014) Rousseau on Education, Freedom, and Judgment. Pennsylvania State University Press Simpson, Matthew (2006). Rousseau's Theory of Freedom. London: Continuum Books Starobinski, Jean (1988). Jean-Jacques Rousseau: Transparency and Obstruction. Chicago: University of Chicago Press Strauss, Leo (1953). Natural Right and History. Chicago: University of Chicago Press, chap. 6A Strong, Tracy B. (2002). Jean Jacques Rousseau and the Politics of the Ordinary. Lanham, MD: Rowman and Littlefield Talmon, Jacob R. (1952). The Origins of Totalitarian Democracy. New York: W.W. Norton. Williams, David Lay (2007). Rousseau's Platonic Enlightenment. Pennsylvania State University Press Wokler, Robert. (1995). Rousseau. Oxford: Oxford University Press. Wraight, Christopher D. (2008), Rousseau's The Social Contract: A Reader's Guide''. London: Continuum Books. External links 1712 births 1778 deaths 18th-century classical composers 18th-century male musicians 18th-century memoirists 18th-century novelists 18th-century philosophers 18th-century writers from the Republic of Geneva Age of Enlightenment Autobiographers Baroque composers Burials at the Panthéon, Paris Catholic philosophers Classical-period composers Contributors to the Encyclopédie (1751–1772) Converts to Roman Catholicism from Calvinism Deist philosophers Enlightenment philosophers French political philosophers People with hypochondriasis Music copyists Music theorists Musicians from the Republic of Geneva Philosophers from the Republic of Geneva Philosophers of art Philosophers of culture Philosophers of economics Philosophers of education Philosophers of literature Philosophers of mind Philosophers of science Philosophes Protestants Proto-evolutionary biologists Republicans Romantic philosophers Simple living advocates Social philosophers Writers about activism and social change
Jean-Jacques Rousseau
Biology
17,704
48,249,441
https://en.wikipedia.org/wiki/Phase%20stretch%20transform
Phase stretch transform (PST) is a computational approach to signal and image processing. One of its utilities is for feature detection and classification. PST is related to time stretch dispersive Fourier transform. It transforms the image by emulating propagation through a diffractive medium with engineered 3D dispersive property (refractive index). The operation relies on symmetry of the dispersion profile and can be understood in terms of dispersive eigenfunctions or stretch modes. PST performs similar functionality as phase-contrast microscopy, but on digital images. PST can be applied to digital images and temporal (time series) data. It is a physics-based feature engineering algorithm. Operation principle Here the principle is described in the context of feature enhancement in digital images. The image is first filtered with a spatial kernel followed by application of a nonlinear frequency-dependent phase. The output of the transform is the phase in the spatial domain. The main step is the 2-D phase function which is typically applied in the frequency domain. The amount of phase applied to the image is frequency dependent, with higher amount of phase applied to higher frequency features of the image. Since sharp transitions, such as edges and corners, contain higher frequencies, PST emphasizes the edge information. Features can be further enhanced by applying thresholding and morphological operations. PST is a pure phase operation whereas conventional edge detection algorithms operate on amplitude. Physical and mathematical foundations of phase stretch transform Photonic time stretch technique can be understood by considering the propagation of an optical pulse through a dispersive fiber. By disregarding the loss and non-linearity in fiber, the non-linear Schrödinger equation governing the optical pulse propagation in fiber upon integration reduces to: (1) where = GVD parameter, z is propagation distance, is the reshaped output pulse at distance z and time t. The response of this dispersive element in the time-stretch system can be approximated as a phase propagator as presented in (2) Therefore, Eq. 1 can be written as following for a pulse that propagates through the time-stretch system and is reshaped into a temporal signal with a complex envelope given by (3) The time stretch operation is formulated as generalized phase and amplitude operations, (4) where is the phase filter and is the amplitude filter. Next the operator is converted to discrete domain, (5) where is the discrete frequency, is the phase filter, is the amplitude filter and FFT is fast Fourier transform. The stretch operator for a digital image is then (6) In the above equations, is the input image, and are the spatial variables, is the two-dimensional fast Fourier transform, and and are spatial frequency variables. The function is the warped phase kernel and the function is a localization kernel implemented in frequency domain. PST operator is defined as the phase of the Warped Stretch Transform output as follows (7) where is the angle operator. PST kernel implementation The warped phase kernel can be described by a nonlinear frequency dependent phase While arbitrary phase kernels can be considered for PST operation, here we study the phase kernels for which the kernel phase derivative is a linear or sublinear function with respect to frequency variables. A simple example for such phase derivative profiles is the inverse tangent function. 
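To make the operation principle above concrete, the following is a minimal NumPy sketch of a PST-style edge detector. It applies a Gaussian low-pass localization filter and a warped phase kernel (obtained by integrating an inverse-tangent phase derivative) in the frequency domain, and returns the phase of the result in the spatial domain. The Gaussian choice of localization kernel, the normalization of the phase kernel, and the parameter names (phase_strength, warp_strength, lpf_sigma) are illustrative assumptions, not the exact published implementation.

```python
import numpy as np

def pst_edges(image, lpf_sigma=0.2, phase_strength=0.5, warp_strength=12.0):
    """Return the PST phase map of a 2-D grayscale image (float array)."""
    rows, cols = image.shape
    # Spatial-frequency grid in polar form (cycles per pixel).
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    r = np.sqrt(u**2 + v**2)

    # Localization kernel: a Gaussian low-pass filter applied in the frequency domain.
    lpf = np.exp(-(r / lpf_sigma) ** 2)

    # Warped phase kernel: integrate an arctan-shaped phase derivative, then
    # normalize so that the maximum applied phase equals phase_strength.
    wr = warp_strength * r
    kernel = wr * np.arctan(wr) - 0.5 * np.log1p(wr ** 2)
    kernel = phase_strength * kernel / kernel.max()

    # Filter, apply the nonlinear frequency-dependent phase, and take the phase
    # of the output in the spatial domain (PST is a pure phase operation).
    spectrum = np.fft.fft2(image) * lpf * np.exp(-1j * kernel)
    return np.angle(np.fft.ifft2(spectrum))
```

Calling pst_edges on a grayscale image supplied as a float array yields a phase map in which sharp transitions stand out; thresholding and morphological operations, as described above, would then extract a binary edge map.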
Consider the phase profile in the polar coordinate system From we have Therefore, the PST kernel is implemented as where and are real-valued numbers related to the strength and warp of the phase profile Applications PST has been used for edge detection in biological and biomedical images as well as synthetic-aperture radar (SAR) image processing. PST has also been applied to improve the point spread function for single molecule imaging in order to achieve super-resolution. The transform exhibits intrinsic superior properties compared to conventional edge detectors for feature detection in low contrast visually impaired images. The PST function can also be performed on 1-D temporal waveforms in the analog domain to reveal transitions and anomalies in real time. Open source code release On February 9, 2016, a UCLA Engineering research group has made public the computer code for PST algorithm that helps computers process images at high speeds and "see" them in ways that human eyes cannot. The researchers say the code could eventually be used in face, fingerprint, and iris recognition systems for high-tech security, as well as in self-driving cars' navigation systems or for inspecting industrial products. The Matlab implementation for PST can also be downloaded from Matlab Files Exchange. However, it is provided for research purposes only, and a license must be obtained for any commercial applications. The software is protected under a US patent. The code was then significantly refactored and improved to support GPU acceleration. In May 2022, it became one algorithm in PhyCV: the first physics-inspired computer vision library. See also Edge detection Feature detection (computer vision) Time stretch analog-to-digital converter Time stretch dispersive Fourier transform Phase-stretch Adaptive Gradient-field Extractor PhyCV References External links Github repository for MATLAB and python implementation for PST Image processing Computational physics
Phase stretch transform
Physics
1,047
14,812,118
https://en.wikipedia.org/wiki/Histone%20H2B%20type%201-C
Histone H2B type 1-C/E/F/G/I is a protein that in humans is encoded by the HIST1H2BC gene. Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H2B family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the large histone gene cluster on chromosome 6. References Further reading External links PDBe-KB provides an overview of all the structure information available in the PDB for Human Histone H2B type 1-C
Histone H2B type 1-C
Chemistry
239
46,267,731
https://en.wikipedia.org/wiki/Geastrum%20britannicum
Geastrum britannicum is a fungal species in the family Geastraceae. Its recommended English name is vaulted earthstar. Like other earthstars, the basidiocarps (fruit bodies) are initially globose. Their thick outer skin splits open at maturity to expose the puffball-like spore sac surrounded by the split rays of the outer skin. In the vaulted earthstar, the rays split apart and form an arch, raising the spore sac upwards. Taxonomy Geastrum britannicum was described in 2015 from England by Spanish mycologist Juan Carlos Zamora, based on a holotype found on a roadside verge in Cockley Cley under pine trees in 2000 by Jonathan Revett, with paratypes from New Milton and Surlingham. The species was already the subject of research at the Royal Botanic Gardens, Kew, where more than a dozen additional collections had been studied from England and Wales, the earliest dating back to 1994. The new species had previously been confused with G. quadrifidum and G. fornicatum, both of which have a similar vaulted or arched appearance. Geastrum britannicum was distinguished on the basis of morphology and DNA sequence analysis. Distribution The fungus has proved to be very widespread in England and Wales. It was more recently found in the Czech Republic and Slovakia. Since the species is not known to have occurred in Europe before 1994, it may be a recent arrival. References britannicum Fungi of Europe Fungi described in 2015 Fungus species
Geastrum britannicum
Biology
313
491,962
https://en.wikipedia.org/wiki/Respiration%20%28physiology%29
In physiology, respiration is the transport of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction to the environment by a respiratory system. The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment. Exchange of gases in the lung occurs by ventilation and perfusion. Ventilation refers to the in-and-out movement of air of the lungs and perfusion is the circulation of blood in the pulmonary capillaries. In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs where the process of gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans requires sustained breath control that many mammals are not capable of performing. The process of breathing does not fill the alveoli with atmospheric air during each inhalation (about 350 ml per breath), but the inhaled air is carefully diluted and thoroughly mixed with a large volume of gas (about 2.5 liters in adult humans) known as the functional residual capacity which remains in the lungs after each exhalation, and whose gaseous composition differs markedly from that of the ambient air. Physiological respiration involves the mechanisms that ensure that the composition of the functional residual capacity is kept constant, and equilibrates with the gases dissolved in the pulmonary capillary blood, and thus throughout the body. Thus, in precise usage, the words breathing and ventilation are hyponyms, not synonyms, of respiration; but this prescription is not consistently followed, even by most health care providers, because the term respiratory rate (RR) is a well-established term in health care, even though it would need to be consistently replaced with ventilation rate if the precise usage were to be followed. During respiration the C-H bonds are broken by oxidation-reduction reaction and so carbon dioxide and water are also produced. The cellular energy-yielding process is called cellular respiration. 
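To put the dilution figures above in perspective, here is a small back-of-the-envelope sketch, assuming the approximate volumes quoted (about 350 ml of fresh air reaching the alveoli per breath and a functional residual capacity of about 2.5 litres) and ignoring dead space and gas exchange during the breath.

```python
# Rough estimate of how much alveolar gas is refreshed in a single breath,
# using the approximate figures quoted above (illustrative, not exact values).
fresh_air_per_breath_ml = 350           # fresh air reaching the alveoli per inhalation
functional_residual_capacity_ml = 2500  # gas remaining in the lungs after exhalation

fraction_refreshed = fresh_air_per_breath_ml / (
    functional_residual_capacity_ml + fresh_air_per_breath_ml
)
print(f"About {fraction_refreshed:.0%} of the alveolar gas is replaced per breath")
# -> roughly 12%, which is why alveolar gas composition changes only gradually
```

Only on the order of a tenth of the alveolar gas is replaced with each breath, which is why the composition of the functional residual capacity changes slowly and can be held nearly constant.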
Classifications of respiration There are several ways to classify the physiology of respiration: By species Aquatic respiration Buccal pumping Cutaneous respiration Intestinal respiration Respiratory system By mechanism Breathing Gas exchange Arterial blood gas Control of respiration Apnea By experiments Huff and puff apparatus Spirometry Selected ion flow tube mass spectrometry By intensive care and emergency medicine CPR Mechanical ventilation Intubation Iron lung Intensive care medicine Liquid breathing ECMO Oxygen toxicity Medical ventilator Life support General anaesthesia Laryngoscope By other medical topics Respiratory therapy Breathing gases Hyperbaric oxygen therapy Hypoxia Gas embolism Decompression sickness Barotrauma Oxygen equivalent Oxygen toxicity Nitrogen narcosis Carbon dioxide poisoning Carbon monoxide poisoning HPNS Additional images See also References Nelsons VCE Units 1–2 Physical Education. 2010 Cengage Copyright. External links Overview at Johns Hopkins University Further reading , human biology 146149 C.Michael Hogan. 2011. Respiration. Encyclopedia of Earth. Eds. Mark McGinley and C.J.Cleveland. National Council for Science and the Environment. Washington DC Excretion
Respiration (physiology)
Biology
877
44,818,023
https://en.wikipedia.org/wiki/Load%20%28unit%29
The load, also known as a fodder, fother, and charrus (,  "cartload"), is a historic English unit of weight or mass of various amounts, depending on the era, the substance being measured, and where it was being measured. The term was in use by the 13th century, and disappeared with legislation from the 1820s onwards. Modern equivalents of historical weights and measures are often very difficult to determine, and figures given here should be treated with caution. Etymology According to the Oxford English Dictionary, the word "fother" (noun) is derived from: Lead load In very general terms, a "load" or "fother" of metallic lead was approximately or exactly equal to one long ton of 2240 lbs (1016 kg), also equal to approximately one tonne. Fothers have been recorded from 2184 lbs (991 kg) to 2520 lbs (1143 kg). According to the Tractatus de Ponderibus et Mensuris, a memorandum of Edward I (reigned 1272–1307), the load of metallic lead was 30 fotmals, 175 stone, or 2,100 Merchant pounds (approx. 1016 kg). In Derbyshire up to the 13th century a fother of lead is recorded of 1680 lbs or 15 long hundredweight (cwt.) (approx. 762 kg), and likewise in Devon a load of lead weighed the same. An Act of Parliament (12 Cha. 2. c. 4) (1660) stated that a fodder or fother of lead was one long ton, or 20 cwt. (1016 kg) Miners of lead ore in Yorkshire in the late 17th century used a fodder of , on the assumption that the ore when smelted weighed about 65% less (about 2240 lbs or one long ton). Other measures were also used for lead ore, e.g. the volumetric "dish" used in the Low Peak district of Derbyshire was 14 pints (weighing 58 lbs, 26 kg), but in the High Peak it was 15 or 16 pints. Fothers were not used in all districts; for example in the Mendip Hills and in Burnley, Lancashire, tons, hundredweights and pounds were used in the first half of the 17th century. Vivant-Léon Moissenet, a French mineralogist who studied and wrote about English mining in the mid 19th-century stated that in Shropshire 200 lbs were added to each ton of concentrate at the smelt works to make a ton of . By the early 19th century there was a vast multiplicity of local measurements of all types of goods, which a parliamentary report of 1820 made clear. For plumbers, and in London, a fodder was 19½ cwt (now about 990 kg), and with miners generally 22½ cwt (now about 1140 kg). In Derbyshire a "mill fodder" was 2820 lbs (1280 kg), but when shipped at Stockwith-on-Trent, 2408 lbs (now about 1092 kg). In Hull it was 2340 lbs (1060 kg). In Northumberland a fother of pig lead was 21 cwt. (1066 kg), and in Newcastle sometimes 22 cwt (now about 1120 kg). The fother was generally used by miners, shippers and smelters. When the metallic lead finally came to be sold it was weighed precisely; its value was calculated to the nearest pound weight and the price adjusted accordingly. Straw load The load of hay or straw was 36 trusses or 1,296 pounds (now about 588 kg). Wood load The American load of stacked firewood varied. A load of unhewn wood came to cord-feet or cubic feet (now about 0.75 m³), while a load of hewn wood came to cord-feet or 43 cubic feet (now about 1.2 m³). Wool load The load of wool was 12 wey or 108.13 sacks (now about 1372 kg). Dung and lime In Northumberland in the 1820s, a fodder of dung or of lime was equal to a cartload pulled by two horses. 
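Since the historical figures above mix pounds and hundredweight, a short conversion sketch may help when comparing them. It assumes the standard factors of 1 lb = 0.45359237 kg and 1 long hundredweight = 112 lb; the particular fodder definitions listed are those quoted in the text, and the selection is ours.

```python
# Convert a few of the hundredweight-based lead "fodder" definitions quoted
# above into kilograms. Conversion factors are the standard ones.
LB_TO_KG = 0.45359237   # kilograms per avoirdupois pound
CWT_LB = 112            # pounds per long hundredweight

fodders_in_cwt = {
    "Derbyshire, 13th century": 15,
    "Act of 1660 (one long ton)": 20,
    "London plumbers, 1820s": 19.5,
    "Miners generally, 1820s": 22.5,
}

for name, cwt in fodders_in_cwt.items():
    pounds = cwt * CWT_LB
    print(f"{name}: {pounds:6.0f} lb = {pounds * LB_TO_KG:6.0f} kg")
```

The output (762, 1016, 991 and 1143 kg respectively) matches the approximate metric equivalents given above.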
See also Imperial units US customary units Derbyshire lead mining history References Citations Bibliography Customary units of measurement Standards of the United Kingdom Lead mining in the United Kingdom
Load (unit)
Mathematics
901
18,777,003
https://en.wikipedia.org/wiki/XCT-790
XCT-790 is a potent and selective inverse agonist ligand of the estrogen-related receptor alpha (ERRα). Independent of its inhibition of ERRα, XCT-790 is a potent mitochondrial electron transport chain uncoupler. Mitochondrial electron transport chain uncoupling effect XCT-790 has been shown to uncouple oxygen consumption from ATP production in mitochondria at very low, nanomolar-range doses independently of ERRα expression. Its effects are similar to proton ionophores such as FCCP, which disrupt mitochondrial transmembrane electrochemical gradients. This uncoupling leads to a fast drop in ATP production and, consequently, a prompt activation of AMPK. References External links Nitriles Trifluoromethyl compounds Thiadiazoles Uncouplers
XCT-790
Chemistry
178
1,172,094
https://en.wikipedia.org/wiki/Radiosurgery
Radiosurgery is surgery using radiation, that is, the destruction of precisely selected areas of tissue using ionizing radiation rather than excision with a blade. Like other forms of radiation therapy (also called radiotherapy), it is usually used to treat cancer. Radiosurgery was originally defined by the Swedish neurosurgeon Lars Leksell as "a single high dose fraction of radiation, stereotactically directed to an intracranial region of interest". In stereotactic radiosurgery (SRS), the word "stereotactic" refers to a three-dimensional coordinate system that enables accurate correlation of a virtual target seen in the patient's diagnostic images with the actual target position in the patient. Stereotactic radiosurgery may also be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR) when used outside the central nervous system (CNS). History Stereotactic radiosurgery was first developed in 1949 by the Swedish neurosurgeon Lars Leksell to treat small targets in the brain that were not amenable to conventional surgery. The initial stereotactic instrument he conceived used probes and electrodes. The first attempt to supplant the electrodes with radiation was made in the early fifties, with x-rays. The principle of this instrument was to hit the intra-cranial target with narrow beams of radiation from multiple directions. The beam paths converge in the target volume, delivering a lethal cumulative dose of radiation there, while limiting the dose to the adjacent healthy tissue. Ten years later significant progress had been made, due in considerable measure to the contribution of the physicists Kurt Liden and Börje Larsson. At this time, stereotactic proton beams had replaced the x-rays. The heavy particle beam presented as an excellent replacement for the surgical knife, but the synchrocyclotron was too clumsy. Leksell proceeded to develop a practical, compact, precise and simple tool which could be handled by the surgeon himself. In 1968 this resulted in the Gamma Knife, which was installed at the Karolinska Institute and consisted of several cobalt-60 radioactive sources placed in a kind of helmet with central channels for irradiation with gamma rays. This prototype was designed to produce slit-like radiation lesions for functional neurosurgical procedures to treat pain, movement disorders, or behavioral disorders that did not respond to conventional treatment. The success of this first unit led to the construction of a second device, containing 179 cobalt-60 sources. This second Gamma Knife unit was designed to produce spherical lesions to treat brain tumors and intracranial arteriovenous malformations (AVMs). Additional units were installed in the 1980s all with 201 cobalt-60 sources. In parallel to these developments, a similar approach was designed for a linear particle accelerator or Linac. Installation of the first 4 MeV clinical linear accelerator began in June 1952 in the Medical Research Council (MRC) Radiotherapeutic Research Unit at the Hammersmith Hospital, London. The system was handed over for physics and other testing in February 1953 and began to treat patients on 7 September that year. Meanwhile, work at the Stanford Microwave Laboratory led to the development of a 6 MeV accelerator, which was installed at Stanford University Hospital, California, in 1956. Linac units quickly became favored devices for conventional fractionated radiotherapy but it lasted until the 1980s before dedicated Linac radiosurgery became a reality. 
In 1982, the Spanish neurosurgeon J. Barcia-Salorio began to evaluate the role of cobalt-generated and then Linac-based photon radiosurgery for the treatment of AVMs and epilepsy. In 1984, Betti and Derechinsky described a Linac-based radiosurgical system. Winston and Lutz further advanced Linac-based radiosurgical prototype technologies by incorporating an improved stereotactic positioning device and a method to measure the accuracy of various components. Using a modified Linac, the first patient in the United States was treated at the Brigham and Women's Hospital in Boston in February 1986. 21st century Technological improvements in medical imaging and computing have led to increased clinical adoption of stereotactic radiosurgery and have broadened its scope in the 21st century. The localization accuracy and precision that are implicit in the word "stereotactic" remain of utmost importance for radiosurgical interventions and are significantly improved via image-guidance technologies such as the N-localizer and Sturm-Pastyr localizer that were originally developed for stereotactic surgery. In the 21st century the original concept of radiosurgery expanded to include treatments comprising up to five fractions, and stereotactic radiosurgery has been redefined as a distinct neurosurgical discipline that utilizes externally generated ionizing radiation to inactivate or eradicate defined targets, typically in the head or spine, without the need for a surgical incision. Despite the similarities between the concepts of stereotactic radiosurgery and fractionated radiotherapy, the mechanism of treatment is subtly different, although both treatment modalities are reported to have identical outcomes for certain indications. Stereotactic radiosurgery places greater emphasis on delivering precise, high doses to small areas, to destroy target tissue while preserving adjacent normal tissue. The same principle is followed in conventional radiotherapy, although lower dose rates spread over larger areas are more likely to be used (for example, as in VMAT treatments). Fractionated radiotherapy relies more heavily on the different radiosensitivity of the target and the surrounding normal tissue to the total accumulated radiation dose. Historically, the field of fractionated radiotherapy evolved from the original concept of stereotactic radiosurgery following discovery of the principles of radiobiology: repair, reassortment, repopulation, and reoxygenation. Today, both treatment techniques are complementary, as tumors that may be resistant to fractionated radiotherapy may respond well to radiosurgery, and tumors that are too large or too close to critical organs for safe radiosurgery may be suitable candidates for fractionated radiotherapy. Today, both Gamma Knife and Linac radiosurgery programs are commercially available worldwide. While the Gamma Knife is dedicated to radiosurgery, many Linacs are built for conventional fractionated radiotherapy and require additional technology and expertise to become dedicated radiosurgery tools. There is not a clear difference in efficacy between these different approaches. The major manufacturers, Varian and Elekta, offer dedicated radiosurgery Linacs as well as machines designed for conventional treatment with radiosurgery capabilities. Systems are also available that complement conventional Linacs with beam-shaping technology, treatment planning, and image-guidance tools to provide radiosurgical capability. 
An example of a dedicated radiosurgery Linac is the CyberKnife, a compact Linac mounted onto a robotic arm that moves around the patient and irradiates the tumor from a large set of fixed positions, thereby mimicking the Gamma Knife concept. Mechanism of action The fundamental principle of radiosurgery is that of selective ionization of tissue by means of high-energy beams of radiation. Ionization is the production of ions and free radicals which are damaging to the cells. These ions and radicals, which may be formed from the water in the cell or biological materials, can produce irreparable damage to DNA, proteins, and lipids, resulting in the cell's death. Thus, biological inactivation is carried out in a volume of tissue to be treated, with a precise destructive effect. The radiation dose is usually measured in grays (one gray (Gy) is the absorption of one joule of energy per kilogram of mass). A unit that attempts to take into account both the different organs that are irradiated and the type of radiation is the sievert, a unit that describes both the amount of energy deposited and the biological effectiveness. Clinical applications When used outside the CNS it may be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR). Brain and spine Radiosurgery is performed by a multidisciplinary team of neurosurgeons, radiation oncologists and medical physicists who operate and maintain highly sophisticated, highly precise and complex instruments, including medical linear accelerators, the Gamma Knife unit and the CyberKnife unit. The highly precise irradiation of targets within the brain and spine is planned using information from medical images that are obtained via computed tomography, magnetic resonance imaging, and angiography. Radiosurgery is indicated primarily for the therapy of tumors, vascular lesions and functional disorders. Significant clinical judgment must be used with this technique, and considerations must include lesion type, pathology if available, size, location, and the age and general health of the patient. General contraindications to radiosurgery include excessively large size of the target lesion, or lesions too numerous for practical treatment. Patients can be treated within one to five days as outpatients. By comparison, the average hospital stay for a craniotomy (conventional neurosurgery, requiring the opening of the skull) is about 15 days. The radiosurgery outcome may not be evident until months after the treatment. Since radiosurgery does not remove the tumor but inactivates it biologically, lack of growth of the lesion is normally considered to be treatment success. General indications for radiosurgery include many kinds of brain tumors, such as acoustic neuromas, germinomas, meningiomas, metastases, trigeminal neuralgia, arteriovenous malformations, and skull base tumors, among others. Stereotactic radiosurgery of spinal metastases is effective in controlling pain in up to 90% of cases and ensures stability of the tumors on imaging evaluation in 95% of cases, and it is more effective for spinal metastases involving one or two segments. Meanwhile, conventional external beam radiotherapy is more suitable for multiple spinal involvement. Combination therapy SRS may be administered alone or in combination with other therapies. For brain metastases, these treatment options include whole brain radiation therapy (WBRT), surgery, and systemic therapies.
However, a recent systematic review found no difference in the effects on overall survival or deaths due to brain metastases when comparing SRS treatment alone to SRS plus WBRT treatment or WBRT alone. Other bodily organs Expansion of stereotactic radiotherapy to other lesions is increasing, and includes liver cancer, lung cancer, pancreatic cancer, etc. Risks The New York Times reported in December 2010 that radiation overdoses had occurred with the linear accelerator method of radiosurgery, due in large part to inadequate safeguards in equipment retrofitted for stereotactic radiosurgery. In the U.S. the Food and Drug Administration (FDA) regulates these devices, whereas the Gamma Knife is regulated by the Nuclear Regulatory Commission. There is evidence that immunotherapy may be useful for treatment of radiation necrosis following stereotactic radiotherapy. Types of radiation source The selection of the proper kind of radiation and device depends on many factors including lesion type, size, and location in relation to critical structures. Data suggest that similar clinical outcomes are possible with all of the various techniques. More important than the device used are issues regarding indications for treatment, total dose delivered, fractionation schedule and conformity of the treatment plan. Gamma Knife A Gamma Knife (also known as the Leksell Gamma Knife) is used to treat brain tumors by administering high-intensity gamma radiation therapy in a manner that concentrates the radiation over a small volume. The device was invented in 1967 at the Karolinska Institute in Stockholm, Sweden, by Lars Leksell, Romanian-born neurosurgeon Ladislau Steiner, and radiobiologist Börje Larsson from Uppsala University, Sweden. A Gamma Knife typically contains 201 cobalt-60 sources of approximately 30 curies each (1.1 TBq), placed in a hemispheric array in a heavily shielded assembly. The device aims gamma radiation through a target point in the patient's brain. The patient wears a specialized helmet that is surgically fixed to the skull, so that the brain tumor remains stationary at the target point of the gamma rays. An ablative dose of radiation is thereby sent through the tumor in one treatment session, while surrounding brain tissues are relatively spared. Gamma Knife therapy, like all radiosurgery, uses doses of radiation to kill cancer cells and shrink tumors, delivered precisely to avoid damaging healthy brain tissue. Gamma Knife radiosurgery is able to accurately focus many beams of gamma radiation on one or more tumors. Each individual beam is of relatively low intensity, so the radiation has little effect on intervening brain tissue and is concentrated only at the tumor itself. Gamma Knife radiosurgery has proven effective for patients with benign or malignant brain tumors up to a certain size, vascular malformations such as an arteriovenous malformation (AVM), pain, and other functional problems. For treatment of trigeminal neuralgia the procedure may be used repeatedly on patients. Acute complications following Gamma Knife radiosurgery are rare, and complications are related to the condition being treated. Linear accelerator-based therapies A linear accelerator (linac) produces x-rays from the impact of accelerated electrons striking a high-Z target, usually tungsten. The process is also referred to as "x-ray therapy" or "photon therapy." The emission head, or "gantry", is mechanically rotated around the patient in a full or partial circle.
The table where the patient is lying, the "couch", can also be moved in small linear or angular steps. The combination of the movements of the gantry and of the couch allows the computerized planning of the volume of tissue that is going to be irradiated. Devices with a high energy of 6 MeV are commonly used for the treatment of the brain, due to the depth of the target. The diameter of the energy beam leaving the emission head can be adjusted to the size of the lesion by means of collimators. These may be interchangeable orifices with different diameters, typically varying from 5 to 40 mm in 5 mm steps, or multileaf collimators, which consist of a number of metal leaflets that can be moved dynamically during treatment in order to shape the radiation beam to conform to the mass to be ablated. Linacs are capable of achieving extremely narrow beam geometries, such as 0.15 to 0.3 mm. Therefore, they can be used for several kinds of surgeries which hitherto had been carried out by open or endoscopic surgery, such as for trigeminal neuralgia. Long-term follow-up data have shown it to be as effective as radiofrequency ablation, but inferior to surgery in preventing the recurrence of pain. The first such systems were developed by John R. Adler, a Stanford University professor of neurosurgery and radiation oncology, and Russell and Peter Schonberg at Schonberg Research, and commercialized under the brand name CyberKnife. Proton beam therapy Protons may also be used in radiosurgery in a procedure called Proton Beam Therapy (PBT) or proton therapy. Protons are extracted from proton donor materials by a medical synchrotron or cyclotron, and accelerated in successive transits through a circular, evacuated conduit or cavity, using powerful magnets to shape their path, until they reach the energy required to just traverse a human body, usually about 200 MeV. They are then released toward the region to be treated in the patient's body, the irradiation target. In some machines, which deliver protons of only a specific energy, a custom mask made of plastic is interposed between the beam source and the patient to adjust the beam energy to provide the appropriate degree of penetration. The phenomenon of the Bragg peak of ejected protons gives proton therapy advantages over other forms of radiation, since most of the proton's energy is deposited within a limited distance, so tissue beyond this range (and to some extent also tissue inside this range) is spared from the effects of radiation. This property of protons, which has been called the "depth charge effect" by analogy to the explosive weapons used in anti-submarine warfare, allows for conformal dose distributions to be created around even very irregularly shaped targets, and for higher doses to targets surrounded or backstopped by radiation-sensitive structures such as the optic chiasm or brainstem. The development of "intensity modulated" techniques allowed similar conformities to be attained using linear accelerator radiosurgery. There was no evidence that proton beam therapy is better than any other types of treatment in most cases, except for a "handful of rare pediatric cancers". Critics, responding to the increasing number of very expensive PBT installations, spoke of a "medical arms race" and "crazy medicine and unsustainable public policy".
References External links Treating Tumors that Move with Respiration Book on Radiosurgery to moving targets (July 2007) Shaped Beam Radiosurgery Book on LINAC-based radiosurgery using multileaf collimation (March 2011) Neurology procedures Radiobiology Radiation therapy procedures Neurosurgery
Radiosurgery
Chemistry,Biology
3,559
48,188,711
https://en.wikipedia.org/wiki/BDS-1
Blood-depressing substance-1 (BDS-1), also known as kappa-actitoxin-Avd4a, is a polypeptide found in the venom of the snakelocks anemone Anemonia sulcata. BDS-1 is a neurotoxin that modulates voltage-dependent potassium channels, in particular Kv3-family channels, as well as certain sodium channels. This polypeptide belongs to the sea anemone type 3 toxin peptide family. Etymology BDS-1 brings about a decrease in blood pressure by blocking Kv3 potassium channels. Thus, this protein is named after its antihypertensive function. Sources BDS-1 is a toxin secreted by the nematocysts of Anemonia sulcata (the Mediterranean snakelocks sea anemone). Chemistry BDS-1 is a 43-amino-acid polypeptide chain that contains six cysteines linked by three disulfide bridges. The secondary structure of BDS-1 possesses a three-stranded antiparallel β-sheet, along with one more short antiparallel β-sheet at its N-terminus. When viewed along the polypeptide strand, its structure shows a right-handed twist. BDS-1 shares structural homology with the toxin BDS-2, which belongs to the same type-3 peptide family. It also displays around 24–26% identity with toxins AsI (ATX-I), AsII (ATX-II), and AsV (ATX-V) from Anemonia sulcata and AxI (AP-A) from Anthopleura xanthogrammica. Target BDS-1 is an inhibitor of the fast-inactivating Kv3-family channels, including Kv3.1, Kv3.2 and Kv3.4 channels. Additionally, BDS-1 affects the inactivation of the voltage-gated sodium channels Nav1.1, Nav1.3, Nav1.6 and Nav1.7. Mode of action BDS-1 modifies the voltage-dependent gating properties of Kv3 potassium channels by binding to the voltage-sensitive domains on the S3b and S4 subunits. The toxin elicits a depolarizing shift in the conductance-voltage relation, making the channel more difficult to open, and slows both the activation and inactivation kinetics of these ion channels. In addition, BDS-1 enhances the current flowing through several voltage-gated sodium channels. The toxin binds to the S3-S4 linker of domain IV and slows the inactivation of the channel, resulting in increased current upon depolarization. BDS-1 has a particularly strong potency for the human Nav1.7 channel. In mice, BDS-1 slows the inactivation of Nav1.3 channels but has smaller effects on the inactivation of Nav1.1 and Nav1.6 channels, probably because of differing channel sensitivities to the toxin. Toxicity By targeting Kv3.1a channels, BDS-1 concentrations at or above 3 μM are toxic to mouse fibroblasts. References Ion channel toxins Sea anemone toxins Neurotoxins
BDS-1
Chemistry
697
10,977,940
https://en.wikipedia.org/wiki/Photobiotin
Photobiotin is a derivative of biotin used as a biochemical tool. It is composed of a biotin group, a linker group, and a photoactivatable aryl azide group. The photoactivatable group provides nonspecific labeling of proteins, DNA and RNA probes, or other molecules. Biotinylation of DNA and RNA with photoactivatable biotin is easier and less expensive than enzymatic methods, since the DNA and RNA do not degrade. Photobiotin is most effectively activated by light at 260–475 nm. References Billingsley, M. and J. Polli. "Preparation, characterization and biological properties of biotinylated derivatives of calmodulin." Biochem J. 275 Pt 3 (1991): 733–743. "EZ-Link Photoactivatable Biotin." Pierce Biotechnology, Inc. Rockford, IL: June 2003. "Components of Avidin-Biotin Technology: A Handbook." Pierce Biotechnology, Inc. Rockford, IL: June 2003. "Photobiotin acetate." Sigma-Aldrich, Co. 2006. "Photoprobe biotin", Vector Laboratories, Inc., www.vectorlabs.com. Biotechnology
Photobiotin
Biology
261
11,306,676
https://en.wikipedia.org/wiki/Phyllosticta%20capitalensis
Phyllosticta capitalensis is a cosmopolitan fungal plant pathogen that grows on many hosts either as an endophyte or as a saprobe on dead tissue, including species of Citrus and Musa (bananas). There are some reports of it infecting orchids, such as cattleyas or Cymbidium. References External links USDA ARS Fungal Database Fungal plant pathogens and diseases Orchid diseases capitalensis Fungi described in 1908 Fungus species
Phyllosticta capitalensis
Biology
92
46,597,851
https://en.wikipedia.org/wiki/Strict%20initial%20object
In the mathematical discipline of category theory, a strict initial object is an initial object 0 of a category C with the property that every morphism in C with codomain 0 is an isomorphism. In a Cartesian closed category, every initial object is strict. Also, if C is a distributive or extensive category, then the initial object 0 of C is strict. References External links Objects (category theory)
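Restating the definition above in symbols (a brief sketch in standard category-theoretic notation, not taken from the article's references): an initial object 0 of a category C is strict when
\[
\forall X \in \mathrm{Ob}(C),\ \forall f \colon X \to 0,\quad f \text{ is an isomorphism},
\]
so in particular $\mathrm{Hom}_C(X, 0) \neq \emptyset$ implies $X \cong 0$.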
Strict initial object
Mathematics
89
64,365,600
https://en.wikipedia.org/wiki/Sulfite%20sulfate
A sulfite sulfate is a chemical compound that contains both sulfite and sulfate anions, [SO3]2− and [SO4]2−. These compounds were discovered in the 1980s as calcium and rare earth element salts. Minerals in this class were later discovered. Minerals may have sulfite as an essential component, or have it substituted for another anion as in alloriite. The related ions [O3SOSO2]2− and [(O2SO)2SO2]2− may be produced in a reaction between sulfur dioxide and sulfate and exist in the solid form as tetramethylammonium salts. They have a significant partial pressure of sulfur dioxide. Related compounds are selenate selenites and tellurate tellurites with a varying chalcogen. They can be classed as mixed-valence compounds. Production Europium and cerium rare earth sulfite sulfates are produced by heating the metal sulfite trihydrate in air: 2 Ce2(SO3)3·3H2O + O2 → 2 Ce2(SO3)2SO4 + 6 H2O Ce2(SO3)3·3H2O + O2 → Ce2SO3(SO4)2 + 3 H2O Other rare earth sulfite sulfates can be crystallized as hydrates from a water solution. These sulfite sulfates can be made by at least three methods. One is to dissolve a rare earth oxosulfate in water and then bubble in sulfur dioxide. The second is to dissolve a rare earth oxide in a half equivalent of sulfuric acid. The third is to bubble sulfur dioxide through a suspension of rare earth oxide in water until it dissolves, and then to let the solution stand for a few days with limited air exposure. To make calcium sulfite sulfate, a soluble calcium salt is added to a mixed solution of sodium sulfite and sodium sulfate. Control of pH is important when attempting to produce solid sulfite compounds. In basic conditions sulfite easily oxidises to sulfate, and in acidic conditions it easily turns into sulfur dioxide. Properties In the sulfite sulfates, sulfur has both a +4 and a +6 oxidation state. The crystal structure of sulfite sulfates has been difficult to study, as the crystal symmetry is low, the crystals are usually microscopic because they are quite insoluble, and they are mixed with other related phases. They have therefore been studied via powder X-ray diffraction. Reactions When heated in the absence of oxygen, cerium sulfite sulfate hydrate loses its water of hydration by 400 °C. Up to 800 °C it loses some sulfur dioxide. From 800 °C to 850 °C it loses sulfur dioxide and disulfur, resulting in cerium oxy disulfate and dioxy sulfate, which lose some further sulfur dioxide as they are heated to 1000 °C. Over 1000 °C the remaining oxysulfates decompose to sulfur dioxide, oxygen and cerium dioxide. This reaction is studied as a way to convert sulfur dioxide into sulfur and oxygen using only heat. Another thermochemical reaction for cerium sulfite sulfate hydrate involves using iodine to oxidise the sulfite to sulfate, producing hydrogen iodide which can then be used to make hydrogen gas and iodine. When combined with the previous high-temperature process, water can be split into oxygen and hydrogen using heat only. This is termed the GA sulfur-iodine water splitting cycle. Applications Calcium sulfite sulfate hydrate is formed in flue gas scrubbers that attempt to remove sulfur dioxide from coal-burning facilities. Calcium sulfite sulfate hydrate is also formed in the weathering of limestone, concrete and mortar by sulfur dioxide polluted air. Both of these occurrences would be classed as anthropogenic, as the compound is not deliberately produced or used. List References Sulfites Sulfates Mixed anion compounds
Sulfite sulfate
Physics,Chemistry
806
9,857,569
https://en.wikipedia.org/wiki/Nursery%20%28room%29
A nursery is a bedroom within a house or other dwelling set aside for an infant or toddler. Historically, European nurseries had little decoration and were kept out of visitors' sight. An article in the 1842 British Cyclopedia of Domestic Medicine and Surgery instructed readers never to use a shaded room for a nursery and stressed the importance of ventilation. The author, Thomas Andrew, also suggested using two rooms for the nursery, so that the occupants could move between them during cleaning. He neither encouraged nor warned against adding colourful objects to the nursery, simply mentioning that they catch children's attention. Starting from the 1870s, authors such as Mary Eliza Haweis began advocating a more interactive approach: they stressed the importance of visual stimulation for children's development. As a result, colourful patterned wallpapers appeared on the market. The author of a 1900 article on nursery décor questioned the idea that spartan conditions with little ornamentation have a positive impact on children's development, suggesting putting colourful pictures on the walls instead. At the same time, he warned against the excessive use of very bright colours in the night nursery where the child slept. Hermann Muthesius suggested covering the nursery walls with wood panels or washable paint, for hygienic reasons. In Edwardian times, for the wealthy and middle classes, a nursery was a suite of rooms at the top of a house, including the night nursery, where the children slept, and a day nursery, where they ate and played, or a combination thereof. The nursery suite would include some bathroom facilities and possibly a small kitchen. The nurse (nanny) and nursemaid (assistant) slept in the suite too, to be within earshot of the sleeping children. The smallest bedroom in the house is often designated as the nursery, as a baby requires very little space until at least walking age. In 1890, Jane Ellen Panton discouraged organising a nursery in "any small and out-of-the-way chamber", proposing instead to prioritise children's comfort and health by selecting a spacious and well-sunlit room. She highlighted the importance of decorations, suggesting a blue colour palette, simpler furniture and pictures. Panton also wrote that a nursery should contain some medical supplies so that the nurse can tend to the child's ailments before the doctor arrives. The nursery can remain the bedroom of the child into their teenage years, or until a younger sibling is born and the parents decide to move the older child into another, larger bedroom. A typical modern nursery contains a cradle or a crib (or similar type of bed), a table or platform for the purpose of changing diapers (also known as a changing table), a rocking chair, as well as various items required for the care of the child (such as baby powder and medicine). Fictional portrayals of nurseries abound, for example in the writings of Kipling and E. Nesbit and in the live-action film Mary Poppins (1964) and the animated film Peter Pan. Notes External links References Rooms
Nursery (room)
Engineering
615
54,313,516
https://en.wikipedia.org/wiki/Heavy%20quark%20effective%20theory
In quantum chromodynamics, heavy quark effective theory (HQET) is an effective field theory describing the physics of heavy quarks (that is, quarks whose mass is far greater than the QCD scale). It is used in studying the properties of hadrons containing a single charm or bottom quark. The effective theory was formalised in 1990 by Howard Georgi, Estia Eichten and Christopher Hill, building upon the works of Nathan Isgur and Mark Wise, Voloshin and Shifman, and others. Quantum chromodynamics (QCD) is the theory of the strong force, through which quarks and gluons interact. HQET is the limit of QCD with the quark mass taken to infinity while its four-velocity is held fixed. This approximation enables a non-perturbative (in the strong interaction coupling) treatment of quarks that are much heavier than the QCD mass scale. The mass scale is of order 200 MeV. Hence the heavy quarks include the charm, bottom and top quarks, whereas the up, down and strange quarks are considered light. Since the top quark is extremely short-lived, only the charm and bottom quarks are of significant interest to HQET, and of the two only the bottom quark has a mass sufficiently high that the effective theory can be applied without major perturbative corrections. References Further reading Quantum chromodynamics
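As a sketch of the heavy-quark limit described above, written in standard textbook notation rather than taken from a specific reference: the heavy quark field $Q(x)$ is traded for a velocity-dependent field
\[
h_v(x) = e^{\,i m_Q v\cdot x}\,\tfrac{1}{2}\left(1+\gamma\cdot v\right) Q(x),
\]
and the leading-order HQET Lagrangian keeps only the term that survives as $m_Q \to \infty$,
\[
\mathcal{L}_{\mathrm{HQET}} = \bar{h}_v\,(i\,v\cdot D)\,h_v + \mathcal{O}(1/m_Q),
\]
where $v$ is the fixed four-velocity, $D$ is the QCD covariant derivative, and the $\mathcal{O}(1/m_Q)$ terms contain the kinetic and chromomagnetic corrections.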
Heavy quark effective theory
Physics
295
7,392
https://en.wikipedia.org/wiki/Class%20%28computer%20programming%29
In object-oriented programming, a class defines the shared aspects of objects created from the class. The capabilities of a class differ between programming languages, but generally the shared aspects consist of state (variables) and behavior (methods) that are each either associated with a particular object or with all objects of that class. Object state can differ between instances of the class, whereas the class state is shared by all of them. The object methods include access to the object state (via an implicit or explicit parameter that references the object) whereas class methods do not. If the language supports inheritance, a class can be defined based on another class with all of its state and behavior plus additional state and behavior that further specializes the class. The specialized class is a sub-class, and the class it is based on is its superclass. Attributes Object lifecycle As an instance of a class, an object is constructed from a class via instantiation. Memory is allocated and initialized for the object state and a reference to the object is provided to consuming code. The object is usable until it is destroyed, at which point its state memory is de-allocated. Most languages allow for custom logic at these lifecycle events via a constructor and a destructor. Type An object expresses its data type as an interface: the type of each member variable and the signature of each member function (method). A class defines an implementation of an interface, and instantiating the class results in an object that exposes the implementation via the interface. In the terms of type theory, a class is an implementation (a concrete data structure and collection of subroutines) while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on the type system). For example, a stack type (interface) might be implemented by one class that is fast for small stacks but scales poorly and by another that scales well but has high overhead for small stacks. Structure A class contains data field descriptions (or properties, fields, data members, or attributes). These are usually field types and names that will be associated with state variables at program run time; these state variables either belong to the class or to specific instances of the class. In most languages, the structure defined by the class determines the layout of the memory used by its instances. Other implementations are possible: for example, objects in Python use associative key-value containers. Some programming languages, such as Eiffel, support specification of invariants as part of the definition of the class, and enforce them through the type system. Encapsulation of state is necessary for being able to enforce the invariants of the class. Behavior The behavior of a class or its instances is defined using methods. Methods are subroutines with the ability to operate on objects or classes. These operations may alter the state of an object or simply provide ways of accessing it. Many kinds of methods exist, but support for them varies across languages. Some types of methods are created and called by programmer code, while other special methods, such as constructors, destructors, and conversion operators, are created and called by compiler-generated code. A language may also allow the programmer to define and call these special methods. Class interface Every class implements (or realizes) an interface by providing structure and behavior.
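Before moving on to interfaces, the lifecycle, state, and behavior ideas above can be tied together in a short sketch. The following Python example is illustrative only; the class name Counter and its members are hypothetical and not drawn from any source cited here.

class Counter:
    total_created = 0                 # class state, shared by every instance

    def __init__(self, start=0):      # constructor: runs at instantiation
        self.value = start            # object (instance) state
        Counter.total_created += 1

    def increment(self):              # instance method: receives the object via self
        self.value += 1
        return self.value

    def __del__(self):                # destructor hook: runs when the object is destroyed
        pass

a = Counter()
b = Counter(10)
a.increment()
print(a.value, b.value, Counter.total_created)   # 1 10 2

Here value differs between instances while total_created is shared, mirroring the distinction between object state and class state.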
Structure consists of data and state, and behavior consists of code that specifies how methods are implemented. There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing an implementation. Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from. For example, if "class A" inherits from "class B" and if "class B" implements the interface "interface B", then "class A" also inherits the functionality (constants and method declarations) provided by "interface B". In languages that support access specifiers, the interface of a class is considered to be the set of public members of the class, including both methods and attributes (via implicit getter and setter methods); any private members or internal data structures are not intended to be depended on by external code and thus are not part of the interface. Object-oriented programming methodology dictates that the operations of any interface of a class are to be independent of each other. This results in a layered design where clients of an interface use the methods declared in the interface. An interface places no requirements for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client has access to the object. Class interface example The buttons on the front of your television set are the interface between you and the electrical wiring on the other side of its plastic casing. You press the "power" button to toggle the television on and off. In this example, your particular television is the instance, each method is represented by a button, and all the buttons together compose the interface (other television sets that are the same model as yours would have the same interface). In its most common form, an interface is a specification of a group of related methods without any associated implementation of the methods. A television set also has a myriad of attributes, such as size and whether it supports color, which together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface). Getting the total number of televisions manufactured could be a static method of the television class. This method is associated with the class, yet is outside the domain of each instance of the class. A static method that finds a particular instance out of the set of all television objects is another example. Member accessibility The following is a common set of access specifiers: Private (or class-private) restricts access to the class itself. Only methods that are part of the same class can access private members. Protected (or class-protected) allows the class itself and all its subclasses to access the member. Public means that any code can access the member by its name. Although many object-oriented languages support the above access specifiers, their semantics may differ.
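As a rough illustration of the access specifiers just listed, the sketch below uses Python's naming conventions; note that Python enforces these far more weakly than the compile-time checks of Java or C++, and the Account class and its members are hypothetical.

class Account:
    def __init__(self, owner):
        self.owner = owner            # public: part of the class interface
        self._balance = 0             # "protected" by convention: for the class and subclasses
        self.__audit_log = []         # "private": name-mangled to _Account__audit_log

    def deposit(self, amount):        # public accessor method guarding the internal state
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
        self.__audit_log.append(amount)
        return self._balance

acct = Account("alice")
acct.deposit(50)
# acct.__audit_log raises AttributeError from outside the class, although the mangled
# name remains reachable, which also shows that accessibility and visibility differ.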
Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants: constraints on the state of the objects. A common usage of access specifiers is to separate the internal data of a class from its interface: the internal structure is made private, while public accessor methods can be used to inspect or alter such private data. Access specifiers do not necessarily control visibility, in that even private members may be visible to client external code. In some languages, an inaccessible but visible member may be referred to at runtime (for example, by a pointer returned from a member function), but an attempt to use it by referring to the name of the member from the client code will be prevented by the type checker. The various object-oriented programming languages enforce member accessibility and visibility to various degrees; depending on the language's type system and compilation policies, enforcement occurs at either compile time or runtime. For example, the Java language does not allow client code that accesses the private data of a class to compile. In the C++ language, private methods are visible, but not accessible in the interface; however, they may be made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class. Some languages feature other accessibility schemes: Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class. Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected. Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class in order to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned. Inheritance Conceptually, a superclass is a superset of its subclasses. For example, a class representing rectangles could be a superclass of a class representing squares. These are subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares. A common conceptual error is to mistake a part-of relation for a subclass relation. For example, a car and a truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the component parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model an engine or body as a subclass of a car. In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the car class would have a property holding a collection of component objects, such as engine and body instances. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc.
This information can be utilized by developer tools to generate additional code besides the basic data definitions for the objects, such as error checking on get and set methods. One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets, it would be rare to find sets that did not intersect with more than one other set. However, while some systems such as Flavors and CLOS provide a capability for more than one parent, allowing multiple superclasses at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly, this feature can introduce some of the same system complexity and ambiguity that classes were designed to avoid. Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation. However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility, and technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing, such as Java and C++, do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run-time changes to classes. The rationale is similar to the justification for allowing multiple superclasses: the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility. Although many class-based languages support inheritance, inheritance is not an intrinsic aspect of classes. An object-based language (e.g., Classic Visual Basic) supports classes yet does not support inheritance. Inter-class relationships A programming language may support various class relationship features. Compositional Classes can be composed of other classes, thereby establishing a compositional relationship between the enclosing class and its embedded classes. A compositional relationship between classes is also commonly known as a has-a relationship. For example, a class "Car" could be composed of and contain a class "Engine". Therefore, a Car has an Engine. One aspect of composition is containment, which is the enclosure of component instances by the instance that has them. If an enclosing object contains component instances by value, the components and their enclosing object have a similar lifetime. If the components are contained by reference, they may not have a similar lifetime. For example, in Objective-C 2.0: @interface Car : NSObject @property NSString *name; @property Engine *engine; @property NSArray *tires; @end This class has an NSString instance called name (a string object), an Engine called engine, and an NSArray called tires (an array object).
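The is-a and has-a relationships discussed in this and the preceding section can be contrasted in a brief Python sketch; the classes here are hypothetical and are not tied to the Objective-C example above.

class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

class Square(Rectangle):              # is-a: every square is a rectangle
    def __init__(self, side):
        super().__init__(side, side)

class Engine:
    def start(self):
        return "running"

class Car:                            # has-a: a car is composed of parts, not derived from them
    def __init__(self, engine=None):
        # a component created here is owned by the car and shares its lifetime;
        # a component passed in is held by reference and may outlive the car
        self.engine = engine if engine is not None else Engine()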
Hierarchical Classes can be derived from one or more existing classes, thereby establishing a hierarchical relationship between the derived-from classes (base classes, parent classes or superclasses) and the derived class (child class or subclass). The relationship of the derived class to the derived-from classes is commonly known as an is-a relationship. For example, a class 'Button' could be derived from a class 'Control'. Therefore, a Button is a Control. Structural and behavioral members of the parent classes are inherited by the child class. Derived classes can define additional structural members (data fields) and behavioral members (methods) in addition to those that they inherit and are therefore specializations of their superclasses. Also, derived classes can override inherited methods if the language allows. Not all languages support multiple inheritance. For example, Java allows a class to implement multiple interfaces, but to inherit from only one class. If multiple inheritance is allowed, the hierarchy is a directed acyclic graph (or DAG for short); otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. Classes at the same level are more likely to be associated than classes at different levels. The levels of this hierarchy are called layers or levels of abstraction. Example (Simplified Objective-C 2.0 code, from iPhone SDK): @interface UIResponder : NSObject //... @interface UIView : UIResponder //... @interface UIScrollView : UIView //... @interface UITableView : UIScrollView //... In this example, a UITableView is a UIScrollView is a UIView is a UIResponder is an NSObject. Modeling In object-oriented analysis and in the Unified Modeling Language (UML), an association between two classes represents a collaboration between the classes or their corresponding instances. Associations have direction; for example, a bi-directional association between two classes indicates that both of the classes are aware of their relationship. Associations may be labeled according to their name or purpose. An association role is a given end of an association and describes the role of the corresponding class. For example, a "subscriber" role describes the way instances of the class "Person" participate in a "subscribes-to" association with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role in the same association. Association role multiplicity describes how many instances correspond to each instance of the other class of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances. Taxonomy There are many categories of classes, some of which overlap. Abstract and concrete In a language that supports inheritance, an abstract class, or abstract base class (ABC), is a class that cannot be directly instantiated. By contrast, a concrete class is a class that can be directly instantiated. Instantiation of an abstract class can occur only indirectly, via a concrete class. An abstract class is either labeled as such explicitly or it may simply specify abstract methods (or virtual methods). An abstract class may provide implementations of some methods, and may also specify virtual methods via signatures that are to be implemented by direct or indirect descendants of the abstract class. Before a class derived from an abstract class can be instantiated, all abstract methods of its parent classes must be implemented by some class in the derivation chain.
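A minimal Python sketch of the abstract/concrete distinction just described, using the standard abc module; the Shape and Circle names are illustrative only.

from abc import ABC, abstractmethod

class Shape(ABC):                     # abstract class: cannot be instantiated directly
    @abstractmethod
    def area(self):
        ...

class Circle(Shape):                  # concrete subclass implementing every abstract method
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

# Shape() raises TypeError; Circle(2.0).area() works because the whole
# abstract interface is implemented somewhere in the derivation chain.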
Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java, C# and PHP, the keyword abstract is used. In C++, an abstract class is a class having at least one abstract method given by the appropriate syntax in that language (a pure virtual function in C++ parlance). A class consisting of only pure virtual methods is called a pure abstract base class (or pure ABC) in C++ and is also known as an interface by users of the language. Other languages, notably Java and C#, support a variant of abstract classes called an interface via a keyword in the language. In these languages, multiple inheritance is not allowed, but a class can implement multiple interfaces. Such a class can only contain abstract publicly accessible methods. Local and inner In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes. An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is that of inner types, also known as inner data types or nested types, which are a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations). A local class is a class defined within a procedure or function. Such a structure limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods from accessing local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables. Metaclass A metaclass is a class whose instances are classes. A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks. In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language. The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses. Sealed A sealed class cannot be subclassed. It is basically the opposite of an abstract class, which must be derived to be used. A sealed class is implicitly concrete. A class is declared as sealed via the keyword sealed in C# or final in Java or PHP. For example, several classes in Java's standard library are marked as final. Sealed classes may allow a compiler to perform optimizations that are not available for classes that can be subclassed. Open An open class can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterward.
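To make that last point concrete, the sketch below shows a Python class being created and then modified at runtime; the names are illustrative only.

class Greeter:                         # an ordinary, initially empty class
    pass

def hello(self, name):
    return "Hello, " + name

Greeter.hello = hello                  # the class object itself is modified at runtime
g = Greeter()
print(g.hello("world"))                # existing and future instances see the new method

# a class can also be created at runtime with the built-in type()
Dynamic = type("Dynamic", (Greeter,), {"kind": "made at runtime"})
print(Dynamic().hello("again"), Dynamic.kind)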
Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code. Mixin Some languages have special support for mixins, though, in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a mixin class might provide the same method to several classes that do not share a common parent when it is included in (mixed into) each of them. Partial In languages supporting the feature, a partial class is a class whose definition may be split into multiple pieces, within a single source-code file or across multiple files. The pieces are merged at compile time, making compiler output the same as for a non-partial class. The primary motivation for the introduction of partial classes is to facilitate the implementation of code generators, such as visual designers. It is otherwise a challenge or compromise to develop code generators that can manage the generated code when it is interleaved within developer-written code. Using partial classes, a code generator can process a separate file or coarse-grained partial class within a file, and is thus alleviated from intricately interjecting generated code via extensive parsing, increasing compiler efficiency and eliminating the potential risk of corrupting developer code. In a simple implementation of partial classes, the compiler can perform a phase of precompilation where it "unifies" all the parts of a partial class. Then, compilation can proceed as usual. Other benefits and effects of the partial class feature include: Enables separation of a class's interface and implementation code in a unique way. Eases navigation through large classes within an editor. Enables separation of concerns, in a way similar to aspect-oriented programming but without using any extra tools. Enables multiple developers to work on a single class concurrently without the need to merge individual code into one file at a later time. Partial classes have existed in Smalltalk under the name of Class Extensions for a considerable time. With the arrival of the .NET Framework 2.0, Microsoft introduced partial classes, supported in both C# 2.0 and Visual Basic 2005. WinRT also supports partial classes. Uninstantiable Uninstantiable classes allow programmers to group together per-class fields and methods that are accessible at runtime without an instance of the class. Indeed, instantiation is prohibited for this kind of class. For example, in C#, a class marked "static" cannot be instantiated, can only have static members (fields, methods, and others), may not have instance constructors, and is sealed. Unnamed An unnamed class or anonymous class is not bound to a name or identifier upon definition. This is analogous to named versus unnamed functions. Benefits The benefits of organizing software into object classes fall into three categories: Rapid development Ease of maintenance Reuse of code and designs Object classes facilitate rapid development because they lessen the semantic gap between the code and the users. System analysts can talk to both developers and users using essentially the same vocabulary, talking about accounts, customers, bills, etc. Object classes often facilitate rapid development because most object-oriented environments come with powerful debugging and testing tools. Instances of classes can be inspected at run time to verify that the system is performing as expected.
Also, rather than getting dumps of core memory, most object-oriented environments have interpreted debugging capabilities so that the developer can analyze exactly where in the program the error occurred and can see which methods were called and with what arguments. Object classes facilitate ease of maintenance via encapsulation. When developers need to change the behavior of an object, they can localize the change to just that object and its component parts. This reduces the potential for unwanted side effects from maintenance enhancements. Software reuse is also a major benefit of using object classes. Classes facilitate re-use via inheritance and interfaces. When a new behavior is required, it can often be achieved by creating a new class and having that class inherit the default behaviors and data of its superclass and then tailoring some aspect of the behavior or data accordingly. Re-use via interfaces (also known as methods) occurs when another object wants to invoke (rather than create a new kind of) some object class. This method for re-use removes many of the common errors that can make their way into software when one program re-uses code from another. Runtime representation As a data type, a class is usually considered as a compile-time construct. A language or library may also support prototype or factory metaobjects that represent runtime information about classes, or even represent metadata that provides access to reflective programming (reflection) facilities and the ability to manipulate data structure formats at runtime. Many languages distinguish this kind of run-time type information about classes from a class on the basis that the information is not needed at runtime. Some dynamic languages do not make strict distinctions between runtime and compile-time constructs, and therefore may not distinguish between metaobjects and classes. For example, if Human is a metaobject representing the class Person, then instances of class Person can be created by using the facilities of the Human metaobject. Prototype-based programming In contrast to creating an object from a class, some programming contexts support object creation by copying (cloning) a prototype object. See also Notes References Further reading Abadi; Cardelli: A Theory of Objects ISO/IEC 14882:2003 Programming Language C++, International standard Class Warfare: Classes vs. Prototypes, by Brian Foote Meyer, B.: "Object-oriented software construction", 2nd edition, Prentice Hall, 1997, Rumbaugh et al.: "Object-oriented modeling and design", Prentice Hall, 1991, Programming constructs Programming language topics
Class (computer programming)
Engineering
5,289
51,045
https://en.wikipedia.org/wiki/Guided%20rat
A remotely guided rat, popularly called a ratbot or robo-rat, is a rat with electrodes implanted in the medial forebrain bundle (MFB) and sensorimotor cortex of its brain. They were developed in 2002 by Sanjiv Talwar and John Chapin at the State University of New York Downstate Medical Center. The rats wear a small electronics backpack containing a radio receiver and electrical stimulator. The rat receives remote stimulation in the sensorimotor cortex via its backpack that causes the rat to feel a sensation in its left or right whiskers, and stimulation in the MFB that is interpreted as a reward or pleasure. After a period of training and conditioning using MFB stimulation as a reward, the rats can be remotely directed to move left, right, and forward in response to whisker stimulation signals. It is possible to roughly guide the animal along an obstacle course, jumping small gaps and scaling obstacles. Ethics Concerns have been raised by animal rights groups about the use of animals in this context, particularly due to a concern about the removal of autonomy from an independent creature. For example, a spokesman of the Dr Hadwen Trust, a group funding alternatives to animal research in medicine, has said that the experiments are an "appalling example of how the human species instrumentalizes other species." Researchers tend to liken the training mechanism of the robo-rat to standard operant conditioning techniques. Talwar himself has acknowledged the ethical issues apparent in the development of the robo-rat, but points out that the research meets standards for animal treatment laid down by the National Institute of Health. Moreover, the researchers emphasize that the animals are trained, not coerced, into particular behaviors. Because the rats are encouraged to act via the reward of pleasure, not muscularly compelled to behave in a particular manner, their behavior under MFB stimulation is likened to a carrot-and-stick model of encouraged behavior versus a system of mind control. It seems unlikely that the rats could be persuaded to knowingly risk their lives even with this stimulation. "Our animals were completely happy and treated well," Talwar stated. The technology is reminiscent of experiments performed in 1965 by Dr. Jose Delgado, a controversial scientist who was able to pacify a charging bull via electrodes fitted in its brain. He was also said to control cats and monkeys like "electronic toys." Doctor Robert Galbraith Heath also placed electrodes deep into the brains of patients and wrote hundreds of medical papers on his work. See also Remote control animal References External links News announcement: Nature article: Biocybernetics Cyborgs Rats
Guided rat
Biology
536
24,343,915
https://en.wikipedia.org/wiki/C20H28N2O5
{{DISPLAYTITLE:C20H28N2O5}} The molecular formula C20H28N2O5 (molar mass: 376.447 g/mol, exact mass: 376.1998 u) may refer to: Enalapril Remifentanil Molecular formulas
C20H28N2O5
Physics,Chemistry
68
5,377,788
https://en.wikipedia.org/wiki/Semicarbazone
In organic chemistry, a semicarbazone is a derivative of imines formed by a condensation reaction between a ketone or aldehyde and semicarbazide. They are classified as imine derivatives because they are formed from the reaction of an aldehyde or ketone with the terminal -NH2 group of semicarbazide, which behaves very similarly to a primary amine. Formation For ketones: H2NNHC(=O)NH2 + RC(=O)R → R2C=NNHC(=O)NH2 + H2O For aldehydes: H2NNHC(=O)NH2 + RCHO → RCH=NNHC(=O)NH2 + H2O For example, the semicarbazone of acetone would have the structure (CH3)2C=NNHC(=O)NH2. Properties and uses Some semicarbazones, such as nitrofurazone, and thiosemicarbazones are known to have anti-viral and anti-cancer activity, usually mediated through binding to copper or iron in cells. Many semicarbazones are crystalline solids, useful for the identification of the parent aldehydes/ketones by melting point analysis. A thiosemicarbazone is an analog of a semicarbazone which contains a sulfur atom in place of the oxygen atom. See also Carbazone Carbazide Thiosemicarbazone References External links Compounds Containing a N-CO-N-N or More Complex Group Functional groups Semicarbazones
Semicarbazone
Chemistry
329
11,421,453
https://en.wikipedia.org/wiki/Rotavirus%20cis-acting%20replication%20element
This family represents a rotavirus cis-acting replication element (CRE) found at the 3'-end of rotavirus mRNAs. The family is thought to promote the synthesis of minus strand RNA to form viral dsRNA. References External links Cis-regulatory RNA elements
Rotavirus cis-acting replication element
Chemistry
58
22,367,956
https://en.wikipedia.org/wiki/Beuchat
Beuchat International, better known as Beuchat, is a company that designs, manufactures and markets underwater equipment. It was established in 1934 in Marseille, France, by Georges Beuchat, who descended from a Swiss watchmaking family. Georges Beuchat was an underwater pioneer who co-founded the French Underwater Federation in 1948. During its 75-year history, the company has deployed several different brand names, among them: "Pêche Sport", "Tarzan", "Beuchat", "Beuchat Sub" and "Beuchat International". Georges Beuchat sold the company in 1982 to the Alvarez de Toledo family. The firm is now owned by the Margnat family, who took over in 2002. Beuchat is an international company. From the outset, Georges Beuchat extended his operations beyond the borders of France, selling his products worldwide. In the 1970s, he created the Beuchat swordfish logo, which can still be found on every product. Business Beuchat currently has 3 core ranges: Scuba diving: recreational diving, professional diving and military diving, Spearfishing and freediving, Snorkeling Chronology 1934: Company founded in Marseille. 1947: Tarzan Speargun 1948: Surface Buoy 1950: Tarzan camera housing 1950: Tarzan calf sheath for diving knife. 1953: 1st Isothermic wetsuit. 1954: Split strap for diving mask. 1958: Compensator (single-window mask). 1959: Tarzan fin grips (3-way straps securing closed-heel fins on feet) 1960: Espadon Record fins with blades featuring parallel longitudinal ribs 1961: Export Award. Club subaquatique toulousain catalogue of Tarzan-Espadon equipment. 1963: Tarzan wetsuit 1964: Jetfins (1st vented fins. 100,000 units sold in the first few years). Souplair regulator release. Mid-1960s: Pêche Sport catalogue. Late 1960s: Beuchat & Co. catalogue. 1975: Marlin speargun 1978: Atmos regulator 1985: Lyfty ruff buoy 1986: Aladin computer distribution 1990: Cavalero purchasing 1993: Oceane buoy 1998: CX1, 1st French diving computer (Comex Algorithm, French Labor Ministry certified) 2001: Mundial Spearfishing fins 2007: Focea Comfort II wetsuit. Power Jet fins. 2008: BCD Masterlift Voyager 2009: VR 200 Evolution regulator. 75th brand anniversary. Anniversary wetsuit limited edition release. 2010: Marlin Revolution speargun - roller gun Spearfishing Ever since the company was established, Beuchat has manufactured spearfishing equipment, enabling spearfishers such as Pedro Carbonell, Sylvain Pioch, Pierre Roy, Ghislain Guillou and Vladimir Dokuchajev to gain numerous national and international titles. Various The Scubapro logo: "S" was adapted from the Beuchat "Souplair" regulator. References External links Corporate website SpearoTek, Inc. - U.S. Distributor Manufacturing companies established in 1934 Underwater diving equipment manufacturers Manufacturing companies based in Marseille Military diving equipment French brands Underwater diving engineering French companies established in 1934
Beuchat
Engineering
652
1,023,079
https://en.wikipedia.org/wiki/Malachite%20green
Malachite green is an organic compound that is used as a dyestuff and controversially as an antimicrobial in aquaculture. Malachite green is traditionally used as a dye for materials such as silk, leather, and paper. Despite its name the dye is not prepared from the mineral malachite; the name just comes from the similarity of color. Structures and properties Malachite green is classified in the dyestuff industry as a triarylmethane dye and is also used in the pigment industry. Formally, malachite green refers to the chloride salt , although the term malachite green is used loosely and often just refers to the colored cation. The oxalate salt is also marketed. The anions have no effect on the color. The intense green color of the cation results from a strong absorption band at 621 nm (extinction coefficient of ). Malachite green is prepared by the condensation of benzaldehyde and dimethylaniline to give leuco malachite green (LMG): C6H5CHO + 2 C6H5N(CH3)2 -> C6H5CH(C6H4N(CH3)2)2 + H2O Second, this colorless leuco compound, a relative of triphenylmethane, is oxidized to the cation that is MG: A typical oxidizing agent is manganese dioxide. Hydrolysis of MG gives an alcohol: This alcohol is important because it, not MG, traverses cell membranes. Once inside the cell, it is metabolized into LMG. Only the cation MG is deeply colored, whereas the leuco and alcohol derivatives are not. This difference arises because only the cationic form has extended pi-delocalization, which allows the molecule to absorb visible light. Preparation The leuco form of malachite green was first prepared by Hermann Fischer in 1877 by condensing benzaldehyde and dimethylaniline in the molecular ratio 1:2 in the presence of sulfuric acid. Uses Malachite green is traditionally used as a dye. Kilotonnes of MG and related triarylmethane dyes are produced annually for this purpose. MG is active against the oomycete Saprolegnia, which infects fish eggs in commercial aquaculture; MG has been used to treat Saprolegnia and is used as an antibacterial. It is a very popular treatment against Ichthyophthirius multifiliis in freshwater aquaria. The principal metabolite, leuco-malachite green (LMG), is found in fish treated with malachite green, and this finding is the basis of controversy and government regulation. See also Antimicrobials in aquaculture. MG has frequently been used to catch thieves and pilferers. The bait, usually money, is sprinkled with the anhydrous powder. Anyone handling the contaminated money will find that, upon washing the hands, a green stain that lasts for several days will result on the skin. Niche uses Numerous niche applications exploit the intense color of MG. It is used as a biological stain for microscopic analysis of cell biology and tissue samples. In the Gimenez staining method, basic fuchsin stains bacteria red or magenta, and malachite green is used as a blue-green counterstain. Malachite green is also used in endospore staining, since it can directly stain endospores within bacterial cells; here a safranin counterstain is often used. Malachite green is a part of Alexander's pollen stain. Malachite green can also be used as a saturable absorber in dye lasers, or as a pH indicator between pH 0.2–1.8. However, this use is relatively rare. Leuco-malachite green (LMG) is used as a detection method for latent blood in forensic science. Hemoglobin catalyzes the reaction between LMG and hydrogen peroxide, converting the colorless LMG into malachite green.
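The strong absorption band at 621 nm mentioned above is what makes spectrophotometric quantification of the malachite green cation straightforward. Below is a minimal, illustrative Python sketch of a Beer-Lambert concentration estimate; note that the extinction coefficient used here is a placeholder (the source omits the number), and the absorbance reading and path length are invented for illustration.

```python
# Illustrative Beer-Lambert estimate for the malachite green cation at 621 nm.
# A = epsilon * c * l  =>  c = A / (epsilon * l)
EPSILON_621 = 1.0e5   # L mol^-1 cm^-1 (placeholder value, not from the source)
PATH_LENGTH = 1.0     # cm, a standard cuvette (assumed)

def concentration_from_absorbance(absorbance: float,
                                  epsilon: float = EPSILON_621,
                                  path_cm: float = PATH_LENGTH) -> float:
    """Return the molar concentration implied by an absorbance reading."""
    return absorbance / (epsilon * path_cm)

# Example: a hypothetical absorbance reading of 0.45
print(f"{concentration_from_absorbance(0.45):.2e} mol/L")
```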
Therefore, the appearance of a green color indicates the presence of blood. A set of malachite green derivatives is also a key component in a fluorescence microscopy tool called the fluorogen activating protein/fluorogen system. Malachite green is in a class of molecules called fluorophores. When malachite green's rotational freedom is restricted, it transforms from a non-fluorescent molecule to a highly fluorescent molecule. In the fluorogen activating protein tool, established by a group at Carnegie Mellon University, malachite green binds a specific fluorogen activating protein to become highly fluorescent. Expression of the fluorogen activating protein as fusions of targeting domains can impart subcellular localization. Its use is similar to that of GFP but has the added benefit of having a 'dark state' before the malachite green fluorophore is added. This is especially useful for FRET studies. Regulation In 1992, Canadian authorities determined that eating fish contaminated with malachite green posed a significant health risk. Malachite green was classified as a Class II Health Hazard. Due to its low manufacturing cost, malachite green is still used in certain countries with less restrictive laws for non-aquaculture purposes. In 2005, analysts in Hong Kong found traces of malachite green in eels and fish imported from China. In 2006, the United States Food and Drug Administration (FDA) detected malachite green in seafood from China, among others, where the substance is also banned for use in aquaculture. In June 2007, the FDA blocked the importation of several varieties of seafood due to continued malachite green contamination. Malachite green has been banned in the United States since 1983 in food-related applications. The substance is also banned in the United Kingdom. It is prohibited from use in food in Macao. Animals metabolize malachite green to its leuco form. Being lipophilic (the leuco form has a log P of 5.70), the metabolite is retained in catfish muscle longer (HL = 10 days) than is the parent molecule (HL = 2.8 days). Toxicity The LD50 (oral, mouse) is 80 mg/kg. Rats fed malachite green experience "a dose-related increase in liver DNA adducts" along with lung adenomas. Leucomalachite green causes an "increase in the number and severity of changes". As leucomalachite green is the primary metabolite of malachite green and is retained in fish muscle much longer, most human dietary intake of malachite green from eating fish would be in the leuco form. During the experiment, rats were fed up to 543 ppm of leucomalachite green, an extreme amount compared to the average 5 ppb discovered in fish. After a period of two years, an increase in lung adenomas in male rats was discovered but no incidence of liver tumors. Therefore, it could be concluded that malachite green caused carcinogenic symptoms, but a direct link between malachite green and liver tumors was not established. Detection Although malachite green has almost no fluorescence in aqueous solution (quantum yield 7.9x10−5), several research groups have developed technologies to detect malachite green. For example, Zhao et al. demonstrated the use of malachite green aptamer in microcantilever-based sensors to detect low concentrations of malachite green. References Further reading Schoettger, 1970; Smith and Heath, 1979; Gluth and Hanke, 1983. Bills et al. (1977) External links U.S. National Institutes of Health U.S. Food and Drug Administration U.K.
Department of Health Malachite green - endospore staining technique (video) Malachite Green Dyes Triarylmethane dyes Staining dyes PH indicators Antimicrobials Aromatic amines Fish medicine Dimethylamino compounds Carbocations
Malachite green
Chemistry,Materials_science,Biology
1,693
9,025,310
https://en.wikipedia.org/wiki/List%20of%20UN%20numbers%203001%20to%203100
UN numbers from UN3001 to UN3100 as assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods are as follows: UN 3001 to UN 3100 See also Lists of UN numbers References External links ADR Dangerous Goods, cited on 7 May 2015. UN Dangerous Goods List from 2015, cited on 7 May 2015. UN Dangerous Goods List from 2013, cited on 7 May 2015. Lists of UN numbers
List of UN numbers 3001 to 3100
Chemistry,Technology
88
26,386,344
https://en.wikipedia.org/wiki/Sea%20foam
Sea foam, ocean foam, beach foam, or spume is a type of foam created by the agitation of seawater, particularly when it contains higher concentrations of dissolved organic matter (including proteins, lignins, and lipids) derived from sources such as the offshore breakdown of algal blooms. These compounds can act as surfactants or foaming agents. As the seawater is churned by breaking waves in the surf zone adjacent to the shore, the surfactants under these turbulent conditions trap air, forming persistent bubbles that stick to each other through surface tension. Sea foam is a global phenomenon, and it varies depending on location and the potential influence of the surrounding marine, freshwater, and/or terrestrial environments. Due to its low density and persistence, foam can be blown by strong on-shore winds inland, towards the beach. Human activities, such as production, transport or spillage of petroleum products or detergents, can also contribute to the formation of sea foam. Formation Sea foam is formed under conditions that are similar to the formation of sea spray. One of the main distinctions from sea spray formation is the presence of higher concentrations of dissolved organic matter from macrophytes and phytoplankton. The dissolved organic matter in the surface water, which can be derived from the natural environment or human-made sources, provides stability to the resulting sea foam. The physical processes that contribute to sea foam formation are breaking surface waves, bubble entrainment, a process of bubbles being incorporated or captured within a liquid such as sea water and whitecap formation. Breaking of surface waves injects air from the atmosphere into the water column, leading to bubble creation. These bubbles get transported around the top few meters of the surface ocean due to their buoyancy. The smallest bubbles entrained in the water column dissolve entirely, leading to higher ratios of dissolved gases in the surface ocean. The bubbles that do not dissolve eventually make it back to the surface. As they rise, these bubbles accumulate hydrophobic substances. Presence of dissolved organic matter stabilizes the bubbles, aggregating together as sea foam. Some studies on sea foam report that breaking of algal cells in times of heavy swells makes sea foam production more likely. Falling rain drops on the sea surface can also contribute to sea foam formation and destruction. There have been some non-mechanistic studies demonstrating increased sea foam formation due to high rainfall events. Turbulence in the surface mixed layer can affect the concentration of dissolved organic matter and aids in the formation of nutrient-dense foam. Composition The composition of sea foam is generally a mixture of decomposed organic materials, including zooplankton, phytoplankton, algae (including diatoms), bacteria, fungi, protozoans, and vascular plant detritus, though each occurrence of sea foam varies in its specific contents. In some areas, sea foam is found to be made up of primarily protein, dominant in both fresh and old foam, as well as lipids and carbohydrates. The high protein and low carbohydrate concentration suggest that sugars originally present in the surrounding mucilage created by algae or plant matter has been quickly consumed by bacteria. Additional research has shown that a small fraction of the dry weight in sea foam is organic carbon, which contains phenolics, sugars, amino sugars, and amino acids. 
In the Bay of Fundy, high mortality rates of an abundant tube-dwelling amphipod (Corophium volutator), caused by natural die-offs as well as predation by migrating seabirds, contributed to amino sugars being released into the surrounding environment and thus into sea foam. The organic matter in sea foam has been found to increase dramatically during phytoplankton blooms in the area. Some research has shown very high concentrations of microplankton in sea foam, with significantly higher numbers of autotrophic phytoplankton than heterotrophs. Some foams are particularly rich in their diatom population, which can make up the majority of the microalgal biomass in some cases. A diversity of bacteria is also present in sea foam; old foam tends to have a higher density of bacteria. One study found that 95% of sea foam bacteria were rod-shaped, while the surrounding surface water contained mostly coccoid-form bacteria and only 5% - 10% rod-shaped bacteria. There is also seasonal variability of sea foam composition; in some regions there is a seasonal occurrence of pollen in sea foam which can alter its chemistry. Though foam is not inherently toxic, it may contain high concentrations of contaminants. Foam bubbles can be coated with or contain these materials, which can include petroleum compounds, pesticides, and herbicides. Longevity and stability Structurally, sea foam is thermodynamically unstable, though some sea foam can persist in the environment for several days at most. There are two types of sea foam categorized based on their stability: 1) Unstable or transient foams have very short lifetimes of only seconds. The bubbles formed in sea foam may burst releasing aerosols into the air, contributing to sea spray. 2) Metastable foams can have a lifetime of several hours to several days; their duration is sometimes attributed to small particles of silica, calcium, or iron which contribute to foam stability and longevity. Additionally, seawater that contains released dissolved organic material from phytoplankton and macrophytic algae that is then agitated in its environment is most likely to produce stable, longer-lasting foam when compared with seawater lacking one of those components. For example, filtered seawater, when added to the fronds of the kelp Ecklonia maxima, produced foam but it lacked the stability that unfiltered seawater provided. Additionally, kelp fronds that were maintained in flowing water, thereby reducing their mucus coating, were unable to help foam form. Different types of salt are also found to have varying effects on bubble proximity within sea foam, therefore contributing to its stability. Ecological role Food source The presence of sea foam in the marine environment plays a number of ecological roles including providing sources of food and creating habitat. As a food source, sea foam with a stable composition is more important ecologically, as it is able to persist longer and can transport nutrients within the marine environment. Longer decay times result in a higher chance that energy contained in sea foam will move up the food web into higher trophic levels. In the Bay of Fundy for example, a tube-dwelling amphipod, Corophium volutator, can potentially attain 70% of its nutritional requirements from the sugars and amino acids derived from sea foam in its environment. At times however, the sea foam was found to be toxic to this species.
It is thought that high concentrations of phenolics and/or the occasional presence of heavy metals or pesticides incorporated into the sea foam from the sea surface contributed to its toxicity. On the west coast of Cape Peninsula, South Africa, sea foam often occurs in nearshore marine areas with large kelp beds during periods of strong westerly winds. It is thought that the foam generated in these conditions is an important food source for local organisms due to the presence of organic detritus in the sea foam. Material transport Sea foam also acts as a mode of transport for both organisms and nutrients within the marine environment and, at times, into the intertidal or terrestrial environments. Wave action can deposit foam into intertidal areas where it can remain when the tide recedes, bringing nutrients to the intertidal zone. Additionally, sea foam can become airborne in windy conditions, transporting materials between marine and terrestrial environments. The ability of sea foam to transport materials is also thought to benefit macroalgal organisms, as macroalgae propagules can be carried to different microenvironments, thus influencing the tidal landscape and contributing to new possible ecological interactions. As sea foam is a wet environment, it is conducive habitat to algal spores where propagules can attach to the substrate and avoid risk of dissemination. When sea foam contains fungi, it can also aid in the decomposition of plant and animal remains in coastal ecosystems. Habitat Additionally, sea foam is a habitat for a number of marine microorganisms. Some research has shown the presence of various microphytoplanktonic, nanophytoplanktonic, and diatom groups in seafoam; the phytoplankton groups appeared in significantly higher abundance than in sea surface film and the top pelagic zone Hazards Toxicity Naturally occurring sea foam is not inherently toxic; however, it can be exposed to high concentrations of contaminants in the surface microlayer derived from the breakdown of algal blooms, fossil fuel production and transport, and stormwater runoff. These contaminants contribute to the formation of noxious sea foam through adsorption onto bubbles. Bubbles may burst and release toxins into the atmosphere in the form of sea spray or aerosol, or they may persist in foams. Toxins released through aerosols and breaking bubbles can be inhaled by humans. The microorganisms that occupy sea foams as habitat have increased susceptibility for contaminant exposure. Consequently, these toxic substances can be integrated into the trophic food web. Harmful algal blooms Foams can form following the degradation of harmful algal blooms (HABs). These are primarily composed of algal species, but can also consist of dinoflagellates and cyanobacteria. Biomass from algae in the bloom is integrated into sea foam in the sea surface microlayer. When the impacted sea foam breaks down, toxins from the algae are released into the air causing respiratory issues and occasionally initiating asthma attacks. Phaeocystis globosa is one algal species that is considered problematic, as observed in a study in the Netherlands. Its high biomass accumulation allows it to create large quantities of toxic foam that often wash onto beaches. P. globosa blooms are initiated in areas of high nutrient availability, often affiliated with coastal locations with a lot of stormwater runoff and eutrophication. Studies suggest that the development of foam is directly correlated to blooms caused by P. 
globosa, despite that foam formation typically occurs approximately two weeks after the appearance of an algal bloom offshore. Organic material from P. globosa was observed decomposing while suspended at the sea surface, but was not observed lower in the water column. P. globosa is also considered a nuisance species because its large foam formations impair the public's ability to enjoy the beach. Human activities While sea foam is a common result of the agitation of seawater mixing with organic material in the surface ocean, human activities can contribute to the production of excess and often toxic foam. In addition to the organic oils, acids, and proteins that amass in the sea surface microlayer, compounds derived from petroleum production and transport, synthetic surfactants, and pesticide use can enter the sea surface and be incorporated into foam. The pollutants present can also affect the persistence of the foam produced. Crude oil discharged from tankers, motor oil, sewage, and detergents from polluted runoff can create longer-lasting foams. In one study, polychlorinated biphenyls (PCBs), a persistent organic pollutant, were found to amass in sea foams. Some experts and health authorities recommend avoiding contact with sea foam in lakes and rivers and seas that are contaminated with PFAS, since these substances were found to accumulate in sea foam in high concentrations. Man-made microplastic pollution can accumulate in breaking waves and increase sea foam stability. Natural gas terminals have been cited as contributors to the production of modified foams due to the process of using seawater to convert natural gas to liquified natural gas. One study showed a much greater abundance of heterotrophic prokaryotes (archaea and bacteria) and cyanobacteria in foam that was generated near a liquified natural gas terminal. These prokaryotes were able to recycle chemical materials discharged from the terminal, which enhanced microbial growth. Additionally, higher levels of total organic carbon (TOC) and plankton biomass were recorded in foam generated in close proximity to the terminal. Organic carbon was transferred readily into the pelagic food web after uptake by prokaryotes and ingestion by grazers. Notable occurrences 24 August 2007: A large buildup of sea foam occurred on the coast of Yamba, northern New South Wales. January–February 2008: Sea foam occurrences at Caloundra and Point Cartwright on Queensland's Sunshine Coast attracted world-wide media attention. December 2011: The coast road at Cleveleys, Lancashire was swamped by meter-high drifts of sea foam. 2012: During live coverage of Hurricane Irene in Ocean City, Maryland, Tucker Barnes was covered in sea foam. 24–25 September 2012: Following storms and high winds, the beach front of the Footdee area of Aberdeen was engulfed with sea foam. 27–28 January 2013: The Sunshine Coast in Queensland, Australia had masses of foam wash up on land from ex-tropical Cyclone Oswald. June 2016: Sea foam occurred across the East coast of Australia, whipped up by storms. 28 March 2017: Sea foam was generated by Cyclone Debbie at Sarina Beach in Queensland, Australia. 16 October 2017: Hurricane Ophelia covered Cleveleys, Lancashire with spume. January 2018: Storm Eleanor causes widespread foam to appear across coastal Europe. 11 October 2019: Subtropical storm Melissa brought sea foam to Nantasket Beach in Hull, Massachusetts. 21 January 2020: Storm Gloria floods Tossa de Mar, Spain, with thick sea foam on top of major flooding. 
11 May 2020: Five surfers die in The Netherlands, presumably upon drowning after becoming disoriented in over 2 meters thick sea foam. 13 July 2020: The Cape Town storm, South Africa See also Aphrodite#Birth References External links April 2007 Storm Photo Gallery , Lane Memorial Library, Hampton, New Hampshire. Accessed 5 November 2010 How foam forms on ocean waves, New Scientist, Issue 1837, 5 September 1992. Article preview. Accessed 5 November 2010. Blanket of white foam covers Aberdeen coast—Guardian video. Accessed 25 September 2012 Sea Foam Video on YouTube Sea foam covering a swimmer, Australia Gold Coast Aquatic ecology Articles containing video clips Chemical oceanography Sea Liquid water Physical oceanography
Sea foam
Physics,Chemistry,Biology
2,972
441,179
https://en.wikipedia.org/wiki/Combustion%20chamber
A combustion chamber is part of an internal combustion engine in which the fuel/air mix is burned. For steam engines, the term has also been used for an extension of the firebox which is used to allow a more complete combustion process. Internal combustion engines In an internal combustion engine, the pressure caused by the burning air/fuel mixture applies direct force to part of the engine (e.g. for a piston engine, the force is applied to the top of the piston), which converts the gas pressure into mechanical energy (often in the form of a rotating output shaft). This contrasts an external combustion engine, where the combustion takes place in a separate part of the engine to where the gas pressure is converted into mechanical energy. Spark-ignition engines In spark ignition engines, such as petrol (gasoline) engines, the combustion chamber is usually located in the cylinder head. The engines are often designed such that the bottom of combustion chamber is roughly in line with the top of the engine block. Modern engines with overhead valves or overhead camshaft(s) use the top of the piston (when it is near top dead centre) as the bottom of the combustion chamber. Above this, the sides and roof of the combustion chamber include the intake valves, exhaust valves and spark plug. This forms a relatively compact combustion chamber without any protrusions to the side (i.e. all of the chamber is located directly above the piston). Common shapes for the combustion chamber are typically similar to one or more half-spheres (such as the hemi, pent-roof, wedge or kidney-shaped chambers). The older flathead engine design uses a "bathtub"-shaped combustion chamber, with an elongated shape that sits above both the piston and the valves (which are located beside the piston). IOE engines combine elements of overhead valve and flathead engines; the intake valve is located above the combustion chamber, while the exhaust valve is located below it. The shape of the combustion chamber, intake ports and exhaust ports are key to achieving efficient combustion and maximising power output. Cylinder heads are often designed to achieve a certain "swirl" pattern (rotational component to the gas flow) and turbulence, which improves the mixing and increases the flow rate of gasses. The shape of the piston top also affects the amount of swirl. Another design feature to promote turbulence for good fuel/air mixing is squish, where the fuel/air mix is "squished" at high pressure by the rising piston. The location of the spark plug is also an important factor, since this is the starting point of the flame front (the leading edge of the burning gasses) which then travels downwards towards the piston. Good design should avoid narrow crevices where stagnant "end gas" can become trapped, reducing the power output of the engine and potentially leading to engine knocking. Most engines use a single spark plug per cylinder, however some (such as the 1986-2009 Alfa Romeo Twin Spark engine) use two spark plugs per cylinder. Compression-ignition engines Compression-ignition engines, such as diesel engines, are typically classified as either: Direct injection, where the fuel is injected into the combustion chamber. Common varieties include unit direct injection and common rail injection. Indirect injection, where the fuel is injected into a swirl chamber or pre-combustion chamber. The fuel ignites as it is injected into this chamber and the burning air/fuel mixture spreads into the main combustion chamber. 
Direct injection engines usually give better fuel economy but indirect injection engines can use a lower grade of fuel. Harry Ricardo was prominent in developing combustion chambers for diesel engines, the best known being the Ricardo Comet. Gas turbine In a continuous flow system, for example a jet engine combustor, the pressure is controlled and the combustion creates an increase in volume. The combustion chamber in gas turbines and jet engines (including ramjets and scramjets) is called the combustor. The combustor is fed with high pressure air by the compression system, adds fuel and burns the mix and feeds the hot, high pressure exhaust into the turbine components of the engine or out the exhaust nozzle. Different types of combustors exist, mainly: Can type: Can combustors are self-contained cylindrical combustion chambers. Each "can" has its own fuel injector, liner, interconnectors, casing. Each "can" gets an air source from an individual opening. Cannular type: Like the can type combustor, can annular combustors have discrete combustion zones contained in separate liners with their own fuel injectors. Unlike the can combustor, all the combustion zones share a common air casing. Annular type: Annular combustors do away with the separate combustion zones and simply have a continuous liner and casing in a ring (the annulus). Rocket engine If the gas velocity changes, thrust is produced, such as in the nozzle of a rocket engine. Steam engines Considering the definition of combustion chamber used for internal combustion engines, the equivalent part of a steam engine would be the firebox, since this is where the fuel is burned. However, in the context of a steam engine, the term "combustion chamber" has also been used for a specific area between the firebox and the boiler. This extension of the firebox is designed to allow a more complete combustion of the fuel, improving fuel efficiency and reducing build-up of soot and scale. The use of this type of combustion chamber in large steam locomotive engines allows the use of shorter firetubes. Micro combustion chambers Micro combustion chambers are devices in which combustion happens in a very small volume; the resulting increase in surface-to-volume ratio plays a vital role in stabilizing the flame. See also Cylinder head Engine displacement Combustor Variable compression ratio References Engine technology Locomotive parts Gas turbine technology
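In a piston engine, the combustion chamber volume at top dead centre is essentially the clearance volume, which together with the swept volume fixes the compression ratio. The article does not give this relationship explicitly; the following Python sketch is an illustrative aside using the standard formula, with bore, stroke, and clearance figures invented for the example.

```python
import math

# Relation between combustion-chamber (clearance) volume and compression ratio
# for a piston engine: CR = (V_swept + V_clearance) / V_clearance.

def swept_volume_cc(bore_mm: float, stroke_mm: float) -> float:
    """Volume displaced by one piston, in cubic centimetres."""
    bore_cm, stroke_cm = bore_mm / 10.0, stroke_mm / 10.0
    return math.pi * (bore_cm / 2.0) ** 2 * stroke_cm

def compression_ratio(swept_cc: float, clearance_cc: float) -> float:
    return (swept_cc + clearance_cc) / clearance_cc

v_swept = swept_volume_cc(bore_mm=86.0, stroke_mm=86.0)   # roughly 500 cc per cylinder
print(f"swept volume: {v_swept:.1f} cc")
print(f"compression ratio: {compression_ratio(v_swept, clearance_cc=55.5):.1f}:1")
```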
Combustion chamber
Technology
1,194
16,179,698
https://en.wikipedia.org/wiki/Isrotel%20Tower
The Isrotel Tower is a hotel located on the beachfront of Tel Aviv, Israel. The tower is 108 meters high, has 29 floors and is operated by the Israeli Isrotel hotel group. A Gvirtzman Architects designed the towers which were completed in 1966, whilst the main core was completed in the 1980s. The diameter of the structure is 29 meters and the tower is constructed on the site of the Gan Rina Theatre. The hotel consists of 90 suites whilst the top floors house 62 apartments. The tower is constructed on a narrow pedestal. The Nakash family purchased the tower for $150 million USD in April 2013. See also List of skyscrapers in Israel Architecture of Israel Tourism in Israel References External links Isrotel Tower Tel Aviv Isrotel Tower Skyscrapers in Tel Aviv Hotels in Tel Aviv Residential buildings completed in 1997 Postmodern architecture Hotels established in 1997 Skyscraper hotels Residential skyscrapers in Israel Skyscrapers in Israel
Isrotel Tower
Engineering
190
769,021
https://en.wikipedia.org/wiki/Bioavailability
In pharmacology, bioavailability is a subcategory of absorption and is the fraction (%) of an administered drug that reaches the systemic circulation. By definition, when a medication is administered intravenously, its bioavailability is 100%. However, when a medication is administered via routes other than intravenous, its bioavailability is lower due to incomplete absorption across the intestinal epithelium and first-pass metabolism. Mathematically, bioavailability equals the ratio of the area under the plasma drug concentration versus time curve (AUC) for the extravascular formulation to the AUC for the intravascular formulation. AUC is used because AUC is proportional to the dose that has entered the systemic circulation. Bioavailability of a drug is an average value; to take population variability into account, the deviation range is shown as ±. To ensure that the drug taker who has poor absorption is dosed appropriately, the bottom value of the deviation range is employed to represent real bioavailability and to calculate the drug dose needed for the drug taker to achieve systemic concentrations similar to the intravenous formulation. To dose without knowing the drug taker's absorption rate, the bottom value of the deviation range is used in order to ensure the intended efficacy, unless the drug is associated with a narrow therapeutic window. For dietary supplements, herbs and other nutrients in which the route of administration is nearly always oral, bioavailability generally designates simply the quantity or fraction of the ingested dose that is absorbed. Definitions In pharmacology Bioavailability is a term used to describe the percentage of an administered dose of a xenobiotic that reaches the systemic circulation. It is denoted by the letter f (or, if expressed in percent, by F). In nutritional science In nutritional science, which covers the intake of nutrients and non-drug dietary ingredients, the concept of bioavailability lacks the well-defined standards associated with the pharmaceutical industry. The pharmacological definition cannot apply to these substances because utilization and absorption is a function of the nutritional status and physiological state of the subject, resulting in even greater differences from individual to individual (inter-individual variation). Therefore, bioavailability for dietary supplements can be defined as the proportion of the administered substance capable of being absorbed and available for use or storage. In both pharmacology and nutrition sciences, bioavailability is measured by calculating the area under curve (AUC) of the drug concentration time profile. In environmental sciences Bioavailability is the measure by which various substances in the environment may enter into living organisms. It is commonly a limiting factor in the production of crops (due to solubility limitation or absorption of plant nutrients to soil colloids) and in the removal of toxic substances from the food chain by microorganisms (due to sorption to or partitioning of otherwise degradable substances into inaccessible phases in the environment). A noteworthy example for agriculture is plant phosphorus deficiency induced by precipitation with iron and aluminum phosphates at low soil pH and precipitation with calcium phosphates at high soil pH. Toxic materials in soil, such as lead from paint, may be rendered unavailable to animals ingesting contaminated soil by supplying phosphorus fertilizers in excess.
Organic pollutants such as solvents or pesticides may be rendered unavailable to microorganisms and thus persist in the environment when they are adsorbed to soil minerals or partition into hydrophobic organic matter. Absolute bioavailability Absolute bioavailability compares the bioavailability of the active drug in systemic circulation following non-intravenous administration (i.e., after oral, buccal, ocular, nasal, rectal, transdermal, subcutaneous, or sublingual administration), with the bioavailability of the same drug following intravenous administration. It is the fraction of exposure to a drug (AUC) through non-intravenous administration compared with the corresponding intravenous administration of the same drug. The comparison must be dose normalized (e.g., account for different doses or varying weights of the subjects); consequently, the amount absorbed is corrected by dividing by the corresponding dose administered. In pharmacology, in order to determine absolute bioavailability of a drug, a pharmacokinetic study must be done to obtain a plasma drug concentration vs time plot for the drug after both intravenous (iv) and extravascular (non-intravenous, i.e., oral) administration. The absolute bioavailability is the dose-corrected area under curve (AUC) non-intravenous divided by AUC intravenous. The formula for calculating the absolute bioavailability, F, of a drug administered orally (po) is F = (AUCpo × Div) / (AUCiv × Dpo), where D is the dose administered and the subscripts denote the oral and intravenous routes. Therefore, a drug given by the intravenous route will have an absolute bioavailability of 100% (f = 1), whereas drugs given by other routes usually have an absolute bioavailability of less than one. If two different dosage forms having the same active ingredient are compared, the resulting measure is called comparative bioavailability. Although knowing the true extent of systemic absorption (referred to as absolute bioavailability) is clearly useful, in practice it is not determined as frequently as one may think. The reason for this is that its assessment requires an intravenous reference; that is, a route of administration that guarantees all of the administered drug reaches systemic circulation. Such studies come at considerable cost, not least of which is the necessity to conduct preclinical toxicity tests to ensure adequate safety, as well as potential problems due to solubility limitations. These limitations may be overcome, however, by administering a very low dose (typically a few micrograms) of an isotopically labelled drug concomitantly with a therapeutic non-isotopically labelled oral dose (the isotopically labelled intravenous dose is sufficiently low so as not to perturb the systemic drug concentrations achieved from the non-labelled oral dose). The intravenous and oral concentrations can then be deconvoluted by virtue of their different isotopic constitution, and can thus be used to determine the oral and intravenous pharmacokinetics from the same dose administration. This technique eliminates pharmacokinetic issues with non-equivalent clearance as well as enabling the intravenous dose to be administered with a minimum of toxicology and formulation. The technique was first applied using stable isotopes such as 13C and mass spectrometry to distinguish the isotopes by mass difference. More recently, 14C labelled drugs are administered intravenously and accelerator mass spectrometry (AMS) is used to measure the isotopically labelled drug along with mass spectrometry for the unlabelled drug.
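The dose-corrected AUC ratio described above lends itself to a one-line calculation. The following Python sketch is illustrative only; the numerical values are invented, not taken from any study cited in the article.

```python
def absolute_bioavailability(auc_po: float, dose_po: float,
                             auc_iv: float, dose_iv: float) -> float:
    """Dose-corrected AUC ratio: F = (AUC_po / dose_po) / (AUC_iv / dose_iv)."""
    return (auc_po / dose_po) / (auc_iv / dose_iv)

# Hypothetical study data (AUC in ng*h/mL, dose in mg)
f = absolute_bioavailability(auc_po=1200.0, dose_po=100.0,
                             auc_iv=800.0, dose_iv=50.0)
print(f"F = {f:.2f}  ({f:.0%})")   # 0.75, i.e. 75% absolute bioavailability
```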
There is no regulatory requirement to define the intravenous pharmacokinetics or absolute bioavailability however regulatory authorities do sometimes ask for absolute bioavailability information of the extravascular route in cases in which the bioavailability is apparently low or variable and there is a proven relationship between the pharmacodynamics and the pharmacokinetics at therapeutic doses. In all such cases, to conduct an absolute bioavailability study requires that the drug be given intravenously. Intravenous administration of a developmental drug can provide valuable information on the fundamental pharmacokinetic parameters of volume of distribution (V) and clearance (CL). Relative bioavailability and bioequivalence In pharmacology, relative bioavailability measures the bioavailability (estimated as the AUC) of a formulation (A) of a certain drug when compared with another formulation (B) of the same drug, usually an established standard, or through administration via a different route. When the standard consists of intravenously administered drug, this is known as absolute bioavailability (see above). Relative bioavailability is one of the measures used to assess bioequivalence (BE) between two drug products. For FDA approval, a generic manufacturer must demonstrate that the 90% confidence interval for the ratio of the mean responses (usually of AUC and the maximum concentration, Cmax) of its product to that of the "brand name drug" is within the limits of 80% to 125%. Where AUC refers to the concentration of the drug in the blood over time t = 0 to t = ∞, Cmax refers to the maximum concentration of the drug in the blood. When Tmax is given, it refers to the time it takes for a drug to reach Cmax. While the mechanisms by which a formulation affects bioavailability and bioequivalence have been extensively studied in drugs, formulation factors that influence bioavailability and bioequivalence in nutritional supplements are largely unknown. As a result, in nutritional sciences, relative bioavailability or bioequivalence is the most common measure of bioavailability, comparing the bioavailability of one formulation of the same dietary ingredient to another. Factors influencing bioavailability The absolute bioavailability of a drug, when administered by an extravascular route, is usually less than one (i.e., F< 100%). Various physiological factors reduce the availability of drugs prior to their entry into the systemic circulation. Whether a drug is taken with or without food will also affect absorption, other drugs taken concurrently may alter absorption and first-pass metabolism, intestinal motility alters the dissolution of the drug and may affect the degree of chemical degradation of the drug by intestinal microflora. Disease states affecting liver metabolism or gastrointestinal function will also have an effect. Other factors may include, but are not limited to: Physical properties of the drug (hydrophobicity, pKa, solubility) The drug formulation (immediate release, excipients used, manufacturing methods, modified release – delayed release, extended release, sustained release, etc.) Whether the formulation is administered in a fed or fasted state Gastric emptying rate Circadian differences Interactions with other drugs/foods: Interactions with other drugs (e.g., antacids, alcohol, nicotine) Interactions with other foods (e.g., grapefruit juice, pomello, cranberry juice, brassica vegetables) Transporters: Substrate of efflux transporters (e.g. 
P-glycoprotein) Health of the gastrointestinal tract Enzyme induction/inhibition by other drugs/foods: Enzyme induction (increased rate of metabolism), e.g., Phenytoin induces CYP1A2, CYP2C9, CYP2C19, and CYP3A4 Enzyme inhibition (decreased rate of metabolism), e.g., grapefruit juice inhibits CYP3A → higher nifedipine concentrations Individual variation in metabolic differences Age: In general, drugs are metabolized more slowly in fetal, neonatal, and geriatric populations Phenotypic differences, enterohepatic circulation, diet, gender Disease state E.g., hepatic insufficiency, poor renal function Each of these factors may vary from patient to patient (inter-individual variation), and indeed in the same patient over time (intra-individual variation). In clinical trials, inter-individual variation is a critical measurement used to assess the bioavailability differences from patient to patient in order to ensure predictable dosing. See also ADME-Tox Biopharmaceutics Classification System Caco-2 Lipinski's Rule of 5 Notes References Sources Pharmacokinetic metrics Medicinal chemistry Life sciences industry
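As a companion to the 80% to 125% bioequivalence limits mentioned in the relative bioavailability section above, here is a minimal Python sketch that checks whether a 90% confidence interval for the test/reference ratio lies within those limits; the interval values are invented for illustration.

```python
BE_LOWER, BE_UPPER = 0.80, 1.25  # regulatory bioequivalence limits for the ratio

def is_bioequivalent(ci_lower: float, ci_upper: float) -> bool:
    """True if the whole 90% CI of the test/reference ratio sits within the limits."""
    return BE_LOWER <= ci_lower and ci_upper <= BE_UPPER

# Hypothetical 90% confidence intervals for an AUC or Cmax ratio
print(is_bioequivalent(0.92, 1.08))  # True  -> within 80% to 125%
print(is_bioequivalent(0.78, 1.02))  # False -> lower bound falls below 80%
```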
Bioavailability
Chemistry,Biology
2,451
56,857,842
https://en.wikipedia.org/wiki/Cosmic-Ray%20Extremely%20Distributed%20Observatory
Cosmic-Ray Extremely Distributed Observatory (CREDO) is a scientific project initiated at the end of August 2016 by Polish scientists from the Institute of Nuclear Physics in Kraków (researchers from the Czech Republic, Slovakia and Hungary also joined the project) whose purpose is the detection of cosmic rays and the search for dark matter. Its aim is to involve as many people as possible in the construction of a global system of cosmic ray detectors, from which it will be possible to examine the essence of dark matter. Having a camera and a GPS module, a smartphone works well as a detector of particles from space. Objective The main objective of CREDO is the detection and analysis of extended cosmic ray phenomena, so-called super-preshowers (SPS), using existing as well as new infrastructure (cosmic-ray observatories, educational detectors, single detectors etc.). The search for ensembles of cosmic ray events initiated by SPS is yet an untouched topic, in contrast to the current state-of-the-art analysis, which is focused on the detection of single cosmic ray events. Theoretical explanation of SPS could be given either within classical (e.g., photon-photon interaction) or exotic (e.g., Super Heavy Dark Matter decay or annihilation) scenarios, thus detection of SPS would provide a better understanding of particle physics, high energy astrophysics and cosmology. The ensembles of cosmic rays can be classified based on the spatial and temporal extent of particles constituting the ensemble. Some classes of SPS are predicted to have huge spatial distribution, a unique signature detectable only with a facility of global size. Since development and commissioning of a completely new facility with such requirements is economically unwarranted and time-consuming, the global analysis goals are achievable when all types of existing detectors are merged into a worldwide network. The idea to use the instruments in operation is based on a novel trigger algorithm: in parallel to looking for neighbour surface detectors receiving the signal simultaneously, one should also look for spatially isolated stations clustered in a small time window. On the other hand, CREDO's strategy is also aimed at an active engagement of a large number of participants, who will contribute to the project by using common electronic devices (e.g. smartphones), capable of detecting cosmic rays. It will help not only in expanding the geographical spread of CREDO, but also in managing a large manpower necessary for a more efficient crowd-sourced pattern recognition scheme to identify and classify SPS. A worldwide network of cosmic-ray detectors could not only become a unique tool to study fundamental physics, it will also provide a number of other opportunities, including space weather or geophysics studies. Among the latter, one can list the potential to predict earthquakes by monitoring the rate of low energy cosmic-ray events. This diversity of potential applications has motivated the researchers to advertise the concept across the astroparticle physics community. Implementation The user must install an application that turns their phone into a cosmic ray detector, connect it to the charger and arrange it horizontally; for example, put it on a table or bedside cabinet. It is also important that the cameras of the device are well covered, for example with a piece of black adhesive tape, and notifications indicated by the blinking of lights are turned off. 
If a radiation particle passes through a photosensitive matrix in the phone, it will stimulate several pixels, which will be noticed by the program that sends information to the server. Thanks to the GPS module, the time and place of the event is also known. All data from smartphones will then be analyzed together in the Academic Computer Center Cyfronet AGH, which will keep participants informed about the progress of the search for signs of high-energy particles. By 2020 the application is still under testing and may not produce the expected results on some mobile devices. Preview of collected data All traces of particles registered by smartphones can be viewed on a dedicated website. Their size and shape depends on the type and energy of the captured particle and the direction from which it came. External links Project page Detected events Polish board English board Video about CREDO CREDO Scientific publications References Astronomy in Poland Astronomy projects Citizen science
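The detection principle described above (a particle lighting up a few pixels on an otherwise dark, covered sensor) can be illustrated with a toy example. The sketch below is not the CREDO application's actual code; it is a minimal NumPy illustration of flagging bright pixels in a dark frame, with an arbitrarily chosen threshold.

```python
import numpy as np

def find_bright_pixels(frame: np.ndarray, threshold: int = 30) -> list[tuple[int, int]]:
    """Return (row, col) positions whose brightness exceeds the threshold.

    With the camera covered, the frame should be almost uniformly dark, so
    any pixel well above the noise floor is a candidate particle hit.
    """
    rows, cols = np.nonzero(frame > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy dark frame with a simulated hit at (120, 200)
frame = np.random.poisson(lam=2, size=(480, 640)).astype(np.uint8)
frame[120, 200] = 180
print(find_bright_pixels(frame))   # expected: [(120, 200)]
```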
Cosmic-Ray Extremely Distributed Observatory
Astronomy
858
26,985,551
https://en.wikipedia.org/wiki/Roman%20lead%20pipe%20inscription
A Roman lead pipe inscription is a Latin inscription on a Roman water pipe made of lead, which provides brief information on its manufacturer and owner, often the reigning emperor himself as the supreme authority. The identification marks were created by full text stamps. Manufacture of pipes Lead, a by-product of the ancient silver smelting process, was produced in the Roman Empire with an estimated peak production of 80,000 metric tons per year, a truly industrial scale. The metal was used along with other materials in the vast water supply network of the Romans for the manufacture of water pipes, particularly for urban plumbing. The method of manufacturing the lead pipes is recorded by Vitruvius and Frontinus. The lead was poured into sheets of a uniform length, which were bent to form a cylinder and soldered at the seam. The lead pipes could range in size from approximately in diameter, depending on the required rate of flow. Creation of inscriptions Since the 19th century, the hypothesis has occasionally been put forward that the Roman inscriptions were created by movable type printing. A recent investigation by the typesetter and linguist Herbert Brekle, however, concludes that all material evidence points to the use of common text stamps. Brekle describes the manufacturing method as follows: Brekle lists the following reasons for the employment of stamps and against that of movable type: for printing on lead sheets the way the Romans created them, it would be much more practical to use single stamp blocks than sets of individual letters, since the latter would be unstable and would have required a clamp or some similar mechanism to maintain the necessary cohesion. Neither impressions of such clamps nor of the fine lines between the individual letters typical for the use of movable type are discernible in the inscriptions. By contrast, the outer rim of one examined stamp block left a raised rectangular edge running around the inscription text, thus providing positive evidence for the use of such a printing device. In addition, evidence of the poor positioning of movable type, such as individual letters tilting to the right or left or deviating from the baseline – something which could have been expected to occur at least in a few extant specimens – is notably absent. In those inscriptions where the letters are not properly aligned, the entire text is blurred, which clearly points to the use of full text stamps. Finally, it needs to be considered that archaeological excavations have never unearthed ancient sets of movable type, whereas moulds with reversed inscription texts for stamp printing have indeed been recovered. See also Plumbing Roman aqueduct References Sources Lanciani, R.: "Topografia di Roma antica. I commentarii di Frontino intorno le acque e gli acquedotti. Silloge epigrafica aquaria", in: Memorie della Reale Accademia dei Lincei, Serie III, Volume IV, Classe di Scienze Morali, Rom 1881 (Reprint: Quasar publishing house, 1975), pp. 215–616 External links Lead pipe Plumbing Inscriptions History of printing
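Because the pipes were made by bending a flat lead sheet into a cylinder, the finished bore follows directly from the width of the sheet (the sheet width becomes the circumference). The following Python sketch simply illustrates that geometric relationship; the sheet widths used are invented examples, not figures from Vitruvius or Frontinus.

```python
import math

def pipe_diameter(sheet_width_cm: float) -> float:
    """Approximate diameter of a pipe rolled from a flat sheet of given width.

    The rolled sheet's width becomes the pipe's circumference, so d = w / pi
    (ignoring the small overlap consumed by the soldered seam).
    """
    return sheet_width_cm / math.pi

for width in (10.0, 20.0, 30.0):   # example sheet widths in centimetres
    print(f"{width:5.1f} cm sheet -> {pipe_diameter(width):4.1f} cm bore")
```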
Roman lead pipe inscription
Engineering
618
72,173,841
https://en.wikipedia.org/wiki/MTT%2068
MTT 68 is a multiple star system located on the outskirts of the HD 97950 cluster in the NGC 3603 star-forming region, about 25,000 light years from Earth. It contains a rare example of an O2If* star which is one of the most luminous and most massive known. MTT 68 was first identified as being associated with NGC 3603 when it was listed as object 68 in a survey of the region by Melnick, Tapia, and Terlevich published in 1989. It is from the centre of the main ionising cluster for NGC 3603. In 2002, it was found to be a strong source of X-rays, indicating that it may be a close binary containing two massive stars. In 2013, it was classified with a spectral type of O2If*, only the second known example after the prototype HD 93129A, also in the constellation of Carina. The spectral class indicates that this is a very hot supergiant star with emission lines of triple-ionised nitrogen stronger than those of doubly-ionised nitrogen. MTT 68 is resolved into a pair of stars apart. The fainter component is 1.2 magnitudes dimmer than the brighter star. Although it is expected that MTT 68 is a binary due to its high x-ray luminosity, the observed companion is too distant to create the x-rays by colliding winds and a third, closer, companion is suspected. Although MTT 68 is catalogued in Gaia Data Release 3, the parallax is too imprecise to give a useful distance. Analysis of the cluster as a whole allows a distance of to be calculated. At that distance, interstellar extinction causes stars to be dimmed by about 6.7 magnitudes and strongly reddened. Correcting for this places both of the component stars near the main sequence within initial masses of at least and respectively. See also NGC 3603-A1 NGC 3603-B References External links NASA Image of the day Carina (constellation) NGC 3603 O-type supergiants
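Using only the two figures quoted in the article (a distance of roughly 25,000 light years and about 6.7 magnitudes of interstellar extinction), one can sketch how much fainter the stars appear than they would at the standard 10 parsec reference distance. The Python example below is illustrative only; the formula is the standard distance modulus, not a value taken from the cited photometry.

```python
import math

LY_PER_PC = 3.2616  # light years per parsec (approximate)

def distance_modulus(distance_ly: float) -> float:
    """m - M = 5 * log10(d / 10 pc), for a distance given in light years."""
    distance_pc = distance_ly / LY_PER_PC
    return 5.0 * math.log10(distance_pc / 10.0)

mu = distance_modulus(25_000.0)   # geometric dimming, about 14.4 mag
extinction = 6.7                  # interstellar extinction quoted in the article
print(f"distance modulus: {mu:.1f} mag")
print(f"total apparent dimming: {mu + extinction:.1f} mag")
```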
MTT 68
Astronomy
427
9,755,564
https://en.wikipedia.org/wiki/Congruence%20lattice%20problem
In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some other lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive lattice is a congruence lattice is true for all distributive lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem. Preliminaries We denote by Con A the congruence lattice of an algebra A, that is, the lattice of all congruences of A under inclusion. The following is a universal-algebraic triviality. It says that for a congruence, being finitely generated is a lattice-theoretical property. Lemma. A congruence of an algebra A is finitely generated if and only if it is a compact element of Con A. As every congruence of an algebra is the join of the finitely generated congruences below it (e.g., every submodule of a module is the union of all its finitely generated submodules), we obtain the following result, first published by Birkhoff and Frink in 1948. Theorem (Birkhoff and Frink 1948). The congruence lattice Con A of any algebra A is an algebraic lattice. While congruences of lattices lose something in comparison to groups, modules, rings (they cannot be identified with subsets of the universe), they also have a property unique among all the other structures encountered yet. Theorem (Funayama and Nakayama 1942). The congruence lattice of any lattice is distributive. This says that α ∧ (β ∨ γ) = (α ∧ β) ∨ (α ∧ γ), for any congruences α, β, and γ of a given lattice. The analogue of this result fails, for instance, for modules, as , as a rule, for submodules A, B, C of a given module. Soon after this result, Dilworth proved the following result. He did not publish the result but it appears as an exercise credited to him in Birkhoff 1948. The first published proof is in Grätzer and Schmidt 1962. Theorem (Dilworth ≈1940, Grätzer and Schmidt 1962). Every finite distributive lattice is isomorphic to the congruence lattice of some finite lattice. It is important to observe that the solution lattice found in Grätzer and Schmidt's proof is sectionally complemented, that is, it has a least element (true for any finite lattice) and for all elements a ≤ b there exists an element x with a ∨ x = b and a ∧ x = 0. It is also in that paper that CLP is first stated in published form, although it seems that the earliest attempts at CLP were made by Dilworth himself. Congruence lattices of finite lattices have been given an enormous amount of attention, for which a reference is Grätzer's 2005 monograph. The congruence lattice problem (CLP): Is every distributive algebraic lattice isomorphic to the congruence lattice of some lattice? The problem CLP has been one of the most intriguing and longest-standing open problems of lattice theory. Some related results of universal algebra are the following. Theorem (Grätzer and Schmidt 1963). Every algebraic lattice is isomorphic to the congruence lattice of some algebra. The lattice Sub V of all subspaces of a vector space V is certainly an algebraic lattice. As the next result shows, these algebraic lattices are difficult to represent. Theorem (Freese, Lampe, and Taylor 1979). 
Let V be an infinite-dimensional vector space over an uncountable field F. Then Con A isomorphic to Sub V implies that A has at least card F operations, for any algebra A. As V is infinite-dimensional, the largest element (unit) of Sub V is not compact. However innocuous it sounds, the compact unit assumption is essential in the statement of the result above, as demonstrated by the following result. Theorem (Lampe 1982). Every algebraic lattice with compact unit is isomorphic to the congruence lattice of some groupoid. Semilattice formulation of CLP The congruence lattice Con A of an algebra A is an algebraic lattice. The (∨,0)-semilattice of compact elements of Con A is denoted by Conc A, and it is sometimes called the congruence semilattice of A. Then Con A is isomorphic to the ideal lattice of Conc A. By using the classical equivalence between the category of all (∨,0)-semilattices and the category of all algebraic lattices (with suitable definitions of morphisms), as it is outlined here, we obtain the following semilattice-theoretical formulation of CLP. Semilattice-theoretical formulation of CLP: Is every distributive (∨,0)-semilattice isomorphic to the congruence semilattice of some lattice? Say that a distributive (∨,0)-semilattice is representable, if it is isomorphic to Conc L, for some lattice L. So CLP asks whether every distributive (∨,0)-semilattice is representable. Many investigations around this problem involve diagrams of semilattices or of algebras. A most useful folklore result about these is the following. Theorem. The functor Conc, defined on all algebras of a given signature, to all (∨,0)-semilattices, preserves direct limits. Schmidt's approach via distributive join-homomorphisms We say that a (∨,0)-semilattice satisfies Schmidt's Condition, if it is isomorphic to the quotient of a generalized Boolean semilattice B under some distributive join-congruence of B. One of the deepest results about representability of (∨,0)-semilattices is the following. Theorem (Schmidt 1968). Any (∨,0)-semilattice satisfying Schmidt's Condition is representable. This raised the following problem, stated in the same paper. Problem 1 (Schmidt 1968). Does any (∨,0)-semilattice satisfy Schmidt's Condition? Partial positive answers are the following. Theorem (Schmidt 1981). Every distributive lattice with zero satisfies Schmidt's Condition; thus it is representable. This result has been improved further as follows, via a very long and technical proof, using forcing and Boolean-valued models. Theorem (Wehrung 2003). Every direct limit of a countable sequence of distributive lattices with zero and (∨,0)-homomorphisms is representable. Other important representability results are related to the cardinality of the semilattice. The following result was prepared for publication by Dobbertin after Huhn's passing away in 1985. The two corresponding papers were published in 1989. Theorem (Huhn 1985). Every distributive (∨,0)-semilattice of cardinality at most ℵ1 satisfies Schmidt's Condition. Thus it is representable. By using different methods, Dobbertin got the following result. Theorem (Dobbertin 1986). Every distributive (∨,0)-semilattice in which every principal ideal is at most countable is representable. Problem 2 (Dobbertin 1983). Is every conical refinement monoid measurable? Pudlák's approach; lifting diagrams of (∨,0)-semilattices The approach of CLP suggested by Pudlák in his 1985 paper is different. It is based on the following result, Fact 4, p. 
100 in Pudlák's 1985 paper, obtained earlier by Yuri L. Ershov as the main theorem in Section 3 of the Introduction of his 1977 monograph. Theorem (Ershov 1977, Pudlák 1985). Every distributive (∨,0)-semilattice is the directed union of its finite distributive (∨,0)-subsemilattices. This means that every finite subset in a distributive (∨,0)-semilattice S is contained in some finite distributive (∨,0)-subsemilattice of S. Now we are trying to represent a given distributive (∨,0)-semilattice S as Conc L, for some lattice L. Writing S as a directed union of finite distributive (∨,0)-subsemilattices Si, for i in a directed partially ordered set I, we are hoping to represent each Si as the congruence lattice of a lattice Li with lattice homomorphisms fij : Li→ Lj, for i ≤ j in I, such that the diagram of all Si with all inclusion maps Si→Sj, for i ≤ j in I, is naturally equivalent to the diagram of all Conc Li with all maps Conc fij : Conc Li→ Conc Lj, for i ≤ j in I; in that case, we say that the diagram lifts (with respect to the Conc functor). If this can be done, then, as we have seen that the Conc functor preserves direct limits, the direct limit L of the lattices Li satisfies Conc L ≅ S. While the problem whether this could be done in general remained open for about 20 years, Pudlák could prove it for distributive lattices with zero, thus extending one of Schmidt's results by providing a functorial solution. Theorem (Pudlák 1985). There exists a direct limits preserving functor Φ, from the category of all distributive lattices with zero and 0-lattice embeddings to the category of all lattices with zero and 0-lattice embeddings, such that ConcΦ is naturally equivalent to the identity. Furthermore, Φ(S) is a finite atomistic lattice, for any finite distributive (∨,0)-semilattice S. This result is improved further, by an even more complex construction, to locally finite, sectionally complemented modular lattices by Růžička in 2004 and 2006. Pudlák asked in 1985 whether his result above could be extended to the whole category of distributive (∨,0)-semilattices with (∨,0)-embeddings. The problem remained open until it was solved in the negative by Tůma and Wehrung. Theorem (Tůma and Wehrung 2006). There exists a diagram D of finite Boolean (∨,0)-semilattices and (∨,0,1)-embeddings, indexed by a finite partially ordered set, that cannot be lifted, with respect to the Conc functor, by any diagram of lattices and lattice homomorphisms. In particular, this implies immediately that CLP has no functorial solution. Furthermore, it follows from deep 1998 results of universal algebra by Kearnes and Szendrei in so-called commutator theory of varieties that the result above can be extended from the variety of all lattices to any variety V such that all Con A, for A in V, satisfy a fixed nontrivial identity in the signature (∨,∧) (in short, with a nontrivial congruence identity). We should also mention that many attempts at CLP were also based on the following result, first proved by Bulman-Fleming and McDowell in 1978 by using a categorical 1974 result of Shannon; see also Goodearl and Wehrung in 2001 for a direct argument. Theorem (Bulman-Fleming and McDowell 1978). Every distributive (∨,0)-semilattice is a direct limit of finite Boolean (∨,0)-semilattices and (∨,0)-homomorphisms. It should be observed that while the transition homomorphisms used in the Ershov-Pudlák Theorem are (∨,0)-embeddings, the transition homomorphisms used in the result above are not necessarily one-to-one, for example when one tries to represent the three-element chain. Practically this does not cause much trouble, and makes it possible to prove the following results. Theorem. 
Every distributive (∨,0)-semilattice of cardinality at most ℵ1 is isomorphic to (1) Conc L, for some locally finite, relatively complemented modular lattice L (Tůma 1998 and Grätzer, Lakser, and Wehrung 2000). (2) The semilattice of finitely generated two-sided ideals of some (not necessarily unital) von Neumann regular ring (Wehrung 2000). (3) Conc L, for some sectionally complemented modular lattice L (Wehrung 2000). (4) The semilattice of finitely generated normal subgroups of some locally finite group (Růžička, Tůma, and Wehrung 2007). (5) The semilattice of finitely generated submodules of some right module over a (non-commutative) ring (Růžička, Tůma, and Wehrung 2007). Congruence lattices of lattices and nonstable K-theory of von Neumann regular rings We recall that for a (unital, associative) ring R, we denote by V(R) the (conical, commutative) monoid of isomorphism classes of finitely generated projective right R-modules. Recall that if R is von Neumann regular, then V(R) is a refinement monoid. Denote by Idc R the (∨,0)-semilattice of finitely generated two-sided ideals of R. We denote by L(R) the lattice of all principal right ideals of a von Neumann regular ring R. It is well known that L(R) is a complemented modular lattice. The following result was observed by Wehrung, building on earlier works mainly by Jónsson and Goodearl. Theorem (Wehrung 1999). Let R be a von Neumann regular ring. Then the (∨,0)-semilattices Idc R and Conc L(R) are both isomorphic to the maximal semilattice quotient of V(R). Bergman proves in a well-known unpublished note from 1986 that any at most countable distributive (∨,0)-semilattice is isomorphic to Idc R, for some locally matricial ring R (over any given field). This result is extended to semilattices of cardinality at most ℵ1 in 2000 by Wehrung, by keeping only the regularity of R (the ring constructed by the proof is not locally matricial). The question whether R could be taken locally matricial in the ℵ1 case remained open for a while, until it was disproved by Wehrung in 2004. Translating back to the lattice world by using the theorem above and using a lattice-theoretical analogue of the V(R) construction, called the dimension monoid, introduced by Wehrung in 1998, yields the following result. Theorem (Wehrung 2004). There exists a distributive (∨,0,1)-semilattice of cardinality ℵ1 that is not isomorphic to Conc L, for any modular lattice L every finitely generated sublattice of which has finite length. Problem 3 (Goodearl 1991). Is the positive cone of any dimension group with order-unit isomorphic to V(R), for some von Neumann regular ring R? A first application of Kuratowski's free set theorem The abovementioned Problem 1 (Schmidt), Problem 2 (Dobbertin), and Problem 3 (Goodearl) were solved simultaneously in the negative in 1998. Theorem (Wehrung 1998). There exists a dimension vector space G over the rationals with order-unit whose positive cone G+ is not isomorphic to V(R), for any von Neumann regular ring R, and is not measurable in Dobbertin's sense. Furthermore, the maximal semilattice quotient of G+ does not satisfy Schmidt's Condition. Moreover, G can be taken of any given cardinality greater than or equal to ℵ2. It follows from the previously mentioned works of Schmidt, Huhn, Dobbertin, Goodearl, and Handelman that the ℵ2 bound is optimal in all three negative results above. As the ℵ2 bound suggests, infinite combinatorics are involved. The principle used is Kuratowski's free set theorem, first published in 1951. 
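In the form needed for these arguments, Kuratowski's free set theorem can be displayed as follows; this is a standard restatement added here for convenience, and the notation [X]^n for the n-element subsets of X and [X]^{<ω} for the finite subsets of X is introduced only for this display.
\[
|X| \ge \aleph_{n}
\quad\Longleftrightarrow\quad
\text{for every map } \Phi\colon [X]^{n} \to [X]^{<\omega} \text{ there is } U \in [X]^{n+1} \text{ with } u \notin \Phi\bigl(U \setminus \{u\}\bigr) \text{ for all } u \in U.
\]
Such a set U is said to be free with respect to Φ.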
Only the case n=2 is used here. The semilattice part of the result above is achieved via an infinitary semilattice-theoretical statement URP (Uniform Refinement Property). If we want to answer Schmidt's problem in the negative, the idea is (1) to prove that any generalized Boolean semilattice satisfies URP (which is easy), (2) to prove that URP is preserved under homomorphic images under weakly distributive homomorphisms (which is also easy), and (3) to prove that there exists a distributive (∨,0)-semilattice of cardinality ℵ2 that does not satisfy URP (which is difficult, and uses Kuratowski's free set theorem). Schematically, the construction in the theorem above can be described as follows. For a set Ω, we consider the partially ordered vector space E(Ω) defined by generators 1 and ai,x, for i<2 and x in Ω, and relations a0,x+a1,x=1, a0,x ≥ 0, and a1,x ≥ 0, for any x in Ω. By using a Skolemization of the theory of dimension groups, we can embed E(Ω) functorially into a dimension vector space F(Ω). The vector space counterexample of the theorem above is G=F(Ω), for any set Ω with at least ℵ2 elements. This counterexample has been modified subsequently by Ploščica and Tůma to a direct semilattice construction. For a (∨,0)-semilattice S, the larger semilattice R(S) is the (∨,0)-semilattice freely generated by new elements t(a,b,c), for a, b, c in S such that c ≤ a ∨ b, subject only to the relations c=t(a,b,c) ∨ t(b,a,c) and t(a,b,c) ≤ a. Iterating this construction gives the free distributive extension D(S) of S. Now, for a set Ω, let L(Ω) be the (∨,0)-semilattice defined by generators 1 and ai,x, for i<2 and x in Ω, and relations a0,x ∨ a1,x=1, for any x in Ω. Finally, put G(Ω)=D(L(Ω)). In most related works, the following uniform refinement property is used. It is a modification of the one introduced by Wehrung in 1998 and 1999. Definition (Ploščica, Tůma, and Wehrung 1998). Let e be an element in a (∨,0)-semilattice S. We say that the weak uniform refinement property WURP holds at e, if for all families (ai | i in I) and (bi | i in I) of elements in S such that ai ∨ bi=e for all i in I, there exists a family (ci,j | i, j in I) of elements of S such that the relations • ci,j ≤ ai and ci,j ≤ bj, • ci,j ∨ aj ∨ bi=e, • ci,k ≤ ci,j ∨ cj,k hold for all i, j, k in I. We say that S satisfies WURP, if WURP holds at every element of S. By building on Wehrung's abovementioned work on dimension vector spaces, Ploščica and Tůma proved that WURP does not hold in G(Ω), for any set Ω of cardinality at least ℵ2. Hence G(Ω) does not satisfy Schmidt's Condition. All negative representation results mentioned here make use of some uniform refinement property, including the first one about dimension vector spaces. However, the semilattices used in these negative results are relatively complicated. The following result, proved by Ploščica, Tůma, and Wehrung in 1998, is more striking, because it shows examples of representable semilattices that do not satisfy Schmidt's Condition. We denote by FV(Ω) the free lattice on Ω in V, for any variety V of lattices. Theorem (Ploščica, Tůma, and Wehrung 1998). The semilattice Conc FV(Ω) does not satisfy WURP, for any set Ω of cardinality at least ℵ2 and any non-distributive variety V of lattices. Consequently, Conc FV(Ω) does not satisfy Schmidt's Condition. It is proved by Tůma and Wehrung in 2001 that Conc FV(Ω) is not isomorphic to Conc L, for any lattice L with permutable congruences. By using a slight weakening of WURP, this result is extended to arbitrary algebras with permutable congruences by Růžička, Tůma, and Wehrung in 2007. 
Hence, for example, if Ω has at least ℵ2 elements, then Conc FV(Ω) is not isomorphic to the normal subgroup lattice of any group, or the submodule lattice of any module. Solving CLP: the Erosion Lemma The following recent theorem solves CLP. Theorem (Wehrung 2007). The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵω+1 elements. Hence the counterexample to CLP had been known for nearly ten years; it is just that nobody knew why it worked! All the results prior to the theorem above made use of some form of permutability of congruences. The difficulty was to find enough structure in congruence lattices of non-congruence-permutable lattices. We shall denote by ε the 'parity function' on the natural numbers, that is, ε(n)=n mod 2, for any natural number n. We let L be an algebra possessing a structure of semilattice (L,∨) such that every congruence of L is also a congruence for the operation ∨. For a subset U of L, we denote by ConcU L the (∨,0)-subsemilattice of Conc L generated by all principal congruences Θ(u,v) ( = least congruence of L that identifies u and v), where (u,v) belongs to U × U. We put Θ+(u,v)=Θ(u ∨ v,v), for all u, v in L. The Erosion Lemma (Wehrung 2007). Let x0, x1 in L and let , for a positive integer n, be a finite subset of L with . Put Then there are congruences , for j<2, such that (Observe the faint formal similarity with first-order resolution in mathematical logic. Could this analogy be pushed further?) The proof of the theorem above runs by setting a structure theorem for congruence lattices of semilattices, namely the Erosion Lemma, against non-structure theorems for free distributive extensions G(Ω), the main one being called the Evaporation Lemma. While the latter are technically difficult, they are, in some sense, predictable. Quite to the contrary, the proof of the Erosion Lemma is elementary and easy, so it is probably the strangeness of its statement that explains why it remained hidden for so long. More is, in fact, proved in the theorem above: For any algebra L with a congruence-compatible structure of join-semilattice with unit and for any set Ω with at least ℵω+1 elements, there is no weakly distributive homomorphism μ: Conc L → G(Ω) containing 1 in its range. In particular, CLP was, after all, not a problem of lattice theory, but rather of universal algebra—even more specifically, semilattice theory! These results can also be translated into terms of a uniform refinement property, denoted by CLR in Wehrung's paper presenting the solution of CLP, which is noticeably more complicated than WURP. Finally, the cardinality bound ℵω+1 has been improved to the optimal bound ℵ2 by Růžička. Theorem (Růžička 2008). The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵ2 elements. Růžička's proof follows the main lines of Wehrung's proof, except that it introduces an enhancement of Kuratowski's Free Set Theorem, called there the existence of free trees, which it uses in the final argument involving the Erosion Lemma. A positive representation result for distributive semilattices The proof of the negative solution for CLP shows that the problem of representing distributive semilattices by compact congruences of lattices already appears for congruence lattices of semilattices. The question whether the structure of partially ordered sets would cause similar problems is answered by the following result. Theorem (Wehrung 2008). 
For any distributive (∨,0)-semilattice S, there are a (∧,0)-semilattice P and a map μ : P × P → S such that the following conditions hold: (1) x ≤ y implies that μ(x,y)=0, for all x, y in P. (2) μ(x,z) ≤ μ(x,y) ∨ μ(y,z), for all x, y, z in P. (3) For all x ≥ y in P and all α, β in S such that μ(x,y) ≤ α ∨ β, there are a positive integer n and elements x=z0 ≥ z1 ≥ ... ≥ z2n=y such that μ(zi,zi+1) ≤ α (resp., μ(zi,zi+1) ≤ β) whenever i < 2n is even (resp., odd). (4) S is generated, as a join-semilattice, by all the elements of the form μ(x,0), for x in P. Furthermore, if S has a largest element, then P can be assumed to be a lattice with a largest element. It is not hard to verify that conditions (1)–(4) above imply the distributivity of S, so the result above gives a characterization of distributivity for (∨,0)-semilattices. Notes References Lattice theory Mathematical problems
Congruence lattice problem
Mathematics
5,866
4,371,417
https://en.wikipedia.org/wiki/Unique%20Material%20Identifier
The Unique Material Identifier (UMID) is an SMPTE standard that provides a stand-alone method for generating a unique label that can be attached to media files and streams. The UMID is standardized in SMPTE 330M. There are two types of UMID: The Basic UMID contains the minimal components necessary for unique identification (the essential metadata); the length of the Basic UMID is 32 octets. The Extended UMID provides information on the creation time and date, the recording location and the name of the organisation and the maker, as well as the components of the Basic UMID; the length of the Extended UMID is 64 octets. This data may be parsed to extract specific information produced at the time the UMID was generated, or the UMID may simply be used as an opaque unique label. References Unique identifiers Broadcasting standards SMPTE standards
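To make the size distinction above concrete, the sketch below models a UMID value purely by its length in octets. It is illustrative only: the class and method names are hypothetical, and the internal field layout defined by SMPTE 330M is not reproduced here; the only facts taken from the text are the 32-octet and 64-octet lengths and that the Extended UMID contains the components of the Basic UMID.

from dataclasses import dataclass

BASIC_UMID_LENGTH = 32     # octets (Basic UMID, per the text above)
EXTENDED_UMID_LENGTH = 64  # octets (Extended UMID, per the text above)

@dataclass(frozen=True)
class UMID:
    """Container for a raw UMID value; SMPTE 330M layout details are intentionally omitted."""
    raw: bytes

    def is_basic(self) -> bool:
        return len(self.raw) == BASIC_UMID_LENGTH

    def is_extended(self) -> bool:
        return len(self.raw) == EXTENDED_UMID_LENGTH

    def basic_portion(self) -> bytes:
        # Assumption for illustration: since the Extended UMID carries the
        # components of the Basic UMID, the first 32 octets are returned here
        # as the basic portion.
        if not (self.is_basic() or self.is_extended()):
            raise ValueError("UMID must be 32 or 64 octets long")
        return self.raw[:BASIC_UMID_LENGTH]

# Usage: a 64-octet value is treated as an Extended UMID.
example = UMID(bytes(EXTENDED_UMID_LENGTH))
assert example.is_extended() and len(example.basic_portion()) == BASIC_UMID_LENGTH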
Unique Material Identifier
Technology
174
10,698,707
https://en.wikipedia.org/wiki/Gleaning%20%28birds%29
Gleaning is a feeding strategy by birds and bats in which they catch invertebrate prey, mainly arthropods, by plucking them from foliage or the ground, from crevices such as rock faces and under the eaves of houses, or even, as in the case of ticks and lice, from living animals. This behavior is contrasted with hawking insects from the air or chasing after moving insects such as ants. Gleaning, in birds, does not refer to foraging for seeds or fruit. Gleaning is a common feeding strategy for some groups of birds, including nuthatches, tits (including chickadees), wrens, woodcreepers, treecreepers, Old World flycatchers, Tyrant flycatchers, babblers, Old World warblers, New World warblers, Vireos and some hummingbirds and cuckoos. Many birds make use of multiple feeding strategies, depending on the availability of different sources of food and opportunities of the moment. Techniques and adaptations Foliage gleaning, the strategy of gleaning over the leaves and branches of trees and shrubs, can involve a variety of styles and maneuvers. Some birds, such as the common chiffchaff of Eurasia and the Wilson's warbler of North America, feed actively and appear energetic. Some will even hover in the air near a leaf or twig while gleaning from it; this behavior is called "hover-gleaning". Other birds are more methodical in their approach to gleaning, even seeming lethargic as they perch upon and deliberately pick over foliage. This behavior is characteristic of the bay-breasted warbler and many vireos. Another tactic is to hang upside-down from the tips of branches to glean the undersides of leaves. Tits such as the familiar black-capped chickadee are often observed feeding in this manner. Some birds, like the ruby-crowned kinglet and red-eyed vireo of North America use a combination of these tactics. Gleaning birds are typically small with compact bodies and have small, sharply pointed bills. These features are even seen in gleaning birds that are not closely related. For example, in flycatchers of the family Tyrannidae, in which some member species are more adapted for hawking insects on the wing and others for gleaning, the gleaners have bills that resemble those of tits and warblers, unlike their larger-billed relatives. Also, some members of the woodpecker family, particularly piculets such as the rufous piculet of Southeast Asia, are similarly adapted for gleaning, with small, compact bodies and sharp bills, rather than the long, supportive tails and wedge-shaped bills more typical of woodpeckers. Birds such as the aforementioned piculet are specialized for gleaning the bark of trees, as are nuthatches, woodcreepers, and treecreepers. Most bark-gleaners work their way up tree trunks or along branches, though nuthatches are well known as the birds that can go the opposite direction, facing down and working their way down the trunk, as well. This requires strong legs and feet on the part of the nuthatch and piculet, while birds that face upwards tend to have stiff tail feathers to prop them up. Birds often specialize in a particular niche, such as a particular stratum of forest or type of vegetation. 
In South and Southeast Asia, for example, the mountain tailorbird is often found gleaning in thickets and stands of bamboo, Abbott's babbler gleans lower-storey foliage in lowland forest, the rufous-chested flycatcher and brown fulvetta are birds of the mid-storey forest, the yellow-breasted warbler gleans in the mid- to upper-storey, and the greater green leafbird specializes in the upper-storey forest. The Javan white-eye is a bird of coastal scrub and mangroves, while the related black-capped white-eye is restricted to montane forest. Further specialization within a habitat is associated with behaviors and morphological adaptations (physical traits of size and shape). Tiny birds are lightweight enough to hang onto the ends of twigs and pluck small prey; the goldcrest of Europe and its counterpart the golden-crowned kinglet of North America exhibit this feeding style. The related common firecrest is very similar in size and shape, but slightly bulkier, and has less of a tendency to glean along twigs and more of a habit of flying from perch to perch. Having a very small bill seems to be good for taking tiny prey from the surfaces of leaves, and small-billed birds such as the blue tit forage in broad-leafed woodlands. The long-billed gnatwren and speckled spinetail of Central and South America, and the ashy tailorbird and striped tit-babbler of South Asia, show a preference for gleaning in tangles of vines. The ash-browed spinetail of South America specializes in gleaning among epiphytes on moss-covered tree branches. Many hummingbirds take small insects from flowers while probing for nectar, and some species glean actively among bark and leaves. The Puerto Rican emerald is one such hummingbird. Found only on the island of Puerto Rico, the female subsists on insects and spiders, while the male has a typical hummingbird diet of nectar. Hummingbirds and other gleaners are also sometimes attracted to the sap wells created by sapsuckers. Sapsuckers, which are in the woodpecker family, drill small holes in living tree branches to get the sap flowing. The sap and the insects it attracts are then consumed, and rufous hummingbirds have been observed to follow the movements of sapsuckers and take advantage of this food source. Clusters of dead leaves also often harbor invertebrate prey, and the Bewick's wren and worm-eating warbler of North America have long bills well-suited for probing them, as do certain Asian babblers, such as the rusty-cheeked scimitar-babbler. In Central and South America, foliage-gleaners such as the red-faced spinetail and buff-throated foliage-gleaner are also examples of birds that glean clusters of dead leaves. Crevice-gleaning is a niche particular to dry and rocky habitats. Adaptations for crevice-gleaning are similar to that of bark-gleaning. Just as the Bewick's wren, as mentioned in the preceding paragraph, has a long bill suited for poking around in the small places of woods and gardens, another North American wren, the canyon wren, has an even longer bill, which allows it to probe crevices in rocky cliffs. It also has skeletal adaptations to aid it in reaching deep into small spaces. These same traits are useful for gleaning the sides of buildings, as well. Another kind of rocky habitat is found along mountain streams, where birds such as the Louisiana waterthrush of North America and the forktails of Asia pick over stream-side rocks and exposed roots for aquatic insects and other moisture-loving prey. 
Other foraging techniques Foraging for invertebrate prey on the ground often involves gleaning the leaf litter of the forest floor, sometimes flicking, flipping, or scratching through dead leaves. Birds can use their bills to flick or toss dead leaves from the ground to reveal prey residing beneath. The leaftossers of Central and South America and the pittas and laughingthrushes of Asia do this. An example of a bird that employs flipping is the ovenbird, a species of North American wood-warbler. It deliberately turns over leaves on the ground to search for spiders, worms, and such underneath. In other parts of the world, similar leaf-flipping behavior has been observed in unrelated birds, such as the jungle babbler of India. Some birds, such as hummingbirds, will use their wings to create a blast of air to roll leaves over. Other birds rake a foot through the leaf litter, like a chicken, for the same purpose. This has been observed in buttonquails. Some American sparrows, such as the green-tailed towhee, perform a double-scratch by raking both legs simultaneously through the leaf-litter. They then catch prey items dislodged by the disturbance. Ground-foraging birds can be very hard for humans to observe, as they often occupy densely vegetated habitat, as in the case of the Bornean wren-babbler, which specializes in gleaning leaf litter in gullies in the forest of Southeast Asia. A feeding technique that is somewhere between gleaning and hawking is where a bird flies from a perch and takes prey off foliage; this is called "sally-gleaning". The pygmy tyrants of South America are tiny flycatchers that feed this way. The todies of the Caribbean employ a distinct version of sally-gleaning. These small birds choose a perch within their lush forest and plantation habitats in the Greater Antilles, from which they scan the undersides of leaves above them. Upon spotting an insect or spider, they fly up in an arcing sally, pluck their prey item without stopping, and complete the arcing movement to land on a new perch. An unusual feeding strategy is that of the oxpeckers of Africa. They perch on living animals and glean parasites from the animals' hides. On furry animals, such as buffalo, giraffe, and donkey, these birds run their bills through the fur of the animal, using a scissors-like motion to extract ticks and lice from near the skin. When they pull the insect out to the end of the fur, they catch it and eat it. (On animals with bare hides, such as rhinoceros and hippopotamus, oxpeckers pick at any open wounds the animals happen to have, consuming blood and pus, and possibly keeping the wounds free of maggots.) Historically, rhinoceros and other large wild mammals have been among the favored hosts, but as the populations of large mammals in the African savanna have declined in modern times, the population and range of both red-billed and yellow-billed oxpecker have also changed, and now the birds will use donkeys and domestic cattle as hosts. There are other tactics. Dippers forage underwater in fast-moving streams. Common grackles have been observed to follow farmers’ plows to glean the grubs exposed in the fresh soil. Similarly, on the island of Borneo, the Bornean ground-cuckoo will follow wild pigs and sun bears as they turn up soil while foraging in the forest. Brewer's blackbirds are often seen in parking lots, where they pick off dead insects from car grilles. Some hummingbirds are known to take prey items from spiderwebs. 
Behavioral implications Gleaning, like other methods of foraging, is a highly visual activity, and as such has some implications for birds. First, to see requires light, and thus time allotted to gleaning is limited to daytime. Second, while a bird focuses on examining an area for prey items, it must necessarily divert its attention from scanning its surroundings for predators. Birds that glean in tree branches will often join together in a flock, and often with other gleaners in a mixed-species foraging flock. It has been shown that individual birds feeding in flocks are able to spend more time looking for food and less time looking for predators. On the other hand, it is not a universal trait of gleaning birds to join with other species or even to be gregarious with their own kind. The leafbirds of Asia are foliage-gleaners, but are often found singly or in pairs. Also, where multiple species of gleaning birds forage in the same area, they may show niche segregation; for example, one species may stick to conifers while another species inhabits broadleaf trees, or they may even divide up a habitat, with smaller species feeding among higher, smaller tree branches and larger species staying on lower, larger branches. References Bird behavior Bird feeding Ornithology
Gleaning (birds)
Biology
2,545
55,270,757
https://en.wikipedia.org/wiki/Aminophosphine
In organophosphorus chemistry, aminophosphines are compounds with the formula R3−nP(NR2)n where R is a hydrogen or organic substituent, and n = 1, 2, or 3. At one extreme, parent compounds such as H2PNH2 are lightly studied and fragile. At the other extreme, tris(dimethylamino)phosphine (P(NMe2)3) is commonly available. Intermediate members are known, such as Ph2PN(H)Ph. Aminophosphines are typically colorless and reactive toward oxygen. Aminophosphines have pyramidal geometry at phosphorus. Parent members The fundamental aminophosphines have the formulae PH3−n(NH2)n (n = 1, 2, or 3). Fundamental aminophosphines cannot be isolated in practical quantities but have been examined theoretically. H2NPH2 is predicted to be more stable than the P(V) tautomer HN=PH3. Derivatives of secondary amines are more straightforward. Trisaminophosphines are made by treating phosphorus trichloride with secondary amines: PCl3 + 6 HNMe2 → (Me2N)3P + 3 [H2NMe2]Cl Aminophosphine chlorides The amination of phosphorus trihalides occurs sequentially, with each amination proceeding more slowly than the one before: PCl3 + 2 HNMe2 → Me2NPCl2 + [H2NMe2]Cl Me2NPCl2 + 2 HNMe2 → (Me2N)2PCl + [H2NMe2]Cl (Me2N)2PCl + 2 HNMe2 → (Me2N)3P + [H2NMe2]Cl Monosubstitution selectivity improves with bulky amines such as diisopropylamine. Commercially available aminophosphine chlorides include dimethylaminophosphorus dichloride and bis(dimethylamino)phosphorus chloride. Methylamine and trifluorophosphine react to give the diphosphine MeN(PF2)2: 2 PF3 + 3 MeNH2 → MeN(PF2)2 + 2 [MeNH3]F MeN(PF2)2 is a bridging ligand in organometallic chemistry. Aminophosphines can also be made from organophosphorus chlorides and amines. Chlorodiphenylphosphine and diethylamine react to give an aminophosphine: Ph2PCl + 2 HNEt2 → Ph2PNEt2 + [H2NEt2]Cl Primary amines react with phosphorus(III) chlorides to give aminophosphines with acidic α-NH centers: Ph2PCl + 2 H2NR → Ph2PN(H)R + [H3NR]Cl Reactions Protonolysis Protic reagents attack the P-N bond. Alcoholysis readily occurs: Ph2PNEt2 + ROH → Ph2POR + HNEt2 The P-N bond reverts to the chloride when treated with anhydrous hydrogen chloride: Ph2PNEt2 + HCl → Ph2PCl + HNEt2 Transamination similarly converts one aminophosphine to another: P(NMe2)3 + R2NH ⇌ P(NR2)(NMe2)2 + HNMe2 With tris(dimethylamino)phosphine, dimethylamine evaporation can drive the equilibrium. Since Grignard reagents do not attack the P-NR2 bond, aminophosphine chlorides are useful reagents in preparing unsymmetrical tertiary phosphines. Illustrative is the conversion of dimethylaminophosphorus dichloride to chlorodimethylphosphine: 2 MeMgBr + Me2NPCl2 → Me2NPMe2 + 2 MgBrCl Me2NPMe2 + 2 HCl → ClPMe2 + [Me2NH2]Cl Also illustrative is the synthesis of 1,2-bis(dichlorophosphino)benzene using (Et2N)2PCl (Et = ethyl). This route gives C6H4[P(NEt2)2]2, which is treated with hydrogen chloride: C6H4[P(NEt2)2]2 + 8 HCl → C6H4(PCl2)2 + 4 [Et2NH2]Cl Conversion to phosphenium salts Diaminophosphorus chlorides and tris(dimethylamino)phosphine are precursors to phosphenium ions of the type [(R2N)2P]+: (R2N)2PCl + AlCl3 → [(R2N)2P]+[AlCl4]− P(NMe2)3 + 2 HOTf → [P(NMe2)2]OTf + [H2NMe2]OTf Oxidation and quaternization Typical aminophosphines oxidize. Alkylation, such as by methyl iodide, gives the corresponding phosphonium cation. Addition to carbonyls In diazaphospholenes the polarity of the P-H bond is inverted compared to traditional secondary phosphines. They have some hydridic character. One manifestation of this polarity is their distinctive reactivity toward benzophenone. 
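As a consistency check on the stepwise amination sequence above (added here for illustration, not taken from a source), the three substitution steps sum to the overall trisaminophosphine equation quoted earlier once the intermediates Me2NPCl2 and (Me2N)2PCl are cancelled:
\[
\begin{aligned}
\mathrm{PCl_3} + 2\,\mathrm{HNMe_2} &\rightarrow \mathrm{Me_2NPCl_2} + [\mathrm{H_2NMe_2}]\mathrm{Cl},\\
\mathrm{Me_2NPCl_2} + 2\,\mathrm{HNMe_2} &\rightarrow \mathrm{(Me_2N)_2PCl} + [\mathrm{H_2NMe_2}]\mathrm{Cl},\\
\mathrm{(Me_2N)_2PCl} + 2\,\mathrm{HNMe_2} &\rightarrow \mathrm{(Me_2N)_3P} + [\mathrm{H_2NMe_2}]\mathrm{Cl},
\end{aligned}
\]
so that, adding the three equations,
\[
\mathrm{PCl_3} + 6\,\mathrm{HNMe_2} \rightarrow \mathrm{(Me_2N)_3P} + 3\,[\mathrm{H_2NMe_2}]\mathrm{Cl}.
\]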
References Functional groups Amides Phosphorus-nitrogen compounds
Aminophosphine
Chemistry
1,177
77,307,051
https://en.wikipedia.org/wiki/Convention%20for%20the%20Protection%20and%20Development%20of%20the%20Marine%20Environment%20of%20the%20Wider%20Caribbean%20Region
The Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region, commonly called the Cartagena Convention, is an international agreement for the protection of the Caribbean Sea, the Gulf of Mexico and a portion of the adjacent Atlantic Ocean. It was adopted on 24 March 1983, entered into force on 11 October 1986 subsequent to its ratification by Antigua and Barbuda, the ninth party to do so, and has been ratified by 26 states. It has been amended by three major protocols: the Protocol Concerning Co-operation in Combating Oil Spills in the Wider Caribbean Region (Oil Spills Protocol), the Protocol Concerning Specially Protected Areas and Wildlife to the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region (SPAW Protocol) and the Protocol Concerning Pollution from Land-Based Sources and Activities to the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region (LBS Protocol). History The United Nations Environment Programme established the Regional Seas Programme in 1974, which works to promote the development of conventions and action plans for protection of 18 designated regional seas, of which the Wider Caribbean is one. The Wider Caribbean Region encompasses the Gulf of Mexico, the Caribbean Sea, the Straits of Florida out to 200 nautical miles from shore and the states and territories whose coastlines abut them. The Cartagena Convention defines the Atlantic boundaries of its convention area as lying south of 30 degrees north latitude and within 200 nautical miles of the Atlantic coasts of participating states. In 1977, the Economic Commission for Latin America and the UNEP collaborated to start preparations for the creation of a regional action plan and establishment of the Caribbean Environment Programme (CEP) for the protection and development of the Wider Caribbean. The Action Plan for the CEP was adopted at a meeting of representatives from 22 regional governments in Montego Bay, Jamaica in 1981, following preparatory meetings of government-nominated experts in Caracas, Venezuela, and Managua, Nicaragua. An impetus for the subsequent creation of the Cartagena Convention was the major oil spill that occurred after two very large crude carriers, tankers SS Atlantic Empress and Aegean Captain collided off Trinidad and Tobago in July 1979. Between the collision itself and the subsequent breakup of the Atlantic Empress near Barbados two weeks later while under tow, it was the largest tanker spill ever, with loss of approximately 286,000 metric tons of oil to the marine environment. One month prior to the collision, the Ixtoc I oil spill began in the Bay of Campeche, which, after the 10 months required to stop the leakage from the blown-out oil well, became the largest oil spill to that point (476,190 metric tons). Approximately 250 spills, incidents that result in the release of greater than 0.17 metric tons of oil, occur annually in the oil-producing Gulf of Mexico and Caribbean Sea according to estimates published in 2007. Even regular ship traffic, such as the cargo vessels passing to and from the Caribbean Sea through the Panama Canal or cruise ships plying routes to islands, can contribute to oil pollution through collisions and discharge of contaminated bilge water that has not been properly separated. 
The Cartagena Convention was the product of the first Conference of Plenipotentiaries on the Protection and Development of the Marine Environment of the Wider Caribbean Region, held in Cartagena, Colombia, between 21 and 24 March 1983. The Convention and its first protocol, the Oil Spills protocol, were concurrently adopted on 24 March 1983 in English, French and Spanish, which are regarded as equally authoritative texts. Subsequent plenipotentiary conferences in 1990, in Kingston, Jamaica, and in 1999, in Oranjestad, Aruba, led to the adoptions of the SPAW Protocol and the LBS Protocol, respectively. Members of the original convention and Oil Spills Protocol can separately ratify the latter two protocols. As of 2021, 18 members have ratified the SPAW Protocol, which entered into force in 2000, and 15 have ratified the LBS Protocol, which entered into force in 2010. Provisions The Cartagena Convention defines ship-based, land-based, seabed activity–derived and airborne pollution sources that can affect the convention area and are regulated by the convention. It stipulates that participants who become aware of a pollution emergency should take measures to stem the pollution and notify other states who have the potential to be impacted, as well as international bodies. It calls for international cooperation between participating states in proactively developing pollution event contingency plans and in conducting research and monitoring. Participating states are also encouraged to define specially protected areas where there are rare or threatened ecosystems or habitat for threatened species. They should conduct environmental impact assessments before undertaking major development projects in coastal areas for effects on marine ecosystems in the convention area. The participants typically meet once every two years. Extraordinary meetings may occur if a request for one is supported by a majority of signatories. Mechanisms for resolving disputes between parties on issues arising in the course of interpretating and implementing the Cartagena Convention are set forth in Article 23 and in an annex to the text. Parties can denounce the Convention or any of its protocols they have ratified two years after the Convention or the specific protocol has gone into effect for them, but if they are no longer contracted to any protocol after their denunciation, they will also be considered to have denounced the Cartagena Convention as a whole. Oil Spills Protocol The Oil Spills Protocol provides details on the implementation of Cartagena Convention provisions with respect to hazardous material releases, including making an inventory of emergency response equipment and expertise related to oil spills. Oil spills are defined by the protocol as an actual or threatened release requiring emergency action to protect health, natural resources, maritime activities (e.g. port operations) and/or historic sites or tourism appeal. A provision for an annex to the protocol extending the definition of hazardous materials to include substances other than oil is included, and until an annex is created, the protocol can be provisionally applied to non-oil hazardous substances. SPAW Protocol The Specially Protected Areas and Wildlife Protocol encourages parties to establish protected areas that conserve ecosystems, natural resources, habitats of endangered, threatened or endemic species and areas of historic, cultural or certain other forms of value. 
It also provides for the creation of buffer zones, areas of more limited protection, around the protected areas. Three annexes to the protocol establish lists of endangered and threatened wildlife: Annex I lists endangered and threatened flora, Annex II lists endangered and threatened fauna and Annex III contains flora and fauna that are in need of protection, but that could be able to be utilized on a "rational and sustainable basis" with conservation measures. In addition to inhabitants of the marine environment, the SPAW Protocol can be applied to selected fauna and flora and ecosystems of coasts and coastal watersheds above the freshwater transition point at the discretion of the party with jurisdiction. The annexes are developed and updated in consultation with an advisory committee and are subject to approval of the parties. Exemptions to strict protections may be provided to support traditional activities of local populations if they do not pose substantial risk to the survival or ecological function of protected species or areas. Guidance is made to limit the introduction of non-indigenous or genetically modified organisms. LBS Protocol The Land-Based Sources and Activities Protocol calls for parties to take action and cooperate to reduce land-based pollution from their territories. It defines ten priority point source categories in its Annex I for targeted mitigation, including from the sugar and mining industries, domestic sewage and from intensive animal farming operations, and lists pollutants of concern. Annex II specifies considerations for source control and management and lists alternative production practices that minimize waste generation. In Annex III, the protocol regulates domestic wastewater discharges in the convention area, including effluent containing grey water. This annex defines Class I waters as being especially sensitive to the effects of domestic wastewater exposure due to biological or ecological characteristics or their use by humans, e.g. for recreation. Class II waters, which are those considered less sensitive to pollution from domestic wastewater, have defined thresholds for total suspended solids, biological oxygen demand, pH and fats, oils and grease in effluent that are less stringent than those for discharges into Class I waters. In neither case should discharges contain visible floatables. It is recommended to parties that treatment plants and effluent outflow points are designed to minimize or entirely avoid effects on Class I waters. Parties are asked to control the amount of nitrogen and phosphorus that they release into the convention area from domestic sewage, and to avoid discharge of toxic chlorine from water treatment systems. Annex IV addresses agricultural non-point source pollution, including provisions for reduction of nitrogen and phosphorus pollution, pesticides and sediment in runoff and pathogens, such as those causing waterborne diseases. Membership As of 2023, the United Kingdom, a party to the convention, has not extended treaty membership to Anguilla or Bermuda, both UK overseas territories. 
Implementation Four Regional Activity Centres (RACs) have been established to help implement the Cartagena Convention and protocols, here listed with the protocol implemented and RAC location in parenthesis: the Regional Marine Pollution Emergency Information and Training Center for the Wider Caribbean (Oil Spills Protocol, Curaçao), The RAC for Specially Protected Areas and Wildlife (SPAW Protocol, Guadeloupe), The Centre of Engineering and Environmental Management of Coasts and Bays (LBS Protocol, Cuba) and The Institute of Marine Affairs (LBS Protocol, Trinidad and Tobago). The Regional Coordinating Unit and Secretariat for the convention are located in Kingston, Jamaica. The Cartagena Convention is administered by the United Nations Environment Programme. The 1981 Action Plan for the Caribbean Environment Programme (CEP) provided for establishment of a trust fund financing costs of implementing the Action Plan in the Caribbean, which opened in September 1983 after fulfilling promised contributions from various countries. Nevertheless, the CEP cited lack of contributions to the trust fund as an obstacle it faced in 2014, along with a very broad scope of tasks to support. Current initiatives of the CEP as of 2023 include a project addressing plastic pollution called The Prevention of Marine Litter in the Caribbean Sea (PROMAR) and projects to restore mangrove forests and coral reefs. In 2011, Justice Winston Anderson of the Caribbean Court of Justice expressed concern that the implementation of the Cartagena Convention had "lost some momentum" due in part to the need for legislation in Caribbean Community (CARICOM) states to implement aspects of the convention in their respective countries. He praised Trinidad and Tobago for its implementation of Cartagena Convention provisions through its Environmental Management Act 2000. See also The Caribbean Cruise ship pollution in the United States Environmental effects of shipping Environmental impacts of tourism in the Caribbean Environmental issues with coral reefs International Convention on Oil Pollution Preparedness, Response and Co-operation International Convention for the Prevention of Pollution of the Sea by Oil MARPOL 73/78 References Further reading External links List of Protected Areas listed under the SPAW Protocol as of 16 November 2023. Retrieved 31 July 2024. SPAW Protocol annexes as revised 3 June 2019 after the 10th Contracting Parties to the SPAW Protocol meeting. Retrieved 30 July 2024. Website of The Caribbean Environment Programme and Cartagena Convention Secretariat, UN Environment Programme. Retrieved 30 July 2024. Website of the Regional Marine Pollution Emergency, Information and Training Centre – Caribe, a Regional Activity Centre of the Caribbean Environment Program. Retrieved 30 July 2024. World Environment Situation Room: Data, Information and Knowledge on the Environment – Cartagena Convention. Retrieved 30 July 2024. 
Environmental treaties Oil spills Treaties extended to the Turks and Caicos Islands Treaties extended to the British Virgin Islands Treaties extended to Montserrat Treaties extended to the Cayman Islands Treaties of Antigua and Barbuda Treaties of the Bahamas Treaties of Barbados Treaties of Belize Treaties of Colombia Treaties of Costa Rica Treaties of Cuba Treaties of the Dominican Republic Treaties of Dominica Treaties of France Treaties of Grenada Treaties of Guatemala Treaties of Guyana Treaties of Honduras Treaties of Jamaica Treaties of Mexico Treaties of the Netherlands Treaties of Nicaragua Treaties of Panama Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of Trinidad and Tobago Treaties of the United Kingdom Treaties of the United States Treaties of Venezuela Treaties extended to the Netherlands Antilles
Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region
Chemistry,Environmental_science
2,497
33,941
https://en.wikipedia.org/wiki/Windows%202000
Windows 2000 is a major release of the Windows NT operating system developed by Microsoft and oriented towards businesses. It is the direct successor to Windows NT 4.0, and was released to manufacturing on December 15, 1999, officially released to retail on February 17, 2000 for all versions, and on September 26, 2000 for Windows 2000 Datacenter Server. It was Microsoft's primary business-oriented operating system until the introduction of Windows XP Professional in 2001. Windows 2000 introduces NTFS 3.0, Encrypting File System, and basic and dynamic disk storage. Support for people with disabilities is improved over Windows NT 4.0 with a number of new assistive technologies, and Microsoft increased support for different languages and locale information. The Windows 2000 Server family has additional features, most notably the introduction of Active Directory, which in the years following became a widely used directory service in business environments. Four editions of Windows 2000 have been released: Professional, Server, Advanced Server, and Datacenter Server; the latter was both released to manufacturing and launched months after the other editions. While each edition of Windows 2000 is targeted at a different market, they share a core set of features, including many system utilities such as the Microsoft Management Console and standard system administration applications. Microsoft marketed Windows 2000 as the most secure Windows version ever at the time; however, it became the target of a number of high-profile virus attacks such as Code Red and Nimda. For ten years after its release, it continued to receive patches for security vulnerabilities nearly every month until reaching the end of support on July 13, 2010, the same day that support ended for Windows XP SP2. Windows 2000 and Windows 2000 Server were succeeded by Windows XP and Windows Server 2003, released in 2001 and 2003, respectively. A port to the Alpha architecture was developed through alpha, beta, and release candidate versions, although it was never released. Its successor, Windows XP, only supports x86, x64 and Itanium processors. Both the original Xbox and the Xbox 360 use a modified version of Windows 2000 as their system software. History Windows 2000, originally named Windows NT 5.0, is a continuation of the Microsoft Windows NT family of operating systems, replacing Windows NT 4.0. Chairman and CEO Bill Gates was originally "pretty confident" Windows NT 5.0 would ship in the first half of 1998, revealing that the first set of beta builds had been shipped in early 1997; these builds were identical to Windows NT 4.0. The first official beta was released in September 1997, followed by Beta 2 in August 1998. On October 27, 1998, Microsoft announced that the name of the final version of the operating system would be Windows 2000, a name which referred to its projected release date. Windows 2000 Beta 3 was released in May 1999. Windows NT 5.0 Beta 1 was similar to Windows NT 4.0, including a very similarly themed logo. Windows NT 5.0 Beta 2 introduced a new 'mini' boot screen, and removed the 'dark space' theme in the logo. The Windows NT 5.0 betas had very long startup and shutdown sounds, though these were changed in the early Windows 2000 beta; during Beta 3, new piano-based startup and shutdown sounds, composed by Steven Ray Allen, were introduced. These were featured in the final version as well as in Windows Me. The new login prompt from the final version made its first appearance in Beta 3 build 1946 (the first build of Beta 3). 
The new, updated icons (for My Computer, Recycle Bin etc.) first appeared in Beta 3 build 1964. The Windows 2000 boot screen in the final version first appeared in Beta 3 build 1983. Windows 2000 did not have an actual codename because, according to Dave Thompson of the Windows NT team, "Jim Allchin didn't like codenames", although Windows 2000 Service Pack 1 was codenamed "Asteroid". During development, builds for the Alpha architecture were compiled, but the project was abandoned in the final stages of development (between RC1 and RC2) after Compaq announced they had dropped support for Windows NT on Alpha. From there, Microsoft issued three release candidates between July and November 1999, and finally released the operating system to partners on December 12, 1999, followed by manufacturing three days later on December 15. The public could buy the full version of Windows 2000 on February 17, 2000. Three days before this event, which Microsoft advertised as "a standard in reliability," a leaked memo from Microsoft, reported on by Mary Jo Foley, revealed that Windows 2000 had "over 63,000 potential known defects." After Foley's article was published, she claimed that Microsoft blacklisted her for a considerable time. However, Abraham Silberschatz et al. claim in their computer science textbook that "Windows 2000 was the most reliable, stable operating system Microsoft had ever shipped to that point. Much of this reliability came from maturity in the source code, extensive stress testing of the system, and automatic detection of many serious errors in drivers." InformationWeek summarized the release: "our tests show the successor to Windows NT 4.0 is everything we hoped it would be. Of course, it isn't perfect either." Wired News later described the results of the February launch as "lackluster." Novell criticized Microsoft's Active Directory, the new directory service architecture, as less scalable or reliable than its own Novell Directory Services (NDS) alternative. Windows 2000 was initially planned to replace both Windows 98 and Windows NT 4.0. However, this would be changed later, as an updated version of Windows 98 called Windows 98 Second Edition was released in 1999. On or shortly before February 12, 2004, "portions of the Microsoft Windows 2000 and Windows NT 4.0 source code were illegally made available on the Internet." The source of the leak was later traced to Mainsoft, a Windows Interface Source Environment partner. Microsoft issued the following statement: "Microsoft source code is both copyrighted and protected as a trade secret. As such, it is illegal to post it, make it available to others, download it or use it." Despite the warnings, the archive containing the leaked code spread widely on the file-sharing networks. On February 16, 2004, an exploit "allegedly discovered by an individual studying the leaked source code" for certain versions of Microsoft Internet Explorer was reported. On April 15, 2015, GitHub took down a repository containing a copy of the Windows NT 4.0 source code that originated from the leak. Microsoft planned to release in 2000 a version of Windows 2000, specially codenamed "Janus", which would run on 64-bit Intel Itanium microprocessors. 
However, the first officially released 64-bit version of Windows was Windows XP 64-Bit Edition, released alongside the 32-bit editions of Windows XP on October 25, 2001, followed by the server versions Windows Datacenter Server Limited Edition and later Windows Advanced Server Limited Edition, which were based on the pre-release Windows Server 2003 (then known as Windows .NET Server) codebase. These editions were released in 2002, were shortly available through the OEM channel and then were superseded by the final versions of Server 2003. New and updated features Windows 2000 introduced many of the new features of Windows 98 and 98 SE into the NT line, such as the Windows Desktop Update, Internet Explorer 5 (Internet Explorer 6, which followed in 2001, is also available for Windows 2000), Outlook Express, NetMeeting, FAT32 support, SSE and SSE2 support, Windows Driver Model, Internet Connection Sharing, Windows Media Player 6.4, WebDAV support etc. Certain new features are common across all editions of Windows 2000, among them NTFS 3.0, the Microsoft Management Console (MMC), UDF support, the Encrypting File System (EFS), Logical Disk Manager, Image Color Management 2.0, support for PostScript 3-based printers, OpenType (.OTF) and Type 1 PostScript (.PFB) font support (including a new font—Palatino Linotype—to showcase some OpenType features), the Data protection API (DPAPI), an LDAP/Active Directory-enabled Address Book, usability enhancements and multi-language and locale support. Windows 2000 also introduced USB device class drivers for USB printers, Mass storage class devices, and improved FireWire SBP-2 support for printers and scanners, along with a Safe removal applet for removable storage devices. Windows 2000 SP4 added native USB 2.0 support, Wireless Zero Configuration support and SSE3 support. Windows 2000 is also the first Windows version to support hibernation at the operating system level (OS-controlled ACPI S4 sleep state) unlike Windows 98 which required special drivers from the hardware manufacturer or driver developer. A new capability designed to protect critical system files called Windows File Protection was introduced. This protects critical Windows system files by preventing programs other than Microsoft's operating system update mechanisms such as the Package Installer, Windows Installer and other update components from modifying them. The System File Checker utility provides users the ability to perform a manual scan of the integrity of all protected system files, and optionally repair them, either by restoring from a cache stored in a separate "DLLCACHE" directory, or from the original install media. Microsoft recognized that a serious error (a Blue Screen of Death or stop error) could cause problems for servers that needed to be constantly running and so provided a system setting that would allow the server to automatically reboot when a stop error occurred. Also included is an option to dump any of the first 64 KB of memory to disk (the smallest amount of memory that is useful for debugging purposes, also known as a minidump), a dump of only the kernel's memory, or a dump of the entire contents of memory to disk, as well as write that this event happened to the Windows 2000 event log. In order to improve performance on servers running Windows 2000, Microsoft gave administrators the choice of optimizing the operating system's memory and processor usage patterns for background services or for applications. 
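The automatic-reboot and memory-dump options described above are stored in system configuration; the short sketch below reads them with Python's standard winreg module. The registry path and value names used here (CrashControl, AutoReboot, CrashDumpEnabled) and the meanings assigned to the dump values are assumptions drawn from common Windows administration practice, not details given in this article.

# Hedged sketch: inspect the crash-control settings discussed above on a
# Windows machine. The registry location and value names are assumptions.
import winreg

CRASH_CONTROL_PATH = r"SYSTEM\CurrentControlSet\Control\CrashControl"

# Assumed meanings: 0 = no dump, 1 = complete memory dump,
# 2 = kernel memory dump, 3 = small (64 KB) memory dump.
DUMP_KINDS = {0: "none", 1: "complete", 2: "kernel", 3: "small (64 KB)"}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CRASH_CONTROL_PATH) as key:
    auto_reboot, _ = winreg.QueryValueEx(key, "AutoReboot")
    dump_setting, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")

print("Automatically restart after a stop error:", bool(auto_reboot))
print("Memory dump type:", DUMP_KINDS.get(dump_setting, f"unknown ({dump_setting})"))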
Windows 2000 also introduced core system administration and management features, such as the Windows Installer, Windows Management Instrumentation and Event Tracing for Windows (ETW) into the operating system. Plug and Play and hardware support improvements The most notable improvement from Windows NT 4.0 is the addition of Plug and Play with full ACPI and Windows Driver Model support. Similar to Windows 9x, Windows 2000 supports automatic recognition of installed hardware, hardware resource allocation, loading of appropriate drivers, PnP APIs and device notification events. The addition of the kernel PnP Manager along with the Power Manager are two significant subsystems added in Windows 2000. Windows 2000 introduced version 3 print drivers (user mode printer drivers) based on Unidrv, which made it easier for printer manufacturers to write device drivers for printers. Generic support for 5-button mice is also included as standard and installing IntelliPoint allows reassigning the programmable buttons. Windows 98 lacked generic support. Driver Verifier was introduced to stress test and catch device driver bugs. Shell Windows 2000 introduces layered windows that allow for transparency, translucency and various transition effects like shadows, gradient fills and alpha-blended GUI elements to top-level windows. Menus support a new Fade transition effect. The Start menu in Windows 2000 introduces personalized menus, expandable special folders and the ability to launch multiple programs without closing the menu by holding down the SHIFT key. A Re-sort button forces the entire Start Menu to be sorted by name. The Taskbar introduces support for balloon notifications which can also be used by application developers. Windows 2000 Explorer introduces customizable Windows Explorer toolbars, auto-complete in Windows Explorer address bar and Run box, advanced file type association features, displaying comments in shortcuts as tooltips, extensible columns in Details view (IColumnProvider interface), icon overlays, integrated search pane in Windows Explorer, sort by name function for menus, and Places bar in common dialogs for Open and Save. Windows Explorer has been enhanced in several ways in Windows 2000. It is the first Windows NT release to include Active Desktop, first introduced as a part of Internet Explorer 4.0 (specifically Windows Desktop Update), and only pre-installed in Windows 98 by that time. It allowed users to customize the way folders look and behave by using HTML templates, having the file extension HTT. This feature was abused by computer viruses that employed malicious scripts, Java applets, or ActiveX controls in folder template files as their infection vector. Two such viruses are VBS/Roor-C and VBS.Redlof.a. The "Web-style" folders view, with the left Explorer pane displaying details for the object currently selected, is turned on by default in Windows 2000. For certain file types, such as pictures and media files, the preview is also displayed in the left pane. Until the dedicated interactive preview pane appeared in Windows Vista, Windows 2000 had been the only Windows release to feature an interactive media player as the previewer for sound and video files, enabled by default. However, such a previewer can be enabled in previous versions of Windows with the Windows Desktop Update installed through the use of folder customization templates. 
The default file tooltip displays file title, author, subject and comments; this metadata may be read from a special NTFS stream, if the file is on an NTFS volume, or from an OLE structured storage stream, if the file is a structured storage document. All Microsoft Office documents since Office 4.0 make use of structured storage, so their metadata is displayable in the Windows 2000 Explorer default tooltip. File shortcuts can also store comments which are displayed as a tooltip when the mouse hovers over the shortcut. The shell introduces extensibility support through metadata handlers, icon overlay handlers and column handlers in Explorer Details view. The right pane of Windows 2000 Explorer, which usually just lists files and folders, can also be customized. For example, the contents of the system folders aren't displayed by default, instead showing in the right pane a warning to the user that modifying the contents of the system folders could harm their computer. It's possible to define additional Explorer panes by using DIV elements in folder template files. This degree of customizability is new to Windows 2000; neither Windows 98 nor the Desktop Update could provide it. The new DHTML-based search pane is integrated into Windows 2000 Explorer, unlike the separate search dialog found in all previous Explorer versions. The Indexing Service has also been integrated into the operating system and the search pane built into Explorer allows searching files indexed by its database. NTFS 3.0 Microsoft released the version 3.0 of NTFS (sometimes incorrectly called "NTFS 5" in relation to the kernel version number) as part of Windows 2000; this introduced disk quotas (provided by QuotaAdvisor), file-system-level encryption, sparse files and reparse points. Sparse files allow for the efficient storage of data sets that are very large yet contain many areas that only have zeros. Reparse points allow the object manager to reset a file namespace lookup and let file system drivers implement changed functionality in a transparent manner. Reparse points are used to implement volume mount points, junctions, Hierarchical Storage Management, Native Structured Storage and Single Instance Storage. Volume mount points and directory junctions allow for a file to be transparently referred from one file or directory location to another. Windows 2000 also introduces a Distributed Link Tracking service to ensure file shortcuts remain working even if the target is moved or renamed. The target object's unique identifier is stored in the shortcut file on NTFS 3.0 and Windows can use the Distributed Link Tracking service for tracking the targets of shortcuts, so that the shortcut file may be silently updated if the target moves, even to another hard drive. Encrypting File System The Encrypting File System (EFS) introduced strong file system-level encryption to Windows. It allows any folder or drive on an NTFS volume to be encrypted transparently by the user. EFS works together with the EFS service, Microsoft's CryptoAPI and the EFS File System Runtime Library (FSRTL). To date, its encryption has not been compromised. EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because it takes less time to encrypt and decrypt large amounts of data than if an asymmetric key cipher were used. 
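The scheme described here, a fast bulk symmetric key (the FEK) protecting the file contents while the FEK itself is protected by the user's public key (as the following paragraph details), is the standard hybrid-encryption pattern. Below is a minimal sketch of that pattern using Python's third-party cryptography package; it is illustrative only and is not how EFS is actually implemented, since EFS uses Windows CryptoAPI with its own algorithms, user certificates and on-disk metadata.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The user's key pair. EFS keeps such keys in a user certificate store;
# here one is simply generated for the illustration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. Encrypt the file contents with a fresh bulk symmetric key (the "FEK" in EFS terms).
fek = Fernet.generate_key()
ciphertext = Fernet(fek).encrypt(b"contents of the file to protect")

# 2. Wrap the FEK with the user's public key; EFS stores the wrapped key in the
#    encrypted file's header so only the key holder (or a recovery agent) can unwrap it.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_fek = public_key.encrypt(fek, oaep)

# 3. To read the file: unwrap the FEK with the private key, then decrypt the contents.
recovered_fek = private_key.decrypt(wrapped_fek, oaep)
assert Fernet(recovered_fek).decrypt(ciphertext) == b"contents of the file to protect"
```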
The symmetric key used to encrypt the file is then encrypted with a public key associated with the user who encrypted the file, and this encrypted data is stored in the header of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key stored in the file header. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. For a user losing access to their key, support for recovery agents that can decrypt files is built into EFS. A Recovery Agent is a user who is authorized by a public key recovery certificate to decrypt files belonging to other users using a special private key. By default, local administrators are recovery agents however they can be customized using Group Policy. Basic and dynamic disk storage Windows 2000 introduced the Logical Disk Manager and the diskpart command line tool for dynamic storage. All versions of Windows 2000 support three types of dynamic disk volumes (along with basic disks): simple volumes, spanned volumes and striped volumes: Simple volume, a volume with disk space from one disk. Spanned volumes, where up to 32 disks show up as one, increasing it in size but not enhancing performance. When one disk fails, the array is destroyed. Some data may be recoverable. This corresponds to SPAN and not to RAID-1. Striped volumes, also known as RAID-0, store all their data across several disks in stripes. This allows better performance because disk reads and writes are balanced across multiple disks. Like spanned volumes, when one disk in the array fails, the entire array is destroyed (some data may be recoverable). In addition to these disk volumes, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server support mirrored volumes and striped volumes with parity: Mirrored volumes, also known as RAID-1, store identical copies of their data on 2 or more identical disks (mirrored). This allows for fault tolerance; in the event one disk fails, the other disk(s) can keep the server operational until the server can be shut down for replacement of the failed disk. Striped volumes with parity, also known as RAID-5, functions similar to striped volumes/RAID-0, except "parity data" is written out across each of the disks in addition to the data. This allows the data to be "rebuilt" in the event a disk in the array needs replacement. Accessibility With Windows 2000, Microsoft introduced the Windows 9x accessibility features for people with visual and auditory impairments and other disabilities into the NT-line of operating systems. These included: StickyKeys: makes modifier keys (ALT, CTRL and SHIFT) become "sticky": a user can press the modifier key, and then release it before pressing the combination key. (Activated by pressing Shift five times quickly.) FilterKeys: a group of keyboard-related features for people with typing issues, including: Slow Keys: Ignore any keystroke not held down for a certain period. Bounce Keys: Ignore repeated keystrokes pressed in quick succession. Repeat Keys: lets users slow down the rate at which keys are repeated via the keyboard's key-repeat feature. Toggle Keys: when turned on, Windows will play a sound when the CAPS LOCK, NUM LOCK or SCROLL LOCK key is pressed. SoundSentry: designed to help users with auditory impairments, Windows 2000 shows a visual effect when a sound is played through the sound system. 
MouseKeys: lets users move the cursor around the screen via the numeric keypad. SerialKeys: lets Windows 2000 support speech augmentation devices. High contrast theme: to assist users with visual impairments. Microsoft Magnifier: a screen magnifier that enlarges a part of the screen the cursor is over. Additionally, Windows 2000 introduced the following new accessibility features: On-screen keyboard: displays a virtual keyboard on the screen and allows users to press its keys using a mouse or a joystick. Microsoft Narrator: introduced in Windows 2000, this is a screen reader that utilizes the Speech API 4, which would later be updated to Speech API 5 in Windows XP Utility Manager: an application designed to start, stop, and manage when accessibility features start. This was eventually replaced by the Ease of Access Center in Windows Vista. Accessibility Wizard: a control panel applet that helps users set up their computer for people with disabilities. Languages and locales Windows 2000 introduced the Multilingual User Interface (MUI). Besides English, Windows 2000 incorporates support for Arabic, Armenian, Baltic, Central European, Cyrillic, Georgian, Greek, Hebrew, Indic, Japanese, Korean, simplified Chinese, Thai, traditional Chinese, Turkic, Vietnamese and Western European languages. It also has support for many different locales. Games Windows 2000 included version 7.0 of the DirectX API, commonly used by game developers on Windows 98. The last version of DirectX that was released for Windows 2000 was DirectX 9.0c (Shader Model 3.0), which shipped with Windows XP Service Pack 2. Microsoft published quarterly updates to DirectX 9.0c through the February 2010 release after which support was dropped in the June 2010 SDK. These updates contain bug fixes to the core runtime and some additional libraries such as D3DX, XAudio 2, XInput and Managed DirectX components. The majority of games written for versions of DirectX 9.0c (up to the February 2010 release) can therefore run on Windows 2000. Windows 2000 included the same games as Windows NT 4.0 did: FreeCell, Minesweeper, Pinball, and Solitaire. System utilities Windows 2000 introduced the Microsoft Management Console (MMC), which is used to create, save, and open administrative tools. Each of these is called a console, and most allow an administrator to administer other Windows 2000 computers from one centralised computer. Each console can contain one or many specific administrative tools, called snap-ins. These can be either standalone (with one function), or an extension (adding functions to an existing snap-in). In order to provide the ability to control what snap-ins can be seen in a console, the MMC allows consoles to be created in author mode or user mode. Author mode allows snap-ins to be added, new windows to be created, all portions of the console tree to be displayed and consoles to be saved. User mode allows consoles to be distributed with restrictions applied. User mode consoles can grant full access to the user for any change, or they can grant limited access, preventing users from adding snapins to the console though they can view multiple windows in a console. Alternatively users can be granted limited access, preventing them from adding to the console and stopping them from viewing multiple windows in a single console. The main tools that come with Windows 2000 can be found in the Computer Management console (in Administrative Tools in the Control Panel). 
This contains the Event Viewer—a means of viewing system or application-related events and the Windows equivalent of a log file, a system information utility, a backup utility, Task Scheduler and management consoles to view open shared folders and shared folder sessions, configure and manage COM+ applications, configure Group Policy, manage all the local users and user groups, and a device manager. It contains Disk Management and Removable Storage snap-ins, a disk defragmenter as well as a performance diagnostic console, which displays graphs of system performance and configures data logs and alerts. It also contains a service configuration console, which allows users to view all installed services and to stop and start them, as well as configure what those services should do when the computer starts. CHKDSK has significant performance improvements. Windows 2000 comes with two utilities to edit the Windows registry, REGEDIT.EXE and REGEDT32.EXE. REGEDIT has been directly ported from Windows 98, and therefore does not support editing registry permissions. REGEDT32 has the older multiple document interface (MDI) and can edit registry permissions in the same manner that Windows NT's REGEDT32 program could. REGEDIT has a left-side tree view of the Windows registry, lists all loaded hives and represents the three components of a value (its name, type, and data) as separate columns of a table. REGEDT32 has a left-side tree view, but each hive has its own window, so the tree displays only keys and it represents values as a list of strings. REGEDIT supports right-clicking of entries in a tree view to adjust properties and other settings. REGEDT32 requires all actions to be performed from the top menu bar. Windows XP is the first system to integrate these two programs into a single utility, adopting the REGEDIT behavior with the additional NT features. The System File Checker (SFC) also comes with Windows 2000. It is a command line utility that scans system files and verifies whether they were signed by Microsoft and works in conjunction with the Windows File Protection mechanism. It can also repopulate and repair all the files in the Dllcache folder. Recovery Console The Recovery Console is run from outside the installed copy of Windows to perform maintenance tasks that can neither be run from within it nor feasibly be run from another computer or copy of Windows 2000. It is usually used to recover the system from problems that cause booting to fail, which would render other tools useless, like Safe Mode or Last Known Good Configuration, or chkdsk. It includes commands like fixmbr, which are not present in MS-DOS. It has a simple command-line interface, used to check and repair the hard drive(s), repair boot information (including NTLDR), replace corrupted system files with fresh copies from the CD, or enable/disable services and drivers for the next boot. The console can be accessed in either of the two ways: Booting from the Windows 2000 CD, and choosing to start the Recovery Console from the CD itself instead of continuing with setup. The Recovery Console is accessible as long as the installation CD is available. Preinstalling the Recovery Console on the hard disk as a startup option in Boot.ini, via WinNT32.exe, with the /cmdcons switch. In this case, it can only be started as long as NTLDR can boot from the system partition. Windows Scripting Host 2.0 Windows 2000 introduced Windows Script Host 2.0 which included an expanded object model and support for logon and logoff scripts. 
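The registry structure that REGEDIT and REGEDT32 expose, hives containing keys and keys containing named values with a type and data, can also be read programmatically. The sketch below uses Python's standard winreg module (a Python facility, not a Windows 2000 component) to read and enumerate values under one well-known key; the key path and value name are conventional Windows locations given purely for illustration.

```python
import winreg  # standard-library access to the Windows registry (Windows-only)

# Open a key under the HKEY_LOCAL_MACHINE hive for read access.
PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
    # Every value under a key has a name, data, and a registry type (REG_SZ, REG_DWORD, ...).
    product, value_type = winreg.QueryValueEx(key, "ProductName")
    print(f"ProductName = {product!r} (type {value_type})")

    # Enumerate the values under the key, as the registry editors' right-hand pane does.
    index = 0
    while True:
        try:
            name, data, vtype = winreg.EnumValue(key, index)
        except OSError:
            break  # no more values under this key
        print(name, vtype, data)
        index += 1
```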
Networking Starting with Windows 2000, the Server Message Block (SMB) protocol directly interfaces with TCP/IP. In Windows NT 4.0, SMB requires the NetBIOS over TCP/IP (NBT) protocol to work on a TCP/IP network. Windows 2000 introduces a client-side DNS caching service. When the Windows DNS resolver receives a query response, the DNS resource record is added to a cache. When it queries the same resource record name again and it is found in the cache, then the resolver does not query the DNS server. This speeds up DNS query time and reduces network traffic. Server family features The Windows 2000 Server family consists of Windows 2000 Server, Windows 2000 Advanced Server, Windows 2000 Small Business Server, and Windows 2000 Datacenter Server. All editions of Windows 2000 Server have the following services and features built in: Routing and Remote Access Service (RRAS) support, facilitating dial-up and VPN connections using IPsec, L2TP or L2TP/IPsec, support for RADIUS authentication in Internet Authentication Service, network connection sharing, Network Address Translation, unicast and multicast routing schemes. Remote access security features: Remote Access Policies for setup, verify Caller ID (IP address for VPNs), callback and Remote access account lockout Autodial by location feature using the Remote Access Auto Connection Manager service Extensible Authentication Protocol support in IAS (EAP-MD5 and EAP-TLS) later upgraded to PEAPv0/EAP-MSCHAPv2 and PEAP-EAP-TLS in Windows 2000 SP4 DNS server, including support for Dynamic DNS. Active Directory relies heavily on DNS. IPsec support and TCP/IP filtering Smart card support Microsoft Connection Manager Administration Kit (CMAK) and Connection Point Services Support for distributed file systems (DFS) Hierarchical Storage Management support including remote storage, a service that runs with NTFS and automatically transfers files that are not used for some time to less expensive storage media Fault tolerant volumes, namely Mirrored and RAID-5 Group Policy (part of Active Directory) IntelliMirror, a collection of technologies for fine-grained management of Windows 2000 Professional clients that duplicates users' data, applications, files, and settings in a centralized location on the network. IntelliMirror employs technologies such as Group Policy, Windows Installer, Roaming profiles, Folder Redirection, Offline Files (also known as Client Side Caching or CSC), File Replication Service (FRS), Remote Installation Services (RIS) to address desktop management scenarios such as user data management, user settings management, software installation and maintenance. COM+, Microsoft Transaction Server and Distributed Transaction Coordinator MSMQ 2.0 TAPI 3.0 Integrated Windows Authentication (including Kerberos, Secure channel and SPNEGO (Negotiate) SSP packages for Security Support Provider Interface (SSPI)). MS-CHAP v2 protocol Public Key Infrastructure (PKI) and Enterprise Certificate Authority support Terminal Services and support for the Remote Desktop Protocol (RDP) Internet Information Services (IIS) 5.0 and Windows Media Services 4.1 Network quality of service features A new Windows Time service which is an implementation of Simple Network Time Protocol (SNTP) as detailed in IETF . The Windows Time service synchronizes the date and time of computers in a domain running on Windows 2000 Server or later. Windows 2000 Professional includes an SNTP client. 
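The client-side DNS caching behaviour described above amounts to answering a repeated query from a local cache until the record's time-to-live expires, and only then asking the DNS server again. The toy sketch below illustrates that logic; the resolver callback and the record it returns are placeholders, not the Windows DNS Client service or its API.

```python
import time

class DnsCache:
    """Toy TTL-based cache illustrating client-side resolver caching."""

    def __init__(self, query_server):
        self._query_server = query_server  # callable: name -> (address, ttl_seconds)
        self._cache = {}                   # name -> (address, expiry_time)

    def resolve(self, name):
        now = time.monotonic()
        hit = self._cache.get(name)
        if hit and hit[1] > now:                 # cached and not yet expired: no network query
            return hit[0]
        address, ttl = self._query_server(name)  # cache miss or expired: ask the server
        self._cache[name] = (address, now + ttl)
        return address

# Placeholder "server" returning fixed answers; a real resolver would send DNS queries.
def fake_server(name):
    records = {"example.com": ("93.184.216.34", 300)}  # address, TTL in seconds
    return records[name]

resolver = DnsCache(fake_server)
print(resolver.resolve("example.com"))  # first call queries the "server" and caches the answer
print(resolver.resolve("example.com"))  # second call is served from the cache until the TTL expires
```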
The Server editions include more features and components, including the Microsoft Distributed File System (DFS), Active Directory support and fault-tolerant storage. Distributed File System The Distributed File System (DFS) allows shares in multiple different locations to be logically grouped under one folder, or DFS root. When users try to access a network share off the DFS root, the user is really looking at a DFS link and the DFS server transparently redirects them to the correct file server and share. A DFS root can only exist on a Windows 2000 version that is part of the server family, and only one DFS root can exist on that server. There can be two ways of implementing a DFS namespace on Windows 2000: either through a standalone DFS root or a domain-based DFS root. Standalone DFS allows for only DFS roots on the local computer, and thus does not use Active Directory. Domain-based DFS roots exist within Active Directory and can have their information distributed to other domain controllers within the domain – this provides fault tolerance to DFS. DFS roots that exist on a domain must be hosted on a domain controller or on a domain member server. The file and root information is replicated via the Microsoft File Replication Service (FRS). Active Directory A new way of organizing Windows network domains, or groups of resources, called Active Directory, is introduced with Windows 2000 to replace Windows NT's earlier domain model. Active Directory's hierarchical nature allowed administrators a built-in way to manage user and computer policies and user accounts, and to automatically deploy programs and updates with a greater degree of scalability and centralization than provided in previous Windows versions. User information stored in Active Directory also provided a convenient phone book-like function to end users. Active Directory domains can vary from small installations with a few hundred objects, to large installations with millions. Active Directory can organise and link groups of domains into a contiguous domain name space to form trees. Groups of trees outside of the same namespace can be linked together to form forests. Active Directory services could always be installed on a Windows 2000 Server Standard, Advanced, or Datacenter computer, and cannot be installed on a Windows 2000 Professional computer. However, Windows 2000 Professional is the first client operating system able to exploit Active Directory's new features. As part of an organization's migration, Windows NT clients continued to function until all clients were upgraded to Windows 2000 Professional, at which point the Active Directory domain could be switched to native mode and maximum functionality achieved. Active Directory requires a DNS server that supports SRV resource records, or that an organization's existing DNS infrastructure be upgraded to support this. There should be one or more domain controllers to hold the Active Directory database and provide Active Directory directory services. Volume fault tolerance Along with support for simple, spanned and striped volumes, the Windows 2000 Server family also supports fault-tolerant volume types. The types supported are mirrored volumes and RAID-5 volumes: Mirrored volumes: the volume contains several disks, and when data is written to one it is also written to the other disks. This means that if one disk fails, the data can be totally recovered from the other disk. Mirrored volumes are also known as RAID-1. 
RAID-5 volumes: a RAID-5 volume consists of multiple disks, and it uses block-level striping with parity data distributed across all member disks. Should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on-the-fly." Deployment Windows 2000 can be deployed to a site via various methods. It can be installed onto servers via traditional media (such as CD) or via distribution folders that reside on a shared folder. Installations can be attended or unattended. During a manual installation, the administrator must specify configuration options. Unattended installations are scripted via an answer file, or a predefined script in the form of an INI file that has all the options filled in. An answer file can be created manually or using the graphical Setup manager. The Winnt.exe or Winnt32.exe program then uses that answer file to automate the installation. Unattended installations can be performed via a bootable CD, using Microsoft Systems Management Server (SMS), via the System Preparation Tool (Sysprep), via the Winnt32.exe program using the /syspart switch or via Remote Installation Services (RIS). The ability to slipstream a service pack into the original operating system setup files is also introduced in Windows 2000. The Sysprep method is started on a standardized reference computer – though the hardware need not be similar – and it copies the required installation files from the reference computer to the target computers. The hard drive does not need to be in the target computer and may be swapped out to it at any time, with the hardware configured later. The Winnt.exe program must also be passed a /unattend switch that points to a valid answer file and a /s file that points to one or more valid installation sources. Sysprep allows the duplication of a disk image on an existing Windows 2000 Server installation to multiple servers. This means that all applications and system configuration settings will be copied across to the new installations, and thus, the reference and target computers must have the same HALs, ACPI support, and mass storage devices – though Windows 2000 automatically detects "plug and play" devices. The primary reason for using Sysprep is to quickly deploy Windows 2000 to a site that has multiple computers with standard hardware. (If a system had different HALs, mass storage devices or ACPI support, then multiple images would need to be maintained.) Systems Management Server can be used to upgrade multiple computers to Windows 2000. These must be running Windows NT 3.51, Windows NT 4.0, Windows 98 or Windows 95 OSR2.x along with the SMS client agent that can receive software installation operations. Using SMS allows installations over a wide area and provides centralised control over upgrades to systems. Remote Installation Services (RIS) are a means to automatically install Windows 2000 Professional (and not Windows 2000 Server) to a local computer over a network from a central server. Images do not have to support specific hardware configurations and the security settings can be configured after the computer reboots as the service generates a new unique security ID (SID) for the machine. This is required so that local accounts are given the right identifier and do not clash with other Windows 2000 Professional computers on a network. 
RIS requires that client computers are able to boot over the network via either a network interface card that has a Pre-Boot Execution Environment (PXE) boot ROM installed or that the client computer has a network card installed that is supported by the remote boot disk generator. The remote computer must also meet the Net PC specification. The server that RIS runs on must be Windows 2000 Server and it must be able to access a network DNS Service, a DHCP service and the Active Directory services. Editions Microsoft released various editions of Windows 2000 for different markets and business needs: Professional, Server, Advanced Server and Datacenter Server. Each was packaged separately. Windows 2000 Professional was designed as the desktop operating system for businesses and power users. It is the client version of Windows 2000. It offers greater security and stability than many of the previous Windows desktop operating systems. It supports up to two processors, and can address up to 4GB of RAM. The system requirements are a Pentium processor (or equivalent) of 133MHz or greater, at least 32MB of RAM, 650MB of hard drive space, and a CD-ROM drive (recommended: Pentium II, 128MB of RAM, 2GB of hard drive space, and CD-ROM drive). However, despite the official minimum processor requirements, it is still possible to install Windows 2000 on 4th-generation x86 CPUs such as the 80486. Windows 2000 Server shares the same user interface with Windows 2000 Professional, but contains additional components for the computer to perform server roles and run infrastructure and application software. A significant new component introduced in the server versions is Active Directory, which is an enterprise-wide directory service based on LDAP (Lightweight Directory Access Protocol). Additionally, Microsoft integrated Kerberos network authentication, replacing the often-criticised NTLM (NT LAN Manager) authentication system used in previous versions. This also provided a purely transitive-trust relationship between Windows 2000 Server domains in a forest (a collection of one or more Windows 2000 domains that share a common schema, configuration, and global catalog, being linked with two-way transitive trusts). Furthermore, Windows 2000 introduced a Domain Name Server which allows dynamic registration of IP addresses. Windows 2000 Server supports up to 4 processors and 4GB of RAM, with a minimum requirement of 128MB of RAM and 1GB hard disk space, however requirements may be higher depending on installed components. Windows 2000 Advanced Server is a variant of Windows 2000 Server operating system designed for medium-to-large businesses. It offers the ability to create clusters of servers, support for up to 8 CPUs, a main memory amount of up to 8GB on Physical Address Extension (PAE) systems and the ability to do 8-way SMP. It supports TCP/IP load balancing and builds on Microsoft Cluster Server (MSCS) in Windows NT Enterprise Server 4.0, adding enhanced functionality for two-node clusters. System requirements are similar to those of Windows 2000 Server, however they may need to be higher to scale to larger infrastructure. Windows 2000 Datacenter Server is a variant of Windows 2000 Server designed for large businesses that move large quantities of confidential or sensitive data frequently via a central server. Like Advanced Server, it supports clustering, failover and load balancing. 
Its minimum system requirements are similar to those of Advanced Server, but it was designed to be capable of handling advanced, fault-tolerant and scalable hardware—for instance computers with up to 32 CPUs and 32 GB of RAM, with rigorous system testing and qualification, hardware partitioning, coordinated maintenance and change control. Windows 2000 Datacenter Server was released to manufacturing on August 11, 2000 and launched on September 26, 2000. This edition was based on Windows 2000 with Service Pack 1 and was not available at retail. Service packs Windows 2000 has received four full service packs and one rollup update package following SP4, which is the last service pack. Microsoft phased out all development of its Java Virtual Machine (JVM) from Windows 2000 in SP3. Internet Explorer 5.01 has also been upgraded to the corresponding service pack level. Service Pack 4 with Update Rollup was released on September 13, 2005, nearly four years following the release of Windows XP and sixteen months prior to the release of Windows Vista. Microsoft had originally intended to release a fifth service pack for Windows 2000, but cancelled this project early in its development, and instead released Update Rollup 1 for SP4, a collection of all the security-related hotfixes and some other significant issues. The Update Rollup does not include all non-security related hotfixes and is not subjected to the same extensive regression testing as a full service pack. Microsoft states that this update will meet customers' needs better than a whole new service pack, and will still help Windows 2000 customers secure their PCs, reduce support costs, and support existing computer hardware. Upgradeability Several Windows 2000 components can be upgraded to later versions, including versions that were introduced in newer releases of Windows, and newer versions of other major Microsoft applications are also available. These latest versions for Windows 2000 include: ActiveSync 4.5 DirectX 9.0c (5 February 2010 Redistributable) Internet Explorer 6 SP1 and Outlook Express 6 SP1 Microsoft Agent 2.0 Microsoft Data Access Components 2.81 Microsoft NetMeeting 3.01 Microsoft Virtual PC 2004 SP1 Office 2003 SP3 MSN Messenger 7.0 (Windows Messenger) MSXML 6.0 SP2 .NET Framework 2.0 SP2 Tweak UI 1.33 Visual C++ 2008 Visual Studio 2005 Windows Desktop Search 2.66 Windows Script Host 5.7 Windows Installer 3.1 Windows Media Format Runtime and Windows Media Player 9 Series (including Windows Media Encoder 7.1 and the Windows Media 8 Encoding Utility) Security During the Windows 2000 period, the nature of attacks on Windows servers changed: more attacks came from remote sources via the Internet. This has led to an overwhelming number of malicious programs exploiting the IIS services – specifically a notorious buffer overflow tendency. This tendency is not operating-system-version specific, but rather configuration-specific: it depends on the services that are enabled. Following this, a common complaint is that "by default, Windows 2000 installations contain numerous potential security problems. Many unneeded services are installed and enabled, and there is no active local security policy." In addition to insecure defaults, according to the SANS Institute, the most common flaws discovered are remotely exploitable buffer overflow vulnerabilities. Other criticized flaws include the use of vulnerable encryption techniques. 
Code Red and Code Red II were famous (and much discussed) worms that exploited vulnerabilities of the Windows Indexing Service of Windows 2000's Internet Information Services (IIS). In August 2003, security researchers estimated that two major worms called Sobig and Blaster infected more than half a million Microsoft Windows computers. The 2005 Zotob worm was blamed for security compromises on Windows 2000 machines at ABC, CNN, the New York Times Company, and the United States Department of Homeland Security. On September 8, 2009, Microsoft skipped patching two of the five security flaws that were addressed in the monthly security update, saying that patching one of the critical security flaws was "infeasible." According to Microsoft Security Bulletin MS09-048: "The architecture to properly support TCP/IP protection does not exist on Microsoft Windows 2000 systems, making it infeasible to build the fix for Microsoft Windows 2000 Service Pack 4 to eliminate the vulnerability. To do so would require re-architecting a very significant amount of the Microsoft Windows 2000 Service Pack 4 operating system, there would be no assurance that applications designed to run on Microsoft Windows 2000 Service Pack 4 would continue to operate on the updated system." No patches for this flaw were released for the newer Windows XP (32-bit) and Windows XP Professional x64 Edition either, despite both also being affected; Microsoft suggested turning on Windows Firewall in those versions. Support lifecycle Windows 2000 and Windows 2000 Server were superseded by newer Microsoft operating systems: Windows 2000 Server products by Windows Server 2003, and Windows 2000 Professional by Windows XP Professional. The Windows 2000 family of operating systems moved from mainstream support to the extended support phase on June 30, 2005. Microsoft says that this marks the progression of Windows 2000 through the Windows lifecycle policy. Under mainstream support, Microsoft freely provides design changes if any, service packs and non-security related updates in addition to security updates, whereas in extended support, service packs are not provided and non-security updates require contacting the support personnel by e-mail or phone. Under the extended support phase, Microsoft continued to provide critical security updates every month for all components of Windows 2000 (including Internet Explorer 5.0 SP4) and paid per-incident support for technical issues. Because of Windows 2000's age, updated versions of components such as Windows Media Player 11 and Internet Explorer 7 have not been released for it. In the case of Internet Explorer, Microsoft said in 2005 that, "some of the security work in IE 7 relies on operating system functionality in XP SP2 that is non-trivial to port back to Windows 2000." While users of Windows 2000 Professional and Server were eligible to purchase the upgrade license for Windows Vista Business or Windows Server 2008, neither of these operating systems can directly perform an upgrade installation from Windows 2000; a clean installation must be performed instead or a two-step upgrade through XP/2003. Microsoft has dropped the upgrade path from Windows 2000 (and earlier) to Windows 7. Users of Windows 2000 must buy a full Windows 7 license. Although Windows 2000 is the last NT-based version of Microsoft Windows which does not include product activation, Microsoft has introduced Windows Genuine Advantage for certain downloads and non-critical updates from the Download Center for Windows 2000. 
Windows 2000 reached the end of its lifecycle (EoL) on July 13, 2010 (alongside Service Pack 2 of Windows XP). It will not receive new security updates and new security-related hotfixes after this date. In Japan, over 130,000 servers and 500,000 PCs in local governments were affected; many local governments said that they will not update as they do not have funds to cover a replacement. As of 2011, Windows Update still supports the Windows 2000 updates available on Patch Tuesday in July 2010, e.g., if older optional Windows 2000 features are enabled later. Microsoft Office products under Windows 2000 have their own product lifecycles. While Internet Explorer 6 for Windows XP did receive security patches up until it lost support, this is not the case for IE6 under Windows 2000. The Windows Malicious Software Removal Tool installed monthly by Windows Update for XP and later versions can be still downloaded manually for Windows 2000. Microsoft in 2020 announced that it would disable the Windows Update service for SHA-1 endpoints and since Windows 2000 did not get an update for SHA-2, Windows Update Services are no longer available on the OS as of late July 2020. However, as of April 2021, the old updates for Windows 2000 are still available on the Microsoft Update Catalog. Total cost of ownership In October 2002, Microsoft commissioned IDC to determine the total cost of ownership (TCO) for enterprise applications on Windows 2000 versus the TCO of the same applications on Linux. IDC's report is based on telephone interviews of IT executives and managers of 104 North American companies in which they determined what they were using for a specific workload for file, print, security and networking services. IDC determined that the four areas where Windows 2000 had a better TCO than Linux – over a period of five years for an average organization of 100 employees – were file, print, network infrastructure and security infrastructure. They determined, however, that Linux had a better TCO than Windows 2000 for web serving. The report also found that the greatest cost was not in the procurement of software and hardware, but in staffing costs and downtime. While the report applied a 40% productivity factor during IT infrastructure downtime, recognizing that employees are not entirely unproductive, it did not consider the impact of downtime on the profitability of the business. The report stated that Linux servers had less unplanned downtime than Windows 2000 servers. It found that most Linux servers ran less workload per server than Windows 2000 servers and also that none of the businesses interviewed used 4-way SMP Linux computers. The report also did not take into account specific application servers – servers that need low maintenance and are provided by a specific vendor. The report did emphasize that TCO was only one factor in considering whether to use a particular IT platform, and also noted that as management and server software improved and became better packaged the overall picture shown could change. See also Architecture of Windows NT BlueKeep (security vulnerability) Comparison of operating systems DEC Multia, one of the DEC Alpha computers capable of running Windows 2000 beta Microsoft Servers, Microsoft's network server software brand Windows Neptune, a cancelled consumer edition based on Windows 2000 References Further reading Bolosky, William J.; Corbin, Scott; Goebel, David; & Douceur, John R. "Single Instance Storage in Windows 2000." Microsoft Research & Balder Technology Group, Inc. 
(white paper). Bozman, Jean; Gillen, Al; Kolodgy, Charles; Kusnetzky, Dan; Perry, Randy; & Shiang, David (October 2002). "Windows 2000 Versus Linux in Enterprise Computing: An assessment of business value for selected workloads." IDC, sponsored by Microsoft Corporation. White paper. Finnel, Lynn (2000). MCSE Exam 70–215, Microsoft Windows 2000 Server. Microsoft Press. . Microsoft. Running Nonnative Applications in Windows 2000 Professional . Windows 2000 Resource Kit. Retrieved May 4, 2005. Microsoft. "Active Directory Data Storage." Retrieved May 9, 2005. Russinovich, Mark (October 1997). "Inside NT's Object Manager." Windows IT Pro. Russinovich, Mark (2002). "Inside Win2K NTFS, Part 1." Windows IT Pro (formerly Windows 2000 Magazine). Saville, John (January 9, 2000). "What is Native Structure Storage?." Windows IT Pro (formerly Windows 2000 Magazine). Trott, Bob (October 27, 1998). "It's official: NT 5.0 becomes Windows 2000." InfoWorld. External links Windows 2000 End-of-Life Windows 2000 Service Pack 4 Windows 2000 Update Rollup 1 Version 2 1999 software 2000 software Products and services discontinued in 2010 Turn of the third millennium IA-32 operating systems 2000 Microsoft Windows
Windows 2000
Technology
10,765
51,563,546
https://en.wikipedia.org/wiki/Avant%20Stellar
The Avant Stellar is a mechanical keyboard that was produced by Creative Vision Technologies Inc (CVT). It was the successor to the popular and successful OmniKey keyboard by Northgate Computers, and was regarded as being very similar to the OmniKey Plus. It is no longer in production. References Footnotes Sources Further reading External links Definition at the PC Magazine Encyclopedia Computer keyboard models
Avant Stellar
Technology
78
1,852,572
https://en.wikipedia.org/wiki/Marangoni%20effect
The Marangoni effect (also called the Gibbs–Marangoni effect) is the mass transfer along an interface between two phases due to a gradient of the surface tension. In the case of temperature dependence, this phenomenon may be called thermo-capillary convection or Bénard–Marangoni convection. History This phenomenon was first identified in the so-called "tears of wine" by physicist James Thomson (Lord Kelvin's brother) in 1855. The general effect is named after Italian physicist Carlo Marangoni, who studied it for his doctoral dissertation at the University of Pavia and published his results in 1865. A complete theoretical treatment of the subject was given by J. Willard Gibbs in his work On the Equilibrium of Heterogeneous Substances (1875–1878). Mechanism Since a liquid with a high surface tension pulls more strongly on the surrounding liquid than one with a low surface tension, the presence of a gradient in surface tension will naturally cause the liquid to flow away from regions of low surface tension. The surface tension gradient can be caused by a concentration gradient or by a temperature gradient (surface tension is a function of temperature). In simple cases, the speed of the flow is $u \approx \Delta\gamma/\mu$, where $\Delta\gamma$ is the difference in surface tension and $\mu$ is the viscosity of the liquid. Water at room temperature has a surface tension of around 0.07 N/m and a viscosity of approximately $10^{-3}$ Pa·s. So even variations of a few percent in the surface tension of water can generate Marangoni flows of almost 1 m/s. Thus Marangoni flows are common and easily observed. For the case of a small drop of surfactant dropped onto the surface of water, Roché and coworkers performed quantitative experiments and developed a simple model that was in approximate agreement with the experiments. This described the expansion in the radius $r$ of a patch of the surface covered in surfactant, due to an outward Marangoni flow at a speed $u$. They found that the speed of expansion of the surfactant-covered patch of the water surface was approximately $u \simeq \left[(\gamma_W-\gamma_S)^2/(\mu\rho r)\right]^{1/3}$, for $\gamma_W$ the surface tension of water, $\gamma_S$ the (lower) surface tension of the surfactant-covered water surface, $\mu$ the viscosity of water, and $\rho$ the mass density of water. For $\gamma_W-\gamma_S \simeq 10^{-2}$ N/m, i.e., of order of tens of percent reduction in the surface tension of water, and as for water $\mu\rho \simeq 1$ N² m⁻⁶ s³, we obtain $u \simeq (10^{-4}/r)^{1/3}$ with $u$ in m/s and $r$ in m. This gives speeds that decrease as the surfactant-covered region grows, but are of order of cm/s to mm/s. The equation is obtained by making a couple of simple approximations, the first of which is equating the stress at the surface due to the concentration gradient of surfactant (which drives the Marangoni flow) with the viscous stresses (that oppose flow). The Marangoni stress is approximately $\Delta\gamma/r$, i.e., the gradient in the surface tension due to the gradient in the surfactant concentration (from high in the centre of the expanding patch, to zero far from the patch). The viscous shear stress is simply the viscosity times the gradient in the shear velocity, $\mu u/l$, for $l$ the depth into the water of the flow due to the spreading patch. Roché and coworkers assume that the momentum (which is directed radially) diffuses down into the liquid during spreading, and so when the patch has reached a radius $r$, $l \simeq (\nu r/u)^{1/2}$, for $\nu = \mu/\rho$ the kinematic viscosity, which is the diffusion constant for momentum in a fluid. Equating the two stresses, $\Delta\gamma/r \simeq \mu u/l \simeq \mu u^{3/2}/(\nu r)^{1/2}$, where we approximated the velocity gradient by $u/l$. Taking the 2/3 power of both sides gives the expression above. 
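As a rough numerical check of the scaling above, the sketch below evaluates $u \simeq [(\gamma_W-\gamma_S)^2/(\mu\rho r)]^{1/3}$ for a few patch radii using the illustrative water values quoted in the text; it is an order-of-magnitude estimate only, not a fit to the Roché et al. experiments.

```python
# Order-of-magnitude estimate of the Marangoni spreading speed discussed above:
#   u ~ ((gamma_w - gamma_s)**2 / (mu * rho * r))**(1/3)

d_gamma = 1e-2   # surface tension reduction, N/m (tens of percent of water's ~0.07 N/m)
mu = 1e-3        # dynamic viscosity of water, Pa*s
rho = 1e3        # density of water, kg/m^3

def spreading_speed(r):
    """Approximate front speed (m/s) when the surfactant-covered patch has radius r (m)."""
    return (d_gamma ** 2 / (mu * rho * r)) ** (1.0 / 3.0)

for r in (1e-3, 1e-2, 1e-1):  # patch radii of 1 mm, 1 cm and 10 cm
    print(f"r = {r * 100:5.1f} cm  ->  u ~ {spreading_speed(r) * 100:5.1f} cm/s")
```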
The Marangoni number, a dimensionless value, can be used to characterize the relative effects of surface tension and viscous forces. Tears of wine As an example, wine may exhibit a visible effect called "tears of wine". The effect is a consequence of the fact that alcohol has a lower surface tension and higher volatility than water. The water/alcohol solution rises up the surface of the glass lowering the surface energy of the glass. Alcohol evaporates from the film leaving behind liquid with a higher surface tension (more water, less alcohol). This region with a lower concentration of alcohol (greater surface tension) pulls on the surrounding fluid more strongly than the regions with a higher alcohol concentration (lower in the glass). The result is the liquid is pulled up until its own weight exceeds the force of the effect, and the liquid drips back down the vessel's walls. This can also be easily demonstrated by spreading a thin film of water on a smooth surface and then allowing a drop of alcohol to fall on the center of the film. The liquid will rush out of the region where the drop of alcohol fell. Significance to transport phenomena Under earth conditions, the effect of gravity causing natural convection in a system with a temperature gradient along a fluid/fluid interface is usually much stronger than the Marangoni effect. Many experiments (ESA MASER 1-3) have been conducted under microgravity conditions aboard sounding rockets to observe the Marangoni effect without the influence of gravity. Research on heat pipes performed on the International Space Station revealed that whilst heat pipes exposed to a temperature gradient on Earth cause the inner fluid to evaporate at one end and migrate along the pipe, thus drying the hot end, in space (where the effects of gravity can be ignored) the opposite happens and the hot end of the pipe is flooded with liquid. This is due to the Marangoni effect, together with capillary action. The fluid is drawn to the hot end of the tube by capillary action. But the bulk of the liquid still ends up as a droplet a short distance away from the hottest part of the tube, explained by Marangoni flow. The temperature gradients in axial and radial directions makes the fluid flow away from the hot end and the walls of the tube, towards the center axis. The liquid forms a droplet with a small contact area with the tube walls, a thin film circulating liquid between the cooler droplet and the liquid at the hot end. The effect of the Marangoni effect on heat transfer in the presence of gas bubbles on the heating surface (e.g., in subcooled nucleate boiling) has long been ignored, but it is currently a topic of ongoing research interest because of its potential fundamental importance to the understanding of heat transfer in boiling. Examples and application A familiar example is in soap films: the Marangoni effect stabilizes soap films. Another instance of the Marangoni effect appears in the behavior of convection cells, the so-called Bénard cells. One important application of the Marangoni effect is the use for drying silicon wafers after a wet processing step during the manufacture of integrated circuits. Liquid spots left on the wafer surface can cause oxidation that damages components on the wafer. 
To avoid spotting, an alcohol vapor (IPA) or other organic compound in gas, vapor, or aerosol form is blown through a nozzle over the wet wafer surface (or at the meniscus formed between the cleaning liquid and wafer as the wafer is lifted from an immersion bath), and the subsequent Marangoni effect causes a surface-tension gradient in the liquid allowing gravity to more easily pull the liquid completely off the wafer surface, effectively leaving a dry wafer surface. A similar phenomenon has been creatively utilized to self-assemble nanoparticles into ordered arrays and to grow ordered nanotubes. An alcohol containing nanoparticles is spread on the substrate, followed by blowing humid air over the substrate. The alcohol is evaporated under the flow. Simultaneously, water condenses and forms microdroplets on the substrate. Meanwhile, the nanoparticles in alcohol are transferred into the microdroplets and finally form numerous coffee rings on the substrate after drying. Another application is the manipulation of particles taking advantage of the relevance of the surface tension effects at small scales. A controlled thermo-capillary convection is created by locally heating the air–water interface using an infrared laser. Then, this flow is used to control floating objects in both position and orientation and can prompt the self-assembly of floating objects, profiting from the Cheerios effect. The Marangoni effect is also important to the fields of welding, crystal growth and electron beam melting of metals. See also Plateau–Rayleigh instability — an instability in a stream of liquid Diffusioosmosis - the Marangoni effect is flow at a fluid/fluid interface due to a gradient in the interfacial free energy, the analog at a fluid/solid interface is diffusioosmosis References External links Motoring Oil Drops Physical Review Focus February 22, 2005 Thin Film Physics, ISS astronaut Don Pettit demonstrate. YouTube-movie. Fluid mechanics Convection Physical phenomena Articles containing video clips
Marangoni effect
Physics,Chemistry,Engineering
1,822
12,503,315
https://en.wikipedia.org/wiki/API%20oil%E2%80%93water%20separator
An API oil–water separator is a device designed to separate gross amounts of oil and suspended solids from industrial wastewater produced at oil refineries, petrochemical plants, chemical plants, natural gas processing plants and other industrial oily water sources. The API separator is a gravity separation device designed using Stokes' law to define the rise velocity of oil droplets based on their density and size. The design is based on the specific gravity difference between the oil and the wastewater because that difference is much smaller than the specific gravity difference between the suspended solids and water. The suspended solids settle to the bottom of the separator as a sediment layer, the oil rises to the top of the separator, and the cleansed wastewater is the middle layer between the oil layer and the solids. The name is derived from the fact that such separators are designed according to standards published by the American Petroleum Institute (API). Description of the design and operation The API separator is a gravity separation device designed using Stokes' law principles that define the rise velocity of oil droplets based on their density, size and water properties. The design of the separator is based on the specific gravity difference between the oil and the wastewater because that difference is much smaller than the specific gravity difference between the suspended solids and water. Based on that design criterion, most of the suspended solids will settle to the bottom of the separator as a sediment layer, the oil will rise to the top of the separator, and the wastewater will be the middle layer between the oil on top and the solids on the bottom. The API design standards, when correctly applied, make adjustments to the geometry, design and size of the separator beyond simple Stokes' law principles. This includes allowances for water flow entrance and exit turbulence losses as well as other factors. API Specification 421 requires a minimum length-to-width ratio of 5:1 and a depth-to-width ratio in the range of 0.3 to 0.5. Typically, the oil layer is skimmed off and subsequently re-processed or disposed of, and the bottom sediment layer is removed by a chain and flight scraper (or similar device) and a sludge pump. The water layer is sent to further treatment for additional removal of any residual oil and then to some type of biological treatment unit for removal of undesirable dissolved chemical compounds. Many oils can be recovered from open water surfaces by skimming devices. Considered a dependable and cheap way to remove oil, grease and other hydrocarbons from water, oil skimmers can sometimes achieve the desired level of water purity. At other times, skimming is also a cost-efficient method to remove most of the oil before using membrane filters and chemical processes. Skimmers will prevent filters from blinding prematurely and keep chemical costs down because there is less oil to process. Because grease skimming involves higher-viscosity hydrocarbons, skimmers must be equipped with heaters powerful enough to keep grease fluid for discharge. If floating grease forms into solid clumps or mats, a spray bar, aerator or mechanical apparatus can be used to facilitate removal. However, hydraulic oils and the majority of oils that have degraded to any extent will also have a soluble or emulsified component that will require further treatment to eliminate. 
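The rise velocity that this gravity design relies on can be estimated from Stokes' law, $v = g d^2 (\rho_{\mathrm{water}} - \rho_{\mathrm{oil}})/(18\mu)$, for a droplet of diameter $d$ in the creeping-flow regime. The sketch below evaluates it for assumed light-oil and water properties, including the roughly 150 micron droplet size often taken as the practical lower limit for this type of separator (see the design limitations discussed below); it is illustrative and is not a sizing calculation to the API 421 standard.

```python
# Stokes' law rise velocity of a small oil droplet in water (creeping flow):
#   v = g * d**2 * (rho_water - rho_oil) / (18 * mu)
# Property values below are illustrative assumptions, not API 421 design inputs.

g = 9.81           # gravitational acceleration, m/s^2
rho_water = 998.0  # density of water, kg/m^3
rho_oil = 850.0    # assumed density of a light oil, kg/m^3
mu = 1.0e-3        # dynamic viscosity of water, Pa*s

def rise_velocity(d):
    """Terminal rise velocity (m/s) of an oil droplet of diameter d (m)."""
    return g * d ** 2 * (rho_water - rho_oil) / (18.0 * mu)

for d_um in (50, 150, 300):  # droplet diameters in microns
    v = rise_velocity(d_um * 1e-6)
    print(f"{d_um:3d} um droplet rises at roughly {v * 1000:.2f} mm/s")
```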
Dissolving or emulsifying oil using surfactants or solvents usually exacerbates the problem rather than solving it, producing wastewater that is more difficult to treat. Design Limitations API-design separators, and similar gravity tanks, are not intended to be effective when any of the following conditions apply to the feed: Mean oil droplet size in the feed is less than 150 microns Oil density is greater than 925 kg/m3 Suspended solids are adhering to the oil, meaning the 'effective' oil density is greater than 925 kg/m3 Water temperature is less than 5 °C There are high levels of dissolved hydrocarbons According to Stokes' law, heavier oils require more retention time. In many cases where refineries have switched to heavier crude slates, the API separator's efficiency has declined. Further treatment of API water discharges Because of these performance limitations, the water discharged from API-type separators usually requires several further processing stages before the treated water can be discharged or reused. Further water treatment is designed to remove oil droplets smaller than 150 microns, dissolved materials and hydrocarbons, heavier oils or other contaminants not removed by the API separator. Secondary treatment technologies include dissolved air flotation (DAF), anaerobic and aerobic biological treatment, parallel plate separators, hydrocyclones, walnut shell filters and media filters. Alternative technologies Plate separators, or coalescing plate separators, are similar to API separators in that they are based on Stokes' law principles, but they include inclined plate assemblies (also known as parallel packs). The underside of each parallel plate provides more surface for suspended oil droplets to coalesce into larger globules. Coalescing plate separators may not be effective in situations where water chemistry or suspended solids restrict or prevent oil droplets from coalescing. In operation it is intended that sediment will slide down the topside of each parallel plate; however, in many practical situations the sediment can adhere to the plates, requiring periodic removal and cleaning. Such separators still depend upon the specific gravity difference between the suspended oil and the water. However, the parallel plates can enhance the degree of oil-water separation for oil droplets above 50 microns in size. Alternatively, parallel plate packs are added to the design of API separators, which then require less space than a conventional API separator to achieve a similar degree of separation. Parallel plate separators are similar to API separators but they include tilted parallel plate assemblies (also known as parallel packs). The parallel plates provide more surface for suspended oil droplets to coalesce into larger globules. Such separators still depend upon the specific gravity difference between the suspended oil and the water. However, the parallel plates enhance the degree of oil-water separation. The result is that a parallel plate separator requires significantly less space than a conventional API separator to achieve the same degree of separation. History The API separator was developed by the API and the Rex Chain Belt Company (now Evoqua). The first API separator was installed in 1933 at the Atlantic Refining Company (ARCO) refinery in Philadelphia. Since that time, virtually all of the refineries worldwide have installed API separators as a first primary stage of their oily wastewater treatment plants. 
The majority of those refineries installed the API separators using the original design based on the specific gravity difference between oil and water. However, many refineries now use plastic parallel plate packing to enhance the gravity separation. Today regulations often require API separators with fixed or floating covers for volatile organic compound (VOC) control. Also, most API separators must be above ground for spill detection. Other oil–water separation applications There are other applications requiring oil-water separation. For example: Oily water separators (OWS) for separating oil from the bilge water accumulated in ships as required by the international MARPOL Convention. Oil and water separators are commonly used in electrical substations. The transformers found in substations use a large amount of oil for cooling purposes. Moats are constructed surrounding unenclosed substations to catch any leaked oil, but these will also catch rainwater. Oil and water separators therefore provide a quicker and easier cleanup of an oil leak. See also Pollution Wastewater Industrial wastewater treatment Industrial water treatment Centrifugal water–oil separator Induced gas flotation Wescorp Energy References External links Photographs, drawings and design discussion of gravimetric API Separators Oil/Water Separators Diagrams and description of separators using plastic parallel plate packing. Oil-in-water Separation Good discussion and explanation of wastewater treatment processes. Monroe Environmental API Separators Manufacturer, drawings, photographs, diagrams, case studies, and descriptions. Oil Water Separators Features, Case Studies, Technology, Photos AFL Industries Manufacturer. OWS descriptions and drawings Oil refineries Liquid-liquid separation Waste treatment technology Chemical equipment oil–water separator Industrial water treatment
API oil–water separator
Chemistry,Engineering
1,697
3,104,018
https://en.wikipedia.org/wiki/Thomson%20problem
The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms. Related problems include the study of the geometry of the minimum energy configuration and the study of the large-$N$ behavior of the minimum energy. Mathematical statement The electrostatic interaction energy occurring between each pair of electrons of equal charges ($q_i = q_j = e$, with $e$ the elementary charge of an electron) is given by Coulomb's law, $U_{ij} = \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}$, where $\varepsilon_0$ is the electric constant and $r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|$ is the distance between each pair of electrons located at points on the sphere defined by vectors $\mathbf{r}_i$ and $\mathbf{r}_j$, respectively. Simplified units of $q_i = 1$ and $k_e = \frac{1}{4\pi\varepsilon_0} = 1$ (the Coulomb constant) are used without loss of generality. Then, $U_{ij} = \frac{1}{r_{ij}}$. The total electrostatic potential energy of each N-electron configuration may then be expressed as the sum of all pair-wise interaction energies, $U(N) = \sum_{i<j} \frac{1}{r_{ij}}$. The global minimization of $U(N)$ over all possible configurations of N distinct points is typically found by numerical minimization algorithms. Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale, "Distribution of points on the 2-sphere". The main difference is that in Smale's problem the function to minimise is not the electrostatic potential $U(N)$ but a logarithmic potential given by $V(N) = \sum_{i<j} \log \frac{1}{r_{ij}}$. A second difference is that Smale's question is about the asymptotic behaviour of the total potential when the number N of points goes to infinity, not for concrete values of N. Example The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, $r_{12} = 2$, so that $U(2) = \tfrac{1}{2}$. Known exact solutions Mathematically exact minimum energy configurations have been rigorously identified in only a handful of cases. For N = 1, the solution is trivial. The single electron may reside at any point on the surface of the unit sphere. The total energy of the configuration is defined as zero because the charge of the electron is subject to no electric field due to other sources of charge. For N = 2, the optimal configuration consists of electrons at antipodal points. This represents the first one-dimensional solution. For N = 3, electrons reside at the vertices of an equilateral triangle about any great circle. The great circle is often considered to define an equator about the sphere, and the two points perpendicular to its plane are often considered poles to aid in discussions about the electrostatic configurations of many-N electron solutions. Also, this represents the first two-dimensional solution. For N = 4, electrons reside at the vertices of a regular tetrahedron. Of interest, this represents the first three-dimensional solution. For N = 5, a mathematically rigorous computer-aided solution was reported in 2018 with electrons residing at vertices of a triangular dipyramid. Of interest, it is impossible for any N solution with five or more electrons to exhibit global equidistance among all pairs of electrons. For N = 6, electrons reside at vertices of a regular octahedron. The configuration may be imagined as four electrons residing at the corners of a square about the equator and the remaining two residing at the poles.
For N = 12, electrons reside at the vertices of a regular icosahedron. Geometric solutions of the Thomson problem for N = 4, 6, and 12 electrons are Platonic solids whose faces are all congruent equilateral triangles. Numerical solutions for N = 8 and 20 are not the regular convex polyhedral configurations of the remaining two Platonic solids, the cube and dodecahedron respectively. Generalizations One can also ask for ground states of particles interacting with arbitrary potentials. To be mathematically precise, let f be a decreasing real-valued function, and define the energy functional $E_f(x_1, \ldots, x_N) = \sum_{i<j} f(|x_i - x_j|)$. Traditionally, one considers $f(r) = r^{-\alpha}$, also known as Riesz $\alpha$-kernels. For integrable Riesz kernels see the 1972 work of Landkof. For non-integrable Riesz kernels, the Poppy-seed bagel theorem holds, see the 2004 work of Hardin and Saff. Notable cases include: α = ∞, the Tammes problem (packing); α = 1, the Thomson problem; α = 0, to maximize the product of distances, latterly known as Whyte's problem; α = −1, the maximum average distance problem. One may also consider configurations of N points on a sphere of higher dimension. See spherical design. Solution algorithms Several algorithms have been applied to this problem. The focus since the millennium has been on local optimization methods applied to the energy function, although random walks have made their appearance: constrained global optimization (Altschuler et al. 1994), steepest descent (Claxton and Benson 1966, Erber and Hockney 1991), random walk (Weinrach et al. 1990), and genetic algorithms (Morris et al. 1996). While the objective is to minimize the global electrostatic potential energy of each N-electron case, several algorithmic starting cases are of interest. Continuous spherical shell charge The energy of a continuous spherical shell of charge distributed across its surface is given by $U_{\text{shell}}(N) = \frac{N^2}{2}$ and is, in general, greater than the energy of every Thomson problem solution. Note: Here N is used as a continuous variable that represents the infinitely divisible charge, Q, distributed across the spherical shell. For example, a spherical shell of $N = 1$ represents the uniform distribution of a single electron's charge, $e$, across the entire shell. Randomly distributed point charges The expected global energy of a system of electrons distributed in a purely random manner across the surface of the sphere is given by $\langle U_{\text{random}}(N)\rangle = \frac{N(N-1)}{2}$ and is, in general, greater than the energy of every Thomson problem solution. Here, N is a discrete variable that counts the number of electrons in the system. As well, $\langle U_{\text{random}}(N)\rangle = U_{\text{shell}}(N) - \frac{N}{2}$. Charge-centered distribution For every Nth solution of the Thomson problem there is an (N + 1)th configuration that includes an electron at the origin of the sphere whose energy is simply the addition of N to the energy of the Nth solution. That is, $U_0(N+1) = U(N) + N$. Thus, if $U(N)$ is known exactly, then $U_0(N+1)$ is known exactly. In general, $U_0(N+1)$ is greater than the true minimum $U(N+1)$, but it is remarkably closer to each (N + 1)th Thomson solution than $U_{\text{shell}}(N+1)$ and $\langle U_{\text{random}}(N+1)\rangle$. Therefore, the charge-centered distribution represents a smaller "energy gap" to cross to arrive at a solution of each Thomson problem than algorithms that begin with the other two charge configurations. Relations to other scientific problems The Thomson problem is a natural consequence of J. J. Thomson's plum pudding model in the absence of its uniform positive background charge.
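As a concrete illustration of the local-optimization approach and the reference energies described above, the following Python sketch evaluates $U(N)$ for the exact low-N configurations and then minimizes $U(N)$ for N = 12 by simple projected gradient descent from a random start, comparing the result with the continuous-shell energy $N^2/2$ and the random-configuration expectation $N(N-1)/2$. It is a minimal sketch for illustration only, not a reimplementation of any of the cited algorithms; the coordinates, step size, and iteration count are assumptions chosen for the example.

```python
# Minimal sketch (illustration only): evaluating U(N) = sum_{i<j} 1/r_ij for the
# exact low-N configurations, and minimizing U(N) for a chosen N by projected
# gradient descent on the unit sphere. Step size and iteration count are
# arbitrary choices, not values from the literature cited above.
import numpy as np

def thomson_energy(x):
    """Total Coulomb energy of unit charges at positions x (shape (N, 3))."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return float((1.0 / d[iu]).sum())

# Exact configurations mentioned above (standard unit-sphere coordinates, assumed here).
pair = np.array([[0, 0, 1], [0, 0, -1]], dtype=float)
triangle = np.array([[np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3), 0.0]
                     for k in range(3)])
tetrahedron = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
octahedron = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
for name, cfg in [("N=2", pair), ("N=3", triangle),
                  ("N=4", tetrahedron), ("N=6", octahedron)]:
    print(f"{name}: U = {thomson_energy(cfg):.6f}")
# Expected: 0.5, 1.732051 (= sqrt(3)), 3.674235, 9.985281

def minimize_thomson(n, steps=20000, lr=1e-3, seed=0):
    """Crude projected gradient descent; returns (positions, energy)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)        # start on the sphere
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]              # pairwise separation vectors
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)
        force = (diff / d[..., None] ** 3).sum(axis=1)    # negative gradient of U at each x_i
        x += lr * force                                    # move downhill in energy
        x /= np.linalg.norm(x, axis=1, keepdims=True)      # project back onto the sphere
    return x, thomson_energy(x)

n = 12
_, u = minimize_thomson(n)
print(f"N = {n}: local minimum U ~ {u:.4f}")               # known global minimum ~49.1653 (icosahedron)
print(f"shell energy   N^2/2    = {n**2 / 2:.1f}")         # 72.0
print(f"random average N(N-1)/2 = {n * (n - 1) / 2:.1f}")  # 66.0
```

The gap between the shell and random baselines and the converged energy gives a rough feel for why the charge-centered starting configuration described above, whose energy is only $U(N) + N$, sits much closer to the true minimum.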
Though experimental evidence led to the abandonment of Thomson's plum pudding model as a complete atomic model, irregularities observed in numerical energy solutions of the Thomson problem have been found to correspond with electron shell-filling in naturally occurring atoms throughout the periodic table of elements. The Thomson problem also plays a role in the study of other physical models including multi-electron bubbles and the surface ordering of liquid metal drops confined in Paul traps. The generalized Thomson problem arises, for example, in determining arrangements of protein subunits that comprise the shells of spherical viruses. The "particles" in this application are clusters of protein subunits arranged on a shell. Other realizations include regular arrangements of colloid particles in colloidosomes, proposed for encapsulation of active ingredients such as drugs, nutrients or living cells, fullerene patterns of carbon atoms, and VSEPR theory. An example with long-range logarithmic interactions is provided by Abrikosov vortices that form at low temperatures in a superconducting metal shell with a large monopole at its center. Configurations of smallest known energy In the following table is the number of points (charges) in a configuration, is the energy, the symmetry type is given in Schönflies notation (see Point groups in three dimensions), and are the positions of the charges. Most symmetry types require the vector sum of the positions (and thus the electric dipole moment) to be zero. It is customary to also consider the polyhedron formed by the convex hull of the points. Thus, is the number of vertices where the given number of edges meet, is the total number of edges, is the number of triangular faces, is the number of quadrilateral faces, and is the smallest angle subtended by vectors associated with the nearest charge pair. Note that the edge lengths are generally not equal. Thus, except in the cases N = 2, 3, 4, 6, 12, and the geodesic polyhedra, the convex hull is only topologically equivalent to the figure listed in the last column. According to a conjecture, if is the polyhedron formed by the convex hull of the solution configuation to the Thomson Problem for electrons and is the number of quadrilateral faces of , then has edges. References Notes . . Configurations reprinted in . Configurations reproduced in This webpage contains many more electron configurations with the lowest known energy: https://www.hars.us. Electrostatics Electron Circle packing Unsolved problems in mathematics
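The table columns described above (vertex, edge, and face counts of the convex hull of a configuration, and the smallest angle subtended by the nearest pair of charges) can be computed directly from a set of coordinates. The Python sketch below does this for the N = 12 icosahedral solution using scipy's convex-hull routine; the golden-ratio coordinates are a standard construction assumed here rather than data from the table, and since ConvexHull triangulates its facets, solutions with quadrilateral faces would need coplanar triangle pairs merged before counting faces.

```python
# Illustration: computing convex-hull vertex/edge/face counts and the smallest
# angle between nearest charges for the N = 12 (icosahedral) configuration.
# The coordinates are a standard golden-ratio construction assumed here, not
# data taken from the table in the article.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

phi = (1 + np.sqrt(5)) / 2
ico = np.array([[0, s1, s2 * phi] for s1 in (1, -1) for s2 in (1, -1)], dtype=float)
ico = np.vstack([ico, ico[:, [1, 2, 0]], ico[:, [2, 0, 1]]])   # add cyclic permutations
ico /= np.linalg.norm(ico, axis=1, keepdims=True)              # project onto the unit sphere

hull = ConvexHull(ico)
edges = {frozenset(e) for s in hull.simplices for e in itertools.combinations(s, 2)}
V, E, F = len(hull.vertices), len(edges), len(hull.simplices)
print(f"V = {V}, E = {E}, F = {F}, Euler check V - E + F = {V - E + F}")  # 12, 30, 20, 2

# Smallest angle subtended at the centre by the nearest pair of charges.
cosines = ico @ ico.T
np.fill_diagonal(cosines, -1.0)          # ignore self-pairs
theta_min = np.degrees(np.arccos(cosines.max()))
print(f"smallest angle ~ {theta_min:.2f} degrees")             # ~63.43 for the icosahedron
```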
Thomson problem
Chemistry,Mathematics
1,876
779,848
https://en.wikipedia.org/wiki/Buprenorphine
Buprenorphine, sold under the brand name Subutex among others, is an opioid used to treat opioid use disorder, acute pain, and chronic pain. It can be used under the tongue (sublingual), in the cheek (buccal), by injection (intravenous and subcutaneous), as a skin patch (transdermal), or as an implant. For opioid use disorder, the patient must have moderate opioid withdrawal symptoms before buprenorphine can be administered under direct observation of a health-care provider. In the United States, the combination formulation of buprenorphine/naloxone (Suboxone) is usually prescribed to discourage misuse by injection. However, more recently the efficacy of naloxone in preventing misuse has been brought into question, and preparations of buprenorphine combined with naloxone could potentially be less safe than buprenorphine alone. Maximum pain relief is generally within an hour with effects up to 24 hours. Buprenorphine affects different types of opioid receptors in different ways. Depending on the type of opioid receptor, it may be an agonist, partial agonist, or antagonist. Buprenorphine's activity as an agonist/antagonist is important in the treatment of opioid use disorder: it relieves withdrawal symptoms from other opioids and induces some euphoria, but also blocks the ability for many other opioids, including heroin, to cause an effect. Unlike full agonists like heroin or methadone, buprenorphine has a ceiling effect, such that taking more medicine past a certain point will not increase the effects of the drug. Side effects may include respiratory depression (decreased breathing), sleepiness, adrenal insufficiency, QT prolongation, low blood pressure, allergic reactions, constipation, and opioid addiction. Among those with a history of seizures, a risk exists of further seizures. Opioid withdrawal following stopping buprenorphine is generally less severe than with other opioids. Whether use during pregnancy is safe is unclear, but use while breastfeeding is probably safe, since the dose the infant receives is 1-2% that of the maternal dose, on a weight basis. Buprenorphine was patented in 1965, and approved for medical use in the United States in 1981. It is on the World Health Organization's List of Essential Medicines. In addition to prescription as an analgesic it is a common medication used to treat opioid use disorders, such as addiction to heroin. In 2020, it was the 186th most commonly prescribed medication in the United States, with more than 2.8million prescriptions. Buprenorphine may also be used recreationally for the high it can produce. In the United States, buprenorphine is a schedule III controlled substance. Medical uses Opioid use disorder Buprenorphine is used to treat people with opioid use disorder. In the U.S., the combination formulation of buprenorphine/naloxone is generally prescribed to deter injection, since naloxone, an opioid antagonist, is believed to cause acute withdrawal if the formulation is crushed and injected. Taken orally, the naloxone has virtually no effect, due to the drug's extremely high first-pass metabolism and low bioavailability (2%). However, the efficacy of naloxone in preventing misuse by injection has as of 2020 been brought into question and preparations including naloxone could even be less safe than preparations containing solely buprenorphine. Anecdotally, posters on drug-related online forums have stated that they were able to attain a high by injecting preparations of buprenorphine despite being combined with naloxone. 
Before starting buprenorphine, individuals are generally advised to wait long enough after their last dose of opioid until they have some withdrawal symptoms to allow for the medication to bind the receptors, since if taken too soon, buprenorphine can displace other opioids bound to the receptors and precipitate an acute withdrawal. The dose of buprenorphine is then adjusted until symptoms improve, and individuals remain on a maintenance dose of 8–16 mg. Because withdrawal is uncomfortable and a deterrent for many patients, users have called for different means of treatment initiation. The Bernese method, also known as micro dosing was described in 2016, where very small doses of buprenorphine (0.2 to 0.5 mg) are given while patients are still using street opioids, and without precipitating withdrawal, with medicine levels slowly titrated upward. This method has been used by some providers as of the 2020s. Buprenorphine versus methadone Both buprenorphine and methadone are medications used for detoxification and opioid replacement therapy, and appear to have similar effectiveness based on limited data. Both are safe for pregnant women with opioid use disorder, although preliminary evidence suggests that methadone is more likely to cause neonatal abstinence syndrome. In the US and European Union, only designated clinics can prescribe methadone for opioid use disorder, requiring patients to travel to the clinic daily. If patients are drug-free for a period they may be permitted to receive "take-home doses," reducing their visits to as little as once a week. Alternatively, up to a month's supply of buprenorphine has been able to be prescribed by clinicians in the US or Europe who have completed basic training (8–24 hours in the US) and received a waiver/licence allowing the prescription of the medicine. In France, buprenorphine prescription for opioid use disorder has been permitted without any special training or restrictions since 1995, resulting in treatment of approximately ten times more patients per year with buprenorphine than with methadone in the following decade. In 2021, seeking to address record levels of opioid overdose, the United States also removed the requirement for a special waiver for prescribing physicians. Whether this change will be sufficient to impact prescription is unclear, since even before the change as many as half of physicians with a waiver permitting them to prescribe buprenorphine did not do so, and one-third of non-waivered physicians reported that nothing would induce them to prescribe buprenorphine for opioid use disorder. Chronic pain A transdermal patch is available for the treatment of chronic pain. These patches are not indicated for use in acute pain, pain that is expected to last only for a short period, or pain after surgery, nor are they recommended for opioid addiction. Potency For equianalgesic dosing, when used sublingually, the potency of buprenorphine is about 40 to 70 times that of morphine. When used as a transdermal patch, the potency of buprenorphine may be 100 to 115 times that of morphine. Adverse effects Common adverse drug reactions associated with the use of buprenorphine, similar to those of other opioids, include nausea and vomiting, drowsiness, dizziness, headache, memory loss, cognitive and neural inhibition, perspiration, itchiness, dry mouth, shrinking of the pupils of the eyes (miosis), orthostatic hypotension, male ejaculatory difficulty, decreased libido, and urinary retention. 
Constipation and central nervous system (CNS) effects are seen less frequently than with morphine. Central sleep apnea has also been reported as a side effect of long-term buprenorphine use. Respiratory effects The most severe side effect associated with buprenorphine is respiratory depression (insufficient breathing). It occurs more often in those who are also taking benzodiazepines or alcohol, or have underlying lung disease. The usual reversal agents for opioids, such as naloxone, may be only partially effective, and additional efforts to support breathing may be required. Respiratory depression may be less than with other opioids, particularly with chronic use. In the setting of acute pain management, though, buprenorphine appears to cause the same rate of respiratory depression as other opioids such as morphine. Central sleep apnea is possible with long-term use, possibly resolving with dose reduction. Buprenorphine dependence Buprenorphine treatment carries the risk of causing psychological or physiological (physical) dependencies. It has a slow onset of activity, with a long duration of action, and a long half-life of 24 to 60 hours. Once a patient has stabilised on the (buprenorphine) medication and programme, three options remain - continual use (buprenorphine-only medication), switching to a buprenorphine/naloxone combination, or a medically supervised withdrawal. Pain management Achieving acute opioid analgesia is difficult in persons using buprenorphine for pain management. However, a systematic review found no clear benefit to bridging or stopping buprenorphine when used in opioid substitution therapy to facilitate perioperative pain management, but failure to restart it was found to pose concerns for relapse. Therefore, it is recommended that buprenorphine opioid substitution therapy is continued in the perioperative period when possible. In addition, preoperative pain management in patients taking buprenorphine should use an interdisciplinary approach with multimodal analgesia. Pharmacology Pharmacodynamics Opioid receptor modulator Buprenorphine has been reported to possess these following pharmacological activities: μ-Opioid receptor (MOR): Very high affinity partial agonist: at low doses, the MOR-mediated effects of buprenorphine are comparable to those of other narcotics, but these effects reach a "ceiling" as the receptor population is saturated. This behavior is responsible for several unique properties: buprenorphine greatly reduces the effect of most other MOR agonists, can cause precipitated withdrawal when used in actively opioid dependent persons, and has a lower incidence of respiratory depression and fatal overdose relative to full MOR agonists. κ-Opioid receptor (KOR): High affinity antagonist/weak partial agonist —this activity is hypothesized to underlie some of the effects of buprenorphine on mood disorders and addiction. δ-Opioid receptor (DOR): High affinity antagonist Nociceptin receptor (NOP, ORL-1): Weak affinity, very weak partial agonist In simplified terms, buprenorphine can essentially be thought of as a nonselective, mixed agonist–antagonist opioid receptor modulator, acting as an unusually high affinity, weak partial agonist of the MOR, a high affinity antagonist of the KOR and DOR, and a relatively low affinity, very weak partial agonist of the ORL-1/NOP. Although buprenorphine is a partial agonist of the MOR, human studies have found that it acts like a full agonist with respect to analgesia in opioid-intolerant individuals. 
Conversely, buprenorphine behaves like a partial agonist of the MOR with respect to respiratory depression. Buprenorphine is also known to bind to with high affinity and antagonize the putative ε-opioid receptor. Full analgesic efficacy of buprenorphine requires both exon 11- and exon 1-associated μ-opioid receptor splice variants. The active metabolites of buprenorphine are not thought to be clinically important in its CNS effects. In positron emission tomography (PET) imaging studies, buprenorphine was found to decrease whole-brain MOR availability due to receptor occupancy by 41% (i.e., 59% availability) at 2 mg, 80% (i.e., 20% availability) at 16 mg, and 84% (i.e., 16% availability) at 32 mg. Other actions Unlike some other opioids and opioid antagonists, buprenorphine binds only weakly to and possesses little if any activity at the sigma receptor. Buprenorphine also blocks voltage-gated sodium channels via the local anesthetic binding site, and this underlies its potent local anesthetic properties. Similarly to various other opioids, buprenorphine has also been found to act as an agonist of the toll-like receptor 4, albeit with very low affinity. Pharmacokinetics Buprenorphine is metabolized by the liver, via CYP3A4 (also CYP2C8 seems to be involved) isozymes of the cytochrome P450 enzyme system, into norbuprenorphine (by N-dealkylation). The glucuronidation of buprenorphine is primarily carried out by UGT1A1 and UGT2B7, and that of norbuprenorphine by UGT1A1 and UGT1A3. These glucuronides are then eliminated mainly through excretion into bile. The elimination half-life of buprenorphine is 20 to 73 hours (mean 37 hours). Due to the mainly hepatic elimination, no risk of accumulation exists in people with renal impairment. One of the major active metabolites of buprenorphine is norbuprenorphine, which, in contrast to buprenorphine itself, is a full agonist of the MOR, DOR, and ORL-1, and a partial agonist at the KOR. However, relative to buprenorphine, norbuprenorphine has extremely little antinociceptive potency (1/50th that of buprenorphine), but markedly depresses respiration (10-fold more than buprenorphine). This may be explained by very poor brain penetration of norbuprenorphine due to a high affinity of the compound for P-glycoprotein. In contrast to norbuprenorphine, buprenorphine and its glucuronide metabolites are negligibly transported by P-glycoprotein. The glucuronides of buprenorphine and norbuprenorphine are also biologically active, and represent major active metabolites of buprenorphine. Buprenorphine-3-glucuronide has affinity for the MOR (Ki = 4.9 pM), DOR (Ki = 270 nM) and ORL-1 (Ki = 36 μM), and no affinity for the KOR. It has a small antinociceptive effect and no effect on respiration. Norbuprenorphine-3-glucuronide has no affinity for the MOR or DOR, but does bind to the KOR (Ki = 300 nM) and ORL-1 (Ki = 18 μM). It has a sedative effect but no effect on respiration. Chemistry Buprenorphine is a semisynthetic derivative of thebaine, and is fairly soluble in water, as its hydrochloride salt. It degrades in the presence of light. Detection in body fluids Buprenorphine and norbuprenorphine may be quantified in blood or urine to monitor use or non-medical recreational use, confirm a diagnosis of poisoning, or assist in a medicolegal investigation. A significant overlap of drug concentrations exists in body fluids within the possible spectrum of physiological reactions ranging from asymptomatic to comatose. 
Therefore, knowing both the route of administration of the drug and the level of tolerance to opioids of the individual is critical when results are interpreted. History In 1969, researchers at Reckitt and Colman (now Reckitt Benckiser) had spent 10 years attempting to synthesize an opioid compound "with structures substantially more complex than morphine [that] could retain the desirable actions whilst shedding the undesirable side effects". Physical dependence and withdrawal from buprenorphine itself remain important issues since buprenorphine is a long-acting opioid. Reckitt found success when researchers synthesized RX6029 which had shown success in reducing dependence in test animals. RX6029 was named buprenorphine and began trials on humans in 1971. By 1978, buprenorphine was first launched in the UK as an injection to treat severe pain, with a sublingual formulation released in 1982. Society and culture Regulation United States In the United States, buprenorphine and buprenorphine with naloxone were approved for opioid use disorder by the Food and Drug Administration in October 2002. The DEA rescheduled buprenorphine from a schedule V drug to a schedule III drug just before approval. The ACSCN for buprenorphine is 9064, and being a schedule III substance, it does not have an annual manufacturing quota imposed by the DEA. The salt in use is the hydrochloride, which has a free-base conversion ratio of 0.928. In the years before buprenorphine/naloxone was approved, Reckitt Benckiser had lobbied Congress to help craft the Drug Addiction Treatment Act of 2000, which gave authority to the Secretary of Health and Human Services to grant a waiver to physicians with certain training to prescribe and administer schedule III, IV, or V narcotic drugs for the treatment of addiction or detoxification. Before this law was passed, such treatment was permitted only in clinics designed specifically for drug addiction. The waiver, which can be granted after the completion of an eight-hour course, was required for outpatient treatment of opioid addiction with buprenorphine from 2000 to 2021. Initially, the number of people each approved physician could treat was limited to 10. This was eventually modified to allow approved physicians to treat up to 100 people with buprenorphine for opioid addiction in an outpatient setting. This limit was increased by the Obama administration, raising the number of patients to which doctors can prescribe to 275. On 14 January 2021, the US Department of Health and Human Services announced that the waiver would no longer be required to prescribe buprenorphine to treat up to 30 people concurrently. New Jersey authorized paramedics to give buprenorphine to people at the scene after they have recovered from an overdose. Europe In the European Union, Subutex and Suboxone, buprenorphine's high-dose sublingual tablet preparations, were approved for opioid use disorder treatment in September 2006. In the Netherlands, buprenorphine is a list II drug of the Opium Law, though special rules and guidelines apply to its prescription and dispensation. In France, buprenorphine prescription by general practitioners and dispensed by pharmacies has been permitted since the mid-1990s as a response to HIV and overdose risk. Deaths caused by heroin overdose were reduced by four-fifths between 1994 and 2002, and the incidence of AIDS among people who inject drugs in France fell from 25% in the mid-1990s to 6% in 2010. 
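The free-base conversion ratio of 0.928 mentioned in the United States regulation paragraph above is used when a quantity of buprenorphine hydrochloride is expressed as its free-base equivalent, as in quota and quantity accounting. The tiny sketch below is an arithmetic illustration only, not dosing guidance; the molecular weights in its comments are assumed reference values included only to show where such a ratio comes from.

```python
# Arithmetic illustration only: converting a quantity of buprenorphine
# hydrochloride to its free-base equivalent using the 0.928 conversion ratio
# quoted above. The molecular weights below are assumed reference values
# (buprenorphine ~467.6 g/mol, its hydrochloride ~504.1 g/mol), shown only to
# indicate where such a ratio comes from; this is not dosing guidance.
FREE_BASE_RATIO = 0.928          # ratio quoted in the text for the hydrochloride salt

def salt_to_base_mg(salt_mg: float) -> float:
    """Milligrams of buprenorphine base contained in a given mass of the HCl salt."""
    return salt_mg * FREE_BASE_RATIO

if __name__ == "__main__":
    print(salt_to_base_mg(100.0))   # 100 mg of the salt corresponds to ~92.8 mg of base
    print(467.6 / 504.1)            # ~0.9276, consistent with the quoted 0.928
```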
Barriers to access In the US, the list price for a long-acting injectable form is five to 20 times as much as a daily pill. This has reduced the number of people who are able to get a single monthly dose, instead of daily pills. Some jails consider the more expensive form a positive tradeoff: a single monthly injection may be simpler and easier for the staff to manage than daily trips to the dispensary to have a nurse provide a pill and make sure that it has been swallowed. Brand names Buprenorphine is available under the brand names Cizdol, Brixadi (approved in the US by FDA for addiction treatment in 2023), Suboxone (with naloxone), Subutex (typically used for opioid use disorder), Zubsolv, Bunavail, Buvidal (approved in the UK, Europe and Australia for addiction treatment in 2018), Sublocade (approved in the US in 2018), Probuphine, Temgesic (sublingual tablets for moderate to severe pain), Buprenex (solutions for injection often used for acute pain in primary-care settings), Norspan, and Butrans (transdermal preparations used for chronic pain). In Poland buprenorphine is available under the trade names Bunondol (for pain treatment, when morphine is too little; amounts of 0.2mg and 0.4mg) and Bunorfin (for addicts substitution in amount of 2 and 8mg). Research Microdosing There is some evidence that a buprenorphine microdosing regime, started before opioid withdrawal symptoms have started, can be effective in helping people transition away from opioid dependence. Depression Some evidence supports the use of buprenorphine for depression. Buprenorphine/samidorphan, a combination product of buprenorphine and samidorphan (a preferential μ-opioid receptor antagonist), appears useful for treatment-resistant depression. A buprenorphine implant (developmental code name SK-2110) is under development by Shenzhen ScienCare Pharmaceutical in China for the treatment of refractory major depressive disorder. Cocaine dependence In combination with samidorphan or naltrexone (μ-opioid receptor antagonists), buprenorphine is under investigation for the treatment of cocaine dependence, and recently demonstrated effectiveness for this indication in a large-scale (n = 302) clinical trial (at a high buprenorphine dose of 16 mg, but not a low dose of 4 mg). Neonatal abstinence Buprenorphine has been used in the treatment of the neonatal abstinence syndrome, a condition in which newborns exposed to opioids during pregnancy demonstrate signs of withdrawal. In the United States, use currently is limited to infants enrolled in a clinical trial conducted under an FDA-approved investigational new drug (IND) application. Preliminary research suggests that buprenorphine is associated with shorter time in hospital for neonates, compared to methadone. An ethanolic formulation used in neonates is stable at room temperature for at least 30 days. Veterinary uses Veterinarians administer buprenorphine for perioperative pain, particularly in cats, where its effects are similar to morphine. The drug's legal status and lower potential for human abuse makes it an attractive alternative to other opioids. It has veterinary medical use for treatment of pain in dogs and cats, as well as other animals. References External links Cat medications CYP2D6 inhibitors CYP3A4 inhibitors Delta-opioid receptor antagonists Dog medications Drug rehabilitation 4,5-Epoxymorphinans Ethers Euphoriants Kappa-opioid receptor antagonists Drugs developed by Merck & Co. 
Mu-opioid receptor agonists Nociceptin receptor agonists Nociceptin receptor antagonists Oripavines Hydroxyarenes Drugs developed by Schering-Plough Semisynthetic opioids Sodium channel blockers Tertiary alcohols Wikipedia medicine articles ready to translate
Buprenorphine
Chemistry
4,884
48,981
https://en.wikipedia.org/wiki/Ascomycota
Ascomycota is a phylum of the kingdom Fungi that, together with the Basidiomycota, forms the subkingdom Dikarya. Its members are commonly known as the sac fungi or ascomycetes. It is the largest phylum of Fungi, with over 64,000 species. The defining feature of this fungal group is the "ascus", a microscopic sexual structure in which nonmotile spores, called ascospores, are formed. However, some species of Ascomycota are asexual and thus do not form asci or ascospores. Familiar examples of sac fungi include morels, truffles, brewers' and bakers' yeast, dead man's fingers, and cup fungi. The fungal symbionts in the majority of lichens (loosely termed "ascolichens") such as Cladonia belong to the Ascomycota. Ascomycota is a monophyletic group (containing all of the descendants of a common ancestor). Previously placed in the Deuteromycota along with asexual species from other fungal taxa, asexual (or anamorphic) ascomycetes are now identified and classified based on morphological or physiological similarities to ascus-bearing taxa, and by phylogenetic analyses of DNA sequences. Ascomycetes are of particular use to humans as sources of medicinally important compounds such as antibiotics, as well as for fermenting bread, alcoholic beverages, and cheese. Examples of ascomycetes include Penicillium species on cheeses and those producing antibiotics for treating bacterial infectious diseases. Many ascomycetes are pathogens, both of animals, including humans, and of plants. Examples of ascomycetes that can cause infections in humans include Candida albicans, Aspergillus niger, and several tens of species that cause skin infections. The many plant-pathogenic ascomycetes include apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews. The members of the genus Cordyceps are entomopathogenic fungi, meaning that they parasitise and kill insects. Other entomopathogenic ascomycetes, such as Beauveria, have been used successfully in biological pest control. Several species of ascomycetes are biological model organisms in laboratory research. Most famously, Neurospora crassa, several species of yeasts, and Aspergillus species are used in many genetics and cell biology studies. Reproduction in ascomycetes Ascomycetes are 'spore shooters'. They are fungi which produce microscopic spores inside special, elongated cells or sacs, known as 'asci', which give the group its name. Asexual reproduction, however, is the dominant form of propagation in the Ascomycota, and is responsible for the rapid spread of these fungi into new areas. Asexual reproduction of ascomycetes is very diverse from both structural and functional points of view. The most important and general method is the production of conidia, but chlamydospores are also frequently produced. Furthermore, Ascomycota also reproduce asexually through budding. Conidia formation Asexual reproduction may occur through vegetative reproductive spores, the conidia. The asexual, non-motile haploid spores of a fungus, which are named after the Greek word for dust (conia), are hence also known as conidiospores. The conidiospores commonly contain one nucleus and are products of mitotic cell divisions, and thus are sometimes called mitospores, which are genetically identical to the mycelium from which they originate. They are typically formed at the ends of specialized hyphae, the conidiophores. Depending on the species they may be dispersed by wind or water, or by animals. Conidiophores may simply branch off from the mycelia or they may be formed in fruiting bodies.
The hypha that creates the sporing (conidiating) tip can be very similar to the normal hyphal tip, or it can be differentiated. The most common differentiation is the formation of a bottle-shaped cell called a phialide, from which the spores are produced. Not all of these asexual structures are a single hypha. In some groups, the conidiophores (the structures that bear the conidia) are aggregated to form a thick structure. For example, in the order Moniliales, all of them are single hyphae with the exception of the aggregations, termed coremia or synnemata. These produce structures rather like corn stooks, with many conidia being produced in a mass from the aggregated conidiophores. The diverse conidia and conidiophores sometimes develop in asexual sporocarps with different characteristics (e.g. acervulus, pycnidium, sporodochium). Some species of ascomycetes form their structures within plant tissue, either as parasites or saprophytes. These fungi have evolved more complex asexual sporing structures, probably influenced by the cultural conditions of plant tissue as a substrate. One such structure is the sporodochium, a cushion of conidiophores created from a pseudoparenchymatous stroma in plant tissue. The pycnidium is a globose to flask-shaped parenchymatous structure, lined on its inner wall with conidiophores. The acervulus is a flat, saucer-shaped bed of conidiophores produced under a plant cuticle, which eventually erupts through the cuticle for dispersal. Budding The asexual reproduction process in ascomycetes also involves budding, which is clearly observed in yeasts. This is termed a "blastic process". It involves the blowing out or blebbing of the hyphal tip wall. The blastic process can involve all wall layers, or there can be a new cell wall synthesized which is extruded from within the old wall. The initial events of budding can be seen as the development of a ring of chitin around the point where the bud is about to appear. This reinforces and stabilizes the cell wall. Enzymatic activity and turgor pressure act to weaken and extrude the cell wall. New cell wall material is incorporated during this phase. Cell contents are forced into the progeny cell, and, as the final phase of mitosis ends, a cell plate, the point from which a new cell wall will grow inwards, forms. Characteristics of ascomycetes Ascomycota are morphologically diverse. The group includes organisms from unicellular yeasts to complex cup fungi. 98% of lichens have an Ascomycota as the fungal part of the lichen. There are 2000 identified genera and 30,000 species of Ascomycota. The unifying characteristic among these diverse groups is the presence of a reproductive structure known as the ascus, though in some cases it has a reduced role in the life cycle. Many ascomycetes are of commercial importance. Some play a beneficial role, such as the yeasts used in baking, brewing, and wine fermentation, plus truffles and morels, which are held as gourmet delicacies. Many of them cause tree diseases, such as Dutch elm disease and apple blights. Some of the plant-pathogenic ascomycetes are apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews. The yeasts are used to produce alcoholic beverages and breads. The mold Penicillium is used to produce the antibiotic penicillin. Almost half of all members of the phylum Ascomycota form associations with algae to form lichens.
Others, such as morels (a highly prized edible fungi), form important relationships with plants, thereby providing enhanced water and nutrient uptake and, in some cases, protection from insects. Most ascomycetes are terrestrial or parasitic. However, some have adapted to marine or freshwater environments. As of 2015, there were 805 marine fungi in the Ascomycota, distributed among 352 genera. The cell walls of the hyphae are variably composed of chitin and β-glucans, just as in Basidiomycota. However, these fibers are set in a matrix of glycoprotein containing the sugars galactose and mannose. The mycelium of ascomycetes is usually made up of septate hyphae. However, there is not necessarily any fixed number of nuclei in each of the divisions. The septal walls have septal pores which provide cytoplasmic continuity throughout the individual hyphae. Under appropriate conditions, nuclei may also migrate between septal compartments through the septal pores. A unique character of the Ascomycota (but not present in all ascomycetes) is the presence of Woronin bodies on each side of the septa separating the hyphal segments which control the septal pores. If an adjoining hypha is ruptured, the Woronin bodies block the pores to prevent loss of cytoplasm into the ruptured compartment. The Woronin bodies are spherical, hexagonal, or rectangular membrane bound structures with a crystalline protein matrix. Modern classification There are three subphyla that are described and accepted: The Pezizomycotina are the largest subphylum and contains all ascomycetes that produce ascocarps (fruiting bodies), except for one genus, Neolecta, in the Taphrinomycotina. It is roughly equivalent to the previous taxon, Euascomycetes. The Pezizomycotina includes most macroscopic "ascos" such as truffles, ergot, ascolichens, cup fungi (discomycetes), pyrenomycetes, lorchels, and caterpillar fungus. It also contains microscopic fungi such as powdery mildews, dermatophytic fungi, and Laboulbeniales. The Saccharomycotina comprise most of the "true" yeasts, such as baker's yeast and Candida, which are single-celled (unicellular) fungi, which reproduce vegetatively by budding. Most of these species were previously classified in a taxon called Hemiascomycetes. The Taphrinomycotina include a disparate and basal group within the Ascomycota that was recognized following molecular (DNA) analyses. The taxon was originally named Archiascomycetes (or Archaeascomycetes). It includes hyphal fungi (Neolecta, Taphrina, Archaeorhizomyces), fission yeasts (Schizosaccharomyces), and the mammalian lung parasite Pneumocystis. Outdated taxon names Several outdated taxon names—based on morphological features—are still occasionally used for species of the Ascomycota. These include the following sexual (teleomorphic) groups, defined by the structures of their sexual fruiting bodies: the Discomycetes, which included all species forming apothecia; the Pyrenomycetes, which included all sac fungi that formed perithecia or pseudothecia, or any structure resembling these morphological structures; and the Plectomycetes, which included those species that form cleistothecia. Hemiascomycetes included the yeasts and yeast-like fungi that have now been placed into the Saccharomycotina or Taphrinomycotina, while the Euascomycetes included the remaining species of the Ascomycota, which are now in the Pezizomycotina, and the Neolecta, which are in the Taphrinomycotina. 
Some ascomycetes do not reproduce sexually or are not known to produce asci and are therefore anamorphic species. Those anamorphs that produce conidia (mitospores) were previously described as mitosporic Ascomycota. Some taxonomists placed this group into a separate artificial phylum, the Deuteromycota (or "Fungi Imperfecti"). Where recent molecular analyses have identified close relationships with ascus-bearing taxa, anamorphic species have been grouped into the Ascomycota, despite the absence of the defining ascus. Sexual and asexual isolates of the same species commonly carry different binomial species names, as, for example, Aspergillus nidulans and Emericella nidulans, for asexual and sexual isolates, respectively, of the same species. Species of the Deuteromycota were classified as Coelomycetes if they produced their conidia in minute flask- or saucer-shaped conidiomata, known technically as pycnidia and acervuli. The Hyphomycetes were those species where the conidiophores (i.e., the hyphal structures that carry conidia-forming cells at the end) are free or loosely organized. They are mostly isolated but sometimes also appear as bundles of cells aligned in parallel (described as synnematal) or as cushion-shaped masses (described as sporodochial). Morphology Most species grow as filamentous, microscopic structures called hyphae or as budding single cells (yeasts). Many interconnected hyphae form a thallus usually referred to as the mycelium, which—when visible to the naked eye (macroscopic)—is commonly called mold. During sexual reproduction, many Ascomycota typically produce large numbers of asci. The ascus is often contained in a multicellular, occasionally readily visible fruiting structure, the ascocarp (also called an ascoma). Ascocarps come in a very large variety of shapes: cup-shaped, club-shaped, potato-like, spongy, seed-like, oozing and pimple-like, coral-like, nit-like, golf-ball-shaped, perforated tennis ball-like, cushion-shaped, plated and feathered in miniature (Laboulbeniales), microscopic classic Greek shield-shaped, stalked or sessile. They can appear solitary or clustered. Their texture can likewise be very variable, including fleshy, like charcoal (carbonaceous), leathery, rubbery, gelatinous, slimy, powdery, or cob-web-like. Ascocarps come in multiple colors such as red, orange, yellow, brown, black, or, more rarely, green or blue. Some ascomyceous fungi, such as Saccharomyces cerevisiae, grow as single-celled yeasts, which—during sexual reproduction—develop into an ascus, and do not form fruiting bodies. In lichenized species, the thallus of the fungus defines the shape of the symbiotic colony. Some dimorphic species, such as Candida albicans, can switch between growth as single cells and as filamentous, multicellular hyphae. Other species are pleomorphic, exhibiting asexual (anamorphic) as well as a sexual (teleomorphic) growth forms. Except for lichens, the non-reproductive (vegetative) mycelium of most ascomycetes is usually inconspicuous because it is commonly embedded in the substrate, such as soil, or grows on or inside a living host, and only the ascoma may be seen when fruiting. Pigmentation, such as melanin in hyphal walls, along with prolific growth on surfaces can result in visible mold colonies; examples include Cladosporium species, which form black spots on bathroom caulking and other moist areas. 
Many ascomycetes cause food spoilage, and, therefore, the pellicles or moldy layers that develop on jams, juices, and other foods are the mycelia of these species or occasionally Mucoromycotina and almost never Basidiomycota. Sooty molds that develop on plants, especially in the tropics are the thalli of many species. Large masses of yeast cells, asci or ascus-like cells, or conidia can also form macroscopic structures. For example. Pneumocystis species can colonize lung cavities (visible in x-rays), causing a form of pneumonia. Asci of Ascosphaera fill honey bee larvae and pupae causing mummification with a chalk-like appearance, hence the name "chalkbrood". Yeasts for small colonies in vitro and in vivo, and excessive growth of Candida species in the mouth or vagina causes "thrush", a form of candidiasis. The cell walls of the ascomycetes almost always contain chitin and β-glucans, and divisions within the hyphae, called "septa", are the internal boundaries of individual cells (or compartments). The cell wall and septa give stability and rigidity to the hyphae and may prevent loss of cytoplasm in case of local damage to cell wall and cell membrane. The septa commonly have a small opening in the center, which functions as a cytoplasmic connection between adjacent cells, also sometimes allowing cell-to-cell movement of nuclei within a hypha. Vegetative hyphae of most ascomycetes contain only one nucleus per cell (uninucleate hyphae), but multinucleate cells—especially in the apical regions of growing hyphae—can also be present. Metabolism In common with other fungal phyla, the Ascomycota are heterotrophic organisms that require organic compounds as energy sources. These are obtained by feeding on a variety of organic substrates including dead matter, foodstuffs, or as symbionts in or on other living organisms. To obtain these nutrients from their surroundings, ascomycetous fungi secrete powerful digestive enzymes that break down organic substances into smaller molecules, which are then taken up into the cell. Many species live on dead plant material such as leaves, twigs, or logs. Several species colonize plants, animals, or other fungi as parasites or mutualistic symbionts and derive all their metabolic energy in form of nutrients from the tissues of their hosts. Owing to their long evolutionary history, the Ascomycota have evolved the capacity to break down almost every organic substance. Unlike most organisms, they are able to use their own enzymes to digest plant biopolymers such as cellulose or lignin. Collagen, an abundant structural protein in animals, and keratin—a protein that forms hair and nails—, can also serve as food sources. Unusual examples include Aureobasidium pullulans, which feeds on wall paint, and the kerosene fungus Amorphotheca resinae, which feeds on aircraft fuel (causing occasional problems for the airline industry), and may sometimes block fuel pipes. Other species can resist high osmotic stress and grow, for example, on salted fish, and a few ascomycetes are aquatic. The Ascomycota is characterized by a high degree of specialization; for instance, certain species of Laboulbeniales attack only one particular leg of one particular insect species. Many Ascomycota engage in symbiotic relationships such as in lichens—symbiotic associations with green algae or cyanobacteria—in which the fungal symbiont directly obtains products of photosynthesis. 
In common with many basidiomycetes and Glomeromycota, some ascomycetes form symbioses with plants by colonizing the roots to form mycorrhizal associations. The Ascomycota also represents several carnivorous fungi, which have developed hyphal traps to capture small protists such as amoebae, as well as roundworms (Nematoda), rotifers, tardigrades, and small arthropods such as springtails (Collembola). Distribution and living environment The Ascomycota are represented in all land ecosystems worldwide, occurring on all continents including Antarctica. Spores and hyphal fragments are dispersed through the atmosphere and freshwater environments, as well as ocean beaches and tidal zones. The distribution of species is variable; while some are found on all continents, others, as for example the white truffle Tuber magnatum, only occur in isolated locations in Italy and Eastern Europe. The distribution of plant-parasitic species is often restricted by host distributions; for example, Cyttaria is only found on Nothofagus (Southern Beech) in the Southern Hemisphere. Reproduction Asexual reproduction Asexual reproduction is the dominant form of propagation in the Ascomycota, and is responsible for the rapid spread of these fungi into new areas. It occurs through vegetative reproductive spores, the conidia. The conidiospores commonly contain one nucleus and are products of mitotic cell divisions and thus are sometimes called mitospores, which are genetically identical to the mycelium from which they originate. They are typically formed at the ends of specialized hyphae, the conidiophores. Depending on the species they may be dispersed by wind or water, or by animals. Asexual spores Different types of asexual spores can be identified by colour, shape, and how they are released as individual spores. Spore types can be used as taxonomic characters in the classification within the Ascomycota. The most frequent types are the single-celled spores, which are designated amerospores. If the spore is divided into two by a cross-wall (septum), it is called a didymospore. When there are two or more cross-walls, the classification depends on spore shape. If the septae are transversal, like the rungs of a ladder, it is a phragmospore, and if they possess a net-like structure it is a dictyospore. In staurospores ray-like arms radiate from a central body; in others (helicospores) the entire spore is wound up in a spiral like a spring. Very long worm-like spores with a length-to-diameter ratio of more than 15:1, are called scolecospores. Conidiogenesis and dehiscence Important characteristics of the anamorphs of the Ascomycota are conidiogenesis, which includes spore formation and dehiscence (separation from the parent structure). Conidiogenesis corresponds to Embryology in animals and plants and can be divided into two fundamental forms of development: blastic conidiogenesis, where the spore is already evident before it separates from the conidiogenic hypha, and thallic conidiogenesis, during which a cross-wall forms and the newly created cell develops into a spore. The spores may or may not be generated in a large-scale specialized structure that helps to spread them. 
These two basic types can be further classified as follows: blastic-acropetal (repeated budding at the tip of the conidiogenic hypha, so that a chain of spores is formed with the youngest spores at the tip), blastic-synchronous (simultaneous spore formation from a central cell, sometimes with secondary acropetal chains forming from the initial spores), blastic-sympodial (repeated sideways spore formation from behind the leading spore, so that the oldest spore is at the main tip), blastic-annellidic (each spore separates and leaves a ring-shaped scar inside the scar left by the previous spore), blastic-phialidic (the spores arise and are ejected from the open ends of special conidiogenic cells called phialides, which remain constant in length), basauxic (where a chain of conidia, in successively younger stages of development, is emitted from the mother cell), blastic-retrogressive (spores separate by formation of crosswalls near the tip of the conidiogenic hypha, which thus becomes progressively shorter), thallic-arthric (double cell walls split the conidiogenic hypha into cells that develop into short, cylindrical spores called arthroconidia; sometimes every second cell dies off, leaving the arthroconidia free), thallic-solitary (a large bulging cell separates from the conidiogenic hypha, forms internal walls, and develops to a phragmospore). Sometimes the conidia are produced in structures visible to the naked eye, which help to distribute the spores. These structures are called "conidiomata" (singular: conidioma), and may take the form of pycnidia (which are flask-shaped and arise in the fungal tissue) or acervuli (which are cushion-shaped and arise in host tissue). Dehiscence happens in two ways. In schizolytic dehiscence, a double-dividing wall with a central lamella (layer) forms between the cells; the central layer then breaks down thereby releasing the spores. In rhexolytic dehiscence, the cell wall that joins the spores on the outside degenerates and releases the conidia. Heterokaryosis and parasexuality Several Ascomycota species are not known to have a sexual cycle. Such asexual species may be able to undergo genetic recombination between individuals by processes involving heterokaryosis and parasexual events. Parasexuality refers to the process of heterokaryosis, caused by merging of two hyphae belonging to different individuals, by a process called anastomosis, followed by a series of events resulting in genetically different cell nuclei in the mycelium. The merging of nuclei is not followed by meiotic events, such as gamete formation and results in an increased number of chromosomes per nuclei. Mitotic crossover may enable recombination, i.e., an exchange of genetic material between homologous chromosomes. The chromosome number may then be restored to its haploid state by nuclear division, with each daughter nuclei being genetically different from the original parent nuclei. Alternatively, nuclei may lose some chromosomes, resulting in aneuploid cells. Candida albicans (class Saccharomycetes) is an example of a fungus that has a parasexual cycle (see Candida albicans and Parasexual cycle). Sexual reproduction Sexual reproduction in the Ascomycota leads to the formation of the ascus, the structure that defines this fungal group and distinguishes it from other fungal phyla. The ascus is a tube-shaped vessel, a meiosporangium, which contains the sexual spores produced by meiosis and which are called ascospores. 
Apart from a few exceptions, such as Candida albicans, most ascomycetes are haploid, i.e., they contain one set of chromosomes per nucleus. During sexual reproduction there is a diploid phase, which commonly is very short, and meiosis restores the haploid state. The sexual cycle of one well-studied representative species of Ascomycota is described in greater detail in Neurospora crassa. Also, the adaptive basis for the maintenance of sexual reproduction in the Ascomycota fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for the maintenance of this capability is the benefit of repairing DNA damage by using recombination that occurs during meiosis. DNA damage can be caused by a variety of stresses such as nutrient limitation. Formation of sexual spores The sexual part of the life cycle commences when two hyphal structures mate. In the case of homothallic species, mating is enabled between hyphae of the same fungal clone, whereas in heterothallic species, the two hyphae must originate from fungal clones that differ genetically, i.e., those that are of a different mating type. Mating types are typical of the fungi and correspond roughly to the sexes in plants and animals; however one species may have more than two mating types, resulting in sometimes complex vegetative incompatibility systems. The adaptive function of mating type is discussed in Neurospora crassa. Gametangia are sexual structures formed from hyphae, and are the generative cells. A very fine hypha, called trichogyne emerges from one gametangium, the ascogonium, and merges with a gametangium (the antheridium) of the other fungal isolate. The nuclei in the antheridium then migrate into the ascogonium, and plasmogamy—the mixing of the cytoplasm—occurs. Unlike in animals and plants, plasmogamy is not immediately followed by the merging of the nuclei (called karyogamy). Instead, the nuclei from the two hyphae form pairs, initiating the dikaryophase of the sexual cycle, during which time the pairs of nuclei synchronously divide. Fusion of the paired nuclei leads to mixing of the genetic material and recombination and is followed by meiosis. A similar sexual cycle is present in the red algae (Rhodophyta). A discarded hypothesis held that a second karyogamy event occurred in the ascogonium prior to ascogeny, resulting in a tetraploid nucleus which divided into four diploid nuclei by meiosis and then into eight haploid nuclei by a supposed process called brachymeiosis, but this hypothesis was disproven in the 1950s. From the fertilized ascogonium, dinucleate hyphae emerge in which each cell contains two nuclei. These hyphae are called ascogenous or fertile hyphae. They are supported by the vegetative mycelium containing uni– (or mono–) nucleate hyphae, which are sterile. The mycelium containing both sterile and fertile hyphae may grow into fruiting body, the ascocarp, which may contain millions of fertile hyphae. An ascocarp is the fruiting body of the sexual phase in Ascomycota. There are five morphologically different types of ascocarp, namely: Naked asci: these occur in simple ascomycetes; asci are produced on the organism's surface. Perithecia: Asci are in flask-shaped ascoma (perithecium) with a pore (ostiole) at the top. Cleistothecia: The ascocarp (a cleistothecium) is spherical and closed. Apothecia: The asci are in a bowl shaped ascoma (apothecium). These are sometimes called the "cup fungi". Pseudothecia: Asci with two layers, produced in pseudothecia that look like perithecia. 
The ascospores are arranged irregularly. The sexual structures are formed in the fruiting layer of the ascocarp, the hymenium. At one end of ascogenous hyphae, characteristic U-shaped hooks develop, which curve back opposite to the growth direction of the hyphae. The two nuclei contained in the apical part of each hypha divide in such a way that the threads of their mitotic spindles run parallel, creating two pairs of genetically different nuclei. One daughter nucleus migrates close to the hook, while the other daughter nucleus locates to the basal part of the hypha. The formation of two parallel cross-walls then divides the hypha into three sections: one at the hook with one nucleus, one at the base of the original hypha that contains one nucleus, and one that separates the U-shaped part, which contains the other two nuclei. Fusion of the nuclei (karyogamy) takes place in the U-shaped cells in the hymenium, and results in the formation of a diploid zygote. The zygote grows into the ascus, an elongated tube-shaped or cylinder-shaped capsule. Meiosis then gives rise to four haploid nuclei, usually followed by a further mitotic division that results in eight nuclei in each ascus. The nuclei, along with some cytoplasm, become enclosed within membranes and a cell wall to give rise to ascospores that are aligned inside the ascus like peas in a pod. Upon opening of the ascus, ascospores may be dispersed by the wind, while in some cases the spores are forcibly ejected from the ascus; certain species have evolved spore cannons, which can eject ascospores up to 30 cm away. When the spores reach a suitable substrate, they germinate and form new hyphae, restarting the fungal life cycle. The form of the ascus is important for classification; asci are divided into four basic types: unitunicate-operculate, unitunicate-inoperculate, bitunicate, or prototunicate. See the article on asci for further details. Ecology The Ascomycota fulfil a central role in most land-based ecosystems. They are important decomposers, breaking down organic materials, such as dead leaves and animals, and helping the detritivores (animals that feed on decomposing material) to obtain their nutrients. Ascomycetes, along with other fungi, can break down large molecules such as cellulose or lignin, and thus have important roles in nutrient cycling such as the carbon cycle. The fruiting bodies of the Ascomycota provide food for many animals ranging from insects and slugs and snails (Gastropoda) to rodents and larger mammals such as deer and wild boars. Many ascomycetes also form symbiotic relationships with other organisms, including plants and animals. Lichens Probably since early in their evolutionary history, the Ascomycota have formed symbiotic associations with green algae (Chlorophyta), and other types of algae and cyanobacteria. These mutualistic associations are commonly known as lichens, and can grow and persist in terrestrial regions of the earth that are inhospitable to other organisms and characterized by extremes in temperature and humidity, including the Arctic, the Antarctic, deserts, and mountaintops. While the photoautotrophic algal partner generates metabolic energy through photosynthesis, the fungus offers a stable, supportive matrix and protects cells from radiation and dehydration. Around 42% of the Ascomycota (about 18,000 species) form lichens, and almost all the fungal partners of lichens belong to the Ascomycota.
Mycorrhizal fungi and endophytes Members of the Ascomycota form two important types of relationship with plants: as mycorrhizal fungi and as endophytes. Mycorrhizas are symbiotic associations of fungi with the root systems of plants, which can be of vital importance for the growth and persistence of the plant. The fine mycelial network of the fungus enables the increased uptake of mineral salts that occur at low levels in the soil. In return, the plant provides the fungus with metabolic energy in the form of photosynthetic products. Endophytic fungi live inside plants, and those that form mutualistic or commensal associations with their host do not damage their hosts. The exact nature of the relationship between endophytic fungus and host depends on the species involved, and in some cases fungal colonization of plants can bestow a higher resistance against insects, roundworms (nematodes), and bacteria; in the case of grass endophytes the fungal symbiont produces poisonous alkaloids, which can affect the health of plant-eating (herbivorous) mammals and deter or kill insect herbivores. Symbiotic relationships with animals Several ascomycetes of the genus Xylaria colonize the nests of leafcutter ants and other fungus-growing ants of the tribe Attini, and the fungal gardens of termites (Isoptera). Since they do not generate fruiting bodies until the insects have left the nests, it is suspected that, as confirmed in several cases of Basidiomycota species, they may be cultivated. Bark beetles (family Scolytidae) are important symbiotic partners of ascomycetes. The female beetles transport fungal spores to new hosts in characteristic tucks in their skin, the mycetangia. The beetles tunnel into the wood and into large chambers in which they lay their eggs. Spores released from the mycetangia germinate into hyphae, which can break down the wood. The beetle larvae then feed on the fungal mycelium, and, on reaching maturity, carry new spores with them to renew the cycle of infection. A well-known example of this is Dutch elm disease, caused by Ophiostoma ulmi, which is carried by the European elm bark beetle, Scolytus multistriatus. Plant disease interactions One of their most harmful roles is as the agent of many plant diseases. For instance: Dutch elm disease, caused by the closely related species Ophiostoma ulmi and Ophiostoma novo-ulmi, has led to the death of many elms in Europe and North America. The originally Asian Cryphonectria parasitica is responsible for attacking Sweet Chestnuts (Castanea sativa), and virtually eliminated the once-widespread American Chestnut (Castanea dentata). A disease of maize (Zea mays), which is especially prevalent in North America, is brought about by Cochliobolus heterostrophus. Taphrina deformans causes leaf curl of peach. Uncinula necator is responsible for the disease powdery mildew, which attacks grapevines. Species of Monilinia cause brown rot of stone fruit such as peaches (Prunus persica) and sour cherries (Prunus cerasus). Members of the Ascomycota such as Stachybotrys chartarum are responsible for the fading of woolen textiles, which is a common problem especially in the tropics. Blue-green, red and brown molds attack and spoil foodstuffs – for instance Penicillium italicum rots oranges. Fusarium graminearum causes Fusarium ear blight in cereals; infected grain contains mycotoxins like deoxynivalenol (DON), which causes skin and mucous membrane lesions when eaten by pigs.
Human disease interactions Aspergillus fumigatus is the most common cause of fungal infection in the lungs of immune-compromised patients, often resulting in death; it is also the most frequent cause of allergic bronchopulmonary aspergillosis, which often occurs in patients with cystic fibrosis as well as asthma. Candida albicans, a yeast that attacks the mucous membranes, can cause an infection of the mouth or vagina called thrush or candidiasis, and is also blamed for "yeast allergies". Fungi like Epidermophyton cause skin infections but are not very dangerous for people with healthy immune systems. However, if the immune system is damaged, they can be life-threatening; for instance, Pneumocystis jirovecii is responsible for severe lung infections that occur in AIDS patients. Ergot (Claviceps purpurea) is a direct menace to humans when it attacks wheat or rye and produces highly poisonous alkaloids, causing ergotism if consumed. Symptoms include hallucinations, stomach cramps, and a burning sensation in the limbs ("Saint Anthony's Fire"). Aspergillus flavus, which grows on peanuts and other hosts, generates aflatoxin, which damages the liver and is highly carcinogenic. Histoplasma capsulatum causes histoplasmosis, which affects immunocompromised patients. Blastomyces dermatitidis is the causal agent of blastomycosis, an invasive and often serious fungal infection found occasionally in humans and other animals in regions where the fungus is endemic. Paracoccidioides brasiliensis and Paracoccidioides lutzii are the causal agents of paracoccidioidomycosis. Coccidioides immitis and Coccidioides posadasii are the causative agents of coccidioidomycosis (valley fever). Talaromyces marneffei, formerly called Penicillium marneffei, causes talaromycosis. Beneficial effects for humans On the other hand, ascus fungi have brought some significant benefits to humanity. The most famous case may be that of the mold Penicillium chrysogenum (formerly Penicillium notatum), which, probably to attack competing bacteria, produces an antibiotic that, under the name of penicillin, triggered a revolution in the treatment of bacterial infectious diseases in the 20th century. The medical importance of Tolypocladium niveum as an immunosuppressor can hardly be exaggerated. It excretes ciclosporin, which, as well as being given during organ transplantation to prevent rejection, is also prescribed for auto-immune diseases such as multiple sclerosis. However, there is some doubt over the long-term side effects of the treatment. Some ascomycete fungi can be easily altered through genetic engineering procedures. They can then produce useful proteins such as insulin, human growth hormone, or TPA, which is employed to dissolve blood clots. Several species are common model organisms in biology, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Neurospora crassa. The genomes of some ascomycete fungi have been fully sequenced. Baker's yeast (Saccharomyces cerevisiae) is used to make bread, beer and wine, during which process sugars such as glucose or sucrose are fermented to make ethanol and carbon dioxide. Bakers use the yeast for carbon dioxide production, causing the bread to rise, with the ethanol boiling off during cooking. Most vintners use it for ethanol production, releasing carbon dioxide into the atmosphere during fermentation.
Brewers and traditional producers of sparkling wine use both, with a primary fermentation for the alcohol and a secondary one to produce the carbon dioxide bubbles that provide the drinks with a "sparkling" texture in the case of wine and the desirable foam in the case of beer. Enzymes of Penicillium camemberti play a role in the manufacture of the cheeses Camembert and Brie, while those of Penicillium roqueforti do the same for Gorgonzola, Roquefort and Stilton. In Asia, Aspergillus oryzae is added to a pulp of soaked soya beans to make soy sauce and is used to break down starch in rice and other grains into simple sugars for fermentation into East Asian alcoholic beverages such as huangjiu and sake. Finally, some members of the Ascomycota are choice edibles; morels (Morchella spp.), truffles (Tuber spp.), and lobster mushroom (Hypomyces lactifluorum) are some of the most sought-after fungal delicacies. Cordyceps militaris is reputed to have numerous medicinal benefits, including supporting the immune system, reducing inflammation, providing antioxidant effects, enhancing metabolic health, improving athletic performance, and promoting respiratory health. It contains bioactive compounds such as cordycepin, cordycepic acid, adenosine, polysaccharides, beta-glucans, and ergosterol. See also List of Ascomycota families incertae sedis List of Ascomycota genera incertae sedis Notes Cited texts Mycology Fungus phyla
Ascomycota
Biology
9,299
34,267,681
https://en.wikipedia.org/wiki/Diversity%20and%20Distributions
Diversity and Distributions is a bimonthly peer-reviewed scientific journal on conservation biogeography. It was established in 1993 as Biodiversity Letters. The journal covers the applications of biogeographical principles, theories, and analyses to problems concerning the conservation of biodiversity. The editors-in-chief are K. C. Burns, Luca Santini, and Aibin Zhan, who took over from Janet Franklin in 2019. After over two decades as editor-in-chief, David M. Richardson stepped down from the role in December 2015. According to the Journal Citation Reports, the journal has a 2018 impact factor of 4.092, ranking it 2nd out of 37 journals in the category "Biodiversity Conservation" and 20th out of 134 journals in the category "Ecology". 2018 resignation of the editorial board A majority of the editorial board of the journal resigned in 2018 after Wiley allegedly blocked the publication of a letter protesting the publisher's decision to make the journal entirely open access. References External links Wiley-Blackwell academic journals Ecology journals Bimonthly journals Academic journals established in 1993 English-language journals
Diversity and Distributions
Environmental_science
221
38,731,424
https://en.wikipedia.org/wiki/Cantic%20order-4%20hexagonal%20tiling
In geometry, the cantic order-4 hexagonal tiling is a uniform tiling of the hyperbolic plane. It has a Schläfli symbol of t0,1{(4,4,3)} or h2{6,4}. Related polyhedra and tiling References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008 (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Hyperbolic space Square tiling Uniform tilings in hyperbolic plane List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hexagonal tilings Hyperbolic tilings Isogonal tilings Uniform tilings
Cantic order-4 hexagonal tiling
Physics
179
7,944,108
https://en.wikipedia.org/wiki/Fixed-asset%20turnover
Fixed-asset turnover is the ratio of sales (on the profit and loss account) to the value of fixed assets (on the balance sheet). It indicates how well the business is using its fixed assets to generate sales. Generally speaking, the higher the ratio, the better, because a high ratio indicates the business has less money tied up in fixed assets for each unit of currency of sales revenue. A declining ratio may indicate that the business is over-invested in plant, equipment, or other fixed assets. In A.A.T. assessments this financial measure is calculated in two different ways. 1. Total Asset Turnover Ratio = Revenue / Total Assets 2. Net Asset Turnover Ratio = Revenue / (Total Assets - Current Liabilities) References External links http://www.investopedia.com/terms/f/fixed-asset-turnover.asp http://www.businessdictionary.com/definition/fixed-asset-turnover-ratio.html http://www.investopedia.com/university/ratios/operating-performance/ratio1.asp Marketing analytics Financial ratios Fixed asset Corporate development
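As a rough illustration of the arithmetic behind these ratios, here is a minimal Python sketch; the figures and function names are hypothetical and only serve to show the calculations described above.

def fixed_asset_turnover(revenue, fixed_assets):
    # Sales generated per unit of currency tied up in fixed assets
    return revenue / fixed_assets

def total_asset_turnover(revenue, total_assets):
    # A.A.T. variant 1: Revenue / Total Assets
    return revenue / total_assets

def net_asset_turnover(revenue, total_assets, current_liabilities):
    # A.A.T. variant 2: Revenue / (Total Assets - Current Liabilities)
    return revenue / (total_assets - current_liabilities)

# Hypothetical figures for illustration only
print(fixed_asset_turnover(500_000, 200_000))         # 2.5
print(total_asset_turnover(500_000, 400_000))         # 1.25
print(net_asset_turnover(500_000, 400_000, 100_000))  # about 1.67

On these made-up figures, the business generates 2.5 units of sales for every unit of currency invested in fixed assets; a falling value over time would suggest over-investment in fixed assets, as noted above.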
Fixed-asset turnover
Mathematics
234
36,222,895
https://en.wikipedia.org/wiki/Ruprecht%20147
Ruprecht 147 or NGC 6774 is a dispersed star cluster in the Milky Way galaxy. It is about 1,000 light years away, which is close to Earth in comparison with other such clusters. In late summer, it can be seen with binoculars in the constellation of Sagittarius. The stars, bound by gravity, are about 2.5 to 3.25 billion years old. The cluster, discovered in 1830 by John Herschel, was sometimes thought to be an asterism (a random collection of stars) due to its sparseness and location against the background of the richest part of the Milky Way, and also since the brightest stars in this old cluster perished long ago. In 1966 the Czech astronomer Jaroslav Ruprecht classified it as a type III 2 m open cluster under the Trumpler scheme. It received otherwise little attention until 2012, when it was identified as a potentially important reference gauge for stellar and Galactic astrophysics research, particularly the research of Sun-like stars. Ruprecht 147 has five detached eclipsing binary stars that are relatively bright, and thus easy to observe. Additionally, there is a transiting brown dwarf around the star EPIC 219388192 (CWW 89A), and a transiting planet around the star K2-231. References Open clusters Sagittarius (constellation) NGC objects
Ruprecht 147
Astronomy
277
242,282
https://en.wikipedia.org/wiki/Nucleocosmochronology
Nucleocosmochronology, or nuclear cosmochronology, is a technique used to determine timescales for astrophysical objects and events based on observed ratios of radioactive heavy elements and their decay products. It is similar in many respects to radiometric dating, in which trace radioactive impurities were selectively incorporated into materials when they were formed. To calculate the age of formation of astronomical objects, the observed ratios of abundances of heavy radioactive and stable nuclides are compared to the primordial ratios predicted by nucleosynthesis theory. Both the radioactive elements and their decay products matter; important examples include the long-lived radioactive nuclei Th-232, U-235, and U-238, all formed by the r-process. The process has been compared to radiocarbon dating. The ages of the objects are determined by placing constraints on the duration of nucleosynthesis in the galaxy. Nucleocosmochronology has been employed to determine the age of the Sun ( billion years) and of the Galactic thin disk ( billion years), among other objects. It has also been used to estimate the age of the Milky Way itself by studying Cayrel's Star in the Galactic halo, which, due to its low metallicity, is believed to have formed early in the history of the Galaxy. Limiting factors in its precision are the quality of observations of faint stars and the uncertainty of the primordial abundances of r-process elements. History The first use of nuclear cosmochronology was in 1929, by Ernest Rutherford, who, shortly after the discovery that uranium has two naturally occurring radioactive isotopes with different half-lives, attempted to use the ratio to determine when the uranium had been produced. He suggested that both had been produced in equal abundances, assuming they had been produced in a single moment in time, and applied an argument based on incorrect assumptions about astrophysics to derive an incorrect age of about 6 billion years. He pioneered the idea that age could be calculated by the ratio of abundances of radioactive parent elements and their stable decay products. According to a tribute written by colleagues, a large part of the modern science of nuclear cosmochronology grew out of work by John Reynolds and his students. Model-independent techniques were developed in 1970. Technique It is necessary to know the initial ratios in which nucleosynthesis produces radioactive parent elements in comparison to the stable elements they decay into, before decay occurs. These are the abundances which the elements would have if the radioactive parent elements were stable, and not producing daughter nuclei. The ratio of the abundance of radioactive elements to the abundance they would have if they were stable is called the remainder. Measurement of the current abundances of elements in objects, combined with nucleosynthesis theory, determines the remainders. See also Astrochemistry Astronomical chronology Geochronology Gyrochronology References Dating methods Astrophysics Nuclear physics
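As a minimal sketch of the underlying reasoning (not of the modern model-independent methods), the following Python snippet reproduces a Rutherford-style estimate: it assumes, as he did, a single production event with equal initial abundances of U-235 and U-238, and combines the present-day isotope ratio with the standard half-lives; the function name and the specific numbers are illustrative assumptions, not values taken from this article.

import math

# Half-lives in years (standard values)
T_HALF_U235 = 7.04e8
T_HALF_U238 = 4.468e9

def single_event_age(ratio_now, ratio_at_production=1.0):
    # Each isotope decays as N(t) = N0 * exp(-lambda * t), so the U-235/U-238 ratio
    # evolves as R(t) = R0 * exp(-(lambda235 - lambda238) * t); solving for t gives
    # the time elapsed since the assumed single production event.
    lam235 = math.log(2) / T_HALF_U235
    lam238 = math.log(2) / T_HALF_U238
    return math.log(ratio_at_production / ratio_now) / (lam235 - lam238)

# The present-day U-235/U-238 atom ratio is roughly 0.0072; with equal production assumed,
# this yields roughly 6 billion years, in line with the early estimate cited above.
print(single_event_age(0.0072) / 1e9)  # about 5.9 (in billions of years)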
Nucleocosmochronology
Physics,Astronomy
598